
AI Processor Semiconductor IPs

The AI Processor category within our semiconductor IP catalog is dedicated to state-of-the-art technologies that empower artificial intelligence applications across various industries. AI processors are specialized computing engines designed to accelerate machine learning tasks and perform complex algorithms efficiently. This category includes a diverse collection of semiconductor IPs that are built to enhance both performance and power efficiency in AI-driven devices.

AI processors play a critical role in the emerging world of AI and machine learning, where fast processing of vast datasets is crucial. These processors can be found in a range of applications from consumer electronics like smartphones and smart home devices to advanced robotics and autonomous vehicles. By facilitating rapid computations necessary for AI tasks such as neural network training and inference, these IP cores enable smarter, more responsive, and capable systems.

In this category, developers and designers will find semiconductor IPs that provide various levels of processing power and architectural designs to suit different AI applications, including neural processing units (NPUs), tensor processing units (TPUs), and other AI accelerators. The availability of such highly specialized IPs ensures that developers can integrate AI functionalities into their products swiftly and efficiently, reducing development time and costs.

As AI technology continues to evolve, the demand for robust and scalable AI processors increases. Our semiconductor IP offerings in this category are designed to meet the challenges of rapidly advancing AI technologies, ensuring that products are future-ready and equipped to handle the complexities of tomorrow’s intelligence-driven tasks. Explore this category to find cutting-edge solutions that drive innovation in artificial intelligence systems today.

156 IPs available

Akida Neural Processor IP

The Akida Neural Processor IP by BrainChip is a versatile AI solution that melds neural processing capabilities with scalable digital architecture, delivering high performance with minimal power consumption. At its core, this processor is engineered on neuromorphic computing principles to address the demands of AI workloads with precision and speed. By exploiting sparsity in data, weights, and activations, the Akida Neural Processor computes efficiently, making it especially suitable for AI applications that demand real-time processing with low latency. It provides a flexible solution for implementing neural networks of varying complexity and is adaptable to a wide array of use cases, from audio processing to visual recognition. The IP core's configurable framework supports the execution of complex neural models on edge devices, running sophisticated neural algorithms such as Convolutional Neural Networks (CNNs) without complementary computing resources. This standalone operation reduces dependency on external CPUs, driving down power consumption and freeing devices from constant network connections.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Vision Processor
View Details

Akida 2nd Generation

The Akida 2nd Generation processor further advances BrainChip's AI capabilities with enhanced programmability and efficiency for complex neural network operations. Building on the principles of its predecessor, this generation is optimized for 8-, 4-, and 1-bit weights and activations, offering more robust activation functions and support for advanced temporal and spatial neural networks. A standout feature of the Akida 2nd Generation is its on-chip learning capability: the system can perform one-shot and few-shot learning directly on the chip, significantly boosting its ability to adapt to new tasks without extensive reprogramming. Its architecture supports more sophisticated machine learning models such as Convolutional Neural Networks (CNNs) and Spatio-Temporal Event-Based Neural Networks, optimizing them for energy-efficient application at the edge. The processor's design reduces the necessity for host CPU involvement, thus minimizing communication overhead and conserving energy. This makes it particularly suitable for real-time data processing applications where quick and efficient data handling is crucial. With event-based hardware that accelerates processing, the Akida 2nd Generation is designed for scalability, providing flexible solutions across a wide range of AI-driven tasks.
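
The 8-, 4-, and 1-bit weights and activations mentioned above rely on low-bit quantization. The sketch below is a generic symmetric uniform quantizer for illustration only; it is not BrainChip's actual scheme, and real NPU toolchains use calibrated, often per-channel variants.

```python
def quantize_symmetric(weights, bits):
    """Symmetric uniform quantization of a weight list to `bits` bits.

    Generic illustration of low-bit quantization (not BrainChip's
    implementation): scale by the largest magnitude, round, and clip.
    """
    qmax = 2 ** (bits - 1) - 1 if bits > 1 else 1  # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    quantized = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return quantized, scale

weights = [0.9, -0.45, 0.12, -0.03]
q8, s8 = quantize_symmetric(weights, 8)   # 8-bit integer codes
q4, s4 = quantize_symmetric(weights, 4)   # 4-bit integer codes
# Dequantized values (code * scale) approximate the originals;
# the approximation error grows as the bit width shrinks.
print(q4, [q * s4 for q in q4])
```

Dropping from 8 to 4 bits cuts weight storage in half again, which is exactly the memory and bandwidth saving these edge NPUs trade against accuracy.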

BrainChip
AI Processor, CPU, Digital Video Broadcast, GPU, Input/Output Controller, IoT Processor, Multiprocessor / DSP, Network on Chip, Security Protocol Accelerators, Vision Processor
View Details

KL730 AI SoC

The KL730 AI SoC is an advanced powerhouse, utilizing third-generation NPU architecture to deliver up to 8 TOPS of efficient computing. This architecture excels in both CNN and transformer applications, optimizing DDR bandwidth usage. Its robust video processing features include 4K 60FPS video output, with exceptional performance in noise reduction, dynamic range, and low-light scenarios. With versatile application support ranging from intelligent security to autonomous driving, the KL730 stands out by delivering exceptional processing capabilities.

Kneron
TSMC
28nm
2D / 3D, A/D Converter, AI Processor, Amplifier, Audio Interfaces, Camera Interface, Clock Generator, CPU, CSC, GPU, Image Conversion, JPEG, USB, VGA, Vision Processor
View Details

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card by Axelera AI is designed for developers seeking top-tier performance in vision applications. Powered by a single Metis AIPU, this PCIe card delivers up to 214 TOPS, handling demanding AI tasks with ease. It is well-suited for high-performance AI inference, featuring two configurations: 4GB and 16GB memory options. The card benefits from the Voyager SDK, which enhances the developer experience by simplifying the deployment of applications and extending the card's capabilities. This accelerator PCIe card is engineered to run multiple AI models and support numerous parallel neural networks, enabling significant processing power for advanced AI applications. The Metis PCIe card performs at an industry-leading level, achieving up to 3,200 frames per second for ResNet-50 tasks and offering exceptional scalability. This makes it an excellent choice for applications demanding high throughput and low latency, particularly in computer vision fields.
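
The 3,200 frames-per-second figure above implies a per-frame latency budget, and against a commonly cited compute cost for ResNet-50 inference (roughly 4 GMACs per 224x224 image, an external approximation rather than Axelera's number) it implies the sustained compute rate below.

```python
fps = 3200                   # claimed ResNet-50 throughput from the description
frame_time_ms = 1000 / fps   # per-frame time budget in milliseconds

resnet50_gmacs = 4.0         # approximate cost per inference (assumption, not from the source)
sustained_tops = fps * resnet50_gmacs * 2 / 1000  # 2 ops per MAC, in TOPS

print(f"{frame_time_ms:.4f} ms/frame, ~{sustained_tops:.1f} TOPS sustained")
```

Comparing the sustained figure against the 214 TOPS peak gives a rough sense of utilization for this particular workload; batch size and precision would shift it in practice.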

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

AI Camera Module

The AI Camera Module is an innovative solution designed to bring cutting-edge AI capabilities to camera systems. Known for its enhanced image quality and AI-based processing, this module integrates seamlessly with a wide array of systems to deliver superior performance in real-time scenarios. It is equipped to handle complex image processing tasks, making it invaluable for applications ranging from security to AI-driven analytics. By incorporating the latest AI advancements into its operation, this module facilitates heightened awareness and analysis capabilities across various sectors. Altek's AI Camera Module emphasizes high-resolution image capture, ensuring that every detail is accurately recorded and processed for precise analysis. This technology not only supports high-definition imaging but also optimizes power consumption, making it suitable for integration into IoT and edge computing environments. Such adaptations are crucial for systems requiring constant, real-time image processing while retaining high operational efficiency. The module's design also promotes adaptability, allowing for custom configurations that meet specific client requirements. Its capability to integrate AI functionalities directly into the camera hardware enhances its appeal in industries focused on automation, surveillance, and smart analytics. This product affirms Altek's role in pioneering technological advancements that align with current and future demands for intelligent, efficient, and scalable solutions.

Altek Corporation
2D / 3D, AI Processor, Audio Interfaces, GPU, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor
View Details

Yitian 710 Processor

The Yitian 710 Processor is T-Head's flagship ARM-based server chip that represents the pinnacle of their technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance metrics. This processor is built using a 2.5D packaging method, integrating two dies and boasting a substantial 60 billion transistors. The core of the Yitian 710 consists of 128 high-performance Armv9 CPU cores, each accompanied by advanced memory configurations that streamline instruction and data caching processes. Each CPU integrates 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, supplemented by a robust 128MB system-level cache on the chip. To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281GB/s. Its I/O subsystem is formidable, featuring 96 PCIe 5.0 lanes capable of achieving bidirectional bandwidth of up to 768GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI operations.
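
The quoted 281GB/s peak follows from standard DDR memory-bandwidth arithmetic: channels x transfer rate x bytes per transfer. The 4400 MT/s transfer rate below is inferred to match the quoted figure and is not stated in the source.

```python
channels = 8              # 8-channel DDR5 system, per the description
transfer_rate_mts = 4400  # MT/s, an inferred value chosen to match the quoted peak
bus_bytes = 8             # one 64-bit channel transfers 8 bytes per beat

peak_gbs = channels * transfer_rate_mts * bus_bytes / 1000  # in GB/s
print(peak_gbs)  # matches the ~281 GB/s quoted above
```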

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

xcore.ai

xcore.ai stands as a cutting-edge processor that brings sophisticated intelligence, connectivity, and computation capabilities to a broad range of smart products. Designed to deliver optimal performance for applications in consumer electronics, industrial control, and automotive markets, it efficiently handles complex processing tasks with low power consumption and rapid execution speeds. This processor facilitates seamless integration of AI capabilities, enhancing voice processing, audio interfacing, and real-time analytics functions. It supports various interfacing options to accommodate different peripheral and sensor connections, thus providing flexibility in design and deployment across multiple platforms. Moreover, the xcore.ai ensures robust performance in environments requiring precise control and high data throughput. Its compatibility with a wide array of software tools and libraries enables developers to swiftly create and iterate applications, reducing the time-to-market and optimizing the design workflows.

XMOS Semiconductor
21 Categories
View Details

Veyron V2 CPU

Veyron V2 represents the next generation of Ventana's high-performance RISC-V CPU. It significantly enhances compute capabilities over its predecessor, designed specifically for data center, automotive, and edge deployment scenarios. This CPU maintains compatibility with the RVA23 RISC-V specification, making it a powerful alternative to the latest ARM and x86 counterparts within similar domains. Focusing on seamless integration, the Veyron V2 offers clean, portable RTL implementations with a standardized interface, optimizing its use for custom SoCs with high-core counts. With a robust 512-bit vector unit, it efficiently supports workloads requiring both INT8 and BF16 precision, making it highly suitable for AI and ML applications. The Veyron V2 is adept in handling cloud-native and virtualized workloads due to its full architectural virtualization support. The architectural advancements offer significant performance-per-watt improvements, and advanced cache and virtualization features ensure a secure and reliable computing environment. The Veyron V2 is available as both a standalone IP and a complete hardware platform, facilitating diverse integration pathways for customers aiming to harness Ventana’s innovative RISC-V solutions.

Ventana Micro Systems
TSMC
16nm, 28nm
AI Processor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

The Tianqiao-70 is a low-power RISC-V CPU designed for commercial-grade applications where power efficiency is paramount. Suitable for mobile and desktop applications, artificial intelligence, and various other technology sectors, this processor excels in maintaining high performance while minimizing power consumption. Its design offers great adaptability to meet the requirements of different operational environments.

StarFive Technology
AI Processor, CPU, Multiprocessor / DSP, Processor Cores
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
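
The "tokens per unit of memory bandwidth" claim reflects a well-known bound on LLM inference: generating each token requires streaming the full set of weights, so the token rate cannot exceed memory bandwidth divided by model size. The numbers below are illustrative assumptions (a Llama-class 3B-parameter model at 4-bit, LPDDR4-class bandwidth), not RaiderChip's figures.

```python
params = 3e9            # model parameters (illustrative 3B-class model)
bits_per_weight = 4     # 4-bit quantization, as in the description
bandwidth_gbs = 25      # LPDDR4-class memory bandwidth (assumption)

model_bytes = params * bits_per_weight / 8        # bytes streamed per token
max_tokens_per_s = bandwidth_gbs * 1e9 / model_bytes
print(round(max_tokens_per_s, 1))  # bandwidth-imposed upper bound
```

Halving the bits per weight doubles this ceiling, which is why 4-bit quantization matters so much more on bandwidth-limited memories like LPDDR4 than raw compute does.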

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Chimera GPNPU

Chimera GPNPU provides a groundbreaking architecture, melding the efficiency of neural processing units with the flexibility and programmability of processors. It supports a full range of AI and machine learning workloads autonomously, eliminating the need for supplementary CPUs or GPUs. The processor is future-ready, equipped to handle new and emerging AI models with ease, thanks to its C++ programmability. What makes Chimera stand out is its ability to manage a diverse array of workloads within a singular processor framework that combines matrix, vector, and scalar operations. This harmonization ensures maximum performance for applications across various market sectors, such as automotive, mobile devices, and network edge systems. These capabilities are designed to streamline the AI development process and facilitate high-performance inference tasks, crucial for modern device ecosystems. The architecture is fully synthesizable, allowing it to be implemented in any process technology, from current to advanced nodes, adjusting to desired performance targets. The adoption of a hybrid Von Neumann and 2D SIMD matrix design supports a broad suite of DSP operations, providing a comprehensive toolkit for complex graph and AI-related processing.

Quadric
15 Categories
View Details

AUTOSAR & Adaptive AUTOSAR Solutions

KPIT Technologies offers comprehensive AUTOSAR solutions that are pivotal for the development of modern, adaptive automotive systems. Emphasizing middleware integration and E/E architecture transformation, their solutions simplify the complexities of implementing adaptive AUTOSAR platforms, enabling streamlined application development and expeditious vehicle deployment. With extensive experience in traditional and adaptive AUTOSAR ecosystems, KPIT assists OEMs in navigating the challenges associated with software-defined vehicles. Their expertise facilitates the separation of hardware and software components, which is crucial for the future of vehicle digital transformation. KPIT's middleware development capabilities enhance vehicle systems' robustness and scalability, allowing for seamless integration across various automotive applications and ensuring compliance with industry standards. By fostering strategic partnerships and investing in cutting-edge technology solutions, KPIT ensures that its clients can confidently transition to and maintain advanced AUTOSAR platforms. The company's commitment to innovation and excellence positions it as a trusted partner for automakers striving to stay ahead in the competitive automotive landscape by embracing the shift towards fully software-defined vehicles.

KPIT Technologies
AI Processor, AMBA AHB / APB/ AXI, Platform Security, Security Protocol Accelerators, W-CDMA
View Details

Jotunn8 AI Accelerator

The Jotunn8 is heralded as the world's most efficient AI inference chip, designed to maximize AI model deployment with lightning-fast speeds and scalability. This powerhouse is crafted to efficiently operate within modern data centers, balancing critical factors such as high throughput, low latency, and optimization of power use, all while maintaining a sustainable infrastructure. With the Jotunn8, AI investments reach their full potential through high-performance inference solutions that significantly reduce operational costs while committing to environmental sustainability. Its ultra-low latency feature is crucial for real-time applications such as chatbots and fraud detection systems. Not only does it deliver the high throughput needed for demanding services like recommendation engines, but it also proves cost-efficient, aiming to lower the cost per inference crucial for businesses operating at a large scale. Additionally, the Jotunn8 boasts performance-per-watt efficiency, a major factor considering that power is a significant operational expense and a driver of the carbon footprint. By implementing the Jotunn8, businesses can ensure their AI models deliver maximum impact while staying competitive in the growing real-time AI services market. This chip lays down a new foundation for scalable AI, enabling organizations to optimize their infrastructures without compromising on performance.

VSORA
13 Categories
View Details

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

ADAS and Autonomous Driving

KPIT Technologies offers advanced ADAS and autonomous driving solutions designed to accelerate the widespread adoption of Level 3+ autonomy. The company addresses key challenges such as safety, feature development limitations, and validation fragmentation by integrating robust safety protocols and conducting thorough testing across various driving scenarios. Their solutions enhance the intelligence and reliability of autonomous systems by leveraging AI-driven decision-making, which goes beyond basic perception capabilities. KPIT's comprehensive validation frameworks and simulation environments ensure continuous and thorough validation of autonomous driving frameworks. By integrating AI-based perception and planning with system engineering and functional safety practices, KPIT empowers automakers to produce vehicles that are safe and reliable, paving the way for autonomous mobility. Their strategic partnerships and domain expertise make KPIT a leader in automating vehicle development processes, ensuring readiness for the challenges of scaling autonomous vehicles. Through these innovations, KPIT continues to address the dynamic challenges of autonomous driving, providing automakers with the tools needed to develop increasingly advanced and autonomous vehicles well-positioned for future success.

KPIT Technologies
AI Processor, Building Blocks, Other, W-CDMA
View Details

Akida IP

The Akida IP platform is a revolutionary neural processor inspired by the workings of the human brain to achieve unparalleled cognitive capabilities and energy efficiency. This self-contained neural processor utilizes a scalable architecture that can be configured from 1 to 128 nodes, each capable of supporting 128 MAC operations. It allows for the execution of complex neural network operations with minimal power and latency, making it ideal for edge AI applications in vision, audio, and sensor fusion. The Akida IP supports multiple data formats including 4-, 2-, and 1-bit weights and activations, enabling the seamless execution of various neural networks across multiple layers. Its convolutional and fully-connected neural processors can perform multi-layered executions independently of a host CPU, enhancing flexibility in diverse applications. Additionally, its event-based hardware acceleration significantly reduces computation and communication loads, preserving host CPU resources and optimizing overall system efficiency. Silicon-proven, the Akida platform provides a cost-effective and secure solution due to its on-chip learning capabilities, supporting one-shot and few-shot learning methods. By maintaining sensitive data on-chip, the system offers improved security and privacy. Its extensive configurability ensures adaptability for post-silicon applications, making Akida an intelligent and scalable choice for developers. It is especially suited for implementations that require real-time processing and sophisticated AI functionalities at the edge.

BrainChip
AI Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Vision Processor
View Details

KL520 AI SoC

The KL520 AI SoC introduces edge AI with efficiency in size and power, setting a standard in the market for such technologies. Featuring dual ARM Cortex-M4 CPUs, it serves as a versatile AI co-processor, supporting an array of smart devices. It is designed for compatibility with various sensor technologies, enabling powerful 3D sensing capabilities.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Receiver/Transmitter, Vision Processor
View Details

RV12 RISC-V Processor

The RV12 RISC-V Processor is a highly adaptable single-core CPU that adheres to the RV32I and RV64I specifications of the RISC-V instruction set, aimed at the embedded systems market. This processor supports a variety of standard and custom configurations, making it suitable for diverse application needs. Its inherent flexibility allows it to be implemented efficiently in both FPGA and ASIC environments, ensuring that it meets the performance and resource constraints typical of embedded applications. Designed with an emphasis on configurability, the RV12 Processor can be tailored to include only the necessary components, optimizing both area and power consumption. It comes with comprehensive documentation and verification testbenches, providing a complete solution for developers looking to integrate a RISC-V CPU into their design. Whether for educational purposes or commercial deployment, the RV12 stands out for its robust design and adaptability, making it an ideal choice for modern embedded system solutions.

Roa Logic BV
AI Processor, CPU, Cryptography Software Library, IoT Processor, Microcontroller, Processor Cores
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator from EdgeCortix is a sophisticated solution designed to propel generative AI to new frontiers with impressive energy efficiency. This advanced accelerator provides unparalleled performance with high flexibility for a wide variety of applications, leveraging EdgeCortix's dedicated Dynamic Neural Accelerator architecture. SAKURA-II is optimized for real-time, low-latency AI inference on the edge, tackling demanding generative AI tasks efficiently in constrained environments. The accelerator delivers up to 60 TOPS (Tera Operations Per Second) of INT8 performance, allowing it to run multi-billion-parameter models such as Llama 2 and Stable Diffusion effectively. It supports applications across vision, language, audio, and beyond, by utilizing robust DRAM capabilities and enhanced data throughput. This allows it to outperform other solutions while maintaining a low power consumption profile, typically around 8 watts. Designed for integration into small silicon spaces, SAKURA-II caters to the needs of highly efficient AI models, providing dynamic capabilities to meet the stringent requirements of next-gen applications. Thus, the SAKURA-II AI Accelerator stands out as a top choice for developers seeking seamless deployment of cutting-edge AI applications at the edge, underscoring EdgeCortix's leadership in energy-efficient AI processing.
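
The two headline figures above reduce to the metric usually used to compare edge accelerators, TOPS per watt; the division below simply combines the numbers as quoted in the description.

```python
peak_tops = 60       # INT8 peak performance, per the description
typical_watts = 8    # typical power consumption, per the description

tops_per_watt = peak_tops / typical_watts
print(tops_per_watt)  # INT8 TOPS/W implied by the quoted figures
```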

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

KL630 AI SoC

With cutting-edge NPU architecture, the KL630 AI SoC pushes the boundaries of performance efficiency and low energy consumption. It stands as a pioneering solution supporting Int4 precision and transformer neural networks, offering noteworthy performance for diverse applications. Anchored by an ARM Cortex-A5 CPU, it boasts compute efficiency and energy savings, making it ideal for various edge devices.

Kneron
TSMC
28nm
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, USB, VGA, Vision Processor
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is a dynamic and powerful environment tailored for the creation and deployment of advanced neuromorphic AI applications. By integrating seamlessly with PyTorch, Talamo provides a familiar and effective workflow for developers, enabling the construction of robust AI models that exploit the capabilities of spiking neural processors. This SDK extends PyTorch's standard functionalities, offering the necessary infrastructure for building and training spiking neural networks (SNNs) with ease. Talamo's architecture allows developers without specialized knowledge in SNNs to begin building applications that are optimized for neuromorphic processors. It provides compiled models that are specifically mapped to the versatile computing architecture of the Spiking Neural Processor. Furthermore, an architecture simulator offered by Talamo facilitates rapid hardware emulation, enabling quicker validation and development cycles for new applications. Designed to support developers in creating end-to-end application pipelines, Talamo allows the integration of custom functions and neural networks within a comprehensive framework. By removing the need for deep expertise in neuromorphic computing, Talamo empowers a larger population of developers to harness brain-inspired AI models, fostering innovation and accelerating the deployment of intelligent systems across various sectors.

Innatera Nanosystems
AI Processor, Multiprocessor / DSP, Vision Processor
View Details

EW6181 GPS and GNSS Silicon

EW6181 is an IP solution crafted for applications demanding extensive integration levels, offering flexibility by being licensable in various forms such as RTL, gate-level netlist, or GDS. Its design methodology focuses on delivering the lowest possible power consumption within the smallest footprint. The EW6181 effectively extends battery life for tags and modules due to its efficient component count and optimized Bill of Materials (BoM). Additionally, it is backed by robust firmware ensuring highly accurate and reliable location tracking while offering support and upgrades. The IP is particularly suitable for challenging application environments where precision and power efficiency are paramount, and it can be ported across different technology nodes wherever its RF front end is available.

etherWhere Corporation
TSMC
7nm
3GPP-5G, AI Processor, Bluetooth, CAN, CAN XL, CAN-FD, Fibre Channel, FlexRay, GPS, Optical/Telecom, Photonics, RF Modules, USB, W-CDMA
View Details

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

aiWare

aiWare is a high-performance neural processing unit tailored for automotive AI, delivering exceptional power efficiency and computational capability across a broad spectrum of neural network tasks. Its design centers on maximizing AI inference efficiency while providing the flexibility and scalability needed for every level of automated driving, from basic L2 assistance to complex L4 self-driving operations.

The aiWare architecture reaches up to 98% NPU efficiency across diverse workloads such as CNNs and RNNs, making it a premier choice for AI tasks in the automotive sector, and it scales to an industry-leading 1024 TOPS, suiting the multi-sensor, multi-camera setups required by advanced autonomous vehicle systems. Its hardware determinism supports certification to ISO 26262 ASIL B, meeting the rigorous safety requirements essential in automotive applications.

An easy-to-integrate RTL design and a comprehensive SDK simplify system integration and accelerate development timelines for automotive manufacturers. A highly optimized dataflow with minimal external memory traffic further improves system power economy, reducing operational costs for deployed automotive AI solutions and giving OEMs the capability to handle modern automotive workloads within tight system constraints.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, FlexRay, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

Polar ID Biometric Security System

Polar ID is a groundbreaking biometric security solution for smartphones, providing a secure and convenient face unlock feature. Employing advanced meta-optic technology, Polar ID captures the polarization signature of a human face, adding a layer of security that readily distinguishes live human tissue and foils sophisticated 3D mask spoofing attempts. This enables ultra-secure facial recognition in diverse environments, from daylight to complete darkness, without compromising the user experience.

Unlike traditional facial recognition systems, Polar ID uses a simple, compact design that eliminates the need for multiple optical modules. Its ability to function in any lighting condition, including bright sunlight or total darkness, distinguishes it from conventional systems that struggle in such scenarios, and its high resolution and precision ensure reliable performance even when a user's face is partially obscured by sunglasses or a mask.

With its cost-effectiveness and small form factor, Polar ID is set to make secure biometric authentication accessible to a broader range of smartphones, not just high-end models. By simplifying the integration of facial recognition technology, it enables mobile devices to replace less secure, less convenient fingerprint sensors, broadening the reach of facial biometrics in consumer electronics.

Metalenz Inc.
13 Categories
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is engineered to propel artificial intelligence tasks to new heights with its cutting-edge architecture. This accelerator enhances machine learning tasks by speeding up neural network processing, making it a key player in the burgeoning AI sector. Its innovative design is optimized for low latency and high throughput, facilitating real-time AI application performance and enabling advanced machine learning model implementations. Harnessing an extensive array of computing cores, the Hanguang 800 ensures parallel processing capabilities that significantly reduce training times for large-scale AI models. Its application scope covers diverse sectors, including autonomous driving, smart city infrastructure, and intelligent robotics, underscoring its versatility and adaptability. Built with energy efficiency in mind, this AI accelerator prioritizes minimal power consumption, making it ideal for data centers looking to maximize computational power without overextending their energy footprint. By integrating seamlessly with existing frameworks, the Hanguang 800 offers a ready-to-deploy solution for enterprises seeking to enhance their AI-driven services and operations.

T-Head Semiconductor
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

SiFive Intelligence X280

The SiFive Intelligence X280 is designed to address the burgeoning needs of AI and machine learning at the edge. Emphasizing a software-first methodology, this family of processors is crafted to offer scalable vector and matrix compute capabilities. By integrating broad vector processing features and high-bandwidth interfaces, it can adapt to the ever-evolving landscape of AI workloads, providing both high performance and efficient scalability. Built on the RISC-V foundation, the X280 features comprehensive vector compute engines that cater to modern AI demands, making it a powerful tool for edge computing applications where space and energy efficiency are critical. Its versatility allows it to seamlessly manage diverse AI tasks, from low-latency inferences to complex machine learning models, thanks to its support for RISC-V Vector Extensions (RVV). The X280 family is particularly robust for applications requiring rapid AI deployment and adaptation like IoT devices and smart infrastructure. Through extensive compatibility with machine learning frameworks such as TensorFlow Lite, it ensures ease of deployment, enhanced by its focus on energy-efficient inference solutions and support for legacy systems, making it a comprehensive solution for future AI technologies.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

KL530 AI SoC

The KL530 is Kneron's state-of-the-art AI chip with a unique NPU architecture, leading the market in INT4 precision and transformers. Designed for higher efficiency, it features lower power consumption while maintaining robust performance. The chip supports various AI models and configurations, making it adaptable across AIoT and other technology landscapes.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, Peripheral Controller, Vision Processor
View Details

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors such as automotive, mobile, AR/VR, drones, and robotics, with applications including driver monitoring, image/video analytics, and security surveillance. It delivers up to 6 TOPS with 6 MB of local memory and incorporates either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnection and data flow.

This performance makes the NMP-550 exceptionally well suited to devices requiring high-frequency AI computation. Typical use cases include industrial surveillance and smart robotics, where precise, fast data analysis is critical. Blending high computational power with energy efficiency, the NMP-550 handles complex AI tasks such as video super-resolution and fleet management, and stands as a significant upgrade for designs needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

SCR9 Processor Core

Poised to deliver exceptional performance in advanced applications, the SCR9 processor core epitomizes modern processing standards with its 12-stage dual-issue out-of-order pipeline and hypervisor support. Its inclusion of a vector processing unit (VPU) positions it as essential for high-performance computing tasks that require extensive parallel data processing. Suitable for high-demand environments such as enterprise data systems, AI workloads, and computationally intensive mobile applications, the SCR9 core is tailored to address high-throughput demands while maintaining reliability and accuracy. With support for symmetric multiprocessing (SMP) of up to 16 cores, this core stands as a configurable powerhouse, enabling developers to maximize processing efficiency and throughput. The SCR9's capabilities are bolstered by Syntacore’s dedication to supporting developers with comprehensive tools and documentation, ensuring efficient design and implementation. Through its blend of sophisticated features and support infrastructure, the SCR9 processor core paves the way for advancing technological innovation across numerous fields, establishing itself as a robust solution in the rapidly evolving landscape of high-performance computing.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic is engineered to deliver exceptional energy efficiency while maintaining high performance. This core is specifically designed to operate at 1GHz while consuming a mere 10mW of power, making it ideal for today's power-conscious applications. Utilizing advanced design techniques, this processor achieves its high performance at lower voltages, ensuring reduced power consumption without sacrificing speed. Constructed with a focus on optimizing processing capabilities, this RISC-V core is built to cater to demanding environments where energy efficiency is critical. Whether used as a standalone processor or integrated into larger systems, its low power requirements and robust performance make it highly versatile. This core also supports scalable processing with its architecture, accommodating a broad spectrum of applications from IoT devices to performance-intensive computing tasks, aligning with industry standards for modern electronic products.
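The quoted operating point (1 GHz at 10 mW) implies about 10 pJ per cycle, which is what makes battery-powered deployment plausible. A quick worked check, using a generic CR2032 coin cell (assumed 225 mAh at 3 V) rather than any Micro Magic figure:

```python
# Energy per cycle implied by the quoted operating point (1 GHz, 10 mW),
# and continuous runtime on a generic CR2032 coin cell (assumed 225 mAh
# at 3 V; an illustrative assumption, not vendor data).
power_w = 10e-3                                   # 10 mW
freq_hz = 1e9                                     # 1 GHz
energy_pj_per_cycle = power_w / freq_hz * 1e12
print(f"{energy_pj_per_cycle:.1f} pJ/cycle")      # → 10.0 pJ/cycle

battery_j = 0.225 * 3600 * 3.0                    # 225 mAh → A·s, times 3 V ≈ 2430 J
runtime_hours = battery_j / power_w / 3600
print(f"{runtime_hours:.1f} h continuous")        # → 67.5 h continuous
```

Real deployments would duty-cycle the core rather than run it flat out, stretching that runtime from days into months.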

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip focuses on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. The accelerator executes large language models in real time using advanced quantization formats such as Q4_K and Q5_K, improving AI inference efficiency in memory-constrained environments. With a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers bring advanced AI capabilities to smaller, less powerful devices without sacrificing operational quality, an advantage for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions.

Unlike conventional AI solutions, the GenAI v1-Q operates standalone, with no dependence on external networks or cloud services. Its design combines high computational performance with scalability, adapting across varied hardware platforms including FPGAs and ASIC implementations. This flexibility allows performance parameters such as model scale, inference speed, and power consumption to be tuned to exact user specifications.

Because it can run multiple transformer-based models and keep confidential data securely on premises, GenAI v1-Q also serves sensitive domains such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount.
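The quoted 75% memory reduction can be sanity-checked from bits per weight: FP16 stores 16 bits per parameter, while Q4_K-style formats store roughly 4–5 bits including scale metadata. The sketch below uses approximate community bits-per-weight figures, not RaiderChip specifications:

```python
# Approximate weight-memory footprint per quantization format.
# Bits-per-weight values are rough community figures for these formats,
# not RaiderChip specifications.
BITS_PER_WEIGHT = {"fp16": 16.0, "q5_k": 5.5, "q4_k": 4.5}

def weight_memory_gb(params_billions: float, fmt: str) -> float:
    """Weight storage in GB for a model of the given parameter count."""
    total_bits = params_billions * 1e9 * BITS_PER_WEIGHT[fmt]
    return total_bits / 8.0 / 1e9

fp16 = weight_memory_gb(7, "fp16")   # 14.0 GB for an assumed 7B-parameter model
q4k = weight_memory_gb(7, "q4_k")    # ~3.9 GB for the same model
print(f"reduction: {1 - q4k / fp16:.0%}")   # → reduction: 72%
```

The ~72% figure from these assumed bit widths lands close to the quoted 75%; pure 4-bit storage with no scale overhead would give exactly 75%.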

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

C100 IoT Control and Interconnection Chip

The Chipchain C100 is a highly integrated single-chip solution for IoT applications, focused on low power consumption without compromising performance. It is built around a powerful 32-bit RISC-V CPU running at up to 1.5 GHz, providing efficient and capable computing for diverse IoT workloads. The chip integrates embedded RAM and ROM along with Wi-Fi and multiple communication interfaces, broadening its application potential significantly. Additional features include an ADC, an LDO regulator, and a temperature sensor, letting it handle a wide array of IoT tasks seamlessly. With attention to security and stability, the C100 enables easier and faster development of smart devices such as security systems, home automation products, and wearable technology.

Shenzhen Chipchain Technologies Co., Ltd.
TSMC
7nm LPP, 16nm, 28nm
20 Categories
View Details

SCR7 Application Core

Pushing the envelope of application processing, the SCR7 application core integrates a 12-stage dual-issue out-of-order pipeline for high-performance computing tasks. It is equipped with advanced cache coherency and a robust memory subsystem, ideal for modern applications demanding exceptional compute power and scalability. The core serves large-scale computing environments, addressing needs within sectors such as data centers, enterprise solutions, and AI-enhanced applications. Supporting symmetric multiprocessing (SMP) in configurations of up to eight cores, the SCR7 ensures smooth, simultaneous execution of complex tasks, significantly improving throughput and system efficiency. Syntacore complements this architecture with a rich toolkit that facilitates development across diverse platforms, enhancing its adaptability to specific commercial needs. The SCR7 integrates seamlessly into existing infrastructures while delivering superior results rooted in efficient architectural design and robust support systems.

Syntacore
AI Processor, CPU, IoT Processor, Microcontroller, Processor Core Independent, Processor Cores
View Details

WiseEye2 AI Solution

Himax's WiseEye2 is an innovative AI solution designed to meet the rising demands of AI-driven applications in edge devices. This powerful processor merges ultralow power consumption with advanced sensor fusion capabilities, making it ideal for a new era of smart technologies. WiseEye2 facilitates the integration of a wide array of functions into IoT devices, creating intelligent systems that efficiently utilize power and provide real-time data processing. WiseEye2's architecture integrates sophisticated AI algorithms, enabling enhanced pattern recognition and data analytics on-device. This makes it particularly effective in applications ranging from smart homes and security systems to office automation and industrial uses, where rapid and efficient data processing is crucial. The solution's low power requirement does not compromise its ability to deliver high efficiency and processing power, making it a suitable choice for battery-operated devices. By facilitating high-level image and voice recognition capabilities whilst minimizing energy consumption, WiseEye2 empowers a range of devices to become smarter and more interactive.

Himax Technologies, Inc.
AI Processor, Vision Processor
View Details

Dynamic Neural Accelerator II Architecture

Dynamic Neural Accelerator II (DNA-II) by EdgeCortix enhances the processing capabilities of AI hardware through its state-of-the-art, reconfigurable architecture. This versatile IP core is tailored for edge applications, enabling seamless execution of complex AI tasks in both convolutional and transformer network contexts. With runtime configurability, DNA-II offers high efficiency, optimizing interconnects between compute units to maximize parallel processing. The DNA-II architecture leverages proprietary technology to reconfigure data paths dynamically, reducing on-chip memory bandwidth and achieving higher compute utilization than standard approaches. Designed to interface with various host processors, DNA-II is adaptable to multiple system-on-chip (SoC) implementations demanding high parallelism and low latency, and it is a pivotal part of the SAKURA-II ecosystem, contributing significantly to its generative AI capabilities. A key advantage of DNA-II is its ability to scale performance from 1K MACs upward, facilitating customization across different application scales and requirements. Supported by the MERA software stack, DNA-II optimizes computation and resource allocation efficiently, making it well suited to developers enhancing edge AI solutions with powerful, innovative IP.

EdgeCortix Inc.
AI Processor, Processor Core Independent
View Details

RAIV General Purpose GPU

The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

KL720 AI SoC

Optimized for performance-to-power, the KL720 AI SoC is a formidable choice for high-end applications demanding power efficiency. It supports extensive real-world use cases such as smart TVs and AI glasses, featuring a powerful architecture designed for seamless 4K video and complex AI processes, including facial recognition and gaming interfaces.

Kneron
TSMC
28nm
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 represents a significant leap in neuromorphic microcontroller technology, blending ultra-low power consumption with advanced spiking neural network capabilities. This microcontroller stands as a complete solution for processing sensor data with unprecedented efficiency and speed, bringing intelligence directly to the sensor.

Incorporating a nimble RISC-V processor core alongside its spiking neural network engine, the T1 is engineered for seamless integration into next-generation AI applications. Within a tightly constrained power envelope, it excels at signal processing tasks that are crucial for battery-operated, latency-sensitive devices. The T1's architecture allows for fast, sub-1mW pattern recognition, enabling real-time sensory data processing akin to the human brain's capabilities. This microcontroller facilitates complex event-driven processing with remarkable efficiency, reducing the burden on application processors by offloading sensor data processing tasks. It is an enabler of groundbreaking developments in wearables, ambient intelligence, and smart devices, particularly in scenarios where power and response time are critical constraints.

With flexible interface support, including QSPI, I2C, UART, and more, the T1 is designed for easy integration into existing systems. Its compact package size further enhances its suitability for embedded applications, while its comprehensive Evaluation Kit (EVK) supports developers in accelerating application development. The EVK provides extensive performance profiling tools, enabling the exploration of the T1's multifaceted processing capabilities. Overall, the T1 stands at the forefront of bringing brain-inspired intelligence to the edge, setting a new standard for smart sensor technology.
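The value of a sub-1 mW always-on front end shows up in the system power budget: the neuromorphic processor screens sensor data continuously and wakes a far hungrier application processor only on detected events. The sketch below illustrates the idea with assumed power figures and duty cycle, not Innatera-published numbers:

```python
# Average system power with an always-on screening NPU that wakes the
# application processor (AP) only on detected events. All power figures
# and the 1% wake duty cycle are illustrative assumptions.

def avg_power_mw(npu_mw: float, ap_mw: float, ap_duty: float) -> float:
    """Average power when the AP is awake for the given fraction of time."""
    return npu_mw + ap_mw * ap_duty

always_on_ap = avg_power_mw(0.0, 100.0, 1.0)    # AP always awake: 100.0 mW
duty_cycled = avg_power_mw(0.8, 100.0, 0.01)    # sub-1 mW NPU + 1% AP duty: 1.8 mW
print(f"{always_on_ap / duty_cycled:.0f}x lower average power")
```

Under these assumptions the average draw falls by more than an order of magnitude, which is the mechanism behind the battery-life claims for this class of device.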

Innatera Nanosystems
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

H.264 FPGA Encoder and CODEC Micro Footprint Cores

This H.264 FPGA Encoder and CODEC Micro Footprint Core is engineered to achieve minimal latency and compact size when deployed in FPGA environments. It is customizable and ITAR compliant, providing robust 1080p60 H.264 Baseline support on a single core. Known for its remarkable speed and small footprint, this core adapts to various configurations, including complete H.264 encoders and I-Frame Only variations, supporting custom pixel depths and unique resolutions. The core's design focuses on reducing latency to a mere 1 millisecond at 1080p30, setting a high industry standard for performance. Flexibility in deployment allows this core to meet bespoke requirements, offering significant value for customer-specific applications. It stands as a versatile solution for applications demanding high-speed video processing while maintaining compliance with industry standards. Supporting a variety of FPGA platforms, the core is especially valuable in environments where space and power constraints are crucial. Its adaptability, combined with A2e's integration capabilities, ensures seamless incorporation into existing systems, bolstering performance and development efficiency.
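To put the quoted 1 ms at 1080p30 in context: a frame at 30 fps lasts about 33 ms, so 1 ms of pipeline delay corresponds to only a few dozen video lines of buffering. A quick illustrative calculation (the lines-of-video interpretation is an assumption for intuition, not an A2e specification):

```python
# Frame-time context for a 1 ms encode latency at 1080p30.
fps = 30
frame_lines = 1080
frame_period_ms = 1000.0 / fps                 # ≈ 33.33 ms per frame
latency_ms = 1.0
lines_buffered = frame_lines * latency_ms / frame_period_ms
print(f"~{lines_buffered:.0f} of {frame_lines} lines")   # → ~32 of 1080 lines
```

By contrast, a whole-frame-buffered encoder would add 33 ms or more, which is why sub-frame latency matters for interactive and closed-loop video applications.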

A2e Technologies
AI Processor, AMBA AHB / APB/ AXI, Arbiter, Audio Controller, H.264, H.265, Multiprocessor / DSP, Other, TICO, USB, Wireless Processor
View Details

Veyron V1 CPU

The Veyron V1 is a high-performance RISC-V CPU aimed at data centers and similar applications that require robust computing power. It integrates with various chiplet and IP cores, making it a versatile choice for companies looking to create customized solutions. The Veyron V1 is designed to offer competitive performance against x86 and ARM counterparts, providing a seamless transition between different node process technologies. This CPU benefits from Ventana's innovation in RISC-V technology, where efforts are placed on providing an extensible architecture that facilitates domain-specific acceleration. With capabilities stretching from hyperscale computing to edge applications, the Veyron V1 supports extensive instruction sets for high-throughput operations. It also boasts leading-edge chiplet interfaces, opening up numerous opportunities for rapid productization and cost-effective deployment. Ventana's emphasis on open standards ensures that the Veyron V1 remains an adaptable choice for businesses aiming at bespoke solutions. Its compatibility with system IP and its provision in multiple platform formats—including chiplets—enable businesses to leverage the latest technological advancements in RISC-V. Additionally, the ecosystem surrounding the Veyron series ensures support for both modern software frameworks and cross-platform integration.

Ventana Micro Systems
TSMC
10nm, 16nm
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

NeuroMosAIc Studio

NeuroMosAIc Studio is a comprehensive software platform designed to maximize AI processor utilization through intuitive model conversion, mapping, simulation, and profiling. This advanced software suite supports Edge AI models by optimizing them for specific application needs. It offers precision analysis, network compression, and quantization tools to streamline the process of deploying AI models across diverse hardware setups. The platform is notably adept at integrating multiple AI functions and facilitating edge training processes. With tools like the NMP Compiler and Simulator, it allows developers to optimize functions at different stages, from quantization to training. The Studio's versatility is crucial for developers seeking to enhance AI solutions through customized model adjustments and optimization, ensuring high performance across AI systems. NeuroMosAIc Studio is particularly valuable for its edge training support and comprehensive optimization capabilities, paving the way for efficient AI deployment in various sectors. It offers a robust toolkit for AI model developers aiming to extract the maximum performance from hardware in dynamic environments.

AiM Future
AI Processor, CPU, IoT Processor
View Details

CTAccel Image Processor on Intel Agilex FPGA

The CTAccel Image Processor on Intel Agilex FPGA delivers high-performance image processing by capitalizing on the robust capabilities of Intel's Agilex FPGAs. Leveraging the 10 nm SuperFin process technology, these FPGAs suit applications demanding high performance, power efficiency, and compact size; their advanced DSP blocks and high-speed transceivers accelerate image processing tasks that are computationally intensive when executed on CPUs. The IP increases image processing throughput by up to 20 times while reducing latency, and its low power consumption cuts operational and maintenance costs by requiring fewer server instances. The solution is fully compatible with mainstream image processing software, allowing seamless integration that preserves existing software investments. Because the FPGA can be reconfigured remotely, the IP can be tailored to specific image processing scenarios without a server reboot. This ease of maintenance, combined with a substantial boost in compute density, makes the IP well suited to high-demand image processing environments such as data centers and cloud computing platforms.

CTAccel Ltd.
Intel Foundry
12nm
AI Processor, DLL, Graphics & Video Modules, Image Conversion, JPEG, JPEG 2000, Processor Core Independent, Vision Processor
View Details

NPU

The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

Ncore Cache Coherent Interconnect

The Ncore Cache Coherent Interconnect from Arteris addresses the complications of multi-core SoC design, providing heterogeneous coherency and efficient caching. It is distinguished by high throughput, supporting reliable, high-performance systems-on-chip (SoCs). Ncore's configurable fabric lets designers build a multi-die, multi-protocol coherent interconnect into which emerging technologies such as RISC-V can seamlessly integrate. Its adaptable, scalable design supports a broad range of targets, from small embedded systems to multi-billion-transistor architectures.

A key strength is Ncore's ISO 26262 ASIL D readiness, enabling designers to meet stringent automotive safety standards. Coupled with Magillem™ automation, it supports rapid IP integration, simplifying multi-die designs and compressing development timelines.

To address modern computational demands, Ncore is reinforced by robust quality-of-service parameters, secure power management, and seamless integration capabilities, making it an essential asset for building scalable system architectures. By streamlining memory operations and optimizing data flow, it delivers bandwidth that supports both high-end automotive and complex consumer electronics applications.
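The heterogeneous coherency such an interconnect provides is conventionally described in terms of per-line cache states. The sketch below models the classic MESI state machine as a conceptual illustration of what a coherent fabric arbitrates between cores; it is a textbook model, not Arteris' actual protocol or API.

```python
# Conceptual sketch: textbook MESI cache-coherency transitions for a
# single cache line in one core's cache. Not Arteris' protocol.

def on_local_read(state, others_have_copy):
    """A miss on an Invalid line fills Shared or Exclusive; hits stay put."""
    if state == "I":
        return "S" if others_have_copy else "E"
    return state  # M, E, S all satisfy reads locally

def on_local_write(state):
    """A write always ends Modified; E upgrades silently, S/I need the bus."""
    return "M"

def on_remote_read(state):
    """Another core reads the line: M/E copies are downgraded to Shared."""
    return "S" if state in ("M", "E") else state

def on_remote_write(state):
    """Another core writes the line: our copy is invalidated."""
    return "I"

# A read miss on an uncached line, with no other sharers, fills Exclusive;
# a local write then upgrades silently; a remote read snoops it to Shared.
s = on_local_read("I", others_have_copy=False)  # "E"
s = on_local_write(s)                           # "M"
s = on_remote_read(s)                           # "S"
print(s)  # prints "S"
```

Real interconnects like Ncore track these states (and richer variants) across heterogeneous agents and multiple dies, which is where the configurable-fabric and quality-of-service machinery described above comes in.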

Arteris
15 Categories
View Details

RISC-V Core IP

The RISC-V Core IP by AheadComputing Inc. exemplifies cutting-edge processor technology in the realm of 64-bit application processing. Designed for superior IPC (instructions per cycle), this core is engineered to enhance per-core computing capabilities for high-performance computing needs, reflecting AheadComputing's commitment to setting new industry standards in processor speed.

The core is instrumental for applications requiring robust processing power, performing seamlessly across environments from consumer electronics to enterprise solutions and advanced computational fields. The innovation behind this IP reflects the deep expertise of AheadComputing's experienced team.

The RISC-V Core IP also supports diverse computing needs through adaptable and scalable solutions. AheadComputing leverages the open-source RISC-V architecture to offer customizable computing power that is both versatile and future-ready, delivering efficiency and power optimization for sophisticated applications.

AheadComputing Inc.
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series is a family of processor cores that bring advanced customization to the forefront of embedded design. The cores are optimized for power and performance, striking a balance suited to applications from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design lets developers tailor instructions and performance levels directly to their needs, providing a flexible platform for both existing and new applications.

With a high degree of configurability, the BK Core Series enables designers to achieve superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified using industry-leading practices, ensuring robust and reliable operation across application environments.

Codasip has made the BK Core Series straightforward to use and adapt, emphasizing simplicity and productivity in customizing the processor architecture. This ease of use allows swift validation and deployment, shortening time to market and reducing the costs associated with custom hardware design.

Codasip
AI Processor, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

aiData

aiData functions as a backbone for automated driving development, providing a fully automated data pipeline tailored to ADAS and autonomous driving (AD) applications. The pipeline streamlines the machine learning operations (MLOps) workflow from data collection through curation and annotation, minimizing manual intervention. By leveraging AI-driven processes, aiData significantly reduces the resources required for data preparation and validation, making high-quality data more accessible for training sophisticated AI models.

A key feature of aiData is its comprehensive versioning system, which ensures complete transparency and traceability throughout the data lifecycle. This is pivotal for maintaining high data quality, allowing developers to track changes and updates efficiently. aiData also includes advanced AI-assisted annotation tools that enable rapid, accurate labeling of both moving and static objects, which is particularly valuable for building the dynamic, contextually rich datasets needed to train robust AD systems.

Beyond data preparation, aiData integrates seamlessly with existing data infrastructure, supporting both on-premises and cloud-based deployment to meet varying security and collaboration needs. As automotive companies face growing data requirements, aiData's scalable, modular architecture adapts to evolving project demands, offering invaluable support for the rapid deployment and validation of ADAS technologies.
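Dataset versioning systems of the kind described above are commonly built on content addressing: the version identifier is derived from the data itself, so any change to a sample produces a new, traceable version. The sketch below shows the general technique; the names and record shapes are illustrative and not aiMotive's actual API.

```python
# Conceptual sketch of content-addressed dataset versioning, the general
# technique behind traceable data pipelines. Illustrative only; this is
# not aiMotive's aiData API.
import hashlib
import json

def version_of(samples):
    """Derive a stable version id from dataset content, independent of order."""
    digests = sorted(
        hashlib.sha256(json.dumps(s, sort_keys=True).encode()).hexdigest()
        for s in samples
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()[:12]

v1 = version_of([{"frame": 1, "label": "car"}, {"frame": 2, "label": "pedestrian"}])
v2 = version_of([{"frame": 2, "label": "pedestrian"}, {"frame": 1, "label": "car"}])
v3 = version_of([{"frame": 1, "label": "truck"}, {"frame": 2, "label": "pedestrian"}])

print(v1 == v2)  # True: same content, different order -> same version
print(v1 == v3)  # False: a relabeled sample yields a new version
```

Because the version id is a pure function of content, two teams holding the same id are guaranteed to be training on identical data, which is the traceability property the description highlights.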

aiMotive
AI Processor, Audio Interfaces, Digital Video Broadcast, Embedded Memories, H.264, Processor Core Dependent
View Details