
Multiprocessor and DSP Semiconductor IP

In the realm of semiconductor IP, the Multiprocessor and Digital Signal Processor (DSP) category plays a crucial role in enhancing the processing performance and efficiency of a vast array of modern electronic devices. Semiconductor IPs in this category are designed to support complex computational tasks, enabling sophisticated functionalities in consumer electronics, automotive systems, telecommunications, and more. With the growing need for high-performance processing in a compact and energy-efficient form, multiprocessor and DSP IPs have become integral to product development across industries.

Multiprocessor IPs are tailored to provide parallel processing capabilities, significantly boosting the computational power available to intensive applications. By employing multiple processing cores, these IPs allow multiple tasks to execute concurrently, leading to faster data processing and improved system performance. This is especially vital in applications such as gaming consoles, smartphones, and advanced driver-assistance systems (ADAS) in vehicles, where seamless and rapid processing is essential.
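The split-compute-combine pattern that multi-core IPs exploit can be sketched in plain Python. This is a conceptual illustration only — threads stand in for hardware cores, and none of the names below come from any vendor's toolchain:

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of parallel decomposition: split a workload into
# chunks, process the chunks concurrently (threads stand in for cores),
# then combine the partial results.
def chunk_sum_of_squares(chunk):
    return sum(v * v for v in chunk)

data = list(range(100_000))
chunks = [data[i:i + 25_000] for i in range(0, len(data), 25_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(chunk_sum_of_squares, chunks))

total = sum(partials)  # identical to computing the sum serially
```

The combined result is independent of how many workers ran the chunks — the property that lets multiprocessor hardware scale throughput by adding cores.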

Digital Signal Processors are specialized semiconductor IPs used to perform mathematical operations on signals, allowing for efficient processing of audio, video, and other types of data streams. DSPs are indispensable in applications where real-time data processing is critical, such as noise cancellation in audio devices, image processing in cameras, and signal modulation in communication systems. By providing dedicated hardware structures optimized for these tasks, DSP IPs deliver superior performance and lower power consumption compared to general-purpose processors.
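As a concrete example of the "mathematical operations on signals" a DSP accelerates, here is a minimal FIR smoothing filter in Python with NumPy — generic textbook DSP, not any vendor's IP:

```python
import numpy as np

# A 5-tap moving-average FIR filter: each output sample is a small dot
# product (multiply-accumulate), exactly the workload DSP hardware is
# built to execute in parallel at low power.
def fir_filter(x, taps):
    return np.convolve(x, taps, mode="valid")

taps = np.ones(5) / 5.0                              # simple low-pass
t = np.linspace(0.0, 1.0, 100)
clean = np.sin(2 * np.pi * 3 * t)                    # 3 Hz tone
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
smoothed = fir_filter(noisy, taps)                   # 96 samples out
```

A dedicated DSP core executes the per-sample multiply-accumulates of such filters in hardware, which is where the performance and power advantage over a general-purpose CPU comes from.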

Products in the multiprocessor and DSP semiconductor IP category range from core subsystems and configurable processors to specialized accelerators and integrated solutions that combine processing elements with other essential components. These IPs are designed to help developers create cutting-edge solutions that meet the demands of today’s technology-driven world, offering flexibility and scalability to adapt to different performance and power requirements. As technology evolves, the importance of multiprocessor and DSP IPs will continue to grow, driving innovation and efficiency across various sectors.

All semiconductor IP

Akida 2nd Generation

The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

Axelera AI's Metis AIPU PCIe AI Accelerator Card is engineered to deliver top-tier inference performance for computation-heavy AI tasks. Built to exacting standards, the card packages exceptional processing power into a versatile PCIe form factor, ideal for integration into computing systems ranging from workstations to servers.

Equipped with a quad-core Metis AI Processing Unit (AIPU), the card handles complex AI models and extensive data streams. It efficiently processes multiple camera inputs and supports independent parallel neural network operations, making it indispensable for dynamic fields such as industrial automation, surveillance, and high-performance computing.

The card's performance is further enhanced by the Voyager SDK, which streamlines AI model deployment so developers can focus on model logic and innovation. It offers broad compatibility with mainstream AI frameworks, ensuring flexibility and ease of integration across diverse use cases. With its power-efficient design, this PCIe AI Accelerator Card bridges the gap between traditional GPU solutions and today's advanced AI demands.

Axelera AI
13 Categories
View Details

Universal Chiplet Interconnect Express (UCIe)

Universal Chiplet Interconnect Express, or UCIe, is a forward-looking interconnect technology that enables high-speed data exchanges between various chiplets. Developed to support a modular approach in chip design, UCIe enhances flexibility and scalability, allowing manufacturers to tailor systems to specific needs by integrating multiple functions into a single package. The architecture of UCIe facilitates seamless data communication, crucial in achieving high-performance levels in integrated circuits. It is designed to support multiple configurations and implementations, ensuring compatibility across different designs and maximizing interoperability. UCIe is pivotal in advancing the chiplet strategy, which is becoming increasingly important as devices require more complex and diverse functionalities. By enabling efficient and quick interchip communication, UCIe supports innovation in the semiconductor field, paving the way for the development of highly efficient and sophisticated systems.

EXTOLL GmbH
GLOBALFOUNDRIES, Samsung, TSMC, UMC
22nm, 28nm
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Network on Chip, Processor Core Independent, USB, V-by-One, VESA
View Details

Yitian 710 Processor

The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

Chimera GPNPU

Quadric's Chimera GPNPU is an adaptable processor core designed to respond efficiently to the demand for AI-driven computations across multiple application domains. Offering up to 864 TOPS, this licensable core seamlessly integrates into system-on-chip designs needing robust inference performance. By maintaining compatibility with all forms of AI models, including cutting-edge large language models and vision transformers, it ensures long-term viability and adaptability to emerging AI methodologies. Unlike conventional architectures, the Chimera GPNPU excels by permitting complete workload management within a singular execution environment, which is vital in avoiding the cumbersome and resource-intensive partitioning of tasks seen in heterogeneous processor setups. By facilitating a unified execution of matrix, vector, and control code, the Chimera platform elevates software development ease, and substantially improves code maintainability and debugging processes. In addition to high adaptability, the Chimera GPNPU capitalizes on Quadric's proprietary Compiler infrastructure, which allows developers to transition rapidly from model conception to execution. It transforms AI workflows by optimizing memory utilization and minimizing power expenditure through smart data storage strategies. As AI models grow increasingly complex, the Chimera GPNPU stands out for its foresight and capability to unify AI and DSP tasks under one adaptable and programmable platform.

Quadric
16 Categories
View Details

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

xcore.ai

xcore.ai by XMOS is a groundbreaking solution designed to bring intelligent functionality to the forefront of semiconductor applications. It enables powerful real-time execution of AI, DSP, and control functionalities, all on a single, programmable chip. The flexibility of its architecture allows developers to integrate various computational tasks efficiently, making it a fitting choice for projects ranging from smart audio devices to automated industrial systems. With xcore.ai, XMOS provides the technology foundation necessary for swift deployment and scalable application across different sectors, delivering high performance in demanding environments.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI provides an exceptional balance of performance and size, perfectly suited for edge AI applications. Designed for high-performance tasks, this module is powered by a single Metis AI Processing Unit (AIPU), which offers cutting-edge inference capabilities. With this M.2 card module, developers can easily integrate AI processing power into compact devices.

This module accommodates demanding AI workloads, enabling applications to perform complex computations with efficiency. Thanks to its low power consumption and versatile integration capabilities, it opens new possibilities for use in edge devices that require robust AI processing power. The Metis AIPU M.2 module supports a wide range of AI models and pipelines, facilitated by Axelera's Voyager SDK software platform, which ensures seamless deployment and optimization of AI models.

The module's versatile design allows for streamlined concurrent multi-model processing, significantly boosting the device's AI capabilities without the need for external data centers. Additionally, it supports advanced quantization techniques, providing users with increased prediction accuracy for high-stakes applications.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CAN, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, VGA, Vision Processor, WMV
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is a comprehensive toolset designed to streamline the development and deployment of neuromorphic AI applications. Leveraging a PyTorch-integrated environment, Talamo simplifies the creation of powerful AI models for deployment on the Spiking Neural Processor. It provides developers with a user-friendly workflow, reducing the complexity usually associated with spiking neural networks. This SDK facilitates the construction of end-to-end application pipelines through a familiar PyTorch framework. By grounding development in this standard workflow, Talamo removes the need for deep expertise in spiking neural networks, offering pre-built models that are ready to use. The SDK also includes capabilities for compiling and mapping trained models onto the processor's hardware, ensuring efficient integration and utilization of computing resources. Moreover, Talamo supports an architecture simulator which allows developers to emulate hardware performance during the design phase. This feature enables rapid prototyping and iterative design, which is crucial for optimizing applications for performance and power efficiency. Thus, Talamo not only empowers developers to build sophisticated AI solutions but also ensures these solutions are practical for deployment across various devices and platforms.

Innatera Nanosystems
All Foundries
All Process Nodes
AI Processor, Content Protection Software, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

aiWare

The aiWare Neural Processing Unit (NPU) is an advanced hardware solution engineered for the automotive sector, highly regarded for its efficiency in neural network acceleration for automated driving. The NPU handles a broad scope of AI workloads, including complex neural network models such as CNNs and RNNs, and scales across performance tiers from L2 driver assistance to more demanding L4 systems. With industry-leading efficiency, the aiWare hardware IP achieves up to 98% efficiency across a range of automotive neural networks. It supports the large sensor configurations typical of automotive contexts and maintains reliable performance under rigorous conditions, validated by ISO 26262 ASIL B certification. aiWare is not only power-efficient but built on a scalable architecture providing up to 1024 TOPS, meeting the demands of high-performance processing. Furthermore, aiWare is designed for integration into safety-critical environments, offering highly deterministic operation. It minimizes external memory traffic through an innovative dataflow approach, maximizing on-chip memory utilization and reducing system power. With extensive documentation for integration and customization, aiWare stands out as a crucial component for OEMs and Tier 1s looking to optimize advanced driver-assistance functionality.

aiMotive
12 Categories
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator by EdgeCortix is an advanced processor designed for energy-efficient, real-time AI inferencing. It supports complex generative AI models such as Llama 2 and Stable Diffusion within a power envelope of just 8 watts, making it ideal for applications requiring swift, on-the-fly Batch=1 AI processing. While maintaining critical performance metrics, it can simultaneously run multiple deep neural network models, facilitated by its Dynamic Neural Accelerator (DNA) core. The SAKURA-II stands out with its high utilization of AI compute resources, robust memory bandwidth, and sizable DRAM capacity options of up to 32GB, all in a compact form factor. With market-leading energy efficiency, the SAKURA-II supports diverse edge AI applications, from vision and language to audio, thanks to hardware-accelerated arbitrary activation functions and advanced power management features. Designed for Arm and other platforms, the SAKURA-II can be easily integrated into existing systems, deploying AI models at low power even for demanding workloads. EdgeCortix's AI accelerator excels with innovative features like sparse computing to optimize DRAM bandwidth and real-time data streaming for Batch=1 operations, ensuring fast and efficient AI computations. It offers unmatched adaptability in power management, enabling ultra-high-efficiency modes for processing complex AI tasks while maintaining high precision and low latency.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. The technology integrates with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over hybrid cloud AI solutions. The GenAI v1 NPU runs large language models directly on the hardware, with no need for external components such as host CPUs or an internet connection. Supporting models as complex as Llama 3.2 with 4-bit quantization on LPDDR4 memory, it achieves high efficiency in AI token processing together with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness: in memory efficiency it significantly outperforms competing solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs. The design maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary bottlenecks in generative AI workloads. Its efficient memory usage also reduces dependence on costly memory types such as HBM, opening the door to more affordable alternatives without diminishing processing capability.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, giving users the flexibility to balance performance against hardware cost. Compatibility with a wide range of transformer-based models, including proprietary modifications, positions GenAI v1 across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. GenAI v1 supports both vanilla and quantized AI models at the computation speeds real-time applications require, without compromising accuracy. This capability underpins RaiderChip's strategic vision of enabling versatile and sustainable AI solutions across industries; by prioritizing ease of integration and operational independence, it provides a tangible edge in applying generative AI effectively and widely.

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator executes large language models in real time using advanced quantization techniques such as Q4_K and Q5_K, enhancing AI inference efficiency in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, without reliance on external networks or cloud services. Its design combines superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring parameters such as model scale, inference speed, and power consumption to exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to run multiple transformer-based models and handle confidential data securely on premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
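To make the memory arithmetic concrete, here is a generic per-block 4-bit quantization sketch in Python — the family of techniques that names like Q4_K belong to, not RaiderChip's actual implementation (block size, scale encoding, and rounding are illustrative assumptions):

```python
import numpy as np

# Generic per-block symmetric 4-bit quantization: each block of 32
# weights stores 4-bit integers plus one float scale. A 32-weight block
# shrinks from 1024 bits (float32) to ~160 bits, roughly a 6x saving,
# at the cost of a small rounding error.
def quantize_4bit(w, block=32):
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # map to [-7, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q * scale

w = np.random.default_rng(1).normal(size=(4, 32)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s).reshape(4, 32)
err = np.abs(w - w_hat).max()   # bounded by half a quantization step
```

The worst-case error per weight is half a quantization step (scale / 2), which is why per-block scales — rather than one scale for the whole tensor — keep quantized models close to their full-precision accuracy.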

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core offered by Micro Magic is engineered for high-performance applications while maintaining a low power profile. Drawing just 10mW at 1GHz, the core reflects Micro Magic's commitment to energy-efficient design without compromising on speed. Leveraging design techniques that allow operation at lower voltages, the core achieves remarkable performance metrics and can reach 5GHz under optimal operating conditions, demonstrating its ability to handle demanding processing tasks. This makes it particularly valuable where both speed and power efficiency are critical, such as portable and embedded systems. Micro Magic's implementation supports seamless integration into various computing infrastructures, accommodating the diverse requirements of modern technology solutions. Moreover, the architectural design harnesses the strengths of RISC-V's open and flexible standard, ensuring that users benefit from both adaptability and performance. As one of Micro Magic's standout offerings, this core is poised to make a significant impact in high-demand environments, providing a blend of economy, speed, and reliability.

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

NMP-750

The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

Ceva-SensPro2 - Vision AI DSP

The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
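The quoted TOPS figures follow from simple arithmetic on the MAC counts. The sketch below uses the common convention that one MAC counts as two operations (multiply + add); the clock frequencies are illustrative assumptions that make the numbers line up, not Ceva-published specifications:

```python
# Back-of-envelope TOPS from MAC count: ops/s = MACs * 2 * clock.
def tops(num_macs: int, clock_ghz: float) -> float:
    return num_macs * 2 * clock_ghz / 1000.0  # giga-ops -> tera-ops

# Ceva-SP1000: 1024 8-bit MACs; ~1 GHz would yield the quoted ~2 TOPS.
sp1000 = tops(1024, 1.0)

# Ceva-SP100: 128 8-bit MACs; ~0.8 GHz would yield the quoted ~0.2 TOPS.
sp100 = tops(128, 0.8)
```

The same formula explains why the SP1000's 8x MAC advantage over the SP100 translates into roughly a 10x TOPS advantage once clock differences are factored in.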

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator revolutionizes computing with its Intelligent Compute Architecture (ICA), delivering unparalleled performance and efficiency for HPC and AI applications. This innovative product leverages real-time adaptability, enabling it to optimize hardware configurations dynamically to match the specific demands of various software workloads. Its standout feature is the elimination of domain-specific languages, offering a universal solution for scientific and technical computing. Equipped with a robust developer toolchain that supports popular languages and standards like C, C++, Fortran, and OpenMP, the Maverick-2 integrates seamlessly into existing workflows, minimizing code rewrites while maximizing developer productivity. By also supporting established GPU programming models such as CUDA and HIP/ROCm, Maverick-2 remains a viable and potent solution for current and future computing challenges. Built on TSMC's advanced 5nm process, the accelerator incorporates HBM3E memory and high-bandwidth PCIe Gen 5 interfaces, supporting demanding computations with remarkable efficiency. The Maverick-2 achieves a significant power-performance advantage, making it ideal for data centers and research facilities aiming for greater sustainability without sacrificing computational power.

Next Silicon Ltd.
TSMC
5nm
11 Categories
View Details

Digital PreDistortion (DPD) Solution

Digital Predistortion (DPD) is a sophisticated technology crafted to optimize the power efficiency of RF power amplifiers. The flagship product, FlexDPD, presents a complete, adaptable sub-system that can be customized to any ASIC or FPGA/SoC platform. Thanks to its scalability, it is compatible with various device vendors. Designed for high performance, this DPD solution significantly boosts RF efficiency by counteracting the nonlinear distortion that power amplifiers introduce when driven near saturation, ensuring clean and effective transmission. The core of the DPD solution lies in its adaptability to a broad range of systems including 5G, multi-carrier platforms, and O-RAN frameworks. It's built to handle transmission bandwidths exceeding 1 GHz, making it a versatile and future-proof technology. This capability not only enhances system robustness but also offers a seamless integration pathway for next-generation communication standards. Additionally, Systems4Silicon’s DPD solution is field-tested, ensuring reliability in real-world applications. The solution is particularly beneficial for projects that demand high signal integrity and efficiency, providing a tangible advantage in competitive markets. Its compatibility with both ASIC and FPGA implementations offers flexibility and choice to partners, significantly reducing development time and cost.
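The core idea behind any DPD scheme can be sketched with a memoryless polynomial model: the predistorter applies the approximate inverse of the amplifier's nonlinearity so that the cascade of predistorter and amplifier is nearly linear. This is a generic illustration only; FlexDPD's actual algorithm is not described here, and the amplifier model and coefficients below are invented:

```python
def pa_model(x: complex) -> complex:
    """Toy power-amplifier nonlinearity: gain compression at high drive."""
    return x * (1.0 - 0.2 * abs(x) ** 2)

def predistort(x: complex) -> complex:
    """Odd-order polynomial predistorter (coefficients are illustrative)."""
    return x * (1.0 + 0.2 * abs(x) ** 2 + 0.08 * abs(x) ** 4)

x = 0.5 + 0.3j                        # one baseband sample
raw = pa_model(x)                     # distorted output, gain < 1
linearized = pa_model(predistort(x))  # much closer to the ideal output x

print(abs(raw - x), abs(linearized - x))  # linearized error is smaller
```

In a real system the predistorter coefficients are adapted continuously from feedback samples of the amplifier output, rather than fixed as here.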

Systems4Silicon
3GPP-5G, CAN-FD, Coder/Decoder, Ethernet, HDLC, MIL-STD-1553, Modulation/Demodulation, Multiprocessor / DSP, PLL, RapidIO
View Details

NPU

The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

H.264 FPGA Encoder and CODEC Micro Footprint Cores

The H.264 FPGA Encoder and CODEC Micro Footprint Cores are versatile, ITAR-compliant solutions providing high-performance video compression tailored for FPGAs. These H.264 cores leverage industry-leading technology to offer 1080p60 H.264 Baseline support in a compact design, presenting one of the fastest and smallest FPGA cores available. Customizable features allow for unique pixel depths and resolutions, with particular configurations including an encoder, CODEC, and I-Frame only encoder options, making this IP adaptable to varied video processing needs. Designed with precision, these cores introduce significant latency improvements, such as achieving 1ms latency at 1080p30. This capability not only enhances real-time video processing but also optimizes integration with existing electronic systems. Licensing options are flexible, offering a cost-effective evaluation license to accommodate different project scopes and needs. Customization possibilities further extend to unique resolution and pixel depth requirements, supporting diverse application needs in fields like surveillance, broadcasting, and multimedia solutions. The core’s design ensures it can seamlessly integrate into a variety of platforms, including challenging and sophisticated FPGA applications, all while keeping development timelines and budgets in focus.

A2e Technologies
AI Processor, AMBA AHB / APB/ AXI, Arbiter, Audio Controller, DVB, H.264, H.265, HDMI, Multiprocessor / DSP, Other, TICO, USB, Wireless Processor
View Details

RISC-V Core IP

The RISC-V Core IP developed by AheadComputing Inc. stands out in the field of 64-bit application processors. Designed to deliver exceptional per-core performance, this processor is engineered with the highest standards to maximize the Instructions Per Cycle (IPC) efficiency. AheadComputing's RISC-V Core IP is continuously refined to address the growing demands of high-performance computing applications. The innovative architecture of this core allows for seamless execution of complex algorithms while achieving superior speed and efficiency. This design is crucial for applications that require fast data processing and real-time computational capabilities. By integrating advanced power management techniques, the RISC-V Core IP ensures energy efficiency without sacrificing performance, making it suitable for a wide range of electronic devices. Anticipating future computing needs, AheadComputing's RISC-V Core IP incorporates state-of-the-art features that support scalability and adaptability. These features ensure that the IP remains relevant as technology evolves, providing a solid foundation for developing next-generation computing solutions. Overall, it embodies AheadComputing’s commitment to innovation and performance excellence.

AheadComputing Inc.
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

Digital Radio (GDR)

The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.

GIRD Systems, Inc.
3GPP-5G, 3GPP-LTE, 802.11, Coder/Decoder, CPRI, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Independent
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is an advanced microcontroller engineered for highly efficient always-on sensing tasks. Integrating a low-power spiking neural network engine with a RISC-V processor core, the T1 provides a compact solution for rapid sensor data processing. Its design supports next-generation AI applications and signal processing while maintaining a minimal power footprint. The processor excels in scenarios requiring both high power efficiency and fast response. By employing a tightly-looped spiking neural network algorithm, the T1 can execute complex pattern recognition and signal processing tasks directly on-device. This autonomy enables battery-powered devices to operate intelligently and independently of cloud-based services, ideal for portable or remote applications. A notable feature includes its low-power operation, making it suitable for use in portable devices like wearables and IoT-enabled gadgets. Embedded with a RISC-V CPU and 384KB of SRAM, the T1 can interface with a variety of sensors through diverse connectivity options, enhancing its versatility in different environments.
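Spiking networks of the kind the T1 accelerates are typically built from leaky integrate-and-fire (LIF) neurons: the membrane potential decays each step, integrates input current, and emits a spike on crossing a threshold. The sketch below is a generic textbook model with illustrative constants, not Innatera's actual neuron implementation:

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron over a sequence of input currents.

    Returns a 0/1 spike train; the potential resets after each spike.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current    # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)      # threshold crossed: emit a spike
            v = 0.0               # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input drives periodic spiking; silence lets the potential decay.
print(lif_neuron([0.4] * 8 + [0.0] * 4))
```

Because computation happens only when spikes occur, activity (and hence power) scales with the information in the signal, which is the basis of the low-power always-on sensing claim.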

Innatera Nanosystems
UMC
28nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

eSi-3200

The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
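A wide multiply-accumulate path of the kind described maps directly onto FIR filtering: 16-bit products are summed in a wide accumulator so no precision is lost until a single final rescale. The Q15 sketch below illustrates the idea in plain Python (it is not eSi-RISC code, and the 3-tap filter is contrived):

```python
Q15 = 1 << 15  # Q15 fixed point: value x represents x / 32768

def fir_q15(samples, coeffs):
    """Fixed-point FIR filter with a wide (lossless) accumulator."""
    out = []
    for n in range(len(samples)):
        acc = 0                            # wide accumulator, like a 64-bit MAC
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += samples[n - k] * c  # 16x16-bit products, summed exactly
        out.append(acc >> 15)              # one rescale back to Q15 at the end
    return out

# 3-tap moving average: coefficients of 1/3 applied to a constant 0.5 input.
coeffs = [Q15 // 3] * 3
samples = [Q15 // 2] * 5
print([s / Q15 for s in fir_q15(samples, coeffs)])  # settles near 0.5
```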

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

NMP-350

The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

AI Inference Platform

Designed to cater to AI-specific needs, SEMIFIVE’s AI Inference Platform provides tailored solutions that seamlessly integrate advanced technologies to optimize performance and efficiency. This platform is engineered to handle the rigorous demands of AI workloads through a well-integrated approach combining hardware and software innovations matched with AI acceleration features. The platform supports scalable AI models, delivering exceptional processing capabilities for tasks involving neural network inference. With a focus on maximizing throughput and efficiency, it facilitates real-time processing and decision-making, which is crucial for applications such as machine learning and data analytics. SEMIFIVE’s platform simplifies AI implementation by providing an extensive suite of development tools and libraries that accelerate design cycles and enhance comprehensive system performance. The incorporation of state-of-the-art caching mechanisms and optimized data flow ensures the platform’s ability to handle large datasets efficiently.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

SCR9 Processor Core

Syntacore’s SCR9 processor core stands out as a powerful force in handling high-performance computing tasks with its dual-issue out-of-order 12-stage pipeline. This core is engineered for environments that demand peak computational ability and robust pipeline execution, crucial for data-intense tasks such as AI and ML, enterprise applications, and network processing. The architecture is tailored to support extensive multicore and heterogeneous configurations, providing valuable tools for developers aiming to maximize workload efficiency and processing speed. The inclusion of a vector processing unit (VPU) underscores its capability to handle large datasets and complex calculations, while maintaining system integrity and coherence through its comprehensive cache management. With support for hypervisor functionalities and scalable Linux environments, the SCR9 continues to be a key strategic element in expanding the horizons of RISC-V-based applications. Syntacore’s extensive library of development resources further enriches the usability of this core, ensuring that its implementation remains smooth and effective across diverse technological landscapes.

Syntacore
2D / 3D, AI Processor, Coprocessor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

NeuroMosAIc Studio

NeuroMosAIc Studio serves as a comprehensive software platform that simplifies the process of developing and deploying AI models. Designed to optimize edge AI applications, this platform assists users through model conversion, mapping, and simulation, ensuring optimal use of resources and efficiency. It offers capabilities like network quantization and compression, allowing developers to push the limits in terms of performance while maintaining compact model sizes. The studio also supports precision adjustments, providing deep insights into hardware optimization, and aiding in the generation of precise outputs tailored to specific application needs. AiM Future's NeuroMosAIc Studio boosts the efficiency of training stages and quantization, ultimately facilitating the delivery of high-quality AI solutions for both existing and emerging technologies. It's an indispensable tool for those looking to enhance AI capabilities in embedded systems without compromising on power or performance.

AiM Future
AI Processor, CPU, IoT Processor, Multiprocessor / DSP
View Details

NMP-550

Tailored for high efficiency, the NMP-550 accelerator advances performance in the fields of automotive, mobile, AR/VR, and more. Designed with versatility in mind, it finds applications in driver monitoring, video analytics, and security through its robust capabilities. Offering up to 6 TOPS of processing power, it includes up to 6 MB of local memory and a choice of RISC-V or Arm Cortex-M/A 32-bit CPU. In environments like drones, robotics, and medical devices, the NMP-550's enhanced computational capabilities enable advanced machine learning and AI functions. This is further supported by its ability to handle comprehensive data streams efficiently, making it ideal for tasks such as image analytics and fleet management. The NMP-550 exemplifies how AiM Future harnesses cutting-edge technology to develop powerful processors that meet contemporary demands for higher performance and integration into a multitude of smart technologies.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Receiver/Transmitter
View Details

RISCV SoC - Quad Core Server Class

The RISC-V SoC developed by Dyumnin Semiconductors is engineered with a 64-bit quad-core server-class RISC-V CPU, aiming to bridge various application needs with an integrated, holistic system design. Each subsystem of this SoC, from AI/ML capabilities to automotive and multimedia functionalities, is constructed to deliver optimal performance and streamlined operations. Designed as a reference model, this SoC enables quick adaptation and deployment, significantly reducing the time-to-market for clients. The AI Accelerator subsystem pairs a custom central processing unit with a specialized tensor unit to accelerate AI operations. In the multimedia domain, the SoC boasts integration capabilities for HDMI, Display Port, MIPI, and other advanced graphic and audio technologies, ensuring versatile application across various multimedia requirements. Memory handling is another strength of this SoC, with support for protocols ranging from DDR and MMC to more advanced interfaces like ONFI and SD/SDIO, ensuring seamless connectivity with a wide array of memory modules. Moreover, the communication subsystem encompasses a broad spectrum of connectivity protocols, including PCIe, Ethernet, USB, and SPI, crafting an all-rounded solution for modern communication challenges. The automotive subsystem, offering CAN and CAN-FD protocols, further extends its utility into automotive connectivity.

Dyumnin Semiconductors
28 Categories
View Details

Codasip L-Series DSP Core

The Codasip L-Series DSP Core stands out for its ability to handle computationally intensive algorithms with high efficiency, targeting applications that require significant digital signal processing capabilities. The L-Series is tailored for precision tasks such as audio processing and complex mathematical computations where performance and accuracy are imperative. This series benefits from a versatile architecture that can be customized to enhance specific signal processing needs, powered by the Codasip Studio. Modifications can be made at both the architectural and ISA levels to ensure the processor aligns perfectly with the workload's demands, enhancing performance while maintaining a compact footprint. Furthermore, the L-Series DSP cores are equipped to deliver powerful processing potential while ensuring power efficiency, essential for battery-operated devices or environments with power constraints. This series is optimal for developers seeking to implement DSP solutions in various domains, leveraging RISC-V's open standard benefits coupled with Codasip's customization tools.

Codasip
AI Processor, Audio Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Cores
View Details

iCan PicoPop® System on Module

The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.

OXYTRONIC
12 Categories
View Details

Network Protocol Accelerator Platform

This platform stands out for its ability to offload and accelerate network protocol processing at an impressive speed of up to 100 Gbps using FPGA technology. The Network Protocol Accelerator Platform is designed to enhance network-related tasks, providing distinct performance advantages by leveraging MLE's patented technology. This IP is highly suitable for those requiring efficient data processing in high-speed networking applications, offering scalable solutions from point-to-point connections to complex network systems. The platform's innovation lies in its ability to seamlessly manage a wide array of network protocols, making communication between devices efficient and effective. With its high-speed capability, the platform aids in reducing data processing time significantly. The robustness of this platform ensures that data integrity is maintained across various network tasks, including data acceleration and offloading critical network processes. Furthermore, this platform is particularly useful for industries like telecommunications and data centers where processing large volumes of data rapidly is crucial. The ability to upgrade and maintain such technology provides users with flexibility and adaptability in response to changing network demands. With its broad applicability, the Network Protocol Accelerator Platform remains a strategic asset for enhancing operational efficiency in digital infrastructure management.

Missing Link Electronics
AMBA AHB / APB/ AXI, ATM / Utopia, Cell / Packet, Ethernet, MIL-STD-1553, Multiprocessor / DSP, Optical/Telecom, RapidIO, Safe Ethernet, SATA, USB, V-by-One
View Details

Trifecta-GPU

The Trifecta-GPU delivers exceptional computational power by utilizing the NVIDIA RTX A2000 embedded GPU. Focused on the modular test-and-measurement and electronic-warfare markets, this GPU is capable of delivering 8.3 FP32 TFLOPS of compute performance. It is tailored for advanced signal processing and machine learning, making it indispensable for modern, software-defined signal processing applications. This GPU is a part of the COTS PXIe/CPCIe modular family, known for its flexibility and ease of use. The NVIDIA GPU integration means users can expect robust performance for AI inference applications, facilitating quick deployment in various scenarios requiring advanced data processing. Incorporating the latest in graphical performance, the Trifecta-GPU supports a broad range of applications, from high-end computing tasks to graphics-intensive processes. It is particularly beneficial for those needing a reliable and powerful GPU for modular T&M and EW projects.

RADX Technologies, Inc.
AI Processor, CPU, DSP Core, GPU, Multiprocessor / DSP, Peripheral Controller, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

TUNGA

TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. In the computing landscape, TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
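The benefit of a quire-style accumulator can be sketched in a few lines: products are accumulated exactly and rounded only once at the end, whereas ordinary floating-point accumulation rounds after every add and can lose small terms entirely. This is an illustration of exact accumulation in general, not of TUNGA's Posit hardware; the values are contrived to expose float rounding:

```python
from fractions import Fraction

def dot_float(xs, ys):
    """Ordinary float dot product: rounds after every addition."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot_quire(xs, ys):
    """Quire-style dot product: exact accumulation, one final rounding."""
    acc = Fraction(0)  # stand-in for a wide exact fixed-point accumulator
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)
    return float(acc)

xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(dot_float(xs, ys))  # the 1.0 term is absorbed by rounding: 0.0
print(dot_quire(xs, ys))  # exact: 1.0
```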

Calligo Technologies
AI Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

RISC-V Processor Core

The RISC-V Processor Core from Fraunhofer IPMS is engineered for flexibility and versatility in addressing a plethora of computational tasks. Leveraging the open-source RISC-V architecture, this processor core is suitable for a wide range of applications, from consumer electronics to specialized industrial use cases. By offering a broad canvas for customization, it enables manufacturers to tailor the processor to specific market needs, aligning with industry trends of adaptable hardware design. This core supports high-performance computations while maintaining energy efficiency, which is imperative for modern applications that demand rigorous processing without compromising power efficiency. Its structure allows for easy integration into various system environments, providing manufacturers with the advantage of implementing advanced features rapidly. The RISC-V Processor Core is particularly valuable in research and prototyping scenarios, where its open and modular design accelerates innovation and development cycles. This adaptability ensures that developers can keep pace with the rapid technological evolution in areas like IoT, edge computing, and AI, offering a robust foundation for next-generation computing solutions.

Fraunhofer Institute for Photonic Microsystems (IPMS)
CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Prodigy Universal Processor

The Prodigy Universal Processor by Tachyum is a versatile chip that merges the capabilities of CPUs, GPGPUs, and TPUs into a single architecture. This innovation is designed to cater to the needs of AI, HPC, and hyperscale data centers by delivering improved performance, energy efficiency, and server utilization. The chip functions as a general-purpose processor, facilitating various applications from hyperscale data centers to high-performance computing and private clouds. It boasts a seamless integration model, allowing existing software packages to run flawlessly on its uniquely designed instruction set architecture. By providing up to 18.5x increased performance and enhanced performance per watt, Prodigy stands out in the industry, tackling common issues like high power consumption and limited processor performance that currently hamper data centers. It comprises a coherent multiprocessor architecture that supports a wide range of AI and computing workloads, ultimately transforming data centers into universal computing hubs. The design not only aims to lower the total cost of ownership but also contributes to reducing carbon emissions through decreased energy requirements. Prodigy’s architecture supports a diverse range of SKUs tailored to specific markets, making it adaptable to various applications. Its flexibility and superior performance capabilities position it as a significant player in advancing sustainable, energy-efficient computational solutions worldwide. The processor's ability to handle complex AI tasks with minimal energy use underlines Tachyum's commitment to pioneering green technology in the semiconductor industry.

Tachyum Inc.
13 Categories
View Details

IP Platform for Low-Power IoT

These customizable and power-efficient IP platforms are designed to accelerate the time-to-market for IoT products. Each platform includes essential building blocks for smart and secure IoT devices. They are available with ARM and RISC-V processors, supporting a range of applications such as beacons, smart sensors, and connected audio. Pre-validated and ready for integration, these platforms are the backbone for IoT device development, ensuring that prototypes transition smoothly to production with minimal power requirements and maximum efficiency.

Low Power Futures
13 Categories
View Details

TT-Ascalon™

TT-Ascalon™ stands out as a high-performance RISC-V CPU solution from Tenstorrent, tailored for general-purpose control and expansive computing tasks. This processor is distinguished by its scalable out-of-order architecture, which is co-designed and optimized with Tenstorrent's proprietary Tensix IP. The TT-Ascalon™ is engineered to deliver peak performance while maintaining the efficiency of area and power, crucial for modern computational demands. Built on the RISC-V RVA23 profile, TT-Ascalon™ provides a compelling combination of computational speed and energy efficiency, making it suitable for a wide range of applications from data centers to embedded systems. Its superscalar design facilitates the concurrent execution of multiple instructions, enhancing computing throughput and optimizing performance for demanding workloads. The processor’s architecture is further tailored to enable seamless integration into various systems. By complementing its high-efficiency design with comprehensive compatibility, TT-Ascalon™ ensures that users can implement sophisticated computing solutions that evolve with technological advancements and industry needs. This adaptability makes it an ideal choice for enterprises aiming to future-proof their technological infrastructure. Supporting a suite of developer tools and open-source initiatives, the TT-Ascalon™ allows users to freely innovate and tailor their computing solutions. This openness, combined with the processor’s unmatched performance, positions it as a vital component for those looking to maximize their computing efficiency and capabilities.

Tenstorrent
TSMC
22nm, 22nm FDX
AI Processor, CPU, Error Correction/Detection, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

SiFive Essential

The SiFive Essential family provides a comprehensive range of embedded processor cores that can be tailored to various application needs. This series incorporates silicon-proven, pre-defined CPU cores with a focus on scalability and configurability, ranging from simple 32-bit MCUs to advanced 64-bit processors capable of running embedded RTOS and full-fledged operating systems like Linux. SiFive Essential empowers users with the flexibility to customize the design for specific performance, power, and area requirements. The Essential family introduces significant advancements in processing capabilities, allowing users to design processors that meet precise application needs. It features a rich set of options for interface customizations, providing seamless integration into broader SoC designs. Moreover, the family supports an 8-stage pipeline architecture and, in some configurations, offers dual-issue superscalar capabilities for enhanced processing throughput. For applications where security and traceability are crucial, the Essential family includes WorldGuard technology, which ensures comprehensive protection across the entire SoC, safeguarding against unauthorized access. The flexible design opens up various use cases, from IoT devices and microcontrollers to real-time control applications and beyond.

SiFive, Inc.
Building Blocks, Content Protection Software, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

ISELED Technology

ISELED Technology is an innovative solution for automotive lighting, integrating smart RGB LED control and communication capabilities into compact, efficient modules. These modules support precise color calibration and temperature compensation, leveraging a digital communication protocol to ensure consistent lighting quality. The system is engineered to facilitate seamless integration into automotive lighting applications, enhancing aesthetic appeal and operational efficiency.

INOVA Semiconductors GmbH
Audio Interfaces, LIN, Multiprocessor / DSP, Other, Power Management, Receiver/Transmitter, Safe Ethernet, Sensor, Temperature Sensor
View Details

RISC-V CPU IP NX Class

The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.

Nuclei System Technology
Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

SiFive Performance

The SiFive Performance family is dedicated to offering high-throughput, low-power processor solutions, suitable for a wide array of applications from data centers to consumer devices. This family includes a range of 64-bit, out-of-order cores configured with options for vector computations, making it ideal for tasks that demand significant processing power alongside efficiency. Performance cores provide unmatched energy efficiency while accommodating a breadth of workload requirements. Their architecture supports up to six-wide out-of-order processing with tailored options that include multiple vector engines. These cores are designed for flexibility, enabling various implementations in consumer electronics, network storage solutions, and complex multimedia processing. The SiFive Performance family facilitates a mix of high performance and low power usage, allowing users to balance the computational needs with power consumption effectively. It stands as a testament to SiFive’s dedication to enabling flexible tech solutions by offering rigorous processing capabilities in compact, scalable packages.

SiFive, Inc.
CPU, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

ARM M-Class Based ASICs

A robust platform offering the full spectrum of ARM Cortex-M microprocessors, suitable for integration across a broad range of systems. These ASICs are tuned for diverse applications and perform well in areas such as IoT, industrial automation, and consumer electronics. Known for their reliability and scalability, they enhance system capabilities through customizable features that meet specific client requirements.

ASIC North
AI Processor, Building Blocks, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

eSi-3264

The eSi-3264 stands out for its support of both 32- and 64-bit operations, including 64-bit fixed- and floating-point SIMD (Single Instruction, Multiple Data) DSP extensions. Engineered for applications requiring DSP functionality, it delivers this capability with a minimal silicon footprint. Its comprehensive instruction set includes specialized instructions for a variety of tasks, making it practical across multiple sectors.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Neural Network Accelerator

The Neural Network Accelerator from Gyrus AI is a processing solution tailored for executing neural networks efficiently, delivering high performance with streamlined power consumption. Operating at 30 TOPS/W, it requires 10-30x fewer clock cycles than traditional processors and supports a variety of neural network structures while minimizing energy demands.

The architecture is optimized for low memory usage, which lowers power requirements and, in turn, operational costs. Its design achieves over 80% die-area utilization across different model structures, supporting compact chip designs and providing the scalability and flexibility required for varied edge-computing applications.

Accompanying software tools support seamless integration into existing systems and straightforward execution of complex neural network models, boosting both performance and resource efficiency. This makes the IP well suited for companies looking to strengthen their AI processing capabilities on edge devices and maintain competitive advantages in AI-driven markets.

Gyrus AI
AI Processor, Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor, Vision Processor
View Details