In the realm of semiconductor IP, the Multiprocessor and Digital Signal Processor (DSP) category plays a crucial role in enhancing the processing performance and efficiency of a vast array of modern electronic devices. Semiconductor IPs in this category are designed to support complex computational tasks, enabling sophisticated functionalities in consumer electronics, automotive systems, telecommunications, and more. With the growing need for high-performance processing in a compact and energy-efficient form, multiprocessor and DSP IPs have become integral to product development across industries.
Multiprocessor IPs are tailored to provide parallel processing capabilities, significantly boosting the computational power available to intensive applications. By employing multiple processing cores, these IPs allow tasks to execute concurrently, leading to faster data processing and improved system performance. This is especially vital in applications such as gaming consoles, smartphones, and advanced driver-assistance systems (ADAS) in vehicles, where seamless and rapid processing is essential.
Digital Signal Processors are specialized semiconductor IPs used to perform mathematical operations on signals, allowing for efficient processing of audio, video, and other types of data streams. DSPs are indispensable in applications where real-time data processing is critical, such as noise cancellation in audio devices, image processing in cameras, and signal modulation in communication systems. By providing dedicated hardware structures optimized for these tasks, DSP IPs deliver superior performance and lower power consumption compared to general-purpose processors.
Products in the multiprocessor and DSP semiconductor IP category range from core subsystems and configurable processors to specialized accelerators and integrated solutions that combine processing elements with other essential components. These IPs are designed to help developers create cutting-edge solutions that meet the demands of today’s technology-driven world, offering flexibility and scalability to adapt to different performance and power requirements. As technology evolves, the importance of multiprocessor and DSP IPs will continue to grow, driving innovation and efficiency across various sectors.
The Akida 2nd Generation represents a leap forward in AI processing, improving on its predecessor with greater flexibility and efficiency. This advanced neural processor core is tailored for modern applications demanding real-time response and ultra-low power consumption, making it ideal for compact and battery-operated devices. Akida 2nd Generation supports configurable 8-, 4-, and 1-bit weights and activations, giving developers the versatility to trade off performance against power consumption to meet specific application needs. Its architecture is fully digital and silicon-proven, ensuring reliable deployment across diverse hardware setups. With features such as programmable activation functions and support for sophisticated neural network models, Akida 2nd Generation enables a broad spectrum of AI tasks. From object detection in cameras to sophisticated audio sensing, this iteration of the Akida processor is built to handle the most demanding edge applications while sustaining BrainChip's hallmark efficiency in processing power per watt.
Addressing the need for high-performance AI processing, the Metis AIPU PCIe AI Accelerator Card from Axelera AI offers an outstanding blend of speed, efficiency, and power. Designed to boost AI workloads significantly, this PCIe card leverages the Metis AI Processing Unit (AIPU) to deliver unparalleled AI inference capabilities for enterprise and industrial applications. The card excels at handling complex AI models and large-scale data processing tasks, significantly enhancing the efficiency of computational workloads in various edge settings. The Metis AIPU embedded within the PCIe card delivers a high TOPS (tera operations per second) rating, allowing it to execute multiple AI tasks concurrently with remarkable speed and precision. This makes it exceptionally suitable for applications such as video analytics, autonomous driving simulations, and real-time data processing in industrial environments. The card's robust architecture reduces the load on general-purpose processors by offloading AI tasks, resulting in optimized system performance and lower energy consumption. With easy integration supported by the state-of-the-art Voyager SDK, the Metis AIPU PCIe AI Accelerator Card ensures seamless deployment of AI models across various platforms. The SDK facilitates efficient model optimization and tuning, supporting a wide range of neural network models and enhancing overall system capabilities. Enterprises leveraging this card can see significant improvements in AI processing efficiency, leading to faster, smarter, and more efficient operations across different sectors.
The Yitian 710 Processor is a landmark server chip released by T-Head Semiconductor, representing a breakthrough in high-performance computing. The chip is built on the advanced Armv9 architecture, accommodating a range of demanding applications. Engineered by T-Head's dedicated research team, the Yitian 710 integrates high efficiency and high bandwidth into a 2.5D package housing two dies and a staggering 60 billion transistors. The Yitian 710 encompasses 128 Armv9 high-performance cores, each equipped with a 64KB L1 instruction cache, a 64KB L1 data cache, and a 1MB L2 cache, further amplified by a 128MB on-chip system cache. These configurations enable fast data processing and retrieval, making the chip suitable for data-intensive tasks. The memory subsystem stands out with 8-channel DDR5 support, reaching a peak bandwidth of 281GB/s. In terms of connectivity, the Yitian 710's I/O system includes 96 PCIe 5.0 lanes with a theoretical bidirectional bandwidth of 768GB/s, streamlining the high-speed data transfer critical for server operations. Its architecture is poised to meet the current demands of data centers and cloud services while remaining adaptable for future advances in AI inference and multimedia processing.
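The bandwidth figures quoted above follow directly from the interface rates. A quick sketch reproduces them, assuming DDR5-4400 modules (a guess consistent with the 281GB/s figure; T-Head does not state the speed grade here) and the nominal 4 GB/s-per-lane, per-direction raw rate of PCIe 5.0:

```python
# Sanity-check the Yitian 710 bandwidth figures from published interface rates.
# Assumption: DDR5-4400 (4400 MT/s) with 64-bit (8-byte) channels.
ddr5_channels = 8
ddr5_mt_per_s = 4400e6            # transfers per second per channel
ddr5_bytes_per_transfer = 8       # 64-bit channel width

ddr_bw_gb_s = ddr5_channels * ddr5_mt_per_s * ddr5_bytes_per_transfer / 1e9
print(f"DDR5 peak bandwidth: {ddr_bw_gb_s:.1f} GB/s")        # ~281.6 GB/s

# PCIe 5.0: 32 GT/s raw per lane = 4 GB/s per direction (before encoding overhead).
pcie_lanes = 96
lane_gb_s = 32e9 / 8 / 1e9        # one direction, raw

pcie_bidir_gb_s = pcie_lanes * lane_gb_s * 2
print(f"PCIe 5.0 bidirectional bandwidth: {pcie_bidir_gb_s:.0f} GB/s")  # 768 GB/s
```

Both results match the datasheet-style numbers in the description, which confirms they are theoretical peaks rather than sustained throughput.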
The Universal Chiplet Interconnect Express (UCIe) by EXTOLL is a cutting-edge interconnect framework designed to revolutionize chip-to-chip communication within heterogeneous systems. This product exemplifies the shift towards chiplet architecture, a modular approach enabling enhanced performance and flexibility in semiconductor designs. UCIe offers an open and customizable platform that supports a wide range of technology nodes, particularly excelling in the 12nm to 28nm range. This adaptability ensures it can meet the diverse needs of modern semiconductor applications, providing a bridge that enhances integration across various chiplet components. Such capabilities make it ideal for applications requiring high bandwidth and low latency. The design of UCIe focuses on minimizing power consumption while maximizing data throughput, aligning with EXTOLL’s objective of delivering eco-efficient technology. It empowers manufacturers to forge robust connections between chiplets, allowing optimized performance and scalability in data-intensive environments like data centers and advanced consumer electronics.
aiWare is a high-performance NPU designed to meet the rigorous demands of automotive AI inference, providing a scalable solution for ADAS and AD applications. This hardware IP core is engineered to handle a wide array of AI workloads, including advanced neural network structures such as CNNs, LSTMs, and RNNs. By integrating cutting-edge efficiency and scalability, aiWare delivers industry-leading neural processing power tailored to automotive-grade specifications.

The NPU's architecture emphasizes hardware determinism and offers ISO 26262 ASIL-B certification, ensuring that aiWare meets stringent automotive safety standards. Its efficient design supports up to 256 effective TOPS per core and can scale to thousands of TOPS through multicore integration while keeping power consumption low. aiWare's system-level optimizations reduce reliance on external memory by leveraging local memory for data management, boosting performance efficiency across varied input data sizes and complexities.

aiWare's development toolkit, aiWare Studio, is distinguished by its ability to optimize neural network execution without manual intervention by software engineers. This frees AI engineers to focus on refining neural networks for production, significantly accelerating iteration cycles. Coupled with aiMotive's aiDrive software suite, aiWare provides an integrated environment for creating highly efficient automotive AI applications, ensuring seamless integration and rapid deployment across multiple vehicle platforms.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
Chimera GPNPU is engineered to revolutionize AI/ML computational capabilities on single-core architectures. It efficiently handles matrix, vector, and scalar code, unifying AI inference and traditional C++ processing under one roof. By alleviating the need for partitioning AI workloads between different processors, it streamlines software development and drastically speeds up AI model adaptation and integration. Ideal for SoC designs, the Chimera GPNPU champions an architecture that is both versatile and powerful, handling complex parallel workloads with a single unified binary. This configuration not only boosts software developer productivity but also ensures an enduring flexibility capable of accommodating novel AI model architectures on the horizon. The architectural fabric of the Chimera GPNPU seamlessly blends the high matrix performance of NPUs with C++ programmability found in traditional processors. This core is delivered in a synthesizable RTL form, with scalability options ranging from a single-core to multi-cluster designs to meet various performance benchmarks. As a testament to its adaptability, the Chimera GPNPU can run any AI/ML graph from numerous high-demand application areas such as automotive, mobile, and home digital appliances. Developers seeking optimization in inference performance will find the Chimera GPNPU a pivotal tool in maintaining cutting-edge product offerings. With its focus on simplifying hardware design, optimizing power consumption, and enhancing programmer ease, this processor ensures a sustainable and efficient path for future AI/ML developments.
SAKURA-II AI Accelerator represents EdgeCortix's latest advancement in edge AI processing, offering unparalleled energy efficiency and extensive capabilities for generative AI tasks. This accelerator is designed to manage demanding AI models, including Llama 2, Stable Diffusion, DETR, and ViT, within a slim power envelope of about 8W. With capabilities extending to multi-billion parameter models, SAKURA-II meets a wide range of edge applications in vision, language, and audio. The SAKURA-II's architecture maximizes AI compute efficiency, delivering more than twice the utilization of competitive solutions. It boasts remarkable DRAM bandwidth, essential for large language and vision models, while maintaining low power consumption. The hardware supports real-time Batch=1 processing, demonstrating its edge in performance even in constrained environments, making it a choice solution for diverse industrial AI applications. With 60 TOPS (INT8) and 30 TFLOPS (BF16) in performance metrics, this accelerator is built to exceed expectations in demanding conditions. It features robust memory configurations supporting up to 32GB of DRAM, ideal for processing intricate AI workloads. By leveraging sparse computing techniques, SAKURA-II optimizes its memory and bandwidth usage effectively, ensuring reliable performance across all deployed applications.
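The headline figures above imply a straightforward efficiency ratio. A small sketch, using only the numbers quoted in the description (60 TOPS INT8, 30 TFLOPS BF16, ~8W typical envelope), makes the power efficiency explicit:

```python
# Back-of-the-envelope efficiency for SAKURA-II from the quoted figures.
tops_int8 = 60      # INT8 peak throughput
tflops_bf16 = 30    # BF16 peak throughput
power_w = 8         # typical power envelope stated by EdgeCortix

int8_efficiency = tops_int8 / power_w
bf16_efficiency = tflops_bf16 / power_w
print(f"INT8: {int8_efficiency:.1f} TOPS/W")      # 7.5 TOPS/W
print(f"BF16: {bf16_efficiency:.2f} TFLOPS/W")    # 3.75 TFLOPS/W
```

These are peak ratios; sustained efficiency depends on the utilization figure the description highlights, which is why EdgeCortix emphasizes compute utilization alongside raw TOPS.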
xcore.ai is XMOS Semiconductor's innovative programmable chip designed for advanced AI, DSP, and I/O applications. It enables developers to create highly efficient systems without the complexity typical of multi-chip solutions, seamlessly integrating AI inference, DSP tasks, and I/O control. The chip architecture offers parallel processing and ultra-low latency, making it ideal for demanding tasks in robotics, automotive systems, and smart consumer devices. It provides the toolset to deploy complex algorithms efficiently while maintaining robust real-time performance. With xcore.ai, system designers can leverage a flexible platform that supports rapid prototyping and development of intelligent applications. Its performance allows for seamless execution of tasks such as voice recognition and processing, industrial automation, and sensor data integration. The adaptable nature of xcore.ai makes it a versatile solution for managing various inputs and outputs simultaneously while maintaining high levels of precision and reliability. In automotive and industrial applications, xcore.ai supports real-time control and monitoring tasks, contributing to smarter, safer systems. For consumer electronics, it enhances the user experience by enabling responsive voice interfaces and high-definition audio processing. The chip's architecture reduces the need for external components, simplifying design and reducing overall costs, paving the way for innovative solutions where technology meets efficiency and scalability.
The Metis AIPU M.2 Accelerator Module by Axelera AI is a compact and powerful solution designed for AI inference at the edge. This module delivers remarkable performance, comparable to that of a PCIe card, all while fitting into the streamlined M.2 form factor. Ideal for demanding AI applications that require substantial computational power, the module enhances processing efficiency while minimizing power usage. With its robust infrastructure, it is geared toward integrating into applications that demand high throughput and low latency, making it a perfect fit for intelligent vision applications and real-time analytics. The AIPU, or Artificial Intelligence Processing Unit, at the core of this module provides industry-leading performance by offloading AI workloads from traditional CPU or GPU setups, allowing for dedicated AI computation that is faster and more energy-efficient. This not only boosts the capabilities of the host systems but also drastically reduces the overall energy consumption. The module supports a wide range of AI applications, from facial recognition and security systems to advanced industrial automation processes. By utilizing Axelera AI’s innovative software solutions, such as the Voyager SDK, the Metis AIPU M.2 Accelerator Module enables seamless integration and full utilization of AI models and applications. The SDK offers enhancements like compatibility with various industry tools and frameworks, thus ensuring a smooth deployment process and quick time-to-market for advanced AI systems. This product represents Axelera AI’s commitment to revolutionizing edge computing with streamlined, effective AI acceleration solutions.
The Talamo SDK from Innatera serves as a comprehensive software development toolkit designed to maximize the capabilities of its Spiking Neural Processor (SNP) lineup. Tailored for developers and engineers, Talamo offers in-depth access to configure and deploy neuromorphic processing solutions effectively. The SDK supports the development of applications that utilize Spiking Neural Networks (SNNs) for diverse sensory processing tasks. Talamo provides a user-friendly interface that simplifies the integration of neural processing capabilities into a wide range of devices and systems. By leveraging the toolkit, developers can customize applications for specific use cases such as real-time audio analysis, touch-free interactions, and biometric data processing. The SDK comes with pre-built models and a model zoo, which helps in rapidly prototyping and deploying sensor-driven solutions. This SDK stands out by offering enhanced tools for developing low-latency, energy-efficient applications. By harnessing the temporal processing strength of SNNs, Talamo allows for the robust development of applications that can operate under strict power and performance constraints, enabling the creation of intelligent systems that can autonomously process data in real-time.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competing solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in memory efficiency. The solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary bottlenecks in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capability. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance against hardware cost. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
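The "tokens per unit of memory bandwidth" metric above reflects a basic property of LLM decoding: at batch size 1, generating each token streams roughly every model weight through memory once, so bandwidth sets a hard ceiling on token rate. A minimal sketch illustrates this, with purely illustrative numbers (a 3B-parameter model at 4-bit quantization and an assumed ~25.6 GB/s LPDDR4 configuration, neither taken from RaiderChip's materials):

```python
# Bandwidth-bound ceiling on LLM decode rate:
#   tokens/s  ≈  memory bandwidth / model size in bytes
# Illustrative assumptions: 3e9 parameters, 4-bit weights, 25.6 GB/s LPDDR4.
params = 3e9
bits_per_weight = 4
model_bytes = params * bits_per_weight / 8        # 1.5 GB of weights

bandwidth_bytes_s = 25.6e9                        # assumed LPDDR4 configuration
ceiling_tokens_s = bandwidth_bytes_s / model_bytes
print(f"Bandwidth-bound ceiling: ~{ceiling_tokens_s:.1f} tokens/s")
```

This is why halving the bits per weight roughly doubles the achievable token rate on the same memory, and why an accelerator that approaches this ceiling can forgo HBM in favor of cheaper DRAM.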
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud dependencies. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters such as model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
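The 75% footprint figure follows directly from the weight width: quantizing 16-bit weights down to 4 bits keeps one quarter of the bytes. A one-line check (ignoring the small per-block scale overhead that Q4_K-style formats actually carry):

```python
# Memory saving from 16-bit -> 4-bit weight quantization.
# Note: real Q4_K blocks also store per-block scales, so the true
# footprint is slightly above 4 bits per weight; this sketch ignores that.
fp16_bits = 16
q4_bits = 4

reduction = 1 - q4_bits / fp16_bits
print(f"Memory footprint reduction: {reduction:.0%}")   # 75%
```

The speedup claim is of the same origin: with decode bound by memory bandwidth, moving a quarter as many bytes per token allows a proportionally higher token rate.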
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices, to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle powertrain control and battery management. These two members are supported by libraries for Eigen linear algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for areas such as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
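The MAC counts and TOPS figures above are related by a simple identity: each MAC contributes two operations (a multiply and an add) per cycle, so TOPS = MACs × 2 × clock. Solving for the clock shows the quoted family figures are mutually consistent with roughly 1 GHz operation (an inference on our part; Ceva does not quote a clock frequency here):

```python
# Clock frequency implied by a MAC array hitting a quoted TOPS figure,
# counting each MAC as 2 ops (multiply + accumulate) per cycle.
def implied_clock_ghz(macs: int, tops: float) -> float:
    return tops * 1e12 / (macs * 2) / 1e9

print(f"Ceva-SP100:  {implied_clock_ghz(128, 0.2):.2f} GHz")   # ~0.78 GHz
print(f"Ceva-SP1000: {implied_clock_ghz(1024, 2.0):.2f} GHz")  # ~0.98 GHz
```

Both figures land just under 1 GHz, which is why the 8x larger MAC array of the SP1000 delivers 10x the TOPS of the SP100 only with a modestly higher implied clock.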
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is a highly efficient design that operates with remarkably low power consumption, requiring only 10mW at 1GHz. This core exemplifies Micro Magic’s commitment to power efficiency, as it integrates advanced techniques to maintain high performance even at lower voltages. The core is engineered for applications where energy conservation is crucial, making it ideal for modern, power-sensitive devices. The architectural design of this RISC-V core utilizes innovative technology to ensure high-speed processing capabilities while minimizing power draw. This balance is achieved through precise engineering and the use of state-of-the-art design methodologies that reduce operational overhead without compromising performance. As a result, this core is particularly suited for applications in portable electronics, IoT devices, and other areas where low-power operation is a necessity. Micro Magic's experience in developing high-speed, low-power solutions is evident in this core's design, ensuring that it delivers reliable performance under various operational conditions. The Ultra-Low-Power 64-Bit RISC-V Core represents a significant advancement in processor efficiency, providing a robust solution for designers looking to enhance their products' capabilities while maintaining a low power footprint.
The Maverick-2 Intelligent Compute Accelerator represents the pinnacle of Next Silicon's innovative approach to computational resources. This state-of-the-art accelerator leverages the Intelligent Compute Architecture for software-defined adaptability, enabling it to autonomously tailor its real-time operations across various HPC and AI workloads. By optimizing performance using insights gained through real-time telemetry, Maverick-2 ensures superior computational efficiency and reduced power consumption, making it an ideal choice for demanding computational environments.

Maverick-2 brings transformative performance enhancements to large-scale scientific research and data-heavy industries by dispensing with the need for codebase modifications or specialized software stacks. It supports a wide range of familiar development tools and frameworks, such as C/C++, FORTRAN, and Kokkos, simplifying the integration process for developers and reducing time-to-discovery significantly.

Engineered with advanced features like high-bandwidth memory (HBM3E) and built on TSMC's 5nm process technology, this accelerator provides not only unmatched adaptability but also an energy-efficient, eco-friendly computing solution. Whether embedded in single-die PCIe cards or dual-die OCP Accelerator Modules, the Maverick-2 is positioned as a future-proof solution capable of evolving with technological advancements in AI and HPC.
StarFive's Tianqiao-70 is engineered to deliver superior performance in a power-efficient package. This 64-bit RISC-V CPU core is designed for commercial-grade applications, where consistent and reliable performance is mandatory, yet energy consumption must be minimized. The core's architecture integrates low power design principles without compromising its ability to execute complex instructions efficiently. It is particularly suited for mobile applications, desktop clients, and intelligent gadgets requiring sustained battery life. The Tianqiao-70's design focuses on extending the operational life of devices by ensuring minimal power draw during both active and idle states. It supports an array of advanced features that cater to the latest computational demands. As an ideal solution for devices that combine portability with intensive processing demands, the Tianqiao-70 offers an optimal balance of performance and energy conservation. Its capability to adapt to various operating environments makes it a versatile option for developers looking to maximize efficiency and functionality.
RAIV represents Siliconarts' general-purpose GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV's flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that push computational boundaries.
Systems4Silicon's DPD solution enhances power efficiency in RF power amplifiers by using advanced predistortion techniques. This technology is part of a comprehensive subsystem known as FlexDPD, which is adaptive and scalable, independent of any particular hardware platform. It supports multiple radio standards, including 5G and O-RAN, and is ready for deployment on either ASICs or FPGA platforms. Engineered for field performance, it offers a perfect balance of reliability and adaptability across numerous applications, meeting broad technical requirements.
The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.
The SEMIFIVE AI Inference Platform is engineered to facilitate rapid development and deployment of AI inference solutions within custom silicon environments. Utilizing seamless integration with silicon-proven IPs, this platform delivers a high-performance framework optimized for AI and machine learning tasks. By providing a strategic advantage in cost reduction and efficiency, the platform decreases time-to-market challenges through pre-configured model layers and extensive IP libraries tailored for AI applications. It also offers enhanced scalability through its support for various computational and network configurations, making it adaptable to both high-volume and specialized market segments. This platform supports complex AI workloads on scalable AI engines, ensuring optimized performance in data-intensive operations. The integration of advanced processors and memory solutions within the platform further enhances processing efficiency, positioning it as an ideal solution for enterprises focusing on breakthroughs in AI technologies.
The Neural Processing Unit (NPU) from OPENEDGES is geared towards advancing AI applications, providing a dedicated processing unit for neural network computations. Engineered to alleviate the computational load from CPUs and GPUs, this NPU optimizes AI workloads, enhancing deep learning tasks and inference processes. Capable of accelerating neural network inference, the NPU supports various machine learning frameworks and is compatible with industry-standard AI models. Its architecture focuses on delivering high throughput for deep learning operations while maintaining low power consumption, making it suitable for a range of applications from mobile devices to data centers. This NPU integrates seamlessly with existing AI frameworks, supporting scalability and flexibility in design. Its dedicated resource management ensures swift data processing and execution, translating into superior AI performance and efficiency across a multitude of application scenarios.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
A2e's H.264 FPGA Encoder and CODEC Micro Footprint Cores provide a customizable solution targeting FPGAs. Known for its small size and rapid execution, the core supports 1080p60 H.264 Baseline encoding on a single core, making it one of the industry's swiftest and most efficient FPGA offerings. The core is ITAR-compliant and offers options to adjust pixel depths and resolutions according to specific needs. It achieves a latency of just 1 ms at 1080p30, which is crucial for applications demanding rapid processing speeds. This licensable core is ideal for developers needing robust video compression capabilities in a compact form factor. The H.264 cores can be finely tuned to meet unique project specifications, enabling developers to implement varied pixel resolutions and depths, further enhancing the core's versatility for different application requirements. With a licensable evaluation option available, prospective users can explore the core's functionalities before opting for full integration. This flexibility makes it suitable for projects demanding customizable compression solutions without the burden of full-scale initial commitment. Furthermore, A2e provides comprehensive integration and custom design services, allowing these cores to be seamlessly absorbed into existing systems or developed into new solutions. This support ensures minimized risk and accelerated project timelines, allowing developers to focus on innovation and efficiency in their video-centric applications.
The RISC-V Core from AheadComputing is a state-of-the-art application processor, designed to drive next-generation computing solutions. Built on an open-source architecture, this processor core emphasizes enhanced instruction per cycle (IPC) performance, setting the stage for highly efficient computing capabilities. As part of the company's commitment to delivering world-leading performance, the RISC-V Core provides a reliable backbone for advanced computing tasks across various applications. This core's design harnesses the power of 64-bit architecture, providing significant improvements in data handling and processing speed. The focus on 64-bit processing facilitates better computational tasks, ensuring robust performance in data-intensive applications. With AheadComputing's emphasis on superior compute solutions, the RISC-V Core exemplifies their commitment to power, performance, and flexibility. As a versatile computing component, the RISC-V Core suits a range of applications from consumer electronics to enterprise-level computing. It is designed to integrate seamlessly into diverse systems, meeting complex computational demands with finesse. This core stands out in the industry, underpinned by AheadComputing's dedication to pushing the boundaries of what a processor can achieve.
The Spiking Neural Processor T1 is an ultra-low power processor developed specifically for enhancing sensor capabilities at the edge. By leveraging advanced Spiking Neural Networks (SNNs), the T1 efficiently deciphers patterns in sensor data with minimal latency and power usage. This processor is especially beneficial in real-time applications, such as audio recognition, where it can discern speech from audio inputs with sub-millisecond latency and within a strict power budget, typically under 1mW. Its mixed-signal neuromorphic architecture ensures that pattern recognition functions can be continually executed without draining resources. In terms of processing capabilities, the T1 resembles a dedicated engine for sensor tasks, offering functionalities like signal conditioning, filtering, and classification independent of the main application processor. This means tasks traditionally handled by general-purpose processors can now be offloaded to the T1, conserving energy and enhancing performance in always-on scenarios. Such functionality is crucial for pervasive sensing tasks across a range of industries. With an architecture that balances power and performance impeccably, the T1 is prepared for diverse applications spanning from audio interfaces to the rapid deployment of radar-based touch-free interactions. Moreover, it supports presence detection systems, activity recognition in wearables, and on-device ECG processing, showcasing its versatility across various technological landscapes.
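The pattern recognition described above is built on spiking neurons, which integrate input over time and fire discrete events only when a threshold is crossed. As an illustration of the general principle — not the T1's actual mixed-signal circuit, and with arbitrary demonstration constants — a minimal leaky integrate-and-fire neuron can be sketched in a few lines:

```python
# Illustrative leaky integrate-and-fire (LIF) neuron -- the basic building
# block of spiking neural networks like those the T1 runs. All constants
# here are arbitrary for demonstration and do not reflect the T1 design.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Integrate input over time; emit a spike (1) when the membrane
    potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # leaky integration
        if potential >= threshold:
            spikes.append(1)               # fire
            potential = 0.0                # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A steady weak input accumulates until the neuron periodically fires:
print(lif_neuron([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because computation happens only when spikes occur, activity is sparse and event-driven, which is the source of the sub-milliwatt power figures quoted for always-on sensing.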
The Codasip L-Series DSP Core is designed to handle demanding signal processing tasks, offering an exemplary balance of computational power and energy efficiency. This DSP core is particularly suitable for applications involving audio processing and sensor data fusion, where performance is paramount. Codasip enriches this product with their extensive experience in RISC-V architectures, ensuring robust and optimized performance.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
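FIR filtering of the kind mentioned above reduces to a tight multiply-accumulate loop over the filter taps, which is exactly what a wide hardware MAC accelerates. The sketch below illustrates the general technique in Q15 fixed point — a common DSP convention chosen for illustration, not the eSi-3200's actual instruction sequence:

```python
# Sketch of the fixed-point multiply-accumulate (MAC) loop at the heart of
# an FIR filter -- the kind of kernel a 64-bit MAC instruction accelerates.
# Q15 format (1 sign bit, 15 fractional bits) is a textbook DSP convention
# used here purely for illustration.

Q15 = 1 << 15  # scaling factor: 1.0 is stored as 32768

def to_q15(x):
    return int(round(x * Q15))

def fir_q15(samples, coeffs):
    """Filter Q15 samples with Q15 coefficients, accumulating each output
    in a wide integer (as a 64-bit accumulator would) before rescaling."""
    out = []
    for n in range(len(samples)):
        acc = 0  # wide accumulator: Q30 products, no overflow per tap
        for k in range(len(coeffs)):
            if n - k >= 0:
                acc += samples[n - k] * coeffs[k]  # integer MAC
        out.append(acc >> 15)  # rescale Q30 -> Q15
    return out

# 3-tap moving average (coefficients sum to ~1.0) applied to a step input:
coeffs = [to_q15(1 / 3)] * 3
step = [to_q15(1.0)] * 5
print([round(y / Q15, 3) for y in fir_q15(step, coeffs)])
```

The wide accumulator matters because each Q15×Q15 product occupies 30 bits; summing many taps in a 32-bit register would overflow, while a 64-bit accumulator keeps the sum exact until the final rescale.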
Targeted at high-end applications, the SCR9 processor core boasts a 12-stage dual-issue out-of-order pipeline, adding vector processing units (VPUs) to manage intensive computational tasks. It offers hypervisor support, making it suitable for diverse enterprise-grade applications. Configured for up to 16 cores, it exhibits excellent memory management and cache coherency required for state-of-the-art computing platforms such as HPC, AI, and machine learning environments. This core embodies efficiency and performance, catering to industries that leverage high-throughput data processing.
NeuroMosAIc Studio serves as a comprehensive software platform that simplifies the process of developing and deploying AI models. Designed to optimize edge AI applications, this platform assists users through model conversion, mapping, and simulation, ensuring optimal use of resources and efficiency. It offers capabilities like network quantization and compression, allowing developers to push the limits in terms of performance while maintaining compact model sizes. The studio also supports precision adjustments, providing deep insights into hardware optimization, and aiding in the generation of precise outputs tailored to specific application needs. AiM Future's NeuroMosAIc Studio boosts the efficiency of training stages and quantization, ultimately facilitating the delivery of high-quality AI solutions for both existing and emerging technologies. It's an indispensable tool for those looking to enhance AI capabilities in embedded systems without compromising on power or performance.
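Network quantization, one of the capabilities mentioned above, generally works by mapping floating-point weights onto narrow integers so models shrink and run on integer hardware. The sketch below shows a generic affine (min/max) scheme for illustration only; it is not NeuroMosAIc Studio's actual algorithm:

```python
# Minimal sketch of post-training affine quantization -- the generic
# textbook technique behind "network quantization" features. This is an
# illustration, not any vendor's production algorithm.

def quantize(weights, bits=8):
    """Map float weights onto unsigned integers of the given width."""
    qmax = (1 << bits) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax if hi != lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [x * scale + lo for x in q]

w = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, scale, lo = quantize(w)
restored = dequantize(q, scale, lo)
# Every restored weight lands within one quantization step of the original:
print(max(abs(a - b) for a, b in zip(w, restored)) < scale)  # -> True
```

An 8-bit scheme like this cuts weight storage 4x versus float32, which is why quantization is central to fitting models into the small local memories of edge accelerators.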
The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.
The RISCV SoC - Quad Core Server Class is engineered for high-performance applications requiring robust processing capabilities. Designed around the RISC-V architecture, this SoC integrates four cores to offer substantial computing power. It's ideal for server-class operations, providing both performance efficiency and scalability. The RISC-V architecture allows for open-source compatibility and flexible customization, making it an excellent choice for users who demand both power and adaptability. This SoC handles demanding workloads efficiently, making it suitable for various server applications.
The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.
ISELED represents an avant-garde approach to automotive interior lighting, integrating smart RGB LEDs with advanced drivers into a single unit. This technology supports instantaneous color calibration and temperature management, vastly improving lighting quality without the need for complex external controls. Designed for seamless integration into vehicle interiors, ISELED offers low power consumption and adaptability through its digital communication protocol, enabling precise control and coordination of lighting arrays for enhanced aesthetic and functional applications in automotive settings.
The TT-Ascalon™ is a high-performance RISC-V CPU designed for general-purpose control, emphasizing power and area efficiency. This processor features an Out-of-Order, superscalar architecture that adheres to the RISC-V RVA23 profile, co-developed with Tenstorrent's own Tensix IP for optimized performance. TT-Ascalon™ is highly scalable, suitable for various high-demand applications that benefit from robust computational capabilities. It's engineered to deliver unmatched performance while maintaining energy efficiency, making it ideal for operations that require reliability without compromising on speed and power efficiency.
The Prodigy Universal Processor by Tachyum is a groundbreaking innovation in the realm of computing, marketed as the world's first processor that merges General Purpose Computing, High-Performance Computing, and Artificial Intelligence workloads into a single compact chip. This processor promises to revolutionize hyperscale data centers with its processing capabilities and efficiency, pushing the boundaries of current computational power. With its superior performance per watt, Prodigy minimizes energy consumption while maximizing data processing abilities. Tachyum claims up to 21 times higher performance than contemporary processors, and Prodigy stands out by providing a coherent multiprocessor architecture that simplifies the programming environment. It aims to overcome challenges like high power use and server underutilization, which have long plagued modern data centers. By addressing these core issues, it allows enterprises to manage workloads more effectively and sustainably. Furthermore, Prodigy's emulation platform broadens the scope of testing and evaluation, enabling developers to optimize their applications for better performance and low power consumption. With native support for the Prodigy instruction set architecture, the processor runs existing software packages seamlessly, promising a smooth transition and robust application support. Through the integration of this versatile processor, Tachyum is leading the charge toward a sustainable technological future.
The SiFive Essential family is designed to deliver high customization for processors across varying applications, from standalone MCUs to deeply embedded systems. This family of processor cores provides a versatile solution, meeting diverse market needs with an optimal combination of power, area, and performance. Within this lineup, users can tailor processors for specific market requirements, ranging from simple MCUs to fully-featured, Linux-capable designs. With features such as high configurability, SiFive Essential processors offer flexible design points, allowing scaling from basic 2-stage pipelines to advanced dual-issue superscalar configurations. This adaptability makes SiFive Essential suitable for a wide variety of use cases in microcontrollers, IoT devices, and control plane processing. Additionally, their innovation is proven by billions of units shipped worldwide, highlighting their reliability and versatility. The Essential cores also provide advanced integration options within SoCs, enabling smooth interface and optimized performance. This includes pre-integrated trace and debug features, ensuring efficient development and deployment in diverse applications.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. Within the computational ecosystem, TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
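The benefit of a quire-style accumulator — deferring all rounding in a dot product to a single final step — can be sketched with ordinary wide integers standing in for the hardware register. This illustrates the principle only, not TUNGA's Posit implementation; the scaling factor below is an arbitrary choice:

```python
# Sketch of the idea behind a quire: accumulate an entire dot product in a
# wide fixed-point register so rounding happens once, at the very end,
# instead of after every addition. Python's unbounded ints stand in for
# the hardware quire; the scaling factor is arbitrary for illustration.

SCALE = 1 << 30  # fixed-point scaling: value x is stored as round(x * SCALE)

def quire_dot(xs, ys):
    """Exact-until-the-end dot product: integer products accumulate
    without rounding, with a single rounding step on output."""
    acc = 0
    for x, y in zip(xs, ys):
        acc += round(x * SCALE) * round(y * SCALE)  # exact integer MAC
    return acc / (SCALE * SCALE)  # one rounding at the end

# Cancellation example: naive float summation loses the tiny middle term,
# while the wide accumulator preserves it exactly.
xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(sum(x * y for x, y in zip(xs, ys)))  # naive float sum: 0.0
print(quire_dot(xs, ys))                   # exact accumulation: 1.0
```

This is the same trade TUNGA's QUIRE makes in hardware: a wider accumulator in exchange for dot products whose result is independent of summation order.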
The Trifecta-GPU offers cutting-edge graphics processing capabilities designed for high-efficiency computing needs. This PXIe/CPCIe module excels in handling intensive tasks across various applications, including signal processing, modular test and measurement, and electronic warfare systems. Built to deliver robust performance, it incorporates advanced GPU technology to ensure rapid data throughput and high computational capability. With a focus on versatility, the Trifecta-GPU integrates seamlessly with existing hardware setups, enhancing system performance through its powerful data handling capabilities. It is particularly well-suited for environments that demand precise data analysis and execution speed, such as AI and machine learning inference tasks. Its inclusion in RADX's product lineup signifies its importance in providing comprehensive solutions tailored for demanding industrial and research applications. Moreover, this module supports various applications, empowered by its substantial memory bandwidth and an architecture designed to optimize processing power. The Trifecta-GPU offers flexibility and power efficiency in equal measure, making it well-suited for future applications that demand high performance.
The RISC-V Processor Core provides a foundation for developing customizable, open-standard applications, making it a popular choice for modern computing needs. Benefiting from the RISC-V architecture's flexibility, this core can be tailored to meet specific processing requirements across various embedded systems. Industries dealing with complex design challenges find this open standard not only cost-effective but also powerful in fostering innovation. Optimized for efficiency, the RISC-V Processor Core enables the execution of robust software environments and applications, supporting tasks ranging from simple control functions to more demanding compute-heavy operations. This versatility extends to the seamless integration of additional custom IPs, allowing designers to enhance functionality without performance trade-offs. In high-performance computing environments, the RISC-V Processor Core is praised for its energy-efficient computing capabilities and reduced power consumption, characteristics that are vital in creating sustainable and environmentally friendly tech solutions. Its adaptability into various system-on-chip (SoC) designs makes it integral to the development of a broad spectrum of devices, from consumer electronics to industrial automation systems.
The IP Platform for Low-Power IoT is engineered to accelerate product development with highly integrated, customizable solutions specifically tailored for IoT applications. It consists of pre-validated IP platforms that serve as comprehensive building blocks for IoT devices, featuring ARM and RISC-V processor compatibility. Built for ultra-low power consumption, these platforms support smart and secure application needs, offering a scalable approach for different market requirements. Whether it's for beacons, active RFID, or connected audio devices, these platforms are ideal for various IoT applications demanding rapid development and integration. The solutions provided within this platform are not only power-efficient but also ready for AI implementation, enabling smart, AI-ready IoT systems. With FPGA evaluation mechanisms and comprehensive integration support, the IP Platform for Low-Power IoT ensures a seamless transition from concept to market-ready product.
Tailored for high efficiency, the NMP-550 accelerator advances performance in the fields of automotive, mobile, AR/VR, and more. Designed with versatility in mind, it finds applications in driver monitoring, video analytics, and security through its robust capabilities. Offering up to 6 TOPS of processing power, it includes up to 6 MB of local memory and a choice of RISC-V or Arm Cortex-M/A 32-bit CPU. In environments like drones, robotics, and medical devices, the NMP-550's enhanced computational capabilities allow for superior machine learning and AI functions. This is further supported by its ability to handle comprehensive data streams efficiently, making it ideal for tasks such as image analytics and fleet management. The NMP-550 exemplifies how AiM Future harnesses cutting-edge technology to develop powerful processors that meet contemporary demands for higher performance and integration into a multitude of smart technologies.
The xcore-200 chip from XMOS is a pivotal component for audio processing, delivering strong performance for real-time, multichannel streaming applications. Tailored for professional and high-resolution consumer audio markets, xcore-200 facilitates complex audio processing with high precision and flexibility. This chip hosts XMOS's adept capabilities in deterministic and parallel processing, crucial for achieving deterministic, low-latency outputs in applications such as voice amplification systems, high-definition audio playback, and multipoint conferencing. Its architecture supports complex I/O operations, ensuring that all audio inputs and outputs are managed efficiently without sacrificing audio quality. The xcore-200 is crafted to handle large volumes of data effortlessly while maintaining the highest levels of integrity and clarity in audio outputs. It provides superior processing power to execute intensive tasks such as audio mixing, effects processing, and real-time equalization, crucial for both consumer electronics and professional audio gear. Moreover, xcore-200 supports flexible integration into various systems, enhancing the functionality of audio interfaces, smart soundbars, and personalized audio solutions. It also sustains the robust performance demands of embedded AI implementations, thereby extending its utility beyond traditional audio systems. The xcore-200 is a testament to XMOS's dedication to pushing the boundaries of what's possible in audio engineering, blending high-end audio performance with cutting-edge processing power.
The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.
SiFive Performance family processors are specifically engineered to deliver outstanding performance and efficiency across a wide range of applications. These processors cater to diverse market demands, including data centers, consumer electronics, and AI-driven workloads. They feature high-performance, 64-bit out-of-order cores with optional vector engines, making them ideal for heavy-duty tasks requiring maximum throughput and scalability. The series incorporates a variety of architectural features that optimize performance and energy efficiency. It includes cores scalable from three-wide to six-wide, supporting up to 256-bit vector operations, which are particularly advantageous for AI and multimedia processing applications. This optimal balance ensures that each core offers superior compute density and power efficiency. Additionally, the SiFive Performance series emphasizes flexibility, allowing users to mix and match cores to achieve the desired balance between performance and power consumption. This makes the series a perfect fit for both performance-intensive and power-sensitive applications, enabling developers to create customized solutions tailored to their specific needs.
This robust platform offers a full spectrum of ASICs built around Arm Cortex-M microprocessors, suited for integration across a broad range of systems. These ASICs are finely tuned to accommodate various applications, demonstrating commendable performance in areas such as IoT, industrial automation, and consumer electronics. Known for their reliability and scalability, they enhance system capabilities by providing customizable features that match specific client criteria.
The UX Class RISC-V CPU IP epitomizes Nuclei's commitment to potent processing solutions suited for data centers and network environments. Equipped with a 64-bit architecture with integrated MMU capabilities, it is tailored for embedding into Linux-operated systems that demand high operational efficiency and reliability. The UX Class supports extensive data handling and computational tasks, ensuring seamless performance even under the rigors of data-intensive environments.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.