In the realm of semiconductor IP, the Multiprocessor and Digital Signal Processor (DSP) category plays a crucial role in enhancing the processing performance and efficiency of a vast array of modern electronic devices. Semiconductor IPs in this category are designed to support complex computational tasks, enabling sophisticated functionalities in consumer electronics, automotive systems, telecommunications, and more. With the growing need for high-performance processing in a compact and energy-efficient form, multiprocessor and DSP IPs have become integral to product development across industries.
Multiprocessor IPs provide parallel processing capabilities that significantly boost the computational power available for intensive applications. By employing multiple processing cores, these IPs allow concurrent execution of multiple tasks, leading to faster data processing and improved system performance. This is especially vital in applications such as gaming consoles, smartphones, and advanced driver-assistance systems (ADAS) in vehicles, where seamless and rapid processing is essential.
Digital Signal Processors are specialized semiconductor IPs used to perform mathematical operations on signals, allowing for efficient processing of audio, video, and other types of data streams. DSPs are indispensable in applications where real-time data processing is critical, such as noise cancellation in audio devices, image processing in cameras, and signal modulation in communication systems. By providing dedicated hardware structures optimized for these tasks, DSP IPs deliver superior performance and lower power consumption compared to general-purpose processors.
Products in the multiprocessor and DSP semiconductor IP category range from core subsystems and configurable processors to specialized accelerators and integrated solutions that combine processing elements with other essential components. These IPs are designed to help developers create cutting-edge solutions that meet the demands of today’s technology-driven world, offering flexibility and scalability to adapt to different performance and power requirements. As technology evolves, the importance of multiprocessor and DSP IPs will continue to grow, driving innovation and efficiency across various sectors.
The Akida 2nd Generation processor further advances BrainChip's AI capabilities with enhanced programmability and efficiency for complex neural network operations. Building on the principles of its predecessor, this generation is optimized for 8-, 4-, and 1-bit weights and activations, offering more robust activation functions and support for advanced temporal and spatial neural networks. A standout feature of the Akida 2nd Generation is its on-chip learning capability: the system can perform one-shot and few-shot learning, significantly boosting its ability to adapt to new tasks without extensive reprogramming. Its architecture supports sophisticated machine learning models such as Convolutional Neural Networks (CNNs) and Spatio-Temporal Event-Based Neural Networks, optimizing them for energy-efficient deployment at the edge. The processor's design reduces the need for host CPU involvement, minimizing communication overhead and conserving energy. This makes it particularly suitable for real-time data processing applications where quick and efficient data handling is crucial. With event-based hardware that accelerates processing, the Akida 2nd Generation is designed for scalability, providing flexible solutions across a wide range of AI-driven tasks.
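For a concrete sense of what one-shot learning means in practice, the sketch below implements a prototype-based classifier in plain NumPy: a single labeled example per class is enough to start classifying, and each further example refines the class prototype. This is a generic illustration of the technique, not BrainChip's MetaTF API; all names are illustrative.

```python
import numpy as np

# Prototype-based one-shot learning: each class is a running mean of its
# example embeddings; prediction picks the nearest prototype.
class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}   # label -> running mean embedding
        self.counts = {}

    def learn(self, label, embedding):
        """Absorb one labeled example; a single call is one-shot learning."""
        n = self.counts.get(label, 0)
        proto = self.prototypes.get(label, np.zeros_like(embedding))
        self.prototypes[label] = (proto * n + embedding) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, embedding):
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l] - embedding))

clf = PrototypeClassifier()
clf.learn("keyword_on", np.random.rand(64))    # one example per class suffices
clf.learn("keyword_off", np.random.rand(64))
print(clf.predict(np.random.rand(64)))
```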
The Metis AIPU PCIe AI Accelerator Card by Axelera AI is designed for developers seeking top-tier performance in vision applications. Powered by a single Metis AIPU, this PCIe card delivers up to 214 TOPS, handling demanding AI tasks with ease. It is well-suited for high-performance AI inference, featuring two configurations: 4GB and 16GB memory options. The card benefits from the Voyager SDK, which enhances the developer experience by simplifying the deployment of applications and extending the card's capabilities. This accelerator PCIe card is engineered to run multiple AI models and support numerous parallel neural networks, enabling significant processing power for advanced AI applications. The Metis PCIe card performs at an industry-leading level, achieving up to 3,200 frames per second for ResNet-50 tasks and offering exceptional scalability. This makes it an excellent choice for applications demanding high throughput and low latency, particularly in computer vision fields.
Panmnesia's CXL 3.1 Switch is an integral component designed to facilitate high-speed, low-latency data transfers across multiple connected devices. It is architected to manage resource allocation seamlessly in AI and high-performance computing environments, supporting broad bandwidth, robust data throughput, and efficient power consumption, creating a cohesive foundation for scalable AI infrastructures. Its integration with advanced protocols ensures high system compatibility.
Universal Chiplet Interconnect Express (UCIe) is a cutting-edge technology designed to enhance chiplet-based system integration. This interconnect solution supports seamless data exchange across heterogeneous chiplets, promoting a highly efficient and scalable architecture. UCIe is expected to revolutionize system efficiency by enabling a smoother and more integrated communication framework. By employing this technology, developers can leverage its superior power efficiency and adaptability to different mainstream technology nodes. This makes it possible to construct complex systems with reduced energy consumption while ensuring performance integrity. UCIe plays a pivotal role in accelerating the transition to the chiplet paradigm, ensuring systems are not only up to current standards but also adaptable for future advancements. Its robust framework facilitates improved interconnect strategies, crucial for next-generation semiconductor products.
The Yitian 710 Processor is T-Head's flagship Arm-based server chip and represents the pinnacle of their technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance. The processor is built using a 2.5D packaging method, integrating two dies and containing roughly 60 billion transistors. The core of the Yitian 710 consists of 128 high-performance Armv9 CPU cores, each with advanced memory configurations that streamline instruction and data caching. Each CPU integrates 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, supplemented by a 128MB system-level cache on the chip. To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281GB/s. Its I/O subsystem is formidable, featuring 96 PCIe 5.0 lanes with aggregate bidirectional bandwidth of up to 768GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI operations.
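As a sanity check, the quoted bandwidth figures can be reproduced from first principles, assuming DDR5-4400 (a speed grade not stated above) and raw PCIe 5.0 lane rates:

```python
# Back-of-envelope check of the quoted Yitian 710 bandwidth figures.
# Assumptions: DDR5-4400 with 64-bit channels, and raw PCIe 5.0 lane rates.
channels, mt_per_s, bytes_per_beat = 8, 4400, 8
ddr_gbps = channels * mt_per_s * bytes_per_beat / 1000   # MB/s -> GB/s
print(f"DDR5 peak: {ddr_gbps:.1f} GB/s")    # 281.6 GB/s ~ the quoted 281 GB/s

lanes, gb_per_lane_per_dir = 96, 4          # PCIe 5.0 raw ~4 GB/s/lane/direction
print(f"PCIe 5.0: {lanes * gb_per_lane_per_dir * 2} GB/s both directions")  # 768
```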
xcore.ai stands as a cutting-edge processor that brings sophisticated intelligence, connectivity, and computation capabilities to a broad range of smart products. Designed to deliver optimal performance for applications in consumer electronics, industrial control, and automotive markets, it efficiently handles complex processing tasks with low power consumption and rapid execution speeds. This processor facilitates seamless integration of AI capabilities, enhancing voice processing, audio interfacing, and real-time analytics functions. It supports various interfacing options to accommodate different peripheral and sensor connections, thus providing flexibility in design and deployment across multiple platforms. Moreover, the xcore.ai ensures robust performance in environments requiring precise control and high data throughput. Its compatibility with a wide array of software tools and libraries enables developers to swiftly create and iterate applications, reducing the time-to-market and optimizing the design workflows.
The Tianqiao-70 is a low-power RISC-V CPU designed for commercial-grade applications where power efficiency is paramount. Suitable for mobile and desktop applications, artificial intelligence, as well as various other technology sectors, this processor excels in maintaining high performance while minimizing power consumption. Its design offers great adaptability to meet the requirements of different operational environments.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, makes GenAI v1 a strong fit for sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
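The "tokens per unit of memory bandwidth" framing follows from a standard rule of thumb for memory-bound LLM decoding, sketched below with illustrative numbers. The model size and bandwidth here are assumptions, not RaiderChip specifications:

```python
# Rule of thumb for memory-bandwidth-bound LLM decoding: every generated token
# streams the full weight set, so tokens/s <= bandwidth / model footprint.
params_b = 3.0                               # e.g., Llama 3.2 3B (assumed)
bits_per_weight = 4                          # 4-bit quantization
model_gb = params_b * bits_per_weight / 8    # ~1.5 GB of weights
lpddr4_bw_gbps = 12.8                        # assumed effective LPDDR4 bandwidth
print(f"upper bound: ~{lpddr4_bw_gbps / model_gb:.1f} tokens/s")   # ~8.5
```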
Chimera GPNPU provides a groundbreaking architecture, melding the efficiency of neural processing units with the flexibility and programmability of processors. It supports a full range of AI and machine learning workloads autonomously, eliminating the need for supplementary CPUs or GPUs. The processor is future-ready, equipped to handle new and emerging AI models with ease, thanks to its C++ programmability. What makes Chimera stand out is its ability to manage a diverse array of workloads within a single processor framework that combines matrix, vector, and scalar operations. This harmonization ensures maximum performance for applications across various market sectors, such as automotive, mobile devices, and network edge systems. These capabilities are designed to streamline the AI development process and facilitate high-performance inference tasks, crucial for modern device ecosystems. The architecture is fully synthesizable, allowing it to be implemented in any process technology, from current to advanced nodes, adjusting to desired performance targets. The adoption of a hybrid von Neumann and 2D SIMD matrix design supports a broad suite of DSP operations, providing a comprehensive toolkit for complex graph and AI-related processing.
The Jotunn 8 is heralded as the world's most efficient AI inference chip, designed to maximize AI model deployment with lightning-fast speeds and scalability. This powerhouse is crafted to efficiently operate within modern data centers, balancing critical factors such as high throughput, low latency, and optimization of power use, all while maintaining a sustainable infrastructure. With the Jotunn 8, AI investments reach their full potential through high-performance inference solutions that significantly reduce operational costs while committing to environmental sustainability. Its ultra-low latency feature is crucial for real-time applications such as chatbots and fraud detection systems. Not only does it deliver high throughput needed for demanding services like recommendation engines, but it also proves cost-efficient, aiming to lower the cost per inference crucial for businesses operating at a large scale. Additionally, the Jotunn 8 boasts performance per watt efficiency, a major factor considering that power is a significant operational expense and a driver of the carbon footprint. By implementing the Jotunn 8, businesses can ensure their AI models deliver maximum impact while staying competitive in the growing real-time AI services market. This chip lays down a new foundation for scalable AI, enabling organizations to optimize their infrastructures without compromising on performance.
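The metrics named above compose in a straightforward way; the sketch below shows the arithmetic with placeholder inputs. None of these figures are published Jotunn 8 numbers:

```python
# Illustrative cost-per-inference and perf-per-watt arithmetic.
# Every input below is an assumption, not a Jotunn 8 specification.
throughput = 50_000                 # inferences per second
power_kw = 0.5                      # sustained board power
price_kwh = 0.12                    # USD per kWh
energy_cost = power_kw * price_kwh / (throughput * 3600)
print(f"energy cost: ${energy_cost:.2e} per inference")
print(f"perf/W: {throughput / (power_kw * 1000):.0f} inferences/s per watt")
```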
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The SAKURA-II AI Accelerator from EdgeCortix is a sophisticated solution designed to propel generative AI to new frontiers with impressive energy efficiency. This advanced accelerator provides strong performance with high flexibility for a wide variety of applications, leveraging EdgeCortix's dedicated Dynamic Neural Accelerator architecture. SAKURA-II is optimized for real-time, low-latency AI inference on the edge, tackling demanding generative AI tasks efficiently in constrained environments. The accelerator delivers up to 60 TOPS (tera operations per second) of INT8 performance, allowing it to run demanding models such as Llama 2 and Stable Diffusion effectively. It supports applications across vision, language, audio, and beyond by utilizing robust DRAM capabilities and enhanced data throughput. This allows it to outperform other solutions while maintaining a low power profile, typically around 8 watts. Designed for integration into small silicon spaces, SAKURA-II caters to the needs of highly efficient AI models, providing dynamic capabilities to meet the stringent requirements of next-gen applications. The SAKURA-II AI Accelerator thus stands out as a top choice for developers seeking seamless deployment of cutting-edge AI applications at the edge, underscoring EdgeCortix's leadership in energy-efficient AI processing.
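Taken together, the quoted figures imply an efficiency of roughly 7.5 TOPS/W:

```python
# Efficiency implied by the quoted specs: 60 TOPS INT8 at ~8 W.
tops, watts = 60, 8
print(f"{tops / watts:.1f} TOPS/W INT8")   # 7.5 TOPS/W
```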
The Talamo Software Development Kit (SDK) is a dynamic and powerful environment tailored for the creation and deployment of advanced neuromorphic AI applications. By integrating seamlessly with PyTorch, Talamo provides a familiar and effective workflow for developers, enabling the construction of robust AI models that exploit the capabilities of spiking neural processors. This SDK extends PyTorch's standard functionalities, offering the necessary infrastructure for building and training spiking neural networks (SNNs) with ease. Talamo's architecture allows developers without specialized knowledge in SNNs to begin building applications that are optimized for neuromorphic processors. It provides compiled models that are specifically mapped to the versatile computing architecture of the Spiking Neural Processor. Furthermore, an architecture simulator offered by Talamo facilitates rapid hardware emulation, enabling quicker validation and development cycles for new applications. Designed to support developers in creating end-to-end application pipelines, Talamo allows the integration of custom functions and neural networks within a comprehensive framework. By removing the need for deep expertise in neuromorphic computing, Talamo empowers a larger population of developers to harness brain-inspired AI models, fostering innovation and accelerating the deployment of intelligent systems across various sectors.
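As a flavor of the kind of spiking building block such an SDK compiles, the sketch below implements a leaky-integrate-and-fire (LIF) layer in plain PyTorch. It is a generic illustration of how SNN layers are expressed in a PyTorch workflow, not Talamo's actual API:

```python
import torch

# Generic leaky-integrate-and-fire layer: membrane potential integrates
# weighted input spikes, fires when it crosses a threshold, then resets.
class LIFLayer(torch.nn.Module):
    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, out_features)
        self.decay, self.threshold = decay, threshold

    def forward(self, spikes_t):              # spikes_t: (time, batch, in)
        v = torch.zeros(spikes_t.shape[1], self.fc.out_features)
        out = []
        for x in spikes_t:                    # iterate over time steps
            v = self.decay * v + self.fc(x)   # leaky integration
            s = (v >= self.threshold).float() # spike where threshold crossed
            v = v * (1 - s)                   # reset fired neurons
            out.append(s)
        return torch.stack(out)

layer = LIFLayer(16, 4)
print(layer(torch.rand(10, 2, 16)).shape)     # torch.Size([10, 2, 4])
```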
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The SiFive Intelligence X280 is designed to address the burgeoning needs of AI and machine learning at the edge. Emphasizing a software-first methodology, this family of processors is crafted to offer scalable vector and matrix compute capabilities. By integrating broad vector processing features and high-bandwidth interfaces, it can adapt to the ever-evolving landscape of AI workloads, providing both high performance and efficient scalability. Built on the RISC-V foundation, the X280 features comprehensive vector compute engines that cater to modern AI demands, making it a powerful tool for edge computing applications where space and energy efficiency are critical. Its versatility allows it to seamlessly manage diverse AI tasks, from low-latency inferences to complex machine learning models, thanks to its support for RISC-V Vector Extensions (RVV). The X280 family is particularly robust for applications requiring rapid AI deployment and adaptation like IoT devices and smart infrastructure. Through extensive compatibility with machine learning frameworks such as TensorFlow Lite, it ensures ease of deployment, enhanced by its focus on energy-efficient inference solutions and support for legacy systems, making it a comprehensive solution for future AI technologies.
Dillon Engineering's 2D FFT core delivers robust performance for transforming two-dimensional data sets into the frequency domain with high precision and efficiency. By leveraging both internal and external memory between dual FFT engines, this core optimizes the data processing pipeline, ensuring fast and reliable results even as data complexity increases. Ideal for applications that handle image processing and data matrix transformations, the 2D FFT core navigates data bandwidth constraints with ease, maintaining throughput even for larger data sets. This core's design maximizes data accuracy and minimizes processing delays, crucial for applications requiring precise image recognition and analysis. Thanks to the adaptable nature provided by Dillon's ParaCore Architect, this IP core is easily customized for various FPGA and ASIC environments. Its flexibility and robust processing capabilities make the 2D FFT core a key component for cutting-edge applications in fields where data translation and processing are critical.
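The dual-engine arrangement reflects the standard row-column decomposition of the 2D FFT: transform the rows, then the columns, with memory buffering the intermediate matrix between the two passes. A NumPy sketch of the decomposition:

```python
import numpy as np

# Row-column decomposition: a 2D FFT is 1D FFTs over rows, then over columns.
# A dual-engine core pipelines these two passes, with memory holding the
# intermediate row-transformed matrix.
x = np.random.rand(64, 64)
rows = np.fft.fft(x, axis=1)               # first engine: per-row FFTs
full = np.fft.fft(rows, axis=0)            # second engine: per-column FFTs
assert np.allclose(full, np.fft.fft2(x))   # matches the direct 2D FFT
```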
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnection and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
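A fixed-point complex multiply-accumulate of the kind the eSi-3200 accelerates can be illustrated with Q15 operands and a wide accumulator; the Python below simulates the integer arithmetic (an illustration of the technique, not vendor code). A 64-bit accumulator lets many Q30 products be summed before any rounding or saturation back to Q15:

```python
# Q15 fixed-point complex multiply-accumulate, simulated with Python integers.
def q15(x):
    """Convert a float in [-1, 1) to a Q15 integer."""
    return int(round(x * 32768))

def cmac(acc, ar, ai, br, bi):
    """Accumulate a * b for complex Q15 operands; products are Q30."""
    return (acc[0] + ar * br - ai * bi,    # real part
            acc[1] + ar * bi + ai * br)    # imaginary part

acc = (0, 0)
acc = cmac(acc, q15(0.5), q15(0.25), q15(-0.3), q15(0.8))
print(acc[0] / (32768 * 32768))   # ~ -0.35 = 0.5*(-0.3) - 0.25*0.8
```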
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic is engineered to deliver exceptional energy efficiency while maintaining high performance. This core is specifically designed to operate at 1GHz while consuming a mere 10mW of power, making it ideal for today's power-conscious applications. Utilizing advanced design techniques, this processor achieves its high performance at lower voltages, ensuring reduced power consumption without sacrificing speed. Constructed with a focus on optimizing processing capabilities, this RISC-V core is built to cater to demanding environments where energy efficiency is critical. Whether used as a standalone processor or integrated into larger systems, its low power requirements and robust performance make it highly versatile. This core also supports scalable processing with its architecture, accommodating a broad spectrum of applications from IoT devices to performance-intensive computing tasks, aligning with industry standards for modern electronic products.
ISELED is an innovative technology that revolutionizes automotive interior lighting by integrating all necessary hardware functions for fully software-defined lighting. It features smart RGB LEDs which are pre-calibrated by manufacturers, ensuring consistent color temperature and exceptional lighting quality. This technology simplifies the integration process by allowing users to send simple digital commands to control the color output of the LEDs without needing additional complex setups for color mixing and temperature compensation. ISELED is equipped to handle synchronous lighting displays and dynamic effects across vehicle interiors. The connectivity aspect of ISELED is enhanced by its ILaS protocol, allowing direct cable connections between lighting systems and enabling efficient power conversion. This makes it suitable for applications requiring resilience in communication, despite potential power failures on the board. With capabilities for bridging data over Ethernet, ISELED supports centralized control and synchronization from a vehicle's ECU.
The GenAI v1-Q from RaiderChip focuses on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no dependence on external networks or cloud services. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
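The quoted ~75% memory reduction follows directly from the weight width: FP16 stores 16 bits per weight, while a flat 4-bit format stores 4 (Q4_K averages roughly 4.5 bits per weight once its scaling metadata is counted; the exact overhead varies by model):

```python
# Where the ~75% memory-footprint reduction comes from: weight width alone.
fp16_bits, q4k_bits = 16, 4.5   # ~4.5 bits/weight for Q4_K is approximate
print(f"~{1 - q4k_bits / fp16_bits:.0%} smaller")  # ~72%; flat 4-bit gives 75%
```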
The Network Protocol Accelerator Platform (NPAP) is engineered to accelerate network protocol processing and offload tasks at speeds reaching up to 100 Gbps when implemented on FPGAs, and beyond in ASICs. This platform offers patented and patent-pending technologies that provide significant performance boosts, aiding in efficient network management. With its support for multiple protocols like TCP, UDP, and IP, it meets the demands of modern networking environments effectively, ensuring low-latency, high-throughput solutions for critical infrastructure. NPAP facilitates the construction of function accelerator cards (FACs) that support 10/25/50/100G speeds, effectively handling intense data workloads. These capabilities make NPAP an indispensable tool for businesses that must process vast amounts of data with precision and speed, greatly enhancing network operations. Moreover, the NPAP emphasizes flexibility by allowing integration with a variety of network setups. Its capability to streamline data transfer with minimal delay supports modern computational demands, paving the way for optimized digital communication in diverse industries.
The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
The Spiking Neural Processor T1 represents a significant leap in neuromorphic microcontroller technology, blending ultra-low power consumption with advanced spiking neural network capabilities. This microcontroller stands as a complete solution for processing sensor data with unprecedented efficiency and speed, bringing intelligence directly to the sensor. Incorporating a nimble RISC-V processor core alongside its spiking neural network engine, the T1 is engineered for seamless integration into next-generation AI applications. Within a tightly constrained power envelope, it excels at signal processing tasks that are crucial for battery-operated, latency-sensitive devices. The T1's architecture allows for fast, sub-1mW pattern recognition, enabling real-time sensory data processing akin to the human brain's capabilities. This microcontroller facilitates complex event-driven processing with remarkable efficiency, reducing the burden on application processors by offloading sensor data processing tasks. It is an enabler of groundbreaking developments in wearables, ambient intelligence, and smart devices, particularly in scenarios where power and response time are critical constraints. With flexible interface support, including QSPI, I2C, UART, and more, the T1 is designed for easy integration into existing systems. Its compact package size further enhances its suitability for embedded applications, while its comprehensive Evaluation Kit (EVK) supports developers in accelerating application development. The EVK provides extensive performance profiling tools, enabling the exploration of the T1's multifaceted processing capabilities. Overall, the T1 stands at the forefront of bringing brain-inspired intelligence to the edge, setting a new standard for smart sensor technology.
This H.264 FPGA Encoder and CODEC Micro Footprint Core is engineered to achieve minimal latency and compact size when deployed in FPGA environments. It is customizable and ITAR compliant, providing robust 1080p60 H.264 Baseline support on a single core. Known for its remarkable speed and small footprint, this core adapts to various configurations, including complete H.264 encoders and I-Frame Only variations, supporting custom pixel depths and unique resolutions. The core's design focuses on reducing latency to a mere 1 millisecond at 1080p30, setting a high industry standard for performance. Flexibility in deployment allows this core to meet bespoke requirements, offering significant value for customer-specific applications. It stands as a versatile solution for applications demanding high-speed video processing while maintaining compliance with industry standards. Supporting a variety of FPGA platforms, the core is especially valuable in environments where space and power constraints are crucial. Its adaptability, combined with A2e's integration capabilities, ensures seamless incorporation into existing systems, bolstering performance and development efficiency.
The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.
The RISC-V Core IP by AheadComputing Inc. exemplifies cutting-edge processor technology, particularly in the realm of 64-bit application processing. Designed for superior IPC (Instructions Per Cycle) performance, this core is engineered to enhance per-core computing capabilities, catering to high-performance computing needs. It stands as a testament to AheadComputing's commitment to achieving the pinnacle of processor speed, setting new industry standards. This processor core is instrumental for various applications requiring robust processing power. It allows for seamless performance in a multitude of environments, whether in consumer electronics, enterprise solutions, or advanced computational fields. The innovation behind this IP reflects the deep expertise and forward-thinking approach of AheadComputing's experienced team. Furthermore, the RISC-V Core IP supports diverse computing needs by enabling adaptable and scalable solutions. AheadComputing leverages the open-source RISC-V architecture to offer customizable computing power, ensuring that their solutions are both versatile and future-ready. This IP is aimed at delivering efficiency and power optimization, supporting sophisticated applications with precision.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle powertrain control and battery management. These two members are supported by libraries for Eigen linear algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
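The quoted TOPS figures are roughly consistent with the usual throughput identity, TOPS = MACs × 2 operations × clock rate, assuming a clock near 1 GHz (the clock rate is an assumption here, not a stated spec):

```python
# Sanity check: each MAC performs a multiply and an add per cycle, so
# peak TOPS ~ MACs * 2 * clock. A ~1 GHz clock is assumed, not quoted.
for name, macs, quoted_tops in [("Ceva-SP100", 128, 0.2),
                                ("Ceva-SP1000", 1024, 2.0)]:
    tops = macs * 2 * 1e9 / 1e12
    print(f"{name}: {tops:.2f} TOPS computed vs {quoted_tops} quoted")
```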
Dyumnin's RISC-V SoC is a versatile platform centered around a 64-bit quad-core server-class RISC-V CPU, offering extensive subsystems including AI/ML, automotive, multimedia, memory, cryptographic, and communication systems. The test chip can be evaluated on an FPGA, ensuring adaptability and extensive testing possibilities. The AI/ML subsystem is particularly noteworthy for its custom CPU configuration paired with a tensor flow unit that significantly accelerates AI operations. This adaptability lends itself to innovations in artificial intelligence, setting it apart in the competitive landscape of processors. Additionally, the automotive subsystem caters robustly to the needs of the automotive sector with CAN, CAN-FD, and SafeSPI IPs, all designed to enhance connectivity within vehicles. Moreover, the multimedia subsystem boasts a complete range of IPs to support HDMI, DisplayPort, MIPI, and more, facilitating rich audio and visual experiences across devices.
The SiFive Performance family is at the forefront of providing maximum throughput and performance across a spectrum of computing requirements, from datacenter workloads to consumer applications. These 64-bit, out-of-order cores incorporate advanced vector processing capabilities up to 256-bit, supporting a diversity of workloads including AI. The architecture spans three-wide to six-wide out-of-order cores, optimized for either dedicated vector engines or a balanced energy-efficient setup, making it a versatile choice for high-performance needs. Engineered for modern AI workloads, the Performance series offers robust compute density and performance efficiency, ideal for both mobile and stationary infrastructure. Customers can take advantage of flexible configuration options to balance power and area constraints, thanks to SiFive's state-of-the-art RISC-V solutions. The family's cores, such as the P400, P600, and P800 Series, offer scalability from low-power tasks to demanding datacenter applications. The series is particularly adept at handling AI workloads, making it suitable for applications that demand high-speed data processing and analysis, such as Internet of Things (IoT) devices, network infrastructure, and high-volume consumer electronics. Customers benefit from the ability to combine various performance cores into a unified, high-performance CPU optimized for minimal power consumption, making it possible to design systems that balance performance and efficiency.
The Tyr family of processors brings the cutting-edge power of Edge AI to the forefront, emphasizing real-time data processing directly at its point of origin. This capability facilitates instant insights with reduced latency and enhanced privacy, as it limits the reliance on cloud-based processing. Ideal for settings such as autonomous vehicles and smart factories, Tyr is engineered to operate faster and more securely, with data-center-class performance in a compact, ultra-efficient design. The processors within the Tyr family are purpose-built to support local processing, which saves bandwidth and protects sensitive data, making them suitable for real-world applications like autonomous driving and factory automation. Edge AI is further distinguished by its ability to provide immediate analysis and decision-making capabilities. Whether it's enabling autonomous vehicles to understand their environment for safe navigation or facilitating real-time industrial automation, the Tyr processors excel in delivering the low-latency, high-compute performance essential for mission-critical operations. The local data processing capabilities inherent in the Tyr line not only cut down on bandwidth costs but also contribute to compliance with stringent privacy standards. In addition to performance and privacy benefits, the Tyr family emphasizes sustainability. By minimizing cloud dependency, these processors significantly reduce operational costs and the carbon footprint, aligning with the growing demand for greener AI solutions. This combination of performance, security, and sustainability makes Tyr processors a cornerstone in advancing industrial and consumer applications using Edge AI.
Tensix Neo represents a transformative leap in enhancing AI computational efficiency, specifically designed to empower developers working on sophisticated AI networks and applications. Built around a Network-on-Chip (NoC) framework, Tensix Neo optimizes performance-per-watt, a critical factor for AI processing. It supports multiple precision formats to adapt to diverse AI workloads efficiently, allowing seamless integration with existing models and enabling scalability. Careful design ensures that Tensix Neo delivers consistent high performance across varied AI tasks, from image recognition algorithms to advanced analytics, making it an essential component in the AI development toolkit. Its capability to connect with an expanding library of AI models allows developers to leverage its full potential across multiple cutting-edge applications. This synthesis of performance and efficiency makes Tensix Neo a vital player in fields requiring high adaptability and rapid processing, such as autonomous vehicles, smart devices, and dynamic data centers. Moreover, the compatibility of Tensix Neo with Tenstorrent's other solutions underscores its importance as a flexible and powerful processing core. Designed with the contemporary developer in mind, Tensix Neo integrates seamlessly with open-source resources and tools, ensuring that developers have the support and flexibility needed to meet the challenges of tomorrow's AI solutions.
The UltraLong FFT core from Dillon Engineering offers exceptional performance for applications requiring extensive sequence lengths. This core utilizes external memory in coordination with dual FFT engines to facilitate high throughput. While it typically hinges on memory bandwidth for its speed, the UltraLong FFT effectively processes lengthy data sequences in a streamlined manner. This core is characterized by its medium to high-speed capabilities and is an excellent choice for applications where external memory can be leveraged to support processing requirements. Its architecture allows for flexible design implementation, ensuring seamless integration with existing systems, and is particularly well-suited for advanced signal processing applications in both FPGA and ASIC environments. With Dillon's ParaCore Architect tool, customization and re-targeting of the IP core towards any technology are straightforward, offering maximum adaptability. This FFT solution stands out for its capacity to manage complex data tasks, making it an ideal fit for cutting-edge technologies demanding extensive data length processing efficiency.
Functioning as a comprehensive cross-correlator, the XCM_64X64 facilitates efficient and precise signal processing required in synthetic radar receivers and advanced spectrometers. Designed on IBM's 45nm SOI CMOS technology, it supports ultra-low power operation at about 1.5W for the entire array, with a sampling performance of 1GSps across a bandwidth of 10MHz to 500MHz. The ASIC is engineered to manage high-throughput data channels, a vital component for high-energy physics and space observation instruments.
The XCM_64X64_A is a powerful array designed for cross-correlation operations, integrating 128 ADCs each capable of 1GSps. Targeted at high-precision synthetic radar and radiometer systems, this ASIC delivers ultra-low power consumption around 0.5W, ensuring efficient performance over a wide bandwidth range from 10MHz to 500MHz. Built on IBM's 45nm SOI CMOS technology, it forms a critical component in systems requiring rapid data sampling and intricate signal processing, all executed with high accuracy, making it ideal for airborne and space-based applications.
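At its core, an N×N cross-correlator computes the matrix of pairwise correlations among all input channels (the "visibility matrix" in radio astronomy). A toy-scale NumPy sketch of the computation both XCM parts perform in dedicated hardware:

```python
import numpy as np

# Pairwise cross-correlation of all input channels, at toy scale.
# The XCM parts compute this for 64 inputs, sampled at 1 GSps, in hardware.
n_inputs, n_samples = 4, 1024
x = np.random.randn(n_inputs, n_samples)
corr = (x @ x.T) / n_samples     # entry (i, j) correlates channel i with j
print(corr.shape)                # (4, 4); 64 inputs give a 64x64 matrix
```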
Designed to cater to the needs of edge computing, the Neural Network Accelerator by Gyrus AI is a powerhouse of performance and efficiency. With a focus on graph processing capabilities, this product excels in implementing neural networks by providing native graph processing. The accelerator achieves 30 TOPS/W and completes workloads in 10 to 30 times fewer clock cycles than traditional designs. Its low memory usage keeps power consumption 10-20 times lower, and its compact layout yields an 8-10 times smaller die area while maintaining utilization rates of over 80% across various model structures. These design optimizations make it an ideal choice for applications requiring a compact, high-performance solution capable of delivering fast computations without compromising on energy efficiency. The Neural Network Accelerator is a testament to Gyrus AI's commitment to enabling smarter edge computing solutions. Additionally, Gyrus AI has paired this technology with software tools that facilitate the execution of neural networks on the IP, simplifying integration and use in various applications. This seamless integration is part of their broader strategy to augment human intelligence, providing solutions that enhance and expand the capabilities of AI-driven technologies across industries.
Nuclei's RISC-V CPU IP UX Class is a cutting-edge solution designed for 64-bit computing, particularly in data center operations, network systems, and Linux environments. Engineered with Verilog, the UX Class boasts outstanding readability and is tailored for effective debugging and PPA optimization, thus streamlining its deployment in performance-centric applications. Its comprehensive configurability allows for precise system incorporation by selecting features pertinent to specific operational needs. This processor IP is fortified with extensive RISC-V extension support, enhancing its applicability in various domains. Noteworthy are its security features, including TEE support and a robust physical security package, critical for maintaining information security integrity. Additionally, its alignment with safety protocols like ASIL-B and ASIL-D underscores its reliability in environments that demand stringent safety measures. The UX Class represents Nuclei's flagship offering for enterprises requiring powerful, flexible, and secure processing capabilities. By providing essential integration into Linux and network-driven systems, the UX Class solidifies its place as a cornerstone for modern, high-performance computing infrastructure.
Our SoC Platform is designed to accelerate the development of custom silicon products. Built with domain-specific architecture, it provides rapid and streamlined SoC design using silicon-proven IPs. The platform offers lower costs and reduced risks associated with prototyping and manufacturing, ensuring a quicker turnaround. Users benefit from pre-configured and verified IP pools, enabling faster bring-up of hardware and software. Designed for flexible applications, it supports a range of use cases from AI inference to IoT, helping companies achieve up to 50% faster time-to-market compared to industry standards.
AccelerComm's Software-Defined High PHY is a flexible solution tailored to the ARM processor framework, capable of fulfilling the diverse requirements of modern telecommunications infrastructure. The technology is optimized to function either with or without hardware acceleration, depending on the target application's power and capacity requirements. The implementation of Software-Defined High PHY signifies a leap in configuring PHY layers, facilitating adaptation to the varying performance and efficiency mandates of different hardware platforms. The technology supports seamless transitions across platforms, making it applicable to a spectrum of use cases and harmonizing with both flexible software protocols and established hardware standards. By uniting traditional hardware PHY layers with modern software innovations, this solution propels network performance while reducing latency, enhancing data throughput, and minimizing overall system power consumption. This adaptability is vital for enterprises aiming to meet the dynamic demands for quality and reliability in wireless communication networks.
The Codasip L-Series DSP Core offers specialized features tailored for digital signal processing applications. It is designed to efficiently handle high data throughput and complex algorithms, making it ideal for applications in telecommunications, multimedia processing, and advanced consumer electronics. With its high configurability, the L-Series can be customized to optimize processing power, ensuring that specific application needs are met with precision. One of the key advantages of this core is its ability to be finely tuned to deliver optimal performance for signal processing tasks. This includes configurable instruction sets that align precisely with the unique requirements of DSP applications. The core’s design ensures it can deliver top-tier performance while maintaining energy efficiency, which is critical for devices that operate in power-sensitive environments. The L-Series DSP Core is built on Codasip's proven processor design methodologies, integrating seamlessly into existing systems while providing a platform for developers to expand and innovate. By offering tools for easy customization within defined parameters, Codasip ensures that users can achieve the best possible outcomes for their DSP needs efficiently and swiftly.
SEMIFIVE's AIoT Platform is crafted to meet the evolving needs of the AI and IoT convergence. Aimed at enabling edge computing and connecting smart devices, this platform seamlessly integrates AI processing with IoT capabilities. It is ideal for developing efficient and responsive IoT solutions that require sophisticated AI integration. By utilizing advanced process nodes, the platform ensures that the solutions are not only powerful but also energy-efficient, supporting innovations in smart home technology, connected vehicles, and industrial IoT applications.
The AI Inference Platform is a specialized solution tailored for high-performance AI applications. This platform integrates state-of-the-art silicon technologies and AI-optimized IPs to facilitate efficient inference processing. It supports accelerated computational tasks with reduced latency, which is essential for AI-driven solutions such as neural network processing and machine learning deployments. SEMIFIVE's AI Inference Platform ensures that businesses can leverage AI capabilities efficiently, delivering responsive and powerful AI applications at reduced operational costs.
The AON1100 is a leading AI chip for voice and sensor applications. Known for its extraordinary power efficiency, it consumes less than 260μW while maintaining 90% accuracy in sub-0dB signal-to-noise ratio environments. Designed for constantly operating devices, the chip's high-precision processing suits it to always-on technologies such as smart homes and automotive systems.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
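The value of the QUIRE accumulator is easiest to see numerically: it accumulates dot products exactly and rounds once at the end, instead of rounding after every addition. The sketch below simulates that contrast, with exact rationals standing in for the quire (an illustration of the idea, not Posit arithmetic itself):

```python
import numpy as np
from fractions import Fraction

# Conventional float32 accumulation rounds after every step; a quire-style
# accumulator keeps the running sum exact and rounds only at the end.
rng = np.random.default_rng(0)
a = rng.standard_normal(50_000).astype(np.float32)
b = rng.standard_normal(50_000).astype(np.float32)

acc = np.float32(0)
for x, y in zip(a, b):                     # round after every accumulate
    acc = np.float32(acc + np.float32(x * y))

exact = sum(Fraction(float(x)) * Fraction(float(y)) for x, y in zip(a, b))
print(f"stepwise-rounded: {acc}")
print(f"exact-then-round: {np.float32(float(exact))}")
```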
The Vega eFPGA is a flexible programmable solution crafted to enhance SoC designs with substantial ease and efficiency. This IP is designed to offer multiple advantages such as increased performance, reduced costs, secure IP handling, and ease of integration. The Vega eFPGA boasts a versatile architecture allowing for tailored configurations to suit varying application requirements. The IP includes configurable tiles such as CLB (Configurable Logic Block), BRAM (Block RAM), and DSP (Digital Signal Processing) units. The CLB tile includes eight 6-input lookup tables with dual outputs, plus an optional fast adder with a carry chain. The BRAM supports 36Kb dual-port memory and offers flexibility for different configurations, while the DSP component is designed for complex arithmetic functions with its 18x20 multipliers and a wide 64-bit accumulator. Focused on enabling easy system design and acceleration, the Vega eFPGA ensures seamless integration and verification into any SoC design. It is backed by a robust EDA toolset and features that allow significant customization, making it adaptable to any semiconductor fabrication process. This flexibility and technological robustness makes the Vega eFPGA a standout choice for developing innovative and complex programmable logic solutions.