In the realm of semiconductor IP, the Multiprocessor and Digital Signal Processor (DSP) category plays a crucial role in enhancing the processing performance and efficiency of a vast array of modern electronic devices. Semiconductor IPs in this category are designed to support complex computational tasks, enabling sophisticated functionalities in consumer electronics, automotive systems, telecommunications, and more. With the growing need for high-performance processing in a compact and energy-efficient form, multiprocessor and DSP IPs have become integral to product development across industries.
Multiprocessor IPs are tailored to provide parallel processing capabilities, which significantly boost the computational power required for intensive applications. By employing multiple processing cores, these IPs allow for the concurrent execution of multiple tasks, leading to faster data processing and improved system performance. This is especially vital in applications such as gaming consoles, smartphones, and advanced driver-assistance systems (ADAS) in vehicles, where seamless and rapid processing is essential.
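A minimal sketch of this task-level parallelism, using Python's standard library purely for illustration (a real multiprocessor IP runs such workloads on hardware cores, not OS processes; the four-worker budget and per-frame workload are assumptions):

```python
# Illustrative task-level parallelism: independent workloads run concurrently,
# analogous to tasks dispatched across the cores of a multiprocessor IP.
from multiprocessing import Pool

def process_frame(frame_id: int) -> int:
    # Stand-in for a per-frame workload (e.g., filtering one camera frame).
    return sum(i * i for i in range(10_000)) + frame_id

if __name__ == "__main__":
    with Pool(processes=4) as pool:       # four workers, analogous to four cores
        results = pool.map(process_frame, range(8))
    print(len(results), "frames processed in parallel")
```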
Digital Signal Processors are specialized semiconductor IPs used to perform mathematical operations on signals, allowing for efficient processing of audio, video, and other types of data streams. DSPs are indispensable in applications where real-time data processing is critical, such as noise cancellation in audio devices, image processing in cameras, and signal modulation in communication systems. By providing dedicated hardware structures optimized for these tasks, DSP IPs deliver superior performance and lower power consumption compared to general-purpose processors.
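As a toy illustration of the kind of operation DSP hardware accelerates, the NumPy sketch below applies a 16-tap FIR low-pass filter, one multiply-accumulate (MAC) per tap per sample; the signal, sample rate, and tap values are arbitrary assumptions:

```python
# FIR filtering is the canonical DSP workload: each output sample is a short
# dot product (MACs) that DSP cores implement in dedicated hardware.
import numpy as np

fs = 48_000                                   # assumed audio sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 15_000 * t)

taps = np.ones(16) / 16                       # crude 16-tap moving-average low-pass
y = np.convolve(x, taps, mode="same")         # 16 MACs per output sample
print(y[:4])
```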
Products in the multiprocessor and DSP semiconductor IP category range from core subsystems and configurable processors to specialized accelerators and integrated solutions that combine processing elements with other essential components. These IPs are designed to help developers create cutting-edge solutions that meet the demands of today’s technology-driven world, offering flexibility and scalability to adapt to different performance and power requirements. As technology evolves, the importance of multiprocessor and DSP IPs will continue to grow, driving innovation and efficiency across various sectors.
The Akida 2nd Generation continues BrainChip's legacy of low-power, high-efficiency AI processing at the edge. This iteration of the Akida platform introduces expanded support for various data precisions, including 8-, 4-, and 1-bit weights and activations, which enhances computational flexibility and efficiency. Its architecture is significantly optimized for both spatial and temporal data processing, serving applications that demand high precision and rapid response times such as robotics, advanced driver-assistance systems (ADAS), and consumer electronics. The Akida 2nd Generation's event-based processing model greatly reduces unnecessary operations, focusing on real-time event detection and response, which is vital for applications requiring immediate feedback. Furthermore, its sophisticated on-chip learning capabilities allow adaptation to new tasks with minimal data, fostering more robust AI models that can be personalized to specific use cases without extensive retraining. As industries continue to migrate towards AI-powered solutions, the Akida 2nd Generation provides a compelling proposition with its improved performance metrics and lower power consumption profile.
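The NumPy sketch below illustrates the general principle behind event-based processing, computing only where activations are non-zero; the sparsity level and array sizes are assumptions, and this does not model Akida's actual dataflow:

```python
# Event-based idea: skip silent inputs and compute only on non-zero "events",
# which is where the power savings of event-driven hardware come from.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random(1_000)
activations[activations < 0.95] = 0.0         # ~95% of inputs are silent
weights = rng.random(1_000)

events = np.nonzero(activations)[0]           # process event indices only
out = activations[events] @ weights[events]
print(f"{events.size} events processed instead of {activations.size} inputs")
```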
Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card by Axelera AI offers powerful AI processing capabilities in a PCIe card format. This card is equipped with the Metis AI Processing Unit, capable of delivering up to 214 TOPS, making it ideal for intensive AI tasks and vision applications that require substantial computational power. With support for the Voyager SDK, this card ensures seamless integration and rapid deployment of AI models, helping developers leverage existing infrastructures efficiently. It's tailored for applications that demand robust AI processing like high-resolution video analysis and real-time object detection, handling complex networks with ease. Highlighted for its performance in ResNet-50 processing, which it can execute at a rate of up to 3,200 frames per second, the PCIe AI Accelerator Card perfectly meets the needs of cutting-edge AI applications. The software stack enhances the developer experience, simplifying the scaling of AI workloads while maintaining cost-effectiveness and energy efficiency for enterprise-grade solutions.
Panmnesia's CXL 3.1 Switch is an integral component designed to facilitate high-speed, low-latency data transfers across multiple connected devices. It is architected to manage resource allocation seamlessly in AI and high-performance computing environments, supporting broad bandwidth, robust data throughput, and efficient power consumption, creating a cohesive foundation for scalable AI infrastructures. Its integration with advanced protocols ensures high system compatibility.
EXTOLL's Universal Chiplet Interconnect Express (UCIe) IP is a cutting-edge implementation of the UCIe industry standard, designed to meet the evolving needs of chip-to-chip communication. UCIe enables seamless data exchange between chiplets, fostering a new era of modular and scalable processor designs. This technology is especially vital for applications requiring high bandwidth and low latency in data transfer between different chip components. Built to support heterogeneous integration, EXTOLL's UCIe offers superior scalability and is compatible with a variety of process nodes, enabling easy adaptation to different technological requirements. This ensures that system architects can achieve optimal performance without compromising on design flexibility or efficiency. Furthermore, the design philosophy centers on ultra-low power consumption, aligning with modern demands for energy-efficient technology. Through EXTOLL's UCIe, developers can build versatile and multi-functional platforms that are more robust than ever. This interconnect technology not only facilitates communication between chips but enhances the overall architecture, paving the way for future innovations in chiplet systems.
The Yitian 710 processor from T-Head represents a significant advancement in server chip technology, featuring an ARM-based architecture optimized for cloud applications. With its impressive multi-core design and high-speed memory access, this processor is engineered to handle intensive data processing tasks with efficiency and precision. It incorporates advanced fabrication techniques, offering high throughput and low latency to support next-generation cloud computing environments. Central to its architecture are 128 high-performance CPU cores built on the Armv9 architecture, which facilitate superior computational capabilities. These cores are paired with substantial cache and high-speed DDR5 memory interfaces, optimizing the processor's ability to manage massive workloads effectively. This attribute makes it an ideal choice for data centers looking to enhance processing speed and efficiency. In addition to its hardware prowess, the Yitian 710 is designed to deliver excellent energy efficiency. It boasts a sophisticated power management system that minimizes energy consumption without sacrificing performance, aligning with green computing trends. This combination of power, efficiency, and environmentally friendly design positions the Yitian 710 as a pivotal choice for enterprises propelling into the future of computing.
The Chimera GPNPU from Quadric is engineered to meet the diverse needs of modern AI applications, bridging the gap between traditional processing and advanced AI model requirements. It's a fully licensable processor, designed to deliver high AI inference performance while eliminating the complexity of traditional multi-core systems. The GPNPU boasts an exceptional ability to execute various AI models, including classical backbones, state-of-the-art transformers, and large language models, all within a single execution pipeline.

One of the core strengths of the Chimera GPNPU is its unified architecture that integrates matrix, vector, and scalar processing capabilities. This singular design approach allows developers to manage complex tasks such as AI inference and data-parallel processing without resorting to multiple tools or artificial partitioning between processors. Users can expect heightened productivity thanks to its modeless operation, which is fully programmable and efficiently executes C++ code alongside AI graph code.

In terms of versatility and application potential, the Chimera GPNPU is adaptable across different market segments. It's available in various configurations to suit specific performance needs, from single-core designs to multi-core clusters capable of delivering up to 864 TOPS. This scalability, combined with future-proof programmability, ensures that the Chimera GPNPU not only addresses current AI challenges but also accommodates the ever-evolving landscape of cognitive computing requirements.
xcore.ai is a versatile and powerful processing platform designed for AIoT applications, delivering a balance of high performance and low power consumption. Crafted to bring AI processing capabilities to the edge, it integrates embedded AI, DSP, and advanced I/O functionalities, enabling quick and effective solutions for a variety of use cases. What sets xcore.ai apart is its cycle-accurate programmability and low-latency control, which improve the responsiveness and precision of the applications in which it is deployed. Tailored for smart environments, xcore.ai ensures robust and flexible computing power, suitable for consumer, industrial, and automotive markets. xcore.ai supports a wide range of functionalities, including voice and audio processing, making it ideal for developing smart interfaces such as voice-controlled devices. It also provides a framework for implementing complex algorithms and third-party applications, positioning it as a scalable solution for the growing demands of the connected world.
The Metis AIPU M.2 Accelerator Module from Axelera AI is a cutting-edge solution designed for enhancing AI performance directly within edge devices. Engineered to fit the M.2 form factor, this module packs powerful AI processing capabilities into a compact and efficient design, suitable for space-constrained applications. It leverages the Metis AI Processing Unit to deliver high-speed inference directly at the edge, minimizing latency and maximizing data throughput. The module is optimized for a range of computer vision tasks, making it ideal for applications like multi-channel video analytics, quality inspection, and real-time people monitoring. With its advanced architecture, the AIPU module supports a wide array of neural networks and can handle up to 24 concurrent video streams, making it incredibly versatile for industries looking to implement AI-driven solutions across various sectors. Compatible with AI frameworks such as TensorFlow, PyTorch, and ONNX, the Metis AIPU integrates seamlessly with existing systems to streamline AI model deployment and optimization. This not only boosts productivity but also significantly reduces time-to-market for edge AI solutions. Axelera's comprehensive software support ensures that users can achieve maximum performance from their AI models while maintaining operational efficiency.
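As a hedged illustration of a typical deployment path for such a module, the sketch below exports a trained PyTorch model to ONNX, the usual interchange step before a vendor toolchain compiles it for the accelerator; the model choice, file name, and opset are assumptions, and the Voyager SDK's own import step is not shown:

```python
# Export a PyTorch model to ONNX -- the common hand-off format before an
# accelerator toolchain (here, hypothetically, the Voyager SDK) takes over.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)           # one 224x224 RGB frame
torch.onnx.export(model, dummy, "resnet50.onnx", opset_version=17)
print("exported resnet50.onnx")
```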
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it well suited to designs utilizing slower on-chip memories such as eFlash. The core not only supports an MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The SAKURA-II AI Accelerator represents a cutting-edge advancement in the field of generative AI, offering remarkable efficiency in a compact form factor. Engineered for rapid real-time inferencing, it excels in applications requiring low latency and robust performance in small, power-efficient silicon. This accelerator adeptly manages multi-billion parameter models, including Llama 2 and Stable Diffusion, under typical power requirements of 8W, catering to diverse applications from Vision to Language and Audio. Its core advantage lies in exceeding the AI compute utilization of other solutions, ensuring outstanding energy efficiency. The SAKURA-II further supports up to 32GB of DRAM, leveraging enhanced bandwidth for superior performance. Sparse computing techniques minimize memory footprint, while real-time data streaming and support for arbitrary activation functions elevate its functionality, enabling sophisticated applications in edge environments. This versatile AI accelerator not only enhances energy efficiency but also delivers robust memory management, supporting advanced precision for near-FP32 accuracy. Coupled with advanced power management, it suits a wide array of edge AI implementations, affirming its place as a leader in generative AI technologies at the edge.
The Talamo Software Development Kit (SDK) is a comprehensive toolkit designed to facilitate the development and deployment of advanced neuromorphic AI applications. Leveraging the familiar PyTorch environment, Talamo simplifies AI model creation and deployment, allowing developers to efficiently build spiking neural network models or adapt existing frameworks. The SDK integrates essential tools for compiling, training, and simulating AI models, providing users a complete environment to tailor their AI solutions without requiring extensive expertise in neuromorphic computing. One of Talamo's standout features is its seamless integration with the Spiking Neural Processor (SNP), offering an easy path from model creation to application deployment. The SDK's architecture simulator supports rapid validation and iteration, giving developers a valuable resource for refining their models. By enabling streamlined processes for building and optimizing applications, Talamo reduces development time and enhances the flexibility of AI deployment in edge scenarios. Talamo is designed to empower developers to utilize the full potential of brain-inspired AI, allowing the creation of end-to-end application pipelines. It supports building complex functions and neural networks through a plug-and-play model approach, minimizing the barriers to entry for deploying neuromorphic solutions. As an all-encompassing platform, Talamo paves the way for the efficient realization of sophisticated AI-driven applications, from inception to final implementation.
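To make the notion of a spiking building block concrete, here is a plain-PyTorch sketch of a leaky integrate-and-fire (LIF) neuron, the kind of component an SNN toolkit composes; the function name, constants, and update rule are illustrative assumptions and are not the Talamo API:

```python
# Leaky integrate-and-fire neuron: integrate input, fire on threshold, reset.
import torch

def lif_step(v, x, tau=0.9, threshold=1.0):
    v = tau * v + x                    # leaky integration of input current
    spikes = (v >= threshold).float()  # emit a spike where threshold is crossed
    v = v * (1.0 - spikes)             # reset membrane of neurons that fired
    return v, spikes

v = torch.zeros(8)                     # membrane potentials for 8 neurons
total_spikes = 0
for _ in range(20):                    # 20 timesteps of random input current
    v, s = lif_step(v, torch.rand(8) * 0.3)
    total_spikes += int(s.sum())
print("spikes emitted over 20 steps:", total_spikes)
```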
The Jotunn8 AI Accelerator represents a pioneering approach in AI inference chip technology, designed to cater to the demanding needs of contemporary data centers. Its architecture is optimized for high-speed deployment of AI models, combining rapid data processing capabilities with cost-effectiveness and energy efficiency. By integrating features such as ultra-low latency and substantial throughput capacity, it supports real-time applications like chatbots and fraud detection that require immediate data processing and agile responses. The chip's impressive performance per watt metric ensures a lower operational cost, making it a viable option for scalable AI operations that demand both efficiency and sustainability. By reducing power consumption, Jotunn8 not only minimizes expenditure but also contributes to a reduced carbon footprint, aligning with the global move towards greener technology solutions. These attributes make Jotunn8 highly suitable for applications where energy considerations and environmental impact are paramount. Additionally, Jotunn8 offers flexibility in memory performance, allowing for the integration of complexity in AI models without compromising on speed or efficiency. The design emphasizes robustness in handling large-scale AI services, catering to the new challenges posed by expanding data needs and varied application environments. Jotunn8 is not simply about enhancing inference speed; it proposes a new baseline for scalable AI operations, making it a foundational element for future-proof AI infrastructure.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
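The tokens-per-bandwidth claim reflects a widely used rule of thumb: autoregressive decoding is memory-bound, so throughput is roughly the usable memory bandwidth divided by the bytes streamed per token. The back-of-envelope sketch below uses illustrative numbers, not RaiderChip figures:

```python
# Decoding reads (roughly) the whole quantized model per generated token,
# so tokens/s ~= effective memory bandwidth / bytes per token.
model_params = 3e9                 # e.g., a ~3B-parameter model (assumption)
bits_per_weight = 4                # 4-bit quantization, as in the Llama 3.2 example
bytes_per_token = model_params * bits_per_weight / 8

bandwidth = 8e9                    # assumed ~8 GB/s effective LPDDR4 bandwidth
print(f"~{bandwidth / bytes_per_token:.1f} tokens/s upper bound")
```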
Designed for minimal power consumption, the Tianqiao-70 is a 64-bit RISC-V CPU core that balances performance with energy savings. Targeting primarily the commercial space, this CPU core supports applications that demand lower power usage without compromising performance. It stands out in the fields of mobile and desktop processing, AI learning, and other demanding applications that require consistent yet power-efficient computing. Architected to provide maximum throughput with minimum power draw, it is essential for energy-critical systems. The Tianqiao-70 showcases StarFive's commitment to optimizing for efficiency, enabling mobile, desktop, and AI platforms to leverage low power requirements effectively. This makes it a compelling choice for developers aiming to integrate eco-friendly solutions in their products.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
The SiFive Intelligence X280 processor targets applications in machine learning and artificial intelligence, offering a high-performance, scalable architecture for emerging data workloads. As part of the Intelligence family, the X280 prioritizes a software-first methodology in processor design, addressing future ML and AI deployment needs, especially at the edge. This makes it particularly useful for scenarios requiring high computational power close to the data source. Central to its capabilities are scalable vector and matrix compute engines that can adapt to evolving workloads, thus future-proofing investments in AI infrastructure. With high-bandwidth bus interfaces and support for custom engine control, the X280 ensures seamless integration with varied system architectures, enhancing operational efficiency and throughput. By focusing on versatility and scalability, the X280 allows developers to deploy high-performance solutions without the typical constraints of more traditional platforms. It supports wide-ranging AI applications, from edge computing in IoT to advanced machine learning tasks, underpinning its role in modern and future-ready computing solutions.
This core is designed for ultra-low power applications, offering a remarkable balance of power efficiency and performance. Operating at a mere 10mW at 1GHz, it showcases Micro Magic's advanced design techniques that allow for high-speed processing while maintaining low voltage operations. The core is ideal for energy-sensitive applications where performance cannot be compromised. With its ability to operate efficiently at 5GHz, this RISC-V core provides a formidable foundation for high-performance, low-power computing. It is a testament to Micro Magic's ability to develop cutting-edge solutions that cater to the needs of modern semiconductor applications. The 64-bit architecture ensures robust processing capabilities, making it suitable for a wide range of applications in various sectors. Whether for IoT devices or complex computing operations, this core is designed to meet diverse requirements by delivering power-packed performance.
The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud dependencies. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
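For readers unfamiliar with blockwise quantization, the NumPy sketch below shows the general idea behind 4-bit formats such as Q4_K, per-block scales plus 4-bit integer codes; it is a simplified illustration of the family of techniques, not the actual Q4_K layout:

```python
# Blockwise 4-bit quantization: store one scale per block plus 4-bit codes,
# cutting memory roughly 4x versus FP16 at modest accuracy cost.
import numpy as np

def quantize_4bit(w, block=32):
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0   # map each block to [-7, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.random.randn(32_768).astype(np.float32)
q, s = quantize_4bit(w)
err = np.abs((q * s).reshape(-1) - w).mean()
print(f"mean abs quantization error: {err:.4f}")
```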
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The H.264 FPGA Encoder and CODEC Micro Footprint Cores from A2e Technologies are highly customizable IP cores designed specifically for FPGAs. These cores are notable for their small size and high speed, with a single core capable of supporting 1080p60 H.264 Baseline video. Featuring exceptionally low latency, as little as 1ms at 1080p30, they offer a customizable solution for various video resolutions and pixel depths. These capabilities make them a competitive choice for applications requiring high-performance video compression with minimal footprint. Designed to be ITAR compliant and licensable, the H.264 core can be tailored to meet specific requirements, offering flexibility in video applications. This product is especially suitable for industries where space and performance are critical, such as defense and industrial controls. The core can work efficiently across a range of resolutions and color depths, providing the potential for integration into a wide array of devices and systems. The company's expertise ensures that this H.264 core is not only versatile but also comes with the option of a low-cost evaluation license, allowing potential users to explore its capabilities before committing fully. With A2e's strong support and integration services, customers have assurance that even complex design requirements can be met with experienced guidance.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
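As a small worked example of the fixed-point complex arithmetic such cores accelerate, the sketch below multiplies two Q15 complex numbers, the primitive behind FFT butterflies and complex FIRs; the Q15 format choice here is illustrative:

```python
# Q15 fixed-point complex multiply: 16-bit Q15 inputs, 32-bit intermediate
# products shifted back to Q15 -- one instruction on a DSP-oriented core.
def q15_cmul(ar, ai, br, bi):
    rr = (ar * br - ai * bi) >> 15
    ri = (ar * bi + ai * br) >> 15
    return rr, ri

# (0.5 + 0.5j) * (0.5 - 0.5j) = 0.5 + 0j; in Q15, 16384 represents 0.5.
print(q15_cmul(16384, 16384, 16384, -16384))   # -> (16384, 0)
```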
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
The Spiking Neural Processor T1 is a neuromorphic microcontroller engineered for always-on sensor applications. It utilizes a spiking neural network engine alongside a RISC-V processor core, creating an ultra-efficient single-chip solution for real-time data processing. With its optimized power consumption, it enables next-generation artificial intelligence and signal processing in small, battery-operated devices. The T1 delivers advanced application capabilities within a minimal power envelope, making it suitable for use in devices where power and latency are critical factors. The T1 includes a compact, multi-core RISC-V CPU paired with substantial on-chip SRAM, enabling fast and responsive processing of sensor data. By employing the remarkable abilities of spiking neural networks for pattern recognition, it ensures superior power performance on signal-processing tasks. The versatile processor can execute both SNNs and conventional processing tasks, supported by various standard interfaces, thus offering maximum flexibility to developers looking to implement AI features across different devices. Developers can quickly prototype and deploy solutions using the T1's development kit, which includes software for easy integration into existing systems and tools for accurate performance profiling. The development kit supports a variety of sensor interfaces, streamlining the creation of sophisticated sensor applications without the need for extensive power or size trade-offs.
The SiFive Essential family of processors is renowned for its flexibility and wide applicability across embedded systems. These CPU cores are designed to meet specific market needs with pre-defined, silicon-proven configurations or through the use of SiFive Core Designer for custom processor builds. Available in a range of 32-bit and 64-bit options, the Essential processors can scale from microcontrollers to robust dual-issue CPUs. Widely adopted in the embedded market, the Essential series cores stand out for their scalable performance, adapting to diverse application requirements while maintaining power and area efficiency. They are deployed in billions of units worldwide, indicating their trusted performance and integration across various industries. The SiFive Essential processors offer an optimal balance of power, area, and cost, making them suitable for a wide array of devices, from IoT and consumer electronics to industrial applications. They provide a solid foundation for products that require reliable performance at a competitive price.
RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
ISELED represents a breakthrough in automotive lighting with its integration of RGB LED control and communication in a single, smart LED component. This innovative system simplifies lighting design by enabling digital color value input for immediate autonomous color mixing and temperature adjustments, reducing both complexity and cost in vehicles. ISELED operates by implementing a manufacturer-calibrated RGB LED setup suitable for diverse applications, from ambient to functional lighting systems within vehicles. Utilizing a bidirectional communication protocol, ISELED manages up to 4,079 addressable LEDs, offering easy installation and high-precision control over individual light characteristics, ideal for creating dynamic, synchronized lighting across the automotive interior. The technology also enhances network resilience with features like DC/DC conversion from a standard 12V battery, consistent communication despite power variations, and compatibility with software-free Ethernet bridge systems for streamlined connectivity. This strong focus on reducing production and operational costs, while simultaneously broadening lighting functionality, positions ISELED as a modern solution for smart automotive lighting architectures.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
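For context on how MAC counts map to the quoted throughput figures, recall that one MAC counts as two operations (multiply plus add), so peak ops/s is MACs × 2 × clock. The clock frequencies in the sketch below are assumptions chosen to reproduce the quoted numbers, not published Ceva specifications:

```python
# Peak throughput from MAC count: TOPS = MACs * 2 ops * clock (Hz) / 1e12.
def peak_tops(macs: int, clock_ghz: float) -> float:
    return macs * 2 * clock_ghz * 1e9 / 1e12

print(f"Ceva-SP100:  {peak_tops(128, 0.8):.2f} TOPS")    # ~0.2 TOPS quoted
print(f"Ceva-SP1000: {peak_tops(1024, 1.0):.2f} TOPS")   # ~2 TOPS quoted
```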
The RISC-V Core IP developed by AheadComputing Inc. stands out in the field of 64-bit application processors. Designed to deliver exceptional per-core performance, this processor is engineered with the highest standards to maximize the Instructions Per Cycle (IPC) efficiency. AheadComputing's RISC-V Core IP is continuously refined to address the growing demands of high-performance computing applications. The innovative architecture of this core allows for seamless execution of complex algorithms while achieving superior speed and efficiency. This design is crucial for applications that require fast data processing and real-time computational capabilities. By integrating advanced power management techniques, the RISC-V Core IP ensures energy efficiency without sacrificing performance, making it suitable for a wide range of electronic devices. Anticipating future computing needs, AheadComputing's RISC-V Core IP incorporates state-of-the-art features that support scalability and adaptability. These features ensure that the IP remains relevant as technology evolves, providing a solid foundation for developing next-generation computing solutions. Overall, it embodies AheadComputing’s commitment to innovation and performance excellence.
The Digital PreDistortion (DPD) Solution from Systems4Silicon is a comprehensive adaptive technology aimed at improving the efficiency of RF power amplifiers. It is designed to maximize amplifier performance by allowing operation in the non-linear region while significantly reducing distortion. The solution is highly scalable, allowing for resource optimization across bandwidth, performance, and multiple antenna configurations. It is technology-agnostic, supporting various transistor technologies such as LDMOS and GaN, and can be adapted to different amplifier topologies including Doherty configurations. Benefits of the DPD technology include achieving over 50% efficiency improvements when utilized alongside the latest GaN devices, with amplifier distortion improvements of over 45 dB. This IP also supports multi-carrier and multi-standard transmissions, covering a broad array of standards such as 3G, 4G, 5G, DVB, and many more. It is compliant with the O-RAN standard for 7-2x deployments, making it a versatile solution for modern wireless communication systems. Systems4Silicon's DPD solution includes comprehensive integration and performance analysis tools, backed by expert support from experienced radio systems engineers. Designed for both FPGA/SoC and ASIC platforms, it provides a low resource footprint while ensuring maximum efficiency across diverse applications.
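To illustrate the core idea of predistortion, the toy NumPy sketch below applies a third-order expansion before a compressive amplifier model so the cascade comes out closer to linear; the coefficients are invented for illustration, and a real DPD system adapts them from feedback capture:

```python
# Memoryless polynomial DPD concept: pre-expand the signal to cancel the PA's
# compression, so predistorter + PA approximates a linear gain.
import numpy as np

def pa(x):            # toy PA: unity gain with third-order compression
    return x - 0.1 * x**3

def predistort(x):    # toy inverse: third-order expansion
    return x + 0.1 * x**3

x = np.linspace(-1.0, 1.0, 5)
print("PA alone:    ", pa(x))
print("DPD then PA: ", pa(predistort(x)))   # noticeably closer to x
```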
TT-Ascalon™ is a versatile RISC-V CPU core developed by Tenstorrent, emphasizing the utility of open standards to meet a diverse array of computing needs. Built to be highly configurable, TT-Ascalon™ allows for the inclusion of 2 to 8 cores per cluster complemented by a customizable L2 cache. This architecture caters to clients seeking a tailored processing solution without the limitations tied to proprietary systems. With support for CHI.E and AXI5-LITE interfaces, TT-Ascalon™ ensures robust connectivity while maintaining system integrity and performance density. Its security capabilities are premised on equivalent RISC-V primitives, ensuring a reliable and trusted environment for operations involving sensitive data. Tenstorrent’s engineering prowess, evident in TT-Ascalon™, has been shaped by experienced personnel from renowned tech giants. This IP is meant to align with various performance targets, suited for complex computational tasks that demand flexibility and efficiency in design.
Tyr AI Processor Family is engineered to bring unprecedented processing capabilities to Edge AI applications, where real-time, localized data processing is crucial. Unlike traditional cloud-based AI solutions, Edge AI facilitated by Tyr operates directly at the site of data generation, thereby minimizing latency and reducing the need for extensive data transfers to central data centers. This processor family stands out in its ability to empower devices to deliver instant insights, which is critical in time-sensitive operations like autonomous driving or industrial automation. The innovative design of the Tyr family ensures enhanced privacy and compliance, as data processing stays on the device, mitigating the risks associated with data exposure. By doing so, it supports stringent requirements for privacy while also reducing bandwidth utilization. This makes it particularly advantageous in settings like healthcare or environments with limited connectivity, where maintaining data integrity and efficiency is crucial. Designed for flexibility and sustainability, the Tyr AI processors are adept at balancing computing power with energy consumption, thus enabling the integration of multi-modal inputs and outputs efficiently. Their performance nears data center levels, yet they are built to consume significantly less energy, making them a cost-effective solution for implementing AI capabilities across various edge computing environments.
The Network Protocol Accelerator Platform (NPAP) is engineered to accelerate network protocol processing and offload tasks at speeds reaching up to 100 Gbps when implemented on FPGAs, and beyond in ASICs. This platform offers patented and patent-pending technologies that provide significant performance boosts, aiding in efficient network management. With its support for multiple protocols like TCP, UDP, and IP, it meets the demands of modern networking environments effectively, ensuring low latency and high throughput solutions for critical infrastructure. NPAP facilitates the construction of function accelerator cards (FACs) that support 10/25/50/100G speeds, effectively handling intense data workloads. The stunning capabilities of NPAP make it an indispensable tool for businesses needing to process vast amounts of data with precision and speed, thereby greatly enhancing network operations. Moreover, the NPAP emphasizes flexibility by allowing integration with a variety of network setups. Its capability to streamline data transfer with minimal delay supports modern computational demands, paving the way for optimized digital communication in diverse industries.
Dyumnin's RISC-V SoC is a versatile platform centered around a 64-bit quad-core server-class RISC-V CPU, offering extensive subsystems, including AI/ML, automotive, multimedia, memory, cryptographic, and communication systems. The test chip is available on FPGA for evaluation, ensuring adaptability and extensive testing possibilities. The AI/ML subsystem is particularly noteworthy due to its custom CPU configuration paired with a tensor flow unit, accelerating AI operations significantly. This adaptability lends itself to innovations in artificial intelligence, setting it apart in the competitive landscape of processors. Additionally, the automotive subsystem caters robustly to the needs of the automotive sector with CAN, CAN-FD, and SafeSPI IPs, all designed to enhance systems connectivity within vehicles. Moreover, the multimedia subsystem boasts a complete range of IPs to support HDMI, Display Port, MIPI, and more, facilitating rich audio and visual experiences across devices.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
The iCan PicoPop® is a miniaturized system on module (SOM) based on the Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC). This advanced module is designed to handle sophisticated signal processing tasks, making it particularly suited for aeronautic embedded systems that require high-performance video processing capabilities. The module leverages the powerful architecture of the Zynq MPSoC, providing a robust platform for developing cutting-edge avionics and defense solutions. With its compact form factor, the iCan PicoPop® SOM offers unparalleled flexibility and performance, allowing it to seamlessly integrate into various system architectures. The high level of integration offered by the Zynq UltraScale+ MPSoC aids in simplifying the design process while reducing system latency and power consumption, providing a highly efficient solution for demanding applications. Additionally, the iCan PicoPop® supports advanced functionalities through its integration of programmable logic, multi-core processing, and high-speed connectivity options, making it ideal for developing next-generation applications in video processing and other complex avionics functions. Its modular design also allows for easy customization, enabling developers to tailor the system to meet specific performance and functionality needs, ensuring optimal adaptability for intricate aerospace environments. Overall, the iCan PicoPop® demonstrates a remarkable blend of high-performance computing capabilities and adaptable configurations, making it a valuable asset in the development of high-tech avionics solutions designed to withstand rigorous operational demands in aviation and defense.
The Maverick-2 Intelligent Compute Accelerator (ICA) is a groundbreaking innovation by Next Silicon Ltd. This architecture introduces a novel software-defined approach that adapts in real-time to optimize computational tasks, breaking the traditional constraints of CPUs and GPUs. By dynamically learning and accelerating critical code segments, Maverick-2 ensures enhanced efficiency and performance for high-performance computing (HPC), artificial intelligence (AI), and vector databases. Designers have developed the Maverick-2 to support a wide range of common programming languages, including C/C++, FORTRAN, OpenMP, and Kokkos, facilitating an effortless porting process. This robust toolchain reduces time-intensive application porting, allowing for a significant cut in development time while maximizing scientific output and insights. Developers can enjoy seamless integration into their existing workflows without needing new proprietary software stacks. A standout feature of this intelligent architecture is its ability to adjust hardware configurations on-the-fly, optimizing power efficiency and overall performance. With an emphasis on sustainable innovation, the Maverick-2 offers a performance-per-watt advantage that exceeds traditional GPU and high-end CPU solutions by over fourfold, making it a cost-effective and environmentally friendly choice for modern data centers and research facilities.
The SiFive Performance family of processors is designed to offer top-tier performance and throughput across a range of sizes and power profiles. These cores provide highly efficient RISC-V scalar and vector computing capabilities, tailored for an optimal balance that delivers industry-leading results. With options for high-performance 64-bit out-of-order scalar engines and optional vector compute engines, the Performance series ensures customers get the maximum capabilities in computational power. Incorporating a robust architecture, these processors support extensive hardware capabilities, including full support for the RVA23 profile and an option for vector processing adjustments that maximizes computing efficiency. The SiFive Performance series has cores that cater to various needs, whether for general-purpose computing or applications requiring extensive parallel processing capabilities. SiFive's architecture allows for scalability and customization, bridging the gap between high-demand computational tasks and power efficiency. It is meticulously designed to meet the rigorous demands of modern and future computing applications, ensuring that both enterprise and consumer electronics can leverage the power of RISC-V computing. This makes it an ideal choice for developers seeking to push the boundaries of processing capabilities.
Tensix Neo is an AI-focused semiconductor solution from Tenstorrent that capitalizes on the robustness of RISC-V architecture. This IP is crafted to enhance the efficiency of both AI training and inference processes, making it a vital tool for entities needing scalable AI solutions without hefty power demands. With Tensix Neo, developers can rest assured of the silicon-proven reliability that backs its architecture, facilitating a smooth integration into existing AI platforms. The IP embraces the flexibility and customization needed for advanced AI workloads, optimizing resources and yielding results with high performance per watt. As the demand for adaptable AI solutions grows, Tensix Neo offers a future-proof platform that can accommodate rapid advancements and complex deployments in machine learning applications. By providing developers with tested and verified infrastructure, Tensix Neo stands as a benchmark in AI IP development.
Designed to accelerate the development of AI-driven solutions, the AI Inference Platform by SEMIFIVE offers a powerful infrastructure for deploying artificial intelligence applications quickly and efficiently. This platform encompasses an AI-focused architecture with silicon-proven IPs tailored specifically for machine learning tasks, providing a robust foundation for developers to build upon. The platform is equipped with high-performance processors optimized for AI workloads, including sophisticated neural processing units (NPUs) and memory interfaces that support large datasets and reduce latency in processing. It integrates seamlessly with existing tools and environments, minimizing the need for additional investments in infrastructure. Through strategic partnerships and an extensive library of pre-verified components, this platform reduces the complexity and time associated with AI application development. SEMIFIVE’s approach ensures end-users can focus on innovation rather than the underlying technology challenges, delivering faster time-to-market and enhanced performance for AI applications.
The AON1100 offers a sophisticated AI solution for voice and sensor applications, marked by a remarkable power usage of less than 260μW during processing yet maintaining high levels of accuracy in environments with sub-0dB SNR. It is a leading option for always-on devices, providing effective solutions for contexts requiring constant machine listening ability.

This AI chip excels in processing real-world acoustic and sensor data efficiently, delivering up to 90% accuracy by employing advanced signal processing techniques. The AON1100's low power requirements make it an excellent choice for battery-operated devices, ensuring sustainable functionality through efficient power consumption over extended operational periods.

The scalability of the AON1100 allows it to be adapted for various applications, including smart homes and automotive settings. Its integration within broader AI platform strategies enhances intelligent data collection and contextual understanding capabilities, delivering transformative impacts on device interactivity and user experience.
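As a quick illustration of what sub-0 dB SNR means (the noise is at least as strong as the signal), the sketch below computes SNR in decibels for a toy signal-plus-noise mixture; the signals are invented for illustration:

```python
# SNR(dB) = 10 * log10(P_signal / P_noise); negative values mean the noise
# power exceeds the signal power -- the regime the listing describes.
import numpy as np

rng = np.random.default_rng(0)
n = 16_000
speech = np.sin(2 * np.pi * 300 * np.arange(n) / n)   # toy 300 Hz "speech"
noise = 1.2 * rng.standard_normal(n)                  # noise louder than the signal

snr_db = 10 * np.log10(np.mean(speech**2) / np.mean(noise**2))
print(f"SNR: {snr_db:.1f} dB")                        # negative, i.e. below 0 dB
```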
The Codasip L-Series DSP Core offers specialized features tailored for digital signal processing applications. It is designed to efficiently handle high data throughput and complex algorithms, making it ideal for applications in telecommunications, multimedia processing, and advanced consumer electronics. With its high configurability, the L-Series can be customized to optimize processing power, ensuring that specific application needs are met with precision. One of the key advantages of this core is its ability to be finely tuned to deliver optimal performance for signal processing tasks. This includes configurable instruction sets that align precisely with the unique requirements of DSP applications. The core’s design ensures it can deliver top-tier performance while maintaining energy efficiency, which is critical for devices that operate in power-sensitive environments. The L-Series DSP Core is built on Codasip's proven processor design methodologies, integrating seamlessly into existing systems while providing a platform for developers to expand and innovate. By offering tools for easy customization within defined parameters, Codasip ensures that users can achieve the best possible outcomes for their DSP needs efficiently and swiftly.
AccelerComm's Software-Defined High PHY is a flexible solution tailored to the Arm processor framework, capable of fulfilling the diverse requirements of modern telecommunications infrastructures. This technology is renowned for its optimization capabilities, functioning either with or without hardware acceleration, depending on the target application's power and capacity requirements. The implementation of Software-Defined High PHY signifies a leap in configuring PHY layers, facilitating adaptation to varying performance and efficiency mandates of different hardware platforms. The technology supports seamless transitions across platforms, making it applicable for a spectrum of use cases, harmonizing with both flexible software protocols and established hardware standards. By uniting traditional hardware PHY layers with modern software innovations, this solution propels network performance while reducing latency, enhancing data throughput, and minimizing overall system power consumption. This adaptability is vital for enterprises aiming to meet the dynamic demands for quality and reliability in wireless communication network setups.
This high-performance cross-correlator module integrates 128 channels of 1GSps ADCs. Each channel features a VGA front end, optimizing it for synthetic radar receivers and spectrometer systems. It excels in low power consumption, critical in space-limited applications like satellite-based remote sensing or data-intensive spectrometers, making it invaluable in advanced research operations.
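The NumPy sketch below shows what a hardware cross-correlator computes for a single channel pair, here via the FFT; the sample count and delay are toy values, and a 128-channel module evaluates such correlations across every channel pair in real time:

```python
# Cross-correlation of two noisy copies of the same signal recovers their
# relative delay -- the core operation of a correlator for radiometry/radar.
import numpy as np

n = 4096
rng = np.random.default_rng(1)
sig = rng.standard_normal(n)
ch_a = sig + 0.1 * rng.standard_normal(n)
ch_b = np.roll(sig, 42) + 0.1 * rng.standard_normal(n)   # same signal, delayed 42 samples

xcorr = np.fft.ifft(np.fft.fft(ch_a) * np.conj(np.fft.fft(ch_b))).real
print("peak at index:", np.argmax(xcorr))   # n - 42 under this circular convention
```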
The NoISA Processor is an innovative microprocessor designed by Hotwright Inc. to overcome the limitations of traditional instruction set architectures. Unlike standard processors, which rely on a fixed ALU, register file, and hardware controller, the NoISA Processor utilizes the Hotstate machine, an advanced microcoded algorithmic state machine. This technology allows for runtime reprogramming and flexibility, making it highly suitable for various applications where space, power efficiency, and adaptability are paramount. With the NoISA Processor, users can achieve significant performance improvements without the limitations imposed by fixed instruction sets. It's particularly advantageous in IoT and edge computing scenarios, offering enhanced efficiency compared to conventional softcore CPUs while maintaining lower energy consumption. Moreover, this processor is ideal for rapidly creating small, programmable state machines and systolic arrays. Its unique architecture permits behavior modification through microcode, rather than altering the FPGA, thus offering unprecedented flexibility and power in adapting to specific technological needs.
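To make the microcoded-state-machine idea concrete, here is a toy Python interpreter in which behavior lives in a microcode table rather than a fixed instruction set; it is purely illustrative of the concept, not Hotwright's Hotstate design:

```python
# Rewriting the microcode table reprograms the machine at runtime -- no fixed
# instruction set is involved, only states, micro-ops, and next-state fields.
def run(microcode, acc=0, state="S0", steps=10):
    for _ in range(steps):
        op, arg, nxt = microcode[state]   # fetch the micro-op for this state
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        state = nxt                       # follow the next-state field
    return acc

microcode = {"S0": ("add", 3, "S1"), "S1": ("mul", 2, "S0")}
print(run(microcode))   # swap in a new table and the same engine does new work
```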
Gyrus AI's Neural Network Accelerator is specifically crafted to enhance edge computing with its groundbreaking graph processing capabilities. This innovative solution achieves unparalleled efficiency with a performance of 30 trillion operations per second per watt (TOPS/W). Such efficiency significantly enhances the speed of machine learning operations, minimizing the clock cycles required for tasks, which translates to a 10-30x reduction in clock-cycle count. Its low-power configuration ensures reduced energy consumption without compromising computational performance. Designed to offer seamless integration, this accelerator achieves die-area utilization above 80%, ensuring the efficient implementation of diverse model architectures. A complementary software toolchain makes it straightforward to map neural networks onto the IP. The Neural Network Accelerator is tailored to provide high performance without the trade-offs typically associated with increased power consumption, making it ideal for a variety of edge computing applications. The product serves as a critical enabler for enterprises seeking to implement sophisticated AI solutions at the edge, ensuring that their wide-ranging applications are both efficient and high-functioning. As edge devices increasingly drive innovation across industries, Gyrus AI's solution stands out for its dexterity in supporting complex model structures while conserving power, thereby catering to the modern demands of AI-driven operations.
The XCM_64X64 is a complete cross-correlator designed for synthetic radar receivers. With 64 channels arranged in a sophisticated configuration, it processes vast amounts of data efficiently at low power consumption rates. Ideal for radiometers and spectrometer applications, this module is tailored for environments where bandwidth and speed are pivotal, supporting precise remote sensing operations.