Multiprocessor / DSP
In the realm of semiconductor IP, the Multiprocessor and Digital Signal Processor (DSP) category plays a crucial role in enhancing the processing performance and efficiency of a vast array of modern electronic devices. Semiconductor IPs in this category are designed to support complex computational tasks, enabling sophisticated functionalities in consumer electronics, automotive systems, telecommunications, and more. With the growing need for high-performance processing in a compact and energy-efficient form, multiprocessor and DSP IPs have become integral to product development across industries.
The multiprocessor IPs are tailored to provide parallel processing capabilities, which significantly boost the computational power required for intensive applications. By employing multiple processing cores, these IPs allow for the concurrent execution of multiple tasks, leading to faster data processing and improved system performance. This is especially vital in applications such as gaming consoles, smartphones, and advanced driver-assistance systems (ADAS) in vehicles, where seamless and rapid processing is essential.
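As a software analogy for the parallel decomposition described above (an illustrative sketch, not tied to any specific IP), a single workload can be split into chunks, executed concurrently, and the partial results combined:

```python
# Illustrative multi-core task decomposition: divide one workload into
# chunks, run the chunks concurrently, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Divide [0, n) into `workers` contiguous chunks.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000))  # same result as a serial sum over range(1000)
```

In hardware, the same idea applies with cores in place of threads: throughput scales with the number of processing elements as long as the workload can be partitioned.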
Digital Signal Processors are specialized semiconductor IPs used to perform mathematical operations on signals, allowing for efficient processing of audio, video, and other types of data streams. DSPs are indispensable in applications where real-time data processing is critical, such as noise cancellation in audio devices, image processing in cameras, and signal modulation in communication systems. By providing dedicated hardware structures optimized for these tasks, DSP IPs deliver superior performance and lower power consumption compared to general-purpose processors.
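The multiply-accumulate structure at the heart of such signal processing can be sketched in a few lines (an illustrative direct-form FIR filter, not any vendor's implementation; DSP hardware dedicates pipelined units to exactly this inner loop):

```python
# Illustrative direct-form FIR (finite impulse response) filter: the
# multiply-accumulate loop that DSP hardware is optimized to execute.
def fir_filter(signal, taps):
    """Convolve an input signal with filter coefficients."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]  # one multiply-accumulate step
        out.append(acc)
    return out

# A 3-tap moving average smooths a noisy step input.
smoothed = fir_filter([0, 0, 3, 3, 3], [1/3, 1/3, 1/3])
print(smoothed)
```

A general-purpose CPU executes each multiply and add as separate instructions; a DSP fuses them and streams the taps and samples through dedicated datapaths, which is where the performance and power advantage comes from.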
Products in the multiprocessor and DSP semiconductor IP category range from core subsystems and configurable processors to specialized accelerators and integrated solutions that combine processing elements with other essential components. These IPs are designed to help developers create cutting-edge solutions that meet the demands of today’s technology-driven world, offering flexibility and scalability to adapt to different performance and power requirements. As technology evolves, the importance of multiprocessor and DSP IPs will continue to grow, driving innovation and efficiency across various sectors.
The Akida 2nd Generation is an evolution of BrainChip's innovative neural processor technology. It builds upon its predecessor's strengths by delivering even greater efficiency and a broader range of applications. The processor maintains an event-based architecture that optimizes performance and power consumption, providing rapid response times suitable for edge AI applications that prioritize speed and privacy.

This next-generation processor enhances accuracy with support for 8-bit quantization, which allows for finer-grained processing capabilities and more robust AI model implementations. Furthermore, it offers extensive scalability, supporting configurations from a few nodes for low-power needs to many nodes for handling more complex cognitive tasks. As with the previous version, its architecture is inherently cloud-independent, enabling inference and learning directly on the device.

Akida 2nd Generation continues to push the boundaries of AI processing at the edge by offering enhanced processing capabilities, making it ideal for applications demanding high accuracy and efficiency, such as automotive safety systems, consumer electronics, and industrial monitoring.
The Metis AIPU PCIe AI Accelerator Card offers exceptional performance for AI workloads demanding significant computational capacity. It is powered by a single Metis AIPU and delivers up to 214 TOPS, catering to high-demand applications such as computer vision and real-time image processing. This PCIe card is integrated with the Voyager SDK, providing developers with a powerful yet user-friendly software environment for deploying complex AI applications seamlessly. Designed for efficiency, this accelerator card stands out by providing cutting-edge performance without the excessive power requirements typical of data center equipment. It achieves remarkable speed and accuracy, making it an ideal solution for tasks requiring fast data processing and inference speeds. The PCIe card supports a wide range of AI application scenarios, from enhancing existing infrastructure capabilities to integrating with new, dynamic systems. Its utility in various industrial settings is bolstered by its compatibility with the suite of state-of-the-art neural networks provided in the Axelera AI ecosystem.
The CXL 3.1 Switch by Panmnesia is a high-tech solution designed to manage diverse CXL devices within a cache-coherent system, minimizing latency through its proprietary low-latency CXL IP. This switch supports a scalable and flexible architecture, offering multi-level switching and port-based routing capabilities that allow expansive system configurations to meet various application demands. It is engineered to connect system devices such as CPUs, GPUs, and memory modules, ideal for constructing large-scale systems tailored to specific needs.
The Yitian 710 Processor is an advanced Arm-based server chip developed by T-Head, designed to meet the extensive demands of modern data centers and enterprise applications. This processor boasts 128 high-performance Armv9 CPU cores, each coupled with robust caches, ensuring superior processing speeds and efficiency. With 2.5D packaging technology, the Yitian 710 integrates multiple dies into a single unit, facilitating enhanced computational capability and energy efficiency. One of the key features of the Yitian 710 is its memory subsystem, which supports up to 8 channels of DDR5 memory, achieving a peak bandwidth of 281 GB/s. This configuration ensures rapid data access and processing, crucial for high-throughput computing environments. Additionally, the processor is equipped with 96 PCIe 5.0 lanes, offering a bidirectional bandwidth of 768 GB/s, enabling seamless connectivity with peripheral devices and boosting overall system performance. The Yitian 710 Processor is meticulously crafted for applications in cloud services, big data analytics, and AI inference, providing organizations with a robust platform for their computing needs. By combining a high core count, extensive memory support, and advanced I/O capabilities, the Yitian 710 stands as a cornerstone for deploying powerful, scalable, and energy-efficient data processing solutions.
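The quoted bandwidth figures are consistent with a back-of-envelope check (the DDR5 speed grade is an assumption chosen to match the quoted peak; the source does not state it):

```python
# Rough bandwidth arithmetic for an 8-channel DDR5 / 96-lane PCIe 5.0 part.
# DDR5-4400 is an assumed speed grade, picked to reproduce ~281 GB/s.
ddr5_mts = 4400                 # mega-transfers per second, per channel
channel_bytes = 8               # 64-bit data bus per channel
channels = 8
mem_bw = ddr5_mts * channel_bytes * channels / 1000   # GB/s
print(f"DDR5 peak: {mem_bw:.1f} GB/s")                # ~281.6 GB/s

lane_gbs = 32 / 8               # PCIe 5.0: 32 GT/s ~ 4 GB/s raw per direction
lanes = 96
pcie_bw = lane_gbs * lanes * 2  # sum of both directions
print(f"PCIe 5.0: {pcie_bw:.0f} GB/s bidirectional")  # 768 GB/s
```

Note the PCIe figure here ignores 128b/130b encoding overhead; counting it would give roughly 756 GB/s rather than the raw 768 GB/s.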
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
Designed for extreme low-power environments, the Tianqiao-70 RISC-V CPU core emphasizes energy efficiency while maintaining sufficient computational strength for commercial applications. It serves scenarios where low power consumption is critical, such as mobile devices, desktop applications, AI, and autonomous systems. This model caters to the requirements of energy-conscious markets, facilitating operations that demand efficiency and performance within minimal power budgets.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems.

RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
The Jotunn8 represents a leap in AI inference technology, delivering unmatched efficiency for modern data centers. This chip is engineered to manage AI model deployments with lightning-fast execution, at minimal cost and high scalability. It ensures optimal performance by balancing high throughput and low latency, while being extremely power-efficient, which significantly lowers operational costs and supports sustainable infrastructures. The Jotunn8 is designed to unlock the full capacity of AI investments by providing a high-performance platform that enhances the delivery and impact of AI models across applications. It is particularly suitable for real-time applications such as chatbots, fraud detection, and search engines, where ultra-low latency and very high throughput are critical. Power efficiency is a major emphasis of the Jotunn8: by optimizing performance per watt, it keeps energy, a substantial operational expense, under control. Its architecture allows for flexible memory allocation, ensuring seamless adaptability across varied applications and providing a robust foundation for scalable AI operations. This solution is aimed at enhancing business competitiveness by supporting large-scale model deployment and infrastructure optimization.
The Chimera GPNPU from Quadric is designed as a general-purpose neural processing unit intended to meet a broad range of demands in machine learning inference applications. It is engineered to perform both matrix and vector operations along with scalar code within a single execution pipeline, which offers significant flexibility and efficiency across various computational tasks. This product achieves up to 864 tera operations per second (TOPS), making it suitable for intensive applications including automotive safety systems. Notably, the GPNPU simplifies system-on-chip (SoC) hardware integration by consolidating hardware functions into one processor core. This unification reduces complexity in system design tasks, enhances memory usage profiling, and optimizes power consumption when compared to systems involving multiple heterogeneous cores such as NPUs and DSPs. Additionally, its single-core setup enables developers to efficiently compile and execute diverse workloads, improving performance tuning and reducing development time. The architecture of the Chimera GPNPU supports state-of-the-art models with its Forward Programming Interface that facilitates easy adaptation to changes, allowing support for new network models and neural network operators. It’s an ideal solution for products requiring a mix of traditional digital signal processing and AI inference, like radar and lidar signal processing, showcasing a rare blend of programming simplicity and long-term flexibility. This capability future-proofs devices, expanding their lifespan significantly in a rapidly evolving tech landscape.
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
The eSi-3250 stands as a high-performance 32-bit RISC IP processor, optimized for ASIC or FPGA implementations that demand rigorous caching strategies due to slower internal or external memories. Noteworthy for its adaptable instruction and data cache capabilities, this core is tailored to excel in scenarios where the CPU core to bus clock ratio exceeds 1:1. The eSi-3250 integrates separate instruction and data caches, each configurable in associativity to balance elevated performance against power efficiency. It includes an optional memory management unit, vital for memory protection and the deployment of virtual memory, accommodating sophisticated system requirements. Incorporating an expansive instruction set, the processor is equipped for intensive computational tasks with a multitude of optional additional instruction types and addressing modes. Built-in debug features support efficient system analysis and troubleshooting, solidifying the eSi-3250's position as a favored choice for high-throughput, low-power applications across a spectrum of technology processes.
The Universal Chiplet Interconnect Express (UCIe) by Extoll exemplifies a transformative approach towards interconnect technology, underpinning the age of chiplets with a robust framework for high-speed data exchange. This innovative solution caters to the growing demands of heterogeneous integration, providing a standardized protocol that empowers seamless communication between various chiplet designs. UCIe stands out by offering unparalleled connectivity and interoperability, ensuring that diverse chiplet systems function cohesively. This interconnect solution is tailored to the needs of modern digital architectures, emphasizing adaptability and performance across different tech nodes. With Extoll’s mastery in digital-centric design, the UCIe provides an efficient gateway for integrating multiple technological processes into a singular framework. The development of UCIe is also driven by the need for solutions that are both energy and cost-efficient. By leveraging Extoll’s low power architecture, UCIe facilitates energy savings without compromising on speed and data integrity. This makes it an indispensable tool for entities that prioritize scalable, high-performance interconnection solutions, aligning with the semiconductor industry's move toward more modular and sustainable system architectures.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interface support ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
The Dynamic Neural Accelerator II (DNA-II) is a highly efficient and versatile IP specifically engineered for optimizing AI workloads at the edge. Its unique architecture allows runtime reconfiguration of interconnects among computing units, which facilitates improved parallel processing and efficiency. DNA-II supports a broad array of networks, including convolutional and transformer networks, making it an ideal choice for numerous edge applications. Its design emphasizes low power consumption while maintaining high computational performance. By utilizing a dynamic data path architecture, DNA-II sets a new benchmark for IP cores aimed at enhancing AI processing capabilities.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The NeuroVoice chip by Polyn Technology is engineered to improve voice processing capabilities for a variety of consumer electronic devices, particularly focusing on addressing challenges associated with traditional digital voice solutions. Built on the NASP platform, this AI chip is tailored to operate efficiently in noisy environments without relying on cloud-based processing, thus ensuring privacy and reducing latency. A key feature of NeuroVoice is its ultra-low power consumption, which allows continuous device operation even in power-sensitive applications like wearables and smart home devices. It includes abilities such as always-on voice activity detection, smart voice control, speaker recognition, and real-time voice extraction. This amalgamation of capabilities makes the NeuroVoice a versatile component in enhancing voice-controlled systems' efficacy. NeuroVoice stands out by seamlessly integrating into devices, offering users the advantage of precise voice recognition and activity detection with minimal energy demands. It further differentiates itself by delivering clear communication even amidst irregular background noises, setting a new benchmark for on-device audio processing with its advanced neural network-driven design.
The eSi-3200 represents the mid-tier solution in the eSi-RISC family, bringing a high degree of versatility and performance to embedded control systems. This 32-bit processor is proficiently designed for scenarios demanding enhanced computational capabilities or extended address spaces without compromise on power efficiency, suitably fitting applications with on-chip memory implementations. Engineered without a cache, the eSi-3200 facilitates deterministic performance essential for real-time applications. It leverages a modified-Harvard architecture allowing concurrent instruction and data fetches, maximizing throughput. With a 5-stage pipeline, the processor achieves high clock frequencies suitable for time-critical operations enhancing responsiveness and efficiency. The comprehensive instruction set encompasses core arithmetic functions, including advanced IEEE-754 single-precision floating-point operations, which cater to data-intensive and mathematically challenging applications. Designed with optimal flexibility, it can accommodate optional custom instructions tailored to specific processing needs, offering a well-balanced solution for versatile embedded applications. Delivered as a Verilog RTL IP core, it ensures platform compatibility, simplifying integration into diverse silicon nodes.
The Nerve IIoT Platform is a comprehensive solution for machine builders, offering cloud-managed edge computing capabilities. This innovative platform delivers high levels of openness, security, flexibility, and real-time data handling, enabling businesses to embark on their digital transformation journeys. Nerve's architecture allows for seamless integration with a variety of hardware devices, from basic gateways to advanced IPCs, ensuring scalability and operational efficiency across different industrial settings. Nerve facilitates the collection, processing, and analysis of machine data in real-time, which is crucial for optimizing production and enhancing operational efficiency. By providing robust remote management functionalities, businesses can efficiently handle device operations and application deployments from any location. This capacity to manage data flows between the factory floor and the cloud transitions enterprises into a new era of digital management, thereby minimizing costs and maximizing productivity. The platform also supports multiple cloud environments, empowering businesses to select their preferred cloud service while maintaining operational continuity. With its secure, IEC 62443-4-1 certified infrastructure, Nerve ensures that both data and applications remain protected from cyber threats. Its integration of open technologies, such as Docker and virtual machines, further facilitates rapid implementation and prototyping, enabling businesses to adapt swiftly to ever-changing demands.
Wormhole is a high-efficiency processor designed to handle intensive AI processing tasks. Featuring an advanced architecture, it significantly accelerates AI workload execution, making it a key component for developers looking to optimize their AI applications. Wormhole supports an expansive range of AI models and frameworks, enabling seamless adaptation and deployment across various platforms. The processor’s architecture is characterized by high core counts and integrated system interfaces that facilitate rapid data movement and processing. This ensures that Wormhole can handle both single and multi-user environments effectively, especially in scenarios that demand extensive computational resources. The seamless connectivity supports vast memory pooling and distributed processing, enhancing AI application performance and scalability. Wormhole’s full integration with Tenstorrent’s open-source ecosystem further amplifies its utility, providing developers with the tools to fully leverage the processor’s capabilities. This integration facilitates optimized ML workflows and supports continuous enhancement through community contributions, making Wormhole a forward-thinking solution for cutting-edge AI development.
SAKURA-II is an advanced AI accelerator recognized for its efficiency and adaptability. It is specifically designed for edge applications that require rapid, real-time AI inference with minimal delay. Capable of processing expansive generative AI models such as Llama 2 and Stable Diffusion within an 8W power envelope, this accelerator supports a wide range of applications from vision to language processing. Its enhanced memory bandwidth and substantial DRAM capacity ensure its suitability for handling complex AI workloads, including large-scale language and vision models. The SAKURA-II platform also features robust power management, allowing it to achieve high efficiency during operations.
The 2D FFT core is engineered to deliver fast processing for two-dimensional FFT computations, essential in image and video processing applications. By utilizing both internal and external memory effectively, this core is capable of handling large data sets typical in medical imaging or aerial surveillance systems. This core leverages Dillon Engineering’s ParaCore Architect utility to maximize flexibility and efficiency. It takes advantage of a two-engine design, where data can flow between stages without interruption, ensuring high throughput and minimal memory delays. Such a robust setup is vital for applications where swift processing of extensive data grids is crucial. The architecture is structured to provide consistent, high-quality transform computations that are essential in applications where accuracy and speed are non-negotiable. The 2D FFT core, with its advanced design parameters, supports the varied demands of modern imaging technology, providing a reliable tool for developers and engineers working within these sectors.
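The two-stage flow described above mirrors the standard row-column decomposition of a 2-D FFT, sketched here with numpy's 1-D FFT standing in for each hardware engine (an illustration of the decomposition, not the core's actual implementation):

```python
# Row-column decomposition of a 2-D FFT: 1-D transforms over rows, then
# over columns. In a two-engine pipeline, engine 1 streams row results
# to engine 2 without writing full intermediates to slow memory.
import numpy as np

def fft2_row_column(x):
    rows_done = np.fft.fft(x, axis=1)     # engine 1: transform each row
    return np.fft.fft(rows_done, axis=0)  # engine 2: transform each column

data = np.arange(64, dtype=float).reshape(8, 8)
result = fft2_row_column(data)
# Mathematically identical to a direct 2-D FFT of the same grid.
```

Because the two passes are independent 1-D transforms, they map naturally onto separate hardware engines with data flowing between stages, which is what keeps throughput high on large image grids.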
The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments.

By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions.

The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively.

RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
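As a generic illustration of blockwise low-bit quantization (this sketch is not RaiderChip's Q4_K/Q5_K scheme, whose details are not public), weights can be stored as 4-bit integers plus a per-block scale, cutting memory roughly 75% versus 16-bit floats:

```python
# Generic blockwise 4-bit weight quantization (illustrative only, not the
# Q4_K algorithm): each block of floats becomes 4-bit ints plus one scale.
import numpy as np

def quantize_block_4bit(w):
    """Map a block of float weights to signed 4-bit ints [-8, 7] + scale."""
    scale = float(np.abs(w).max()) / 7.0 or 1.0  # guard all-zero blocks
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_block(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.7], dtype=np.float32)
q, scale = quantize_block_4bit(w)
w_hat = dequantize_block(q, scale)
# Each weight now costs 4 bits instead of 16: a ~75% memory reduction,
# ignoring the small per-block scale overhead.
```

The practical consequence matches the claim in the description: with token generation bound by memory bandwidth, shrinking the weights both fits larger models into cheaper memory and speeds up inference.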
The Intelligence X280 is engineered to provide extensive capabilities for artificial intelligence and machine learning applications, emphasizing a software-first design approach. This high-performance processor supports vector and matrix computations, making it adept at handling the demanding workloads typical in AI-driven environments. With an extensive ALU and integrated VFPU capabilities, the X280 delivers superior data processing power. Capable of supporting complex AI tasks, the X280 processor leverages SiFive's advanced vector architecture to allow for high-speed data manipulation and precision. The core supports extensive vector lengths and offers compatibility with various machine learning frameworks, facilitating seamless deployment in both embedded and edge AI applications. The Intelligence family, represented by the X280, offers solutions that are not only scalable but are customizable to particular workload specifications. With high-bandwidth interfaces for connecting custom engines, this processor is built to evolve alongside AI's progressive requirements, ensuring relevance in rapidly changing technology landscapes.
Micro Magic offers a state-of-the-art 64-bit RISC-V core known for its ultra-low power consumption, drawing just 10 mW when operating at 1 GHz. This processor harnesses advanced design techniques that allow it to achieve high performance while maintaining low operational voltages, optimizing energy efficiency. It stands out for its capability to deliver impressive processing speeds, reaching up to 5 GHz under optimal conditions. It is designed with power conservation in mind, making it ideal for applications where energy efficiency is critical without sacrificing processing capability. The core is part of Micro Magic’s commitment to pushing the boundaries of low-power processing technology, making it suitable for a variety of high-speed computing tasks. Its design is particularly advantageous in environments demanding swift data processing and minimal power use, reaffirming Micro Magic’s reputation for pioneering efficient silicon solutions.
The Network Protocol Accelerator Platform (NPAP) by Missing Link Electronics is engineered to significantly enhance network protocol processing. This platform leverages MLE's innovative patented and patent-pending technologies to boost the speed of data transmission within FPGAs, achieving impressive rates of up to 100 Gbps. The NPAP provides a robust, efficient solution for offloading processing tasks, leading to superior networking efficiency. MLE's NPAP facilitates multiple high-speed connections and can manage large volumes of data effectively, incorporating support for a variety of network protocols. The design ensures that users benefit from reduced latency and improved data throughput, making it an ideal choice for network-intensive applications. MLE’s expertise in integrating high-performance networking capabilities into FPGA environments comes to the forefront with this product, providing users with a dependable tool for optimizing their network infrastructures.
The eSi-3264 sits at the top of the eSi-RISC portfolio: a 32/64-bit processor furnished with SIMD extensions for high-performance requirements. Designed for applications demanding digital signal processing functionality, the processor occupies minimal silicon area while keeping power consumption exceptionally low. Incorporating a pipeline capable of dual and quad multiply-accumulate operations, the eSi-3264 significantly benefits applications in audio processing, sensor control, and touch interfacing. Built-in IEEE 754 single- and double-precision floating-point operations provide comprehensive data processing capabilities, extending its versatility across computationally intensive domains. The processor accommodates configurable caches and a memory management unit to bolster performance when accessing off-chip memory. Its robust instruction set, optional custom instructions, and user-privilege modes ensure full control in secure execution environments, supporting diverse operational requirements with strong resource efficiency.
The eSi-ADAS Radar IP Suite and Co-processor Engine is at the forefront of automotive and unmanned systems, enhancing radar detection and processing capabilities. It leverages cutting-edge signal processing technologies to provide accurate and rapid situational awareness, crucial for modern vehicles and aerial drones. With its comprehensive offering of radar algorithms, eSi-ADAS supports both traditional automotive radar applications and emerging unmanned aerial vehicle (UAV) platforms. This suite is crafted to meet the complex demands of real-time data processing and simultaneous multi-target tracking in dense environments, key for advanced driver-assistance systems. The co-processor engine within eSi-ADAS is highly efficient, designed to operate alongside existing vehicle systems with minimal additional power consumption. This suite is adaptable, supporting a wide range of vehicle architectures and operational scenarios, from urban driving to cross-country navigation.
Tensix Neo represents the next evolution in AI processing, offering robust capabilities for handling modern AI challenges. Its design focuses on maximizing performance while maintaining efficiency, a crucial aspect in AI and machine learning environments. Tensix Neo facilitates advanced computation across multiple frameworks, supporting a range of AI applications. Featuring a strategic blend of core architecture and integrated memory, Tensix Neo excels in both processing speed and capacity, essential for handling comprehensive AI workloads. Its architecture supports multi-threaded operations, optimizing performance for parallel computing scenarios, which are common in AI tasks. Tensix Neo's seamless connection with Tenstorrent's open-source software environment ensures that developers can quickly adapt it to their specific needs. This interconnectivity not only boosts operational efficiency but also supports continuous improvements and feature expansions through community contributions, positioning Tensix Neo as a versatile solution in the landscape of AI technology.
The Spiking Neural Processor T1 is a microcontroller tailored for ultra-low-power applications demanding high-performance pattern recognition at the sensor edge. It features an advanced neuromorphic architecture that leverages spiking neural network engines combined with RISC-V core capabilities. This architecture allows for sub-milliwatt power dissipation and sub-millisecond latency, enabling the processor to conduct real-time analysis and identification of embedded patterns in sensor data while operating in always-on scenarios. Additionally, the T1 provides diverse interfaces, making it adaptable for use with various sensor types.
Functioning as a comprehensive cross-correlator, the XCM_64X64 facilitates efficient and precise signal processing required in synthetic radar receivers and advanced spectrometers. Designed on IBM's 45nm SOI CMOS technology, it supports ultra-low power operation at about 1.5W for the entire array, with a sampling performance of 1GSps across a bandwidth of 10MHz to 500MHz. The ASIC is engineered to manage high-throughput data channels, a vital component for high-energy physics and space observation instruments.
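The core operation such a cross-correlator accumulates is, in essence, the matrix of channel-pair products at the ADC sample rate. Below is a minimal numpy sketch of the zero-lag case, illustrating the math only, not the ASIC's internal architecture:

```python
import numpy as np

def zero_lag_correlation(x):
    """Zero-lag cross-correlation matrix of multi-channel samples.

    x: (n_channels, n_samples) array of digitized signals.
    A hardware cross-correlator accumulates these channel-pair
    products continuously as samples stream in from the ADCs.
    """
    x = np.asarray(x, dtype=float)
    # Outer product of channels averaged over the sample window
    return (x @ x.T) / x.shape[1]
```

In the actual device this product is computed for all 64x64 channel pairs in parallel, which is what makes a dedicated ASIC so much more power-efficient than a general-purpose processor for this workload.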
ISELED Technology emerges as a revolutionary solution in automotive lighting, integrating digital control of smart RGB LEDs while meeting automotive-grade requirements. The initiative offers smart RGB LEDs precisely calibrated at production, providing manufacturers with a streamlined implementation process. Above all, ISELED reduces the complexity of managing color mixing and compensation. Ideal for ambient and functional lighting, ISELED supports daisy-chained configurations of RGB LEDs, enhancing both the aesthetic and functional illumination possibilities of a vehicle. The integrated communication protocol simplifies color adjustments through a straightforward digital interface, moving away from traditional three-channel current-control methods. These advancements go beyond aesthetics, delivering significant reductions in system cost and complexity: integrated features such as onboard calibration-data storage remove the dependency on external resources during vehicle manufacturing, making ISELED an optimal choice for next-generation automotive lighting technologies.
The RAIV General Purpose GPU (GPGPU) exemplifies versatility and cutting-edge technology in data processing and graphics acceleration. It serves as a key technology enabler for sectors central to the fourth industrial revolution, such as autonomous driving, IoT, virtual and augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries can process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV excels at managing substantial computational loads, essential for AI-driven processes and complex data analytics. Its adaptability suits a wide array of applications, from enhancing automotive AI systems to powering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU enables smoother, more responsive application workflows. Its design focuses on integrative solutions that raise performance without compromising power efficiency, meeting the high demands of today's tech ecosystems and fostering advances in computational efficiency and intelligent processing. As such, the RAIV stands out not only as a tool for improved graphical experiences but as a significant driver of innovation across technology-centric industries worldwide.
The XCM_64X64_A is a powerful array designed for cross-correlation operations, integrating 128 ADCs each capable of 1GSps. Targeted at high-precision synthetic radar and radiometer systems, this ASIC delivers ultra-low power consumption around 0.5W, ensuring efficient performance over a wide bandwidth range from 10MHz to 500MHz. Built on IBM's 45nm SOI CMOS technology, it forms a critical component in systems requiring rapid data sampling and intricate signal processing, all executed with high accuracy, making it ideal for airborne and space-based applications.
Dyumnin Semiconductors' RISCV SoC is a robust solution built around a 64-bit quad-core server-class RISC-V CPU, designed to meet advanced computing demands. This chip is modular, allowing for the inclusion of various subsystems tailored to specific applications. It integrates a sophisticated AI/ML subsystem that features an AI accelerator tightly coupled with a TensorFlow unit, streamlining AI operations and enhancing their efficiency. The SoC supports a multimedia subsystem equipped with IP for HDMI, Display Port, and MIPI, as well as camera and graphic accelerators for comprehensive multimedia processing capabilities. Additionally, the memory subsystem includes interfaces for DDR, MMC, ONFI, NorFlash, and SD/SDIO, ensuring compatibility with a wide range of memory technologies available in the market. This versatility makes it a suitable choice for devices requiring robust data storage and retrieval capabilities. To address automotive and communication needs, the chip's automotive subsystem provides connectivity through CAN, CAN-FD, and SafeSPI IPs, while the communication subsystem supports popular protocols like PCIe, Ethernet, USB, SPI, I2C, and UART. The configurable nature of this SoC allows for the adaptation of its capabilities to meet specific end-user requirements, making it a highly flexible tool for diverse applications.
The **Ceva-SensPro DSP family** unites scalar and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit or 32 16-bit integer MACs at 0.2 TOPS for compact applications such as vision processing in wearables and mobile devices, to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle powertrain control and battery management; these two members are supported by libraries for Eigen linear algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for areas such as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
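As a sanity check on the quoted figures, peak TOPS follows directly from MAC count and clock rate, since each multiply-accumulate contributes two operations per cycle. The sketch below assumes a nominal 1 GHz clock purely for illustration (the listing does not state a frequency): 1024 MACs give about 2 TOPS, matching the SP1000's rating, while the SP100's quoted 0.2 TOPS implies a clock somewhat below 1 GHz.

```python
def peak_tops(mac_count, clock_hz=1.0e9):
    # Each multiply-accumulate counts as 2 ops (multiply + add).
    # clock_hz is an illustrative assumption, not a vendor spec.
    return mac_count * 2 * clock_hz / 1e12

print(peak_tops(1024))  # Ceva-SP1000: ~2.05 TOPS at 1 GHz
print(peak_tops(128))   # Ceva-SP100: ~0.26 TOPS at 1 GHz
```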
Designed for high-performance computing environments, the RISC-V CPU IP UX Class incorporates a 64-bit architecture enriched with MMU capabilities, making it an excellent choice for Linux-based applications within data centers and network infrastructures. This class of processors is optimized to meet the demanding requirements of modern computing systems, where throughput and reliability are critical. The UX Class supports advanced features like multi-core designs, which enable it to efficiently manage parallel processing tasks. This capability allows for significant performance improvements in applications where simultaneous process execution is desired. Moreover, the UX Class adheres to the RISC-V open architecture, promoting flexibility and innovation among developers who require customized, high-performance processor cores. Accompanied by an extensive ecosystem, the UX Class provides developers with a wealth of resources needed to maximize the processor's capabilities. From toolchains to development kits, these resources streamline the deployment process, allowing for the quick adaptation and integration of UX Class processors into existing and new systems alike. The UX Class is instrumental in advancing the development of data-centric applications and infrastructures.
The SiFive Performance family is an embodiment of high-efficiency computing, tailored to deliver maximum throughput across various applications. Designed with a 64-bit out-of-order architecture, these processors are equipped with up to 256-bit vector support, making them proficient in handling complex data and multimedia processing tasks critical for data centers and AI applications. The Performance cores range from 3-wide to 6-wide out-of-order models, capable of integrating up to two vector engines dedicated to AI workload optimizations. This setup provides an excellent balance of energy efficiency and computing power, supporting diverse applications ranging from web servers and network storage to consumer electronics requiring smart capabilities. Focused on maximizing performance while minimizing power usage, the Performance family allows developers to customize and optimize processing capabilities to match specific use-cases. This adaptability, combined with high efficiency, renders the Performance line a fitting choice for modern computational tasks that demand both high throughput and energy conservation.
AheadComputing offers an advanced RISC-V Core IP optimized for high-performance applications. These cores are crafted to enhance instruction per cycle (IPC) and power efficiency, making them ideal for cutting-edge processors demanding robust and reliable performance. By integrating the RISC-V Core into their designs, businesses can leverage a customizable and scalable architecture for their specific application needs. The RISC-V Core from AheadComputing demonstrates superior speed, a testament to their team’s deep expertise in processor technology. It allows seamless integration into existing infrastructure, offering flexibility and adaptability across various applications. The core is engineered to support a wide range of process nodes, ensuring compatibility and longevity in the dynamic tech industry. Standout features include the core's streamlined implementation process and its ability to significantly reduce time-to-market for new products. These cores represent not only a technological advancement but also a strategic resource that empowers companies to maintain a competitive edge in an ever-evolving marketplace.
The UltraLong FFT is designed specifically for handling lengthy data sequences and is optimized for Xilinx FPGAs. This core utilizes external memory to enable the processing of very large block sizes, suitable for applications requiring extensive data handling. Performance is typically constrained by the bandwidth of the external memory, making this a robust option for demanding applications where memory resources are a pivotal consideration. By leveraging Dillon Engineering's sophisticated ParaCore Architect utility, the UltraLong FFT Core is tailored to individual project needs. This core provides engineers with a flexible tool, capable of adapting to variable lengths and data throughput requirements. As such, it plays a vital role in numerous fields including astrophysics and remote sensing, where large-scale data manipulation is essential. The core's architecture is finely tuned to achieve optimal data throughput while balancing memory usage. This makes it highly desirable in scenarios where efficiency and scale are crucial, enabling extensive and complex computations to be conducted seamlessly on FPGA platforms.
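External-memory FFT cores of this kind typically rely on a matrix decomposition such as Bailey's four-step algorithm, in which a length-N transform is split into many short on-chip FFTs separated by streaming passes (and a transpose) over external memory. The sketch below illustrates the math only; `np.fft.fft` stands in for the on-chip FFT engines, and this is not a description of Dillon Engineering's actual implementation:

```python
import numpy as np

def four_step_fft(x, P, Q):
    """Length-(P*Q) FFT via Bailey's four-step decomposition.

    Each step streams the P x Q matrix through short FFTs, which is
    why performance in hardware is bounded by memory bandwidth.
    """
    assert len(x) == P * Q
    A = np.asarray(x, dtype=complex).reshape(P, Q)
    Y = np.fft.fft(A, axis=0)                    # length-P FFTs down columns
    a = np.arange(P).reshape(P, 1)
    q = np.arange(Q).reshape(1, Q)
    Y *= np.exp(-2j * np.pi * a * q / (P * Q))   # twiddle factors W_N^(a*q)
    Z = np.fft.fft(Y, axis=1)                    # length-Q FFTs along rows
    return Z.T.reshape(-1)                       # transpose and flatten to X[k]
```

Because each of the four steps makes one full pass over the data, total runtime is dominated by how fast the P x Q matrix can be read from and written back to external memory, consistent with the bandwidth constraint noted above.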
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The Software-Defined High PHY offered by AccelerComm is a flexible solution designed for ARM processor architectures. This IP enables high performance across various platforms, optimizing capacity and power utilization based on application demands. Its software-defined approach gives users the versatility to integrate it with hardware acceleration or operate it as a standalone solution, depending on specific project needs. This IP underscores AccelerComm's focus on platform independence while ensuring seamless integration across diverse systems. The Software-Defined High PHY is equipped to handle high-throughput, low-latency requirements, making it ideal for applications that demand dynamic performance adjustments. It allows hardware and software to blend seamlessly, balancing performance against resource consumption. This makes the Software-Defined High PHY an ideal choice for companies looking to implement scalable, adaptable wireless communication solutions with efficiency at their core.
The Tyr AI Processor Family is designed around versatile programmability and high performance for AI and general-purpose processing. It consists of variants such as Tyr4, Tyr2, and Tyr1, each offering a unique performance profile optimized for different operational scales. These processors are fully programmable and support high-level programming throughout, ensuring they meet diverse computing needs with precision. Each member of the Tyr family features distinct core configurations, tailored for specific throughput and performance needs. The top-tier Tyr4 boasts 8 cores with a peak capability of 1600 Tflops when leveraging fp8 tensor cores, making it suitable for demanding AI tasks. Tyr2 and Tyr1 scale down these resources to 4 and 2 cores, respectively, achieving proportional efficiency and power savings. All models incorporate substantial on-chip memory, optimizing data handling and execution efficiency without compromising on power use. Moreover, the Tyr processors adapt AI processes automatically on a layer-by-layer basis to enhance implementation efficiency. This adaptability, combined with their near-theory performance levels, renders them ideal for high-throughput AI workloads that require flexible execution and dependable scalability.
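The quoted peaks scale linearly with core count: 1600 fp8 Tflops across Tyr4's 8 cores implies roughly 200 Tflops per core, putting Tyr2 (4 cores) near 800 and Tyr1 (2 cores) near 400. A minimal sketch of that proportionality follows; note the per-core figure is inferred from the Tyr4 numbers above, not a vendor specification:

```python
def peak_tflops(cores, per_core_tflops=200.0):
    # per_core_tflops is inferred: 1600 Tflops / 8 cores on Tyr4.
    # Assumes ideal linear scaling across family members.
    return cores * per_core_tflops

print(peak_tflops(8))  # Tyr4: 1600 fp8 Tflops (quoted)
print(peak_tflops(4))  # Tyr2: ~800 (proportional estimate)
print(peak_tflops(2))  # Tyr1: ~400 (proportional estimate)
```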
VisualSim Architect is an advanced modeling and simulation tool designed for system engineers to explore and analyze performance, power, and functionality of electronic systems. This platform supports a multi-domain model of computation that is capable of simulating a wide range of devices including processors, memory storage, wireless systems, and semiconductor buses. Utilizing an XML database, VisualSim Architect allows for flexible model creation, easy integration across distributed systems, and supports real-time adjustments and batch processing for comprehensive system analysis. The platform boasts extensive libraries for various types of components such as hardware, software, resource management, and traffic control, each designed to streamline model construction and enable thorough exploration across diverse applications. Users benefit from the ability to examine internal logics, manage buffers, and accurately model functionalities to ensure all components meet industry specifications. These IP blocks can be customized and adjusted in real time to fit specific project requirements. VisualSim Architect is equipped with robust reporting features that provide essential insights into system utilization, delay metrics, and advanced cache performance analyses. This tool is designed to be user-friendly, offering a graphical environment for model construction and validation. The software is compatible with major operating systems including Windows, Linux, and Mac OS X, empowering users to leverage its capabilities irrespective of their technical environment.
The AIoT Platform from SEMIFIVE is crafted for building specialized IoT and edge processing devices efficiently with cutting-edge technology. Leveraging silicon-proven design components on Samsung's 14nm process, it streamlines the development of high-performance, power-efficient applications. The platform is equipped with dual SiFive U54 RISC-V CPUs, LPDDR4 memory, and comprehensive interfaces such as MIPI-CSI and USB 3.0. Targeted at consumer electronics such as wearables and smart home devices, it supports a wide array of IoT applications, including industrial IoT and smart security systems. Its architectural flexibility allows customization of system specifications, enabling designers to address the unique requirements of diverse IoT deployments. The platform supports applications with rigorous demands for power efficiency and cost-effectiveness, ensuring swift time-to-market and reduced development cycles. With a collaborative ecosystem of package design, board evaluation, and software, it paves the way for innovative IoT solutions that seamlessly integrate advanced technologies into everyday devices.
The JPEG FPGA core from A2e Technologies is a high-speed solution for still-image and video compression. It delivers exceptional performance, compressing 140 million pixels per second for 4:2:0 and 4:2:2 image formats on Xilinx Spartan-6 FPGAs, while occupying under 500 slices. It supports a true grayscale mode and includes easy-to-interface FIFO interfaces for both input and output; its low power consumption stems from this efficient design. A2e's JPEG core complies with the ISO/IEC 10918-1 JPEG standard and offers high-speed DCT core options. It features a fixed entropy table with sixteen programmable quantization tables, supporting a wide array of JPEG formats. The core handles any image size up to 16K by 16K with varying processing rates: one clock per pixel for grayscale, 1.5 clocks per pixel for YUV 4:2:0, and two clocks per pixel for YUV 4:2:2. The core is highly customizable and is available with AXI-Stream and generic-interface bus versions. Deliverables include FPGA-specific netlists, a bit-accurate C model, and a complete HDL testbench with test images. A2e Technologies provides comprehensive support and licensing options to facilitate seamless integration and deployment of the core.
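Throughput follows directly from the clocks-per-pixel figures: at an assumed 210 MHz core clock (illustrative only; the listing does not state a frequency), 1.5 clocks per pixel for YUV 4:2:0 yields exactly the 140 Mpix/s cited. A hedged sketch of the relationship:

```python
def throughput_mpix_per_s(clock_mhz, clocks_per_pixel):
    # Pixel rate = clock rate / clocks needed per pixel.
    # clock_mhz is an illustrative assumption, not a vendor spec.
    return clock_mhz / clocks_per_pixel

# At a hypothetical 210 MHz clock:
print(throughput_mpix_per_s(210, 1.0))  # grayscale: 210 Mpix/s
print(throughput_mpix_per_s(210, 1.5))  # YUV 4:2:0: 140 Mpix/s
print(throughput_mpix_per_s(210, 2.0))  # YUV 4:2:2: 105 Mpix/s
```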
The SoC Platform by SEMIFIVE facilitates the rapid development of custom silicon chips, optimized for specific applications through the use of domain-specific architectures. Paired with a pool of pre-verified IPs, it lowers the cost, mitigates risks, and speeds up the development timeline compared to traditional methods. This platform effortlessly supports a multitude of applications by providing silicon-proven infrastructure. Supporting various process technologies, this platform integrates seamlessly with existing design methodologies, offering flexibility and the possibility to fine-tune specifications according to application needs. The core of the platform's design philosophy focuses on maximizing reusability and minimizing engineering overhead, key for reducing time-to-market. Designed for simplicity and comprehensiveness, the SoC Platform offers tools and models that ensure quality and reduce integration complexity, from architecture and physical design to software support. As an end-to-end solution, it stands out as a reliable partner for enterprises aiming to bring innovative products to market efficiently and effectively.
The Chimera Software Development Kit (SDK) by Quadric empowers developers with a robust platform to create and deploy complex AI and machine learning applications efficiently. It offers tools for developing, simulating, profiling, and deploying software, perfectly adaptable for Quadric’s Chimera GPNPU. The SDK simplifies coding by allowing integration of machine learning graph code with traditional C++ code into a singular, streamlined programming flow. This SDK includes the Chimera LLVM C++ compiler which utilizes state-of-the-art compiler infrastructure tailored to Chimera's specific instruction sets, driving efficiency and optimization. The SDK is compatible with Docker environments, enabling seamless on-premises or cloud-based deployment. This flexibility supports the versatile development needs of corporates working with private proprietary models while streamlining the toolchain for increased productivity. Its Graph Compiler transcodes machine learning inference models from popular frameworks like TensorFlow and PyTorch into optimized C++ using the Chimera Compute Library. This feature ensures that even the most complex AI models are efficiently deployed, lowering computational overheads and maximizing processing potential. Hence, the Chimera SDK serves as an invaluable tool for engineers aiming to expedite the deployment of cutting-edge ML algorithms both effectively and swiftly.
GateMate FPGA is a highly versatile and cost-effective Field-Programmable Gate Array designed to cater to a wide array of applications, from telecommunications to industrial purposes. Utilized in applications where flexibility and adaptability are critical, the GateMate FPGA shines with its reprogrammable architecture. Engineers appreciate the ability to tailor the device post-manufacturing to suit specific needs, providing an edge in scenarios demanding rapid technological adaptability. The GateMate FPGAs are noted for their power efficiency and broad multi-node portfolio, accommodating both low- and mid-range applications. This FPGA stands out for its impressive balance of price, performance, and reliability. Manufactured using the GlobalFoundries 28nm node process, it ensures durability and a consistent supply chain. Industries leveraging the GateMate FPGAs benefit from its robust performance in areas such as signal processing, data transmission, and complex algorithm acceleration. It plays a crucial role in enabling real-time data flows and tasks that demand parallel processing, especially evident in sectors like automotive and aerospace where the ability to evolve rapidly with industry needs is indispensable.
Grayskull is Tenstorrent's flagship product specifically designed to enhance AI and machine learning workloads. This high-performance solution provides optimized computing power through its use of advanced architectural design, tailored for efficient AI processing. Grayskull supports diverse applications, providing developers with the flexibility to implement and scale across various AI frameworks. Equipped with numerous cores and integrated memory capabilities, Grayskull offers a robust platform that caters to both training and inference operations. It excels in handling large datasets and complex model computations, delivering superior throughput and latency management which are crucial for AI tasks. The design is also power-efficient, making it an ideal choice for institutions aiming to maximize performance while managing power consumption. Grayskull is fully supported by Tenstorrent’s open-source software stack, which enhances its integration capabilities with pre-existing systems. By aligning with popular AI frameworks and encouraging community-driven improvements, Grayskull represents a strategic investment for industries looking to push the envelope in artificial intelligence.