

Platform Level IP: Comprehensive Semiconductor Solutions

Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.

Subcategories within Platform Level IP cover a broad spectrum of integration needs:

1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.

2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.

3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.

Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.

221 IPs available

Akida 2nd Generation

The second generation of BrainChip's Akida platform expands upon its predecessor with enhanced performance, efficiency, and accuracy in AI applications. The platform leverages 8-bit quantization and expanded neural network support, including temporal event-based neural networks and vision transformers. These advancements allow for significant reductions in model size and computational requirements, making the Akida 2nd Generation a formidable component for edge AI solutions. The platform effectively supports complex neural models necessary for a wide range of applications, from advanced vision tasks to real-time data processing, all while minimizing cloud interaction to protect data privacy.

BrainChip
AI Processor, Digital Video Broadcast, IoT Processor, Multiprocessor / DSP, Security Protocol Accelerators, Vision Processor

CXL 3.1 Switch

The CXL 3.1 Switch by Panmnesia is a high-performance solution facilitating flexible and scalable inter-device connectivity. Designed for data centers and HPC systems, this switch supports extensive device integration, including memory, CPUs, and accelerators, thanks to its advanced connectivity features. The switch's design allows for complex networking configurations, promoting efficient resource utilization while ensuring low-latency communication between connected devices. It stands as an essential component in disaggregated compute environments, driving down latency and operational costs.

Panmnesia
AMBA AHB / APB/ AXI, CXL, D2D, Fibre Channel, Multiprocessor / DSP, PCI, Processor Core Dependent, Processor Core Independent, RapidIO, SAS, SATA, V-by-One

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
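As a rough illustration of what a 16 TOPS budget means in practice, per-inference latency can be estimated from a model's operation count. The model size and utilization figure below are illustrative assumptions, not AiM Future specifications:

```python
# Back-of-the-envelope latency estimate for a fixed-function accelerator:
# latency = model operations / (peak throughput * achieved utilization).

def latency_ms(model_gops: float, peak_tops: float, utilization: float = 0.5) -> float:
    """Estimated per-inference latency in milliseconds."""
    ops = model_gops * 1e9                      # operations per inference
    effective = peak_tops * 1e12 * utilization  # sustained ops/s (assumed 50% util)
    return ops / effective * 1e3

# Hypothetical 8-GOP vision model on a 16 TOPS accelerator -> about 1 ms:
print(latency_ms(8, 16))
```

Real utilization varies widely with model structure and memory traffic, so estimates like this only bound the achievable frame rate.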

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

Metis AIPU PCIe AI Accelerator Card

Axelera AI has crafted a PCIe AI acceleration card, powered by their high-efficiency quad-core Metis AIPU, to tackle complex AI vision tasks. This card provides an extraordinary 214 TOPS, enabling it to process the most demanding AI workloads. Enhanced by the Voyager SDK's streamlined integration capabilities, this card promises quick deployment while maintaining superior accuracy and power efficiency. It is tailored for applications that require high throughput and minimal power consumption, making it ideal for edge computing.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor, WMV

Exostiv

Exostiv is designed to provide significant visibility inside FPGA systems, enabling engineers to conduct real-environment testing and ensure that designs function efficiently before entering production. Featuring high-speed probes capable of capturing complex signals, Exostiv supports advanced FPGA debugging through its user-centric interface and adaptable insertion flows. It facilitates both pre-silicon validation and debugging by allowing in-depth monitoring across various clock domains. With connectivity options like QSFP28 and SAMTEC ARF-6, Exostiv empowers engineers with a flexible approach to manage different prototyping platforms effectively. The scalability of Exostiv allows its users to adapt to diverse FPGA configurations by adjusting the number and type of probes. Exostiv significantly reduces the likelihood of FPGA bugs in end-user environments by enabling engineers to thoroughly validate and adjust designs dynamically as needed. Its modular setup characterizes the adaptive nature of Exostiv’s architecture, making it suitable for application-specific optimizations in complex design environments.

Exostiv Labs
AMBA AHB / APB/ AXI, Processor Core Independent, SDRAM Controller

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

The Tianqiao-70 is engineered for ultra-low power consumption while maintaining robust computational capabilities. This commercial-grade 64-bit RISC-V CPU core presents an ideal choice for scenarios demanding minimal power usage without conceding performance. It is primarily designed for emerging mobile applications and devices, providing both economic and environmental benefits. Its architecture prioritizes low energy profiles, making it perfect for a wide range of applications, including mobile computing, desktop devices, and intelligent IoT systems. The Tianqiao-70 fits well into environments where conserving battery life is a priority, ensuring that devices remain operational for extended periods without needing frequent charging. The core maintains a focus on energy efficiency, yet it supports comprehensive computing functions to address the needs of modern, power-sensitive applications. This makes it a flexible component in constructing a diverse array of SoC solutions and meeting specific market demands for sustainability and performance.

StarFive
AI Processor, CPU, Multiprocessor / DSP, Processor Cores

Metis AIPU M.2 Accelerator Module

The Metis M.2 AI accelerator module from Axelera AI is a cutting-edge solution for embedded AI applications. Designed for high-performance AI inference, the module features a single quad-core Metis AIPU that delivers industry-leading performance. With dedicated 1 GB DRAM memory, it operates efficiently within compact form factors like the NGFF M.2 socket. This capability unlocks tremendous potential for a range of AI-driven vision applications, offering seamless integration and heightened processing power.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Processor Core Dependent, Vision Processor, WMV

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it employs a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. 
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.

RaiderChip
GlobalFoundries, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

NuLink Die-to-Die PHY for Standard Packaging

The NuLink Die-to-Die PHY for Standard Packaging is a cutting-edge interconnect solution that bridges multiple dies on a single standard package substrate. This technology supports numerous industry standards, including UCIe and BoW, and adapts to both advanced and conventional packaging setups. It enables low-power, high-performance interconnections that are instrumental in the design of multi-die systems like SiPs, delivering bandwidth and power efficiencies comparable to those of more costly packaging technologies. Eliyan's PHY technology, distinctive for its innovative implementation methods, offers performance attributes similar to those of advanced packaging alternatives, but at a fraction of the thermal, cost, and production-time expenditures. This design approach effectively utilizes standard packages, circumventing the complexities associated with silicon interposers, while still delivering robust data handling capabilities essential for sophisticated ASIC designs. With up to 64 data lanes operating at data rates of up to 32 Gbps per lane, the NuLink Die-to-Die interconnect elements ensure consistent performance. Such specifications make them suitable for high-demand applications requiring reliable, efficient data transmission across multiple processing elements, reinforcing their role as a fundamental building block in the semiconductor landscape.
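From the figures quoted above (64 lanes at 32 Gbps each), the raw aggregate link bandwidth works out as follows, ignoring line coding and protocol overhead:

```python
# Raw aggregate bandwidth implied by the quoted lane count and lane rate.
lanes = 64
gbps_per_lane = 32
total_gbps = lanes * gbps_per_lane   # 2048 Gb/s
total_gbytes = total_gbps / 8        # 256 GB/s raw, before encoding overhead
print(total_gbps, total_gbytes)
```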

Eliyan
Samsung
3nm, 5nm
AMBA AHB / APB/ AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent

SCR9 Processor Core

Designed for entry-level server-class applications, the SCR9 is a 64-bit RISC-V processor core that comes equipped with cutting-edge features, such as an out-of-order superscalar pipeline, making it apt for processing-intensive environments. It supports both single and double-precision floating-point operations adhering to IEEE standards, which ensure precise computation results. This processor core is tailored for high-performance computing needs, with a focus on AI and ML, as well as conventional data processing tasks. It integrates an advanced interrupt system featuring APLIC configurations, enabling responsive operations even under heavy workloads. SCR9 supports up to 16 cores in a multi-cluster arrangement, each utilizing coherent multi-level caches to maintain rapid data processing and management. The comprehensive development package for SCR9 includes ready-to-deploy toolchains and simulators that expedite software development, particularly within Linux environments. The core is well-suited for deployment in entry-level server markets and data-intensive applications, with robust support for virtualization and heterogeneous architectures.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Cores

Veyron V2 CPU

Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.

Ventana Micro Systems
AI Processor, CPU, Processor Core Dependent, Processor Cores

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

ORC3990 – DMSS LEO Satellite Endpoint System On Chip (SoC)

The ORC3990 SoC is a state-of-the-art solution designed for satellite IoT applications within Totum's DMSS™ network. This low-power sensor-to-satellite system integrates an RF transceiver, ARM CPUs, memories, and a power amplifier (PA) to offer seamless IoT connectivity via LEO satellite networks. It boasts an optimized link budget for effective indoor signal coverage, eliminating the need for additional GNSS components. This compact SoC supports industrial temperature ranges and is engineered for a 10+ year battery life using advanced power management.

Orca Systems Inc.
TSMC
22nm
3GPP-5G, Bluetooth, Processor Core Independent, RF Modules, USB, Wireless Processor

Yitian 710 Processor

The Yitian 710 Processor is an advanced Arm-based server chip developed by T-Head, designed to meet the extensive demands of modern data centers and enterprise applications. This processor boasts 128 high-performance Armv9 CPU cores, each coupled with robust caches, ensuring superior processing speeds and efficiency. With a 2.5D packaging technology, the Yitian 710 integrates multiple dies into a single unit, facilitating enhanced computational capability and energy efficiency. One of the key features of the Yitian 710 is its memory subsystem, which supports up to 8 channels of DDR5 memory, achieving a peak bandwidth of 281 GB/s. This configuration guarantees rapid data access and processing, crucial for high-throughput computing environments. Additionally, the processor is equipped with 96 PCIe 5.0 lanes, offering a dual-direction bandwidth of 768 GB/s, enabling seamless connectivity with peripheral devices and boosting system performance overall. The Yitian 710 Processor is meticulously crafted for applications in cloud services, big data analytics, and AI inference, providing organizations with a robust platform for their computing needs. By combining high core count, extensive memory support, and advanced I/O capabilities, the Yitian 710 stands as a cornerstone for deploying powerful, scalable, and energy-efficient data processing solutions.
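The quoted 281 GB/s peak is consistent with eight 64-bit DDR5 channels running at 4400 MT/s; a quick sanity check (the per-channel transfer rate is inferred from the totals, not a published T-Head figure):

```python
# Sanity check of the quoted 281 GB/s peak DDR5 memory bandwidth,
# assuming eight 64-bit channels at DDR5-4400 (assumed rate).
channels = 8
transfers_per_s = 4400e6   # 4400 MT/s per channel, assumed
bytes_per_transfer = 8     # 64-bit channel width
peak_gb_s = channels * transfers_per_s * bytes_per_transfer / 1e9
print(peak_gb_s)  # -> 281.6, matching the quoted figure
```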

T-Head
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor

eSi-3250

The eSi-3250 32-bit RISC processor core excels in applications needing efficient caching structures and high-performance computation, thanks to its support for both instruction and data caches. This core targets applications where slower memory technologies or higher core/bus clock ratios exist, by leveraging configurable caches which reduce power consumption and boost performance. This advanced processor design integrates a wide range of arithmetic capabilities, supporting IEEE-754 floating-point functions and 32-bit SIMD operations to facilitate complex data processing. It uses an optional memory management unit (MMU) for virtual memory implementation and memory protection, enhancing its functional safety in various operating environments.

eSi-RISC
Intel Foundry, Samsung, TSMC, UMC
12nm, 16nm, 28nm
CPU, Microcontroller, Multiprocessor / DSP, Processor Cores

H.264 FPGA Encoder and CODEC Micro Footprint Cores

This ultra-compact and high-speed H.264 core is engineered for FPGA platforms, boasting industry-leading size and performance. Capable of providing 1080p60 H.264 Baseline support, it accommodates various customization needs, including different pixel depths and resolutions. The core is particularly noted for its minimal latency of less than 1ms at 1080p30, a significant advantage over competitors. Its flexibility allows integration with a range of FPGA systems, ensuring efficient compression without compromising on speed or size. In one versatile package, users have access to a comprehensive set of encoding features including variable and fixed bit-rate options. The core facilitates simultaneous processing of multiple video streams, adapting to various compression ratios and frame types (I and P frames). Its support for advanced video input formats and compliance with ITAR guidelines make it a robust choice for both military and civilian applications. Moreover, the availability of low-cost evaluation licenses invites experimentation and custom adaptation, promoting broad application and ease of integration in diverse projects. These cores are especially optimized for low power consumption, drawing minimal resources in contrast to other market offerings due to their efficient FPGA design architecture. They include a suite of enhanced features such as an AXI wrapper for simple system integration and significantly reduced Block RAM requirements. Embedded systems benefit from its synchronous design and wide support for auxiliary functions like simultaneous stream encoding, making it a versatile addition to complex signal processing environments.

A2e Technologies
TSMC
16nm, 130nm, 180nm
AI Processor, AMBA AHB / APB/ AXI, Arbiter, H.264, Multiprocessor / DSP, Other, TICO, USB

Chimera GPNPU

The Chimera GPNPU by Quadric redefines AI computing on devices by combining processor flexibility with NPU efficiency. Tailored for on-device AI, it tackles significant machine learning inference challenges faced by SoC developers. This licensable processor scales massively, offering performance from 1 to 864 TOPS. One of its standout features is the ability to execute matrix, vector, and scalar code in a single pipeline, essentially merging the functionalities of NPUs, DSPs, and CPUs into a single core. Developers can easily incorporate new ML networks such as vision transformers and large language models without the typical overhead of partitioning tasks across multiple processors. The Chimera GPNPU is entirely code-driven, empowering developers to optimize their models throughout a device's lifecycle. Its architecture allows for future-proof flexibility, handling newer AI workloads as they emerge without necessitating hardware changes. In terms of memory efficiency, the Chimera architecture is notable for its compiler-driven DMA management and support for multiple levels of data storage. Its rich instruction set optimizes both 8-bit integer operations and complex DSP tasks, providing full support for C++ coded projects. Furthermore, the Chimera GPNPU integrates AXI interfaces for efficient memory handling and configurable L2 memory to minimize off-chip access, crucial for maintaining low power dissipation.

Quadric
All Foundries
All Process Nodes
AI Processor, AMBA AHB / APB/ AXI, CPU, DSP Core, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, VGA, Vision Processor

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor

eSi-3200

The eSi-3200 is a versatile 32-bit RISC processor core that combines low power usage with high performance, ideal for embedded control applications using on-chip memory. Its structure supports a wide range of computational tasks with a modified-Harvard architecture that allows simultaneous instruction and data fetching. This design facilitates deterministic performance, making it well suited to real-time control. The eSi-3200 supports extensive arithmetic operations, offering an optional IEEE-754 single-precision floating-point unit and SIMD instructions that optimize parallel data processing. Its compatibility with AMBA AXI or AHB interconnects ensures easy integration into various systems.

eSi-RISC
Intel Foundry, Samsung, TSMC, UMC
12nm, 16nm, 28nm
CPU, Microcontroller, Multiprocessor / DSP, Processor Cores

xcore.ai

The xcore.ai platform from XMOS is engineered to revolutionize the scope of intelligent IoT by offering a powerful yet cost-efficient solution that combines high-performance AI processing with flexible I/O and DSP capabilities. At its heart, xcore.ai boasts a multi-threaded architecture with 16 logical cores divided across two processor tiles, each equipped with substantial SRAM and a vector processing unit. This setup ensures seamless execution of integer and floating-point operations while facilitating high-speed communication between multiple xcore.ai systems, allowing for scalable deployments in varied applications. One of the standout features of xcore.ai is its software-defined I/O, enabling deterministic processing and precise timing accuracy, which is crucial for time-sensitive applications. It integrates embedded PHYs for various interfaces such as MIPI, USB, and LPDDR, enhancing its adaptability in meeting custom application needs. The device's clock frequency can be adjusted to optimize power consumption, affirming its cost-effectiveness for IoT solutions demanding high efficiency. The platform's DSP and AI performances are equally impressive. The 32-bit floating-point pipeline can deliver up to 1600 MFLOPS with additional block floating point capabilities, accommodating complex arithmetic computations and FFT operations essential for audio and vision processing. Its AI performance reaches peaks of 51.2 GMACC/s for 8-bit operations, maintaining substantial throughput even under intensive AI workloads, making xcore.ai an ideal candidate for AI-enhanced IoT device creation.
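One plausible decomposition of the quoted 51.2 GMACC/s peak is tiles × MACs per cycle × clock frequency; the per-tile MAC width and clock below are assumptions for illustration, not XMOS specifications:

```python
# A possible breakdown of the quoted 51.2 GMACC/s 8-bit peak
# (all per-tile figures below are assumed, not published specs).
tiles = 2             # two processor tiles, per the description above
macs_per_cycle = 32   # hypothetical per-tile vector MAC width for int8
clock_hz = 800e6      # assumed operating frequency
peak_gmacc = tiles * macs_per_cycle * clock_hz / 1e9
print(peak_gmacc)  # -> 51.2
```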

XMOS Semiconductor
20 Categories

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interface support ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Talamo SDK

The Talamo SDK is a comprehensive software development toolkit designed to facilitate the creation and deployment of advanced neuromorphic AI models on Innatera's Spiking Neural Processor (SNP). It integrates into the PyTorch environment, allowing developers to seamlessly build, train, and deploy neural network models, enhancing flexibility and accessibility in developing AI applications tailored to specific needs without requiring detailed expertise in neuromorphic computing. Talamo enhances the development workflow by offering standard and custom function integration, model zoo access, and application pipeline construction. The SDK provides profiling and optimization tools to ensure applications are both efficient and performant, allowing for quick design iterations. It also includes a hardware architecture simulator, enabling developers to validate and iterate their designs rapidly before implementation on actual hardware. With the Talamo SDK, developers can exploit the SNP's heterogeneous computing capabilities, utilizing its diverse architectural elements to optimize application performance. Additionally, its support for end-to-end application development, without the necessity of deep SNN knowledge, allows for broader reach and application, from research to industrial solutions.

Innatera Nanosystems
Processor Core Independent
View Details

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Andes Technology
Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell
View Details

eSi-3264

The eSi-3264 is a cutting-edge 32/64-bit processor core that incorporates SIMD DSP extensions, making it suitable for applications requiring both efficient data parallelism and minimal silicon footprint. Designed for high-accuracy DSP tasks, this processor's multifunctional capabilities target audio processing, sensor hubs, and complex arithmetic operations. The eSi-3264 processor supports sizeable instruction and data caches, which significantly enhance system performance when accessing slower external memory sources. With dual and quad MAC operations that include 64-bit accumulation, it enhances DSP execution, applying 8, 16, and 32-bit SIMD instructions for real-time data handling and minimizing CPU load.
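The dual/quad MAC operations with 64-bit accumulation described above can be sketched in software. The model below is illustrative only, not the eSi-RISC instruction set: it shows the kind of packed 16-bit multiply-accumulate step such SIMD DSP extensions execute per cycle, driving a small FIR-style dot product.

```python
# Illustrative model (not the actual eSi-RISC ISA): a dual 16-bit MAC with
# 64-bit accumulation, the style of operation SIMD DSP extensions execute
# in a single cycle.

def dual_mac_16(acc, a_pair, b_pair):
    """Multiply two packed 16-bit pairs and add both products into a
    64-bit accumulator, wrapping as fixed-width hardware would."""
    for a, b in zip(a_pair, b_pair):
        acc = (acc + a * b) & 0xFFFFFFFFFFFFFFFF  # 64-bit wrap
    return acc

# A small FIR-style dot product built from dual-MAC steps.
coeffs = [3, -2, 7, 5]
samples = [10, 20, 30, 40]
acc = 0
for i in range(0, len(coeffs), 2):
    acc = dual_mac_16(acc, coeffs[i:i + 2], samples[i:i + 2])
```

With quad-MAC variants, four such products would be folded into the accumulator per step, halving the loop count again.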

eSi-RISC
Intel Foundry, Samsung, TSMC, UMC
12nm, 16nm, 28nm
CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Avispado

The Avispado core is a 64-bit in-order RISC-V processor that provides an excellent balance of performance and power efficiency. With a focus on energy-conscious designs, Avispado facilitates the development of machine learning applications and is well suited to environments with limited silicon resources. It leverages Semidynamics' innovative Gazzillion Misses™ technology to address challenges with sparse tensor weights, enhancing energy efficiency and operational performance for AI tasks. Structured to support multiprocessor configurations, Avispado is integral in systems requiring cache coherence and high memory throughput. It is particularly suitable for recommendation systems due to its ability to manage numerous outstanding memory requests, thanks to its advanced memory interface architecture. Integration with Semidynamics' Vector Unit enriches its offering, enabling dense computations and optimal performance on vector tasks. Linux-ready operation and support for the RISC-V Vector Specification 1.0 ensure that Avispado integrates seamlessly into existing frameworks, fostering innovative applications in fields like data centers and beyond.

Semidynamics
AI Processor, AMBA AHB / APB/ AXI, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, WMA
View Details

Veyron V1 CPU

The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support the next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing the same class-leading performance with added flexibility for bespoke integrations.

Ventana Micro Systems
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Cores
View Details

Jotunn8 AI Accelerator

The Jotunn8 is engineered to redefine performance standards for AI datacenter inference, supporting prominent large language models. Fully programmable and algorithm-agnostic, it supports any algorithm and any host processor, and can execute generative AI models such as GPT-4 or Llama3 with high efficiency. The system delivers cost-effective solutions with throughput of up to 3.2 petaflops (dense) without relying on CUDA, simplifying scalability and deployment. Optimized for cloud and on-premise configurations, Jotunn8 integrates 16 cores and a high-level programming interface. Its architecture addresses conventional processing bottlenecks by keeping data constantly available at each processing unit. Able to run large, complex models at reduced query cost, this accelerator maintains performance while consuming less power, making it a strong choice for advanced AI tasks. The Jotunn8's hardware extends beyond AI-specific applications to general-purpose processing, showcasing its agility. By automatically selecting the most suitable processing path layer by layer, it optimizes both latency and power consumption, giving users a flexible platform for deploying very large AI models with efficient resource utilization. The configuration includes a peak power consumption of 180W and 192 GB of on-chip memory, accommodating sophisticated AI workloads with ease. It operates close to the theoretical limits of implementation efficiency, underscoring VSORA's commitment to high-performance computation.
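The layer-by-layer path selection mentioned above can be pictured as a per-layer cost minimization. The sketch below is purely hypothetical: the path names, latency and power figures, and the linear cost model are invented for illustration, not VSORA's actual scheduling algorithm.

```python
# Hypothetical illustration of per-layer path selection: for each network
# layer, pick the execution path minimizing a weighted latency/power cost.
# Path names and all numbers are invented for this sketch.

def pick_path(paths, power_weight=0.5):
    """paths: {name: (latency_ms, power_w)} -> name of lowest-cost path."""
    return min(paths, key=lambda p: paths[p][0] + power_weight * paths[p][1])

layers = {
    "conv1": {"tensor_core": (1.0, 8.0), "general_purpose": (3.0, 2.0)},
    "attn1": {"tensor_core": (0.5, 9.0), "general_purpose": (4.0, 2.5)},
}
schedule = {layer: pick_path(opts) for layer, opts in layers.items()}
```

Note that with these invented numbers the two layers land on different paths, which is exactly the flexibility a layer-wise selector buys over a single global assignment.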

VSORA
AI Processor, Interleaver/Deinterleaver, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Time-Triggered Ethernet

Time-Triggered Ethernet (TTEthernet) represents a significant advancement in network technology by integrating time-triggered communication over standard Ethernet infrastructures. This technology is designed to meet the stringent real-time requirements of aerospace and industrial applications, offering deterministic data transfer alongside regular Ethernet traffic within a shared network. TTEthernet delivers seamless synchronization across all network devices, ensuring that time-critical data packets are processed with precise timing. This capability is essential for applications where simultaneous actions from multiple systems require tight coordination, such as flight control systems or automated industrial processes. The protocol's compatibility with existing Ethernet environments allows for easy integration into current systems, reducing costs associated with network infrastructure upgrades. TTEthernet also enhances network reliability through redundant data paths and failover mechanisms, which guarantee continuous operation even in the event of link failures. As a result, TTEthernet provides a future-proof solution for managing both regular and mission-critical data streams within a single unified network environment. Its capacity to support various operational modes makes it an attractive choice for industries pursuing high standards of safety and efficiency.
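The deterministic behavior described above rests on a static schedule in which each time-critical frame owns a fixed transmit window within a communication cycle. The sketch below is a simplified illustration of checking such a schedule for conflicts, not the actual TTEthernet (SAE AS6802) protocol machinery; frame names and timings are invented.

```python
# Simplified sketch of a time-triggered schedule check (not the actual
# TTEthernet protocol): each frame owns a fixed window within a cycle, and
# windows on the same link must not overlap or run past the cycle end.

def windows_conflict(schedule, cycle_us):
    """schedule: list of (name, start_us, duration_us) for one link.
    Returns pairs of conflicting entries."""
    conflicts = []
    items = sorted(schedule, key=lambda f: f[1])
    for (n1, s1, d1), (n2, s2, d2) in zip(items, items[1:]):
        if s1 + d1 > s2:  # next window starts before this one ends
            conflicts.append((n1, n2))
    if items and items[-1][1] + items[-1][2] > cycle_us:
        conflicts.append((items[-1][0], "cycle_end"))
    return conflicts

sched = [("flight_ctrl", 0, 100), ("sensor_bus", 150, 100), ("best_effort", 200, 100)]
```

Here the invented "sensor_bus" window overruns into "best_effort", the kind of overlap an offline schedule synthesizer must rule out before deployment.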

TTTech Computertechnik AG
Ethernet, FlexRay, LIN, MIL-STD-1553, MIPI, Processor Core Independent, Safe Ethernet
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator (DNA) II offers a groundbreaking approach to enhancing edge AI performance. This neural network architecture core stands out for its runtime-reconfigurable architecture, which allows efficient interconnection between compute components. DNA II supports both convolutional and transformer networks, accommodating an extensive array of edge AI functions, and its scalable performance makes it a valuable asset in the development of system-on-chip (SoC) solutions. DNA II is built on EdgeCortix's patented data-path architecture, which focuses on maximizing the use of available computing resources. This architecture allows DNA II to maintain low power consumption while flexibly adapting to varying task demands across diverse AI models. Its higher utilization rates and faster processing set it apart from traditional IP core solutions, addressing industry demand for more efficient AI processing. In concert with the MERA software stack, DNA II optimally sequences computation tasks and resource distribution, further refining efficiency in processing neural networks. This integration of hardware and software reduces on-chip memory bandwidth usage and enhances the system's parallel processing ability, catering to the intricate needs of modern AI computing environments.

EdgeCortix Inc.
AI Processor, Audio Processor, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, using advanced quantization techniques such as Q4_K and Q5_K to enhance AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no dependence on external networks or cloud services. Its design combines strong computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring parameters such as model scale, inference speed, and power consumption to exacting user specifications. RaiderChip's GenAI v1-Q addresses key AI industry needs with its ability to run multiple transformer-based models and keep confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to AI solutions that are both environmentally sustainable and economically viable.
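The memory savings from weight quantization come from storing a short integer code per weight plus one scale per block. The sketch below shows generic symmetric 4-bit blockwise quantization; it is illustrative only and is not the actual Q4_K/Q5_K formats, which use a more elaborate block layout.

```python
# Generic 4-bit blockwise quantization sketch -- illustrative only, NOT the
# actual Q4_K/Q5_K formats. Each block stores one float scale plus one
# small signed integer code per weight.

def quantize_block(values, bits=4):
    """Symmetric per-block quantization: codes lie in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit
    scale = max(abs(v) for v in values) / qmax or 1.0
    codes = [round(v / scale) for v in values]
    return scale, codes

def dequantize_block(scale, codes):
    return [c * scale for c in codes]

weights = [0.7, -1.4, 0.35, 2.8]
scale, codes = quantize_block(weights)
restored = dequantize_block(scale, codes)
```

The round-trip error of each weight is bounded by half the block scale, which is why quantization quality depends on keeping outlier weights from inflating the scale of their block.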

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

GSHARK

GSHARK is a high-performance GPU IP designed to accelerate graphics on embedded devices. Known for its extreme power efficiency and seamless integration, this GPU IP significantly reduces CPU load, making it ideal for use in devices like digital cameras and automotive systems. Its remarkable track record of over one hundred million shipments underscores its reliability and performance. Engineered with TAKUMI's proprietary architecture, GSHARK integrates advanced rendering capabilities. This architecture supports real-time, on-the-fly graphics processing similar to that found in PCs, smartphones, and gaming consoles, ensuring a rich user experience and efficient graphics applications. This IP excels in environments where power consumption and performance balance are crucial. GSHARK is at the forefront of embedded graphics solutions, providing significant improvements in processing speed while maintaining low energy usage. Its architecture easily handles demanding graphics rendering tasks, adding considerable value to any embedded system it is integrated into.

TAKUMI Corporation
GPU, Processor Core Independent
View Details

ISPido on VIP Board

ISPido on VIP Board is a customized runtime solution tailored for Lattice Semiconductor's Video Interface Platform (VIP) board. This setup enables real-time image processing and provides flexibility for both automated configuration and manual control through a menu interface. Users can adjust settings via histogram readings, select gamma tables, and apply convolutional filters to achieve optimal image quality. Equipped with key components like the CrossLink VIP input bridge board and ECP5 VIP Processor with ECP5-85 FPGA, this solution supports dual image sensors to produce a 1920x1080p HDMI output. The platform enables dynamic runtime calibration, providing users with interface options for active parameter adjustments, ensuring that image settings are fine-tuned for various applications. This system is particularly advantageous for developers and engineers looking to integrate sophisticated image processing capabilities into their devices. Its runtime flexibility and comprehensive set of features make it a valuable tool for prototyping and deploying scalable imaging solutions.

DPControl
18 Categories
View Details

Tyr Superchip

The Tyr Superchip is engineered to tackle the most daunting computational challenges in edge AI, autonomous driving, and decentralized AIoT applications. It merges AI and DSP functionalities into a single, unified processing unit capable of real-time data management and processing. This all-encompassing chip solution handles the vast amounts of sensor data necessary for complete autonomous driving and supports rapid AI computing at the edge. One of the key challenges it addresses is providing massive compute power combined with low-latency outputs, achieving what traditional architectures cannot in terms of energy efficiency and speed. Tyr chips incorporate robust safety mechanisms and are ISO 26262 and ASIL-D ready, making them well suited to the critical standards required in automotive systems. Designed with high programmability, the Tyr Superchip accommodates the fast-evolving needs of AI algorithms and supports modern software-defined vehicles. Its low power consumption, under 50W for higher-end tasks, paired with a small silicon footprint, ensures it meets eco-friendly demands while staying cost-effective. VSORA's Superchip is a testament to their innovative prowess, promising unmatched efficiency in processing real-time data streams. By providing both power and processing agility, it effectively supports the future of mobility and AI-driven automation, reinforcing VSORA's position as a forward-thinking leader in semiconductor technology.

VSORA
AI Processor, Audio Processor, CAN XL, CPU, Interleaver/Deinterleaver, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic, Inc. is engineered to operate efficiently with minimal power consumption, making it a standout solution for high-performance applications. This processor core is capable of running at an impressive 5GHz, yet it only consumes 10mW at 1GHz, illustrating its capability to deliver exceptional performance while keeping power usage to a minimum. Ideal for scenarios where energy efficiency is crucial, it leverages advanced design techniques to reduce voltage alongside high-speed processing. Maximizing power efficiency without compromising speed, this RISC-V core is suited for a wide array of applications ranging from IoT devices to complex computing systems. Its design allows it to maintain performance even at lower power inputs, a critical feature in sectors that prioritize energy savings and sustainability. The core's architecture supports full configurability, catering to diverse design needs across different technological fields. In addition to its energy-efficient design, the core offers robust computational capabilities, making it a competitive choice for companies looking to implement high-speed, low-power processing solutions in their product lines. The flexibility and power of this core accentuate Micro Magic's commitment to delivering top-tier semiconductor solutions that meet the evolving demands of modern technology.
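The relationship between voltage reduction and power described above follows the standard CMOS dynamic-power model, P = C_eff · V² · f. The back-of-the-envelope sketch below uses the published 10 mW @ 1 GHz point from the description; the supply voltages are illustrative assumptions, not Micro Magic's actual figures.

```python
# Back-of-the-envelope CMOS dynamic power model: P = C_eff * V^2 * f.
# The 10 mW @ 1 GHz point is from the product description; the 0.8 V and
# 1.1 V supplies below are illustrative assumptions only.

def dynamic_power_mw(c_eff_nf, v_volts, f_ghz):
    # nF * V^2 * GHz = 1e-9 F * V^2 * 1e9 Hz = W; multiply by 1e3 for mW
    return c_eff_nf * v_volts ** 2 * f_ghz * 1e3

# Fit the effective capacitance from the published operating point,
# assuming a 0.8 V supply at 1 GHz.
c_eff = 10 / (0.8 ** 2 * 1.0 * 1e3)            # nF
p_5ghz = dynamic_power_mw(c_eff, 1.1, 5.0)     # assumed 1.1 V at 5 GHz
```

The model shows why voltage scaling dominates the savings: frequency enters linearly, but supply voltage enters squared.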

Micro Magic, Inc.
Samsung, TSMC
10nm, 16nm
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator by T-Head is an advanced semiconductor technology designed to accelerate AI computations and machine learning tasks. This accelerator is specifically optimized for high-performance inference, offering substantial improvements in processing times for deep learning applications. Its architecture is developed to leverage parallel computing capabilities, making it highly suitable for tasks that require fast and efficient data handling. This AI accelerator supports a broad spectrum of machine learning frameworks, ensuring compatibility with various AI algorithms. It is equipped with specialized processing units and a high-throughput memory interface, allowing it to handle large datasets with minimal latency. The Hanguang 800 is particularly effective in environments where rapid inferencing and real-time data processing are essential, such as in smart cities and autonomous driving. With its robust design and multi-faceted processing abilities, the Hanguang 800 Accelerator empowers industries to enhance their AI and machine learning deployments. Its capability to deliver swift computation and inference results ensures it is a valuable asset for companies looking to stay at the forefront of technological advancement in AI applications.

T-Head
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

aiWare

aiWare stands out as a premier hardware IP for high-performance neural processing, tailored for complex automotive AI applications. By offering exceptional efficiency and scalability, aiWare empowers automotive systems to harness the full power of neural networks across a wide variety of functions, from Advanced Driver Assistance Systems (ADAS) to fully autonomous driving platforms. It boasts an innovative architecture optimized for both performance and energy efficiency, making it capable of handling the rigorous demands of next-generation AI workloads. The aiWare hardware features an NPU designed to achieve up to 256 Effective Tera Operations Per Second (TOPS), delivering high performance at significantly lower power. This is made possible through a thoughtfully engineered dataflow and memory architecture that minimizes the need for external memory bandwidth, thus enhancing processing speed and reducing energy consumption. The design ensures that aiWare can operate efficiently across a broad range of conditions, maintaining its edge in both small and large-scale applications. A key advantage of aiWare is its compatibility with aiMotive's aiDrive software, facilitating seamless integration and optimizing neural network configurations for automotive production environments. aiWare's development emphasizes strong support for AI algorithms, ensuring robust performance in diverse applications, from edge processing in sensor nodes to high central computational capacity. This makes aiWare a critical component in deploying advanced, scalable automotive AI solutions, designed specifically to meet the safety and performance standards required in modern vehicles.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

XCM_64X64

Functioning as a comprehensive cross-correlator, the XCM_64X64 facilitates efficient and precise signal processing required in synthetic radar receivers and advanced spectrometers. Designed on IBM's 45nm SOI CMOS technology, it supports ultra-low power operation at about 1.5W for the entire array, with a sampling performance of 1GSps across a bandwidth of 10MHz to 500MHz. The ASIC is engineered to manage high-throughput data channels, a vital component for high-energy physics and space observation instruments.
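The operation such a cross-correlator computes in hardware can be shown in a few lines of software. The sketch below is a single channel pair in pure Python for illustration; the chip performs this continuously across a 64x64 array of digitized channels.

```python
# Minimal sketch of the cross-correlation such an ASIC computes in hardware
# (one channel pair, pure Python; the actual chip correlates a 64x64 array
# of 1 GSps digitized channels).

def cross_correlate(x, y, max_lag):
    """Return {lag: sum_n x[n] * y[n + lag]} for lags 0..max_lag."""
    return {lag: sum(a * b for a, b in zip(x, y[lag:]))
            for lag in range(max_lag + 1)}

x = [1, 2, 3, 0, 0, 0, 0, 0]  # reference signal
y = [0, 0, 1, 2, 3, 0, 0, 0]  # same signal delayed by 2 samples

corr = cross_correlate(x, y, 4)
```

The correlation peaks at the lag matching the delay between the two channels, which is how correlator arrays in radiometry and synthetic radar recover relative signal timing.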

Pacific MicroCHIP Corp.
X-Fab
40nm
3GPP-LTE, ADPCM, D/A Converter, Multiprocessor / DSP, Oversampling Modulator, Power Management, Receiver/Transmitter, RF Modules, VGA, Wireless Processor
View Details

ZIA Stereo Vision

ZIA Stereo Vision by Digital Media Professionals Inc. revolutionizes three-dimensional image processing by delivering exceptional accuracy and performance. This stereo vision technology is particularly designed for use in autonomous systems and advanced robotics, where precise spatial understanding is crucial. It incorporates deep learning algorithms to provide robust 3D mapping and object recognition capabilities. The IP provides extensive depth perception and spatial data analysis for applications in areas like automated surveillance and navigation. Its ability to create detailed 3D maps of environments assists machines in interpreting and interacting with their surroundings effectively. By applying sophisticated AI algorithms, it enhances the ability of devices to make intelligent decisions based on rich visual data inputs. Integration into existing systems is simplified due to its compatibility with a variety of platforms and configurations. By enabling seamless deployment in sectors demanding high reliability and accuracy, ZIA Stereo Vision stands as a core component in the ongoing evolution towards more autonomous and smart digital environments.

Digital Media Professionals Inc.
2D / 3D, AI Processor, Arbiter, GPU, Graphics & Video Modules, Platform Security, Processor Core Independent, Vision Processor
View Details

RegSpec - Register Specification Tool

RegSpec is a comprehensive register specification tool that excels in generating Control Configuration and Status Register (CCSR) code. The tool is versatile, supporting various input formats like SystemRDL, IP-XACT, and custom formats via CSV, Excel, XML, or JSON. Its ability to output in formats such as Verilog RTL, System Verilog UVM code, and SystemC header files makes it indispensable for IP designers, offering extensive features for synchronization across multiple clock domains and interrupt handling. Additionally, RegSpec automates verification processes by generating UVM code and RALF files useful in firmware development and system modeling.
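A register-spec-to-RTL flow of the kind described above can be pictured as a small code generator. The sketch below is hypothetical: the spec fields and the Verilog template are invented for illustration, and RegSpec's real inputs are SystemRDL, IP-XACT, or CSV/Excel/XML/JSON descriptions.

```python
# Hypothetical sketch of a register-spec-to-RTL flow. The spec fields and
# output template are invented for illustration; they are not RegSpec's
# actual input or output formats.

REG_SPEC = [
    {"name": "CTRL",   "offset": 0x00, "width": 32, "reset": 0x0},
    {"name": "STATUS", "offset": 0x04, "width": 32, "reset": 0x1},
]

def emit_verilog_regs(spec):
    """Emit one Verilog register declaration per spec entry, annotated
    with its address offset and reset value."""
    lines = []
    for r in spec:
        lines.append(
            f"reg [{r['width'] - 1}:0] {r['name'].lower()}_q; "
            f"// @0x{r['offset']:02X}, reset 0x{r['reset']:X}"
        )
    return "\n".join(lines)

rtl = emit_verilog_regs(REG_SPEC)
```

A real tool additionally emits the bus decode logic, UVM register model, and C headers from the same single source of truth, which is what keeps hardware, verification, and firmware views synchronized.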

Dyumnin Semiconductors
TSMC
28nm, 65nm, 90nm
AMBA AHB / APB/ AXI, Disk Controller, DMA Controller, Flash Controller, GPU, I/O Library, I2C, Input/Output Controller, ONFI Controller, PCI, Processor Core Dependent, Receiver/Transmitter
View Details

ISELED Technology

ISELED Technology introduces a revolutionary approach to automotive interior lighting with smart digital RGB LEDs. These LEDs facilitate dynamic lighting solutions by embedding a sophisticated driver directly with the LED, enabling unique features such as pre-calibration and independent temperature compensation. This innovation allows for significant simplification of the overall lighting system architecture as the intricate calibration tasks are handled at the LED module level. The ISELED system supports an expansive 4K address space, offering seamless integration into daisy-chained configurations and precise color control through a straightforward digital command interface. This approach not only enhances visual quality but also reduces the complexity typically associated with RGB LED configurations, eliminating the need for separate power management or additional calibration setups. ISELED’s robust design is particularly beneficial in automotive environments, where durability and dependability are crucial. The technology also extends to ILaS systems, providing interconnected networks of LEDs and sensors that are both energy-efficient and capable of rapid diagnostics and reconfiguration. In essence, ISELED technology allows automotive designers unprecedented flexibility and control over vehicle interior lighting designs.

INOVA Semiconductors GmbH
LIN, Multiprocessor / DSP, Other, Safe Ethernet, Sensor, Temperature Sensor
View Details

ISPido

ISPido represents a fully configurable RTL Image Signal Processing Pipeline, adhering to the AMBA AXI4 standards and tailored through the AXI4-LITE protocol for seamless integration with systems such as RISC-V. This advanced pipeline supports a variety of image processing functions like defective pixel correction, color filter interpolation using the Malvar-Cutler algorithm, and auto-white balance, among others. Designed to handle resolutions up to 7680x7680, ISPido provides compatibility for both 4K and 8K video systems, with support for 8, 10, or 12-bit depth inputs. Each module within this pipeline can be fine-tuned to fit specific requirements, making it a versatile choice for adapting to various imaging needs. The architecture's compatibility with flexible standards ensures robust performance and adaptability in diverse applications, from consumer electronics to professional-grade imaging solutions. Through its compact design, ISPido optimizes area and energy efficiency, providing high-quality image processing while keeping hardware demands low. This makes it suitable for battery-operated devices where power efficiency is crucial, without sacrificing the processing power needed for high-resolution outputs.
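Auto-white balance, one of the pipeline stages listed above, is commonly implemented with the gray-world assumption: scale each channel so the channel means match. The sketch below shows that standard algorithm in pure Python for illustration; ISPido's exact AWB method is not specified in the description.

```python
# Gray-world auto-white-balance sketch (pure Python). This is the standard
# gray-world algorithm, shown for illustration; it is not necessarily the
# exact method ISPido implements.

def gray_world_gains(pixels):
    """pixels: list of (r, g, b). Returns per-channel gains that equalize
    the channel means, with green as the reference channel (a common ISP
    convention)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    return tuple(means[1] / m for m in means)   # (r_gain, 1.0, b_gain)

pixels = [(100, 200, 50), (110, 210, 60), (90, 190, 40)]
gains = gray_world_gains(pixels)
balanced = [(p[0] * gains[0], p[1] * gains[1], p[2] * gains[2])
            for p in pixels]
```

After applying the gains, all three channel means coincide, removing the color cast the illuminant imposed on the scene.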

DPControl
21 Categories
View Details

2D FFT

Dillon Engineering's 2D FFT Core is specifically developed for applications involving two-dimensional data processing, perfect for implementations in image processing and radar signal analysis. This FFT Core operates by processing data in a layered approach, enabling it to concurrently handle two-dimensional data arrays. It effectively leverages internal and external memory, maximizing throughput while minimizing the impact on bandwidth, which is crucial in handling large-scale data sets common in imaging technologies. Its ability to process data in two dimensions simultaneously offers a substantial advantage in applications that require comprehensive analysis of mass data points, including medical imaging and geospatial data processing. With a focus on flexibility, the 2D FFT Core, designed using the ParaCore Architect, offers configurable data processing abilities that can be tailored to unique project specifications. This ensures that the core can be adapted to meet a range of application needs while maintaining high-performance standards that Dillon Engineering is renowned for.
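The layered approach described above rests on a standard factorization: a 2D DFT separates into 1D transforms over the rows followed by 1D transforms over the columns of the result. The sketch below demonstrates that decomposition with a naive pure-Python DFT for clarity; a hardware core pipelines optimized FFT passes instead.

```python
# Row-column decomposition of a 2D DFT: transform every row, then every
# column of the intermediate result. Naive O(n^2) DFT used for clarity;
# real cores use pipelined FFTs.
import cmath

def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs)) for k in range(n)]

def dft2d(matrix):
    rows = [dft(row) for row in matrix]          # pass 1: transform rows
    cols = [dft(col) for col in zip(*rows)]      # pass 2: transform columns
    return [list(row) for row in zip(*cols)]     # transpose back

m = [[1, 2], [3, 4]]
F = dft2d(m)
```

The DC bin F[0][0] equals the sum of all input samples, a quick sanity check on any 2D transform implementation. The two-pass structure is also what lets an FFT core interleave external-memory accesses with computation, since each pass streams the data exactly once.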

Dillon Engineering, Inc.
TSMC
28nm
2D / 3D, GPU, Multiprocessor / DSP, PLL, Processor Core Independent, Vision Processor
View Details

BlueLynx Chiplet Interconnect

The BlueLynx Chiplet Interconnect is a sophisticated die-to-die interconnect solution that offers industry-leading performance and flexibility for both advanced and conventional packaging applications. As an adaptable subsystem, BlueLynx supports the integration of Universal Chiplet Interconnect Express (UCIe) as well as Bunch of Wires (BoW) standards, facilitating high bandwidth capabilities essential for contemporary chip designs.

BlueLynx IP emphasizes seamless connectivity to on-die buses and network-on-chip (NoCs) using standards such as AMBA, AXI, and ACE among others, thereby accelerating the design process from system-on-chip (SoC) architectures to chiplet-based designs. This innovative approach not only allows for faster deployment but also mitigates development risks through a predictable and silicon-friendly design process with comprehensive support for rapid first-pass silicon success.

With BlueLynx, designers can take advantage of a highly optimized performance per watt, offering customizable configurations tailored to specific application needs across various markets like AI, high-performance computing, and mobile technologies. The IP is crafted to deliver outstanding bandwidth density and energy efficiency, bridging the requirements of advanced nodal technologies with compatibility across several foundries, ensuring extensive applicability and cost-effectiveness for diverse semiconductor solutions.

Blue Cheetah Analog Design, Inc.
TSMC
4nm, 7nm, 10nm, 12nm, 16nm
AMBA AHB / APB/ AXI, Clock Synthesizer, D2D, Gen-Z, IEEE1588, Interlaken, MIPI, Modulation/Demodulation, Network on Chip, PCI, Processor Core Independent, VESA, VGA
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI accelerator is designed specifically to address the challenges of energy efficiency and processing demands in edge AI applications. This powerhouse delivers top-tier performance while maintaining a compact and low-power silicon architecture. The key advantage of SAKURA-II is its capability to handle vision and Generative AI applications with unmatched efficiency, thanks to the integration of the Dynamic Neural Accelerator (DNA) core. This core exhibits run-time reconfigurability that supports multiple neural network models simultaneously, adapting in real-time without compromising on speed or accuracy. Focusing on the demanding needs of modern AI applications, the SAKURA-II easily manages models with billions of parameters, such as Llama 2 and Stable Diffusion, all within a mere power envelope of 8W. It supports a large memory bandwidth and DRAM capacity, ensuring smooth handling of complex workloads. Furthermore, its multiple form factors, including modules and cards, allow for versatile system integration and rapid development, significantly shortening the time-to-market for AI solutions. EdgeCortix has engineered the SAKURA-II to offer superior DRAM bandwidth, allowing for up to 4x the DRAM bandwidth of other accelerators, crucial for low-latency operations and nimbly executing large-scale AI workflows such as Language and Vision Models. Its architecture promises higher AI compute utilization than traditional solutions, thus delivering significant energy efficiency advantages.

EdgeCortix Inc.
All Foundries
5nm
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Zhenyue 510 SSD Controller

The Zhenyue 510 SSD Controller is a high-performance enterprise-grade controller providing robust management for SSD storage solutions. It is engineered to deliver exceptional I/O throughput of up to 3400K IOPS and a data transfer rate reaching 14 GByte/s. This remarkable performance is achieved through the integration of T-Head's proprietary low-density parity-check (LDPC) error correction algorithms, enhancing reliability and data integrity. Equipped with T-Head's low-latency architecture, the Zhenyue 510 offers swift read and write operations, crucial for applications demanding fast data processing capabilities. It supports flexible NAND flash interfacing, which makes it adaptable to multiple generations of flash memory technologies. This flexibility ensures that the device remains a viable solution as storage standards evolve. Targeted at applications such as online transactions, large-scale data management, and software-defined storage systems, the Zhenyue 510's advanced capabilities make it a cornerstone for organizations needing seamless and efficient data storage solutions. The combination of innovative design, top-tier performance metrics, and adaptability positions the Zhenyue 510 as a leader in SSD controller technologies.

T-Head
eMMC, Flash Controller, NAND Flash, NVM Express, ONFI Controller, Processor Core Dependent, RLDRAM Controller, SAS, SATA, SDRAM Controller

XCM_64X64_A

The XCM_64X64_A is an ASIC designed for cross-correlation operations, integrating 128 ADCs, each sampling at 1 GSps. Targeted at high-precision synthetic-aperture radar and radiometer systems, it consumes only about 0.5 W while operating over a wide bandwidth range of 10 MHz to 500 MHz. Built on IBM's 45nm SOI CMOS technology, it serves as a critical component in systems requiring rapid data sampling and intricate signal processing with high accuracy, making it well suited to airborne and space-based applications.
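The operation this ASIC performs across ADC channel pairs can be sketched in software. A minimal pure-Python reference for unnormalized cross-correlation (illustrative only; the chip's actual fixed-point pipeline is not public here):

```python
# Reference cross-correlation sketch, not the ASIC's hardware pipeline.

def cross_correlate(x, y, max_lag):
    """Unnormalized cross-correlation of two equal-length sequences,
    evaluated at integer lags -max_lag .. +max_lag."""
    n = len(x)
    out = []
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:       # ignore samples shifted out of range
                s += x[i] * y[j]
        out.append(s)
    return out

# A signal correlated with a copy of itself delayed by 2 samples
# peaks at lag +2, which is how correlators recover time delays.
sig = [0, 1, 2, 3, 2, 1, 0, 0]
delayed = [0, 0, 0, 1, 2, 3, 2, 1]
result = cross_correlate(sig, delayed, 3)
print(result)  # maximum (19.0) occurs at lag +2
```

Estimating time delays from correlation peaks is the core of the radiometer and radar use cases mentioned above, where the hardware performs this across 64x64 channel pairs in real time.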

Pacific MicroCHIP Corp.
X-Fab
40nm
3GPP-LTE, ADPCM, D/A Converter, Multiprocessor / DSP, Oversampling Modulator, Power Management, Receiver/Transmitter, RF Modules, VGA, Wireless Processor

M3000 Graphics Processor

The M3000 Graphics Processor is a comprehensive solution for 3D rendering, delivering efficient, high-quality graphical output for devices ranging from gaming consoles to sophisticated simulation systems. It supports a wide array of graphic formats and resolutions, and its robust architecture handles complex visual computations, making it well suited to industries that require superior graphical interfaces and detailed rendering. Compatible with established graphics APIs, the M3000 integrates easily into existing technology stacks and delivers reliable end-user experiences in digital simulation and entertainment environments.

Digital Media Professionals Inc.
2D / 3D, ADPCM, GPU, Multiprocessor / DSP