
Platform Level IP: Comprehensive Semiconductor Solutions

Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.

Subcategories within Platform Level IP cover a broad spectrum of integration needs:

1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.

2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.

3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.

Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.

Akida 2nd Generation

The Akida 2nd Generation is an evolution of BrainChip's innovative neural processor technology. It builds upon its predecessor's strengths by delivering even greater efficiency and a broader range of applications. The processor maintains an event-based architecture that optimizes performance and power consumption, providing rapid response times suitable for edge AI applications that prioritize speed and privacy.

This next-generation processor enhances accuracy with support for 8-bit quantization, which allows for finer-grained processing capabilities and more robust AI model implementations. Furthermore, it offers extensive scalability, supporting configurations from a few nodes for low-power needs to many nodes for handling more complex cognitive tasks. As with the previous version, its architecture is inherently cloud-independent, enabling inference and learning directly on the device.

Akida 2nd Generation continues to push the boundaries of AI processing at the edge by offering enhanced processing capabilities, making it ideal for applications demanding high accuracy and efficiency, such as automotive safety systems, consumer electronics, and industrial monitoring.

BrainChip
TSMC
28nm
AI Processor, CPU, Digital Video Broadcast, IoT Processor, Multiprocessor / DSP, Network on Chip, Security Protocol Accelerators, Vision Processor
View Details

MetaTF

MetaTF is BrainChip's proprietary software development framework built to streamline the creation, training, and deployment of neural networks on their Akida neuromorphic processor. This tool is designed specifically for working with edge AI, complementing the hardware capabilities of Akida by providing a rich environment for model development and conversion.

The framework supports the conversion of traditional TensorFlow and Keras models into spiking neural networks optimized for BrainChip's unique event-based processing. This conversion allows developers to harness the energy efficiency and performance benefits of the Akida architecture without needing to overhaul existing machine learning frameworks.

MetaTF facilitates the adaptation of models to the Akida system through its model zoo, which includes various pre-configured network models, and offers comprehensive tools for simulation and testing. This environment makes it an indispensable resource for businesses aiming to deploy sophisticated AI applications at the edge, minimizing development time while maximizing performance and efficiency.
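
As a rough illustration of the workflow described above, the sketch below builds an ordinary Keras model and marks the hand-off point to the MetaTF conversion flow. The Keras portion uses the standard TensorFlow API; the MetaTF-specific steps appear only as commented, hypothetical call names based on the quantize-then-convert flow described here, not as a confirmed API reference.

```python
# Illustrative sketch of a Keras-to-Akida flow. Only the Keras code below is
# real API; the MetaTF calls in the trailing comments are hypothetical names
# standing in for the quantize/convert steps described in the text.
import tensorflow as tf

# 1. Build and train an ordinary Keras model with standard layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# 2. Hand the trained model to the MetaTF flow (hypothetical call names):
#    quantized   = quantize(model, weight_bits=8, activation_bits=8)
#    akida_model = convert(quantized)   # spiking, event-based equivalent
#    akida_model.map(device)            # map onto Akida hardware or simulator
```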

BrainChip
AI Processor, Coprocessor, Processor Core Independent
View Details

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card offers exceptional performance for AI workloads demanding significant computational capacity. It is powered by a single Metis AIPU and delivers up to 214 TOPS, catering to high-demand applications such as computer vision and real-time image processing. This PCIe card is integrated with the Voyager SDK, providing developers with a powerful yet user-friendly software environment for deploying complex AI applications seamlessly. Designed for efficiency, this accelerator card stands out by providing cutting-edge performance without the excessive power requirements typical of data center equipment. It achieves remarkable speed and accuracy, making it an ideal solution for tasks requiring fast data processing and inference speeds. The PCIe card supports a wide range of AI application scenarios, from enhancing existing infrastructure capabilities to integrating with new, dynamic systems. Its utility in various industrial settings is bolstered by its compatibility with the suite of state-of-the-art neural networks provided in the Axelera AI ecosystem.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

CXL 3.1 Switch

The CXL 3.1 Switch by Panmnesia is a high-tech solution designed to manage diverse CXL devices within a cache-coherent system, minimizing latency through its proprietary low-latency CXL IP. This switch supports a scalable and flexible architecture, offering multi-level switching and port-based routing capabilities that allow expansive system configurations to meet various application demands. It is engineered to connect system devices such as CPUs, GPUs, and memory modules, ideal for constructing large-scale systems tailored to specific needs.

Panmnesia
AMBA AHB / APB/ AXI, CXL, D2D, Fibre Channel, Gen-Z, Multiprocessor / DSP, PCI, Processor Core Dependent, Processor Core Independent, RapidIO, SAS, SATA, V-by-One
View Details

Yitian 710 Processor

The Yitian 710 Processor is an advanced Arm-based server chip developed by T-Head, designed to meet the extensive demands of modern data centers and enterprise applications. This processor boasts 128 high-performance Armv9 CPU cores, each coupled with robust caches, ensuring superior processing speeds and efficiency. With a 2.5D packaging technology, the Yitian 710 integrates multiple dies into a single unit, facilitating enhanced computational capability and energy efficiency. One of the key features of the Yitian 710 is its memory subsystem, which supports up to 8 channels of DDR5 memory, achieving a peak bandwidth of 281 GB/s. This configuration guarantees rapid data access and processing, crucial for high-throughput computing environments. Additionally, the processor is equipped with 96 PCIe 5.0 lanes, offering a dual-direction bandwidth of 768 GB/s, enabling seamless connectivity with peripheral devices and boosting system performance overall. The Yitian 710 Processor is meticulously crafted for applications in cloud services, big data analytics, and AI inference, providing organizations with a robust platform for their computing needs. By combining high core count, extensive memory support, and advanced I/O capabilities, the Yitian 710 stands as a cornerstone for deploying powerful, scalable, and energy-efficient data processing solutions.
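
The headline bandwidth figures quoted above can be sanity-checked with simple arithmetic. The sketch below assumes DDR5-4400 memory and standard PCIe 5.0 lane rates; neither assumption is stated in the listing, so treat the numbers as an illustration rather than vendor data.

```python
# Back-of-the-envelope check of the Yitian 710 bandwidth figures.
# Assumed (not stated above): DDR5-4400, 64-bit channels, PCIe 5.0 at
# 32 GT/s per lane with 128b/130b encoding.

ddr5_mt_s = 4400                 # mega-transfers/s per channel (assumed speed bin)
channels = 8
bytes_per_transfer = 8           # 64-bit channel width
ddr_bw_gb_s = ddr5_mt_s * 1e6 * bytes_per_transfer * channels / 1e9
print(f"DDR5 peak: {ddr_bw_gb_s:.0f} GB/s")   # ~282 GB/s vs. the quoted 281 GB/s

lane_gt_s = 32                   # PCIe 5.0 raw rate per lane, per direction
encoding = 128 / 130             # 128b/130b line coding
lanes = 96
one_dir_gb_s = lane_gt_s * encoding / 8 * lanes
print(f"PCIe 5.0: {one_dir_gb_s:.0f} GB/s per direction, "
      f"{2 * one_dir_gb_s:.0f} GB/s bidirectional")
# ~378 / ~757 GB/s; the quoted 768 GB/s corresponds to ignoring encoding overhead.
```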

T-Head
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module is designed for edge AI applications that demand high-performance inference capabilities. This module integrates a single Metis AI Processing Unit (AIPU), providing an excellent solution for AI acceleration within constrained devices. Its capability to handle high-speed data processing with limited power consumption makes it an optimal choice for applications requiring efficiency and precision. With 1GB of dedicated DRAM memory, it seamlessly supports a wide array of AI pipelines, ensuring rapid integration and deployment. The design of the Metis AIPU M.2 module is centered around maximizing performance without excessive energy consumption, making it suitable for diverse applications such as real-time video analytics and multi-camera processing. Its compact form factor eases incorporation into various devices, delivering robust performance for AI tasks without the heat or power trade-offs typically associated with such systems. Engineered to address current AI demands efficiently, the M.2 module is supported by the Voyager SDK, which simplifies the integration process. This comprehensive software suite empowers developers to build and optimize AI models directly on the Metis platform, facilitating a significant reduction in time-to-market for innovative solutions.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Processor Core Dependent, Processor Cores, Vision Processor, WMV
View Details

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Designed for extreme low-power environments, the Tianqiao-70 RISC-V CPU core emphasizes energy efficiency while maintaining sufficient computational strength for commercial applications. It serves scenarios where low power consumption is critical, such as mobile devices, desktop applications, AI, and autonomous systems. This model caters to the requirements of energy-conscious markets, facilitating operations that demand efficiency and performance within minimal power budgets.

StarFive
AI Processor, CPU, Multiprocessor / DSP, Processor Cores
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
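
The point about maximizing tokens per unit of memory bandwidth follows from LLM decoding being memory-bound: each generated token requires streaming roughly the full set of weights from memory once. A minimal estimate is sketched below; the model size and LPDDR4 bandwidth are illustrative assumptions, not RaiderChip figures.

```python
# Memory-bound estimate of LLM token throughput:
#   tokens/s ≈ memory_bandwidth / bytes_of_weights_read_per_token
# All numbers below are illustrative assumptions, not vendor specifications.

params = 3e9                     # e.g. a ~3B-parameter model (assumed)
bits_per_weight = 4              # 4-bit quantization, as described above
weight_bytes = params * bits_per_weight / 8

mem_bw = 25.6e9                  # bytes/s, assumed dual-channel LPDDR4-3200

tokens_per_s = mem_bw / weight_bytes
print(f"~{tokens_per_s:.0f} tokens/s")   # ≈ 17 tokens/s under these assumptions
```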

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

NuLink Die-to-Die PHY for Standard Packaging

The NuLink Die-to-Die PHY is a state-of-the-art IP solution designed to facilitate efficient die-to-die communication on standard organic/laminate packaging. It supports multiple industry standards, including UCIe and Bunch of Wires (BoW) protocols, and features advanced bidirectional signaling capabilities to enhance data transfer rates. The NuLink technology enables exceptional performance, power economy, and reduced area footprint, which elevates its utility in AI applications and complex chiplet systems. A unique feature of this PHY is its simultaneous bidirectional signaling (SBD), which allows data to be sent and received simultaneously on the same physical line, effectively doubling the available bandwidth. This capability is crucial for applications needing high interconnect performance, such as AI training or inference workloads, without requiring advanced packaging techniques like silicon interposers. The PHY's design supports 64 data lanes configured for optimal placement and bump map layout. With a focus on power efficiency, the NuLink achieves competitive performance metrics even in standard packaging, making it particularly suitable for high-density systems-in-package solutions.

Eliyan
Intel Foundry
5nm, 7nm LPP
AMBA AHB / APB/ AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent
View Details

Veyron V2 CPU

Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.

Ventana Micro Systems
AI Processor, CPU, Processor Core Dependent, Processor Cores
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

Jotunn8 AI Accelerator

The Jotunn8 represents a leap in AI inference technology, delivering unmatched efficiency for modern data centers. This chip is engineered to manage AI model deployments with lightning-fast execution, at minimal cost and with high scalability. It ensures optimal performance by balancing high throughput and low latency while being extremely power-efficient, which significantly lowers operational costs and supports sustainable infrastructures. The Jotunn8 is designed to unlock the full capacity of AI investments by providing a high-performance platform that enhances the delivery and impact of AI models across applications. It is particularly suitable for real-time applications such as chatbots, fraud detection, and search engines, where ultra-low latency and very high throughput are critical. Power efficiency is a major emphasis of the Jotunn8, with performance per watt optimized to rein in energy use, a substantial operational expense. Its architecture allows for flexible memory allocation, ensuring seamless adaptability across varied applications and providing a robust foundation for scalable AI operations. This solution is aimed at enhancing business competitiveness by supporting large-scale model deployment and infrastructure optimization.

VSORA
AI Processor, DSP Core, Interleaver/Deinterleaver, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Chimera GPNPU

The Chimera GPNPU from Quadric is designed as a general-purpose neural processing unit intended to meet a broad range of demands in machine learning inference applications. It is engineered to perform both matrix and vector operations along with scalar code within a single execution pipeline, which offers significant flexibility and efficiency across various computational tasks. This product achieves up to 864 Tera Operations per Second (TOPs), making it suitable for intensive applications including automotive safety systems. Notably, the GPNPU simplifies system-on-chip (SoC) hardware integration by consolidating hardware functions into one processor core. This unification reduces complexity in system design tasks, enhances memory usage profiling, and optimizes power consumption when compared to systems involving multiple heterogeneous cores such as NPUs and DSPs. Additionally, its single-core setup enables developers to efficiently compile and execute diverse workloads, improving performance tuning and reducing development time. The architecture of the Chimera GPNPU supports state-of-the-art models with its Forward Programming Interface that facilitates easy adaptation to changes, allowing support for new network models and neural network operators. It’s an ideal solution for products requiring a mix of traditional digital signal processing and AI inference like radar and lidar signal processing, showcasing a rare blend of programming simplicity and long-term flexibility. This capability future-proofs devices, expanding their lifespan significantly in a rapidly evolving tech landscape.

Quadric
14 Categories
View Details

SCR9 Processor Core

Designed for entry-level server-class applications, the SCR9 is a 64-bit RISC-V processor core that comes equipped with cutting-edge features, such as an out-of-order superscalar pipeline, making it apt for processing-intensive environments. It supports both single and double-precision floating-point operations adhering to IEEE standards, which ensure precise computation results. This processor core is tailored for high-performance computing needs, with a focus on AI and ML, as well as conventional data processing tasks. It integrates an advanced interrupt system featuring APLIC configurations, enabling responsive operations even under heavy workloads. SCR9 supports up to 16 cores in a multi-cluster arrangement, each utilizing coherent multi-level caches to maintain rapid data processing and management. The comprehensive development package for SCR9 includes ready-to-deploy toolchains and simulators that expedite software development, particularly within Linux environments. The core is well-suited for deployment in entry-level server markets and data-intensive applications, with robust support for virtualization and heterogeneous architectures.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Cores
View Details

Time-Triggered Ethernet

Time-Triggered Ethernet (TTE) combines the robustness of Ethernet technology with the precision of time-triggered communication. Designed for critical applications that demand reliability and synchronized communication, TTE finds its place in aerospace and industrial sectors. TTE provides secure, deterministic data transmission over Ethernet networks. It achieves this by dedicating specific time slots to high-priority traffic, ensuring latency and jitter are minimized. This segregation allows time-sensitive data to safely coexist with traditional Ethernet traffic without sacrificing normal network operations. The protocol's architecture underpins a mixed-criticality networking environment, supporting integration with standard Ethernet devices. TTE's scheduling mechanism guarantees timely delivery of critical messages, crucial in environments where even microsecond delays can impact overall system performance. Its application ensures Ethernet networks meet the stringent requirements of real-time operations synonymous with safety-critical systems.
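
To make the slot idea concrete, the toy sketch below reserves fixed windows inside a repeating communication cycle for time-triggered frames and lets best-effort traffic use only the remaining time. It is purely illustrative of the mechanism described above, not TTTech's scheduler, and the cycle length and slot positions are made-up values.

```python
# Toy illustration of time-triggered slot reservation in a repeating cycle.
# Cycle length and slot boundaries are arbitrary example values.

CYCLE_US = 1000                       # cycle length in microseconds (assumed)
TT_SLOTS = [(0, 100), (500, 600)]     # (start, end) windows for critical frames

def in_reserved_slot(t_us: float) -> bool:
    """True if time t (offset into the schedule) falls in a time-triggered slot."""
    phase = t_us % CYCLE_US
    return any(start <= phase < end for start, end in TT_SLOTS)

# Best-effort frames may only be released outside the reserved windows.
for t in (50, 250, 550, 900):
    kind = "time-triggered slot" if in_reserved_slot(t) else "best-effort window"
    print(f"t={t:4d} us -> {kind}")
```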

TTTech Computertechnik AG
Ethernet, FlexRay, LIN, MIL-STD-1553, MIPI, Processor Core Independent, Safe Ethernet
View Details

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

eSi-3250

The eSi-3250 is a high-performance 32-bit RISC IP processor optimized for ASIC or FPGA implementations that demand rigorous caching strategies because of slower internal or external memories. Noteworthy for its configurable instruction and data caches, this core is tailored to excel in scenarios where the CPU-core-to-bus clock ratio exceeds unity. The eSi-3250 integrates separate caches for data and instructions, each configurable with various associativities to deliver elevated performance while maintaining power efficiency. It includes an optional memory management unit, vital for memory protection and the deployment of virtual memory, accommodating sophisticated system requirements. Incorporating an expansive instruction set, the processor is equipped for intensive computational tasks with a multitude of optional additional instruction types and addressing modes. Built-in debug features support efficient system analysis and troubleshooting, solidifying the eSi-3250's position as a favored choice for high-throughput, low-power applications across a spectrum of technology processes.

eSi-RISC
All Foundries
16nm, 130nm, 180nm
CPU, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

Universal Chiplet Interconnect Express (UCIe)

The Universal Chiplet Interconnect Express (UCIe) by Extoll exemplifies a transformative approach towards interconnect technology, underpinning the age of chiplets with a robust framework for high-speed data exchange. This innovative solution caters to the growing demands of heterogeneous integration, providing a standardized protocol that empowers seamless communication between various chiplet designs. UCIe stands out by offering unparalleled connectivity and interoperability, ensuring that diverse chiplet systems function cohesively. This interconnect solution is tailored to the needs of modern digital architectures, emphasizing adaptability and performance across different tech nodes. With Extoll’s mastery in digital-centric design, the UCIe provides an efficient gateway for integrating multiple technological processes into a singular framework. The development of UCIe is also driven by the need for solutions that are both energy and cost-efficient. By leveraging Extoll’s low power architecture, UCIe facilitates energy savings without compromising on speed and data integrity. This makes it an indispensable tool for entities that prioritize scalable, high-performance interconnection solutions, aligning with the semiconductor industry's move toward more modular and sustainable system architectures.

Extoll GmbH
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Processor Core Independent, V-by-One, VESA
View Details

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnection and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

ORC3990 – DMSS LEO Satellite Endpoint System On Chip (SoC)

The ORC3990 SoC is a state-of-the-art solution designed for satellite IoT applications within Totum's DMSS™ network. This low-power sensor-to-satellite system integrates an RF transceiver, ARM CPUs, memories, and a power amplifier (PA) to offer seamless IoT connectivity via LEO satellite networks. It boasts an optimized link budget for effective indoor signal coverage, eliminating the need for additional GNSS components. This compact SoC supports industrial temperature ranges and is engineered for a 10+ year battery life using advanced power management.

Orca Systems Inc.
TSMC
22nm
3GPP-5G, Bluetooth, Processor Core Independent, RF Modules, USB, Wireless Processor
View Details

Dynamic Neural Accelerator II Architecture

The Dynamic Neural Accelerator II (DNA-II) is a highly efficient and versatile IP specifically engineered for optimizing AI workloads at the edge. Its unique architecture allows runtime reconfiguration of interconnects among computing units, which facilitates improved parallel processing and efficiency. DNA-II supports a broad array of networks, including convolutional and transformer networks, making it an ideal choice for numerous edge applications. Its design emphasizes low power consumption while maintaining high computational performance. By utilizing a dynamic data path architecture, DNA-II sets a new benchmark for IP cores aimed at enhancing AI processing capabilities.

EdgeCortix Inc.
AI Processor, Audio Processor, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

xcore.ai

xcore.ai is a powerful platform tailored for the intelligent IoT market, offering unmatched flexibility and performance. It boasts a unique multi-threaded micro-architecture that provides low-latency and deterministic performance, perfect for smart applications. Each xcore.ai contains 16 logical cores distributed across two multi-threaded processor tiles, each equipped with 512kB of SRAM and capable of both integer and floating-point operations. The integrated interprocessor communication allows high-speed data exchange, ensuring ultimate scalability across multiple xcore.ai SoCs within a unified development environment.

XMOS Semiconductor
20 Categories
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

NeuroVoice AI Chip for Voice Processing

The NeuroVoice chip by Polyn Technology is engineered to improve voice processing capabilities for a variety of consumer electronic devices, particularly focusing on addressing challenges associated with traditional digital voice solutions. Built on the NASP platform, this AI chip is tailored to operate efficiently in noisy environments without relying on cloud-based processing, thus ensuring privacy and reducing latency. A key feature of NeuroVoice is its ultra-low power consumption, which allows continuous device operation even in power-sensitive applications like wearables and smart home devices. It includes abilities such as always-on voice activity detection, smart voice control, speaker recognition, and real-time voice extraction. This amalgamation of capabilities makes the NeuroVoice a versatile component in enhancing voice-controlled systems' efficacy. NeuroVoice stands out by seamlessly integrating into devices, offering users the advantage of precise voice recognition and activity detection with minimal energy demands. It further differentiates itself by delivering clear communication even amidst irregular background noises, setting a new benchmark for on-device audio processing with its advanced neural network-driven design.

Polyn Technology Ltd.
Audio Controller, Audio Interfaces, Audio Processor, Bluetooth, Multiprocessor / DSP, Peripheral Controller, USB
View Details

Nerve IIoT Platform

The Nerve IIoT Platform is a comprehensive solution for machine builders, offering cloud-managed edge computing capabilities. This innovative platform delivers high levels of openness, security, flexibility, and real-time data handling, enabling businesses to embark on their digital transformation journeys. Nerve's architecture allows for seamless integration with a variety of hardware devices, from basic gateways to advanced IPCs, ensuring scalability and operational efficiency across different industrial settings. Nerve facilitates the collection, processing, and analysis of machine data in real-time, which is crucial for optimizing production and enhancing operational efficiency. By providing robust remote management functionalities, businesses can efficiently handle device operations and application deployments from any location. This capacity to manage data flows between the factory floor and the cloud transitions enterprises into a new era of digital management, thereby minimizing costs and maximizing productivity. The platform also supports multiple cloud environments, empowering businesses to select their preferred cloud service while maintaining operational continuity. With its secure, IEC 62443-4-1 certified infrastructure, Nerve ensures that both data and applications remain protected from cyber threats. Its integration of open technologies, such as Docker and virtual machines, further facilitates rapid implementation and prototyping, enabling businesses to adapt swiftly to ever-changing demands.

TTTech Industrial Automation AG
18 Categories
View Details

eSi-3200

The eSi-3200 represents the mid-tier solution in the eSi-RISC family, bringing a high degree of versatility and performance to embedded control systems. This 32-bit processor is proficiently designed for scenarios demanding enhanced computational capabilities or extended address spaces without compromise on power efficiency, suitably fitting applications with on-chip memory implementations. Engineered without a cache, the eSi-3200 facilitates deterministic performance essential for real-time applications. It leverages a modified-Harvard architecture allowing concurrent instruction and data fetches, maximizing throughput. With a 5-stage pipeline, the processor achieves high clock frequencies suitable for time-critical operations enhancing responsiveness and efficiency. The comprehensive instruction set encompasses core arithmetic functions, including advanced IEEE-754 single-precision floating-point operations, which cater to data-intensive and mathematically challenging applications. Designed with optimal flexibility, it can accommodate optional custom instructions tailored to specific processing needs, offering a well-balanced solution for versatile embedded applications. Delivered as a Verilog RTL IP core, it ensures platform compatibility, simplifying integration into diverse silicon nodes.

eSi-RISC
All Foundries
16nm, 130nm, 180nm
CPU, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

RISC-V CPU IP N Class

The RISC-V CPU IP N Class is designed to cater to the needs of 32-bit microcontroller units (MCUs) and AIoT (Artificial Intelligence of Things) applications. It is engineered to provide a balance of performance and power efficiency, making it suitable for a range of general computing needs. With its adaptable architecture, the N Class processor allows for customization, enabling developers to configure the core to meet specific application requirements while minimizing unnecessary overhead. Incorporating the RISC-V open standard, the N Class delivers robust functional features, supporting both security and functional safety needs. This processor core is ideal for applications that require reliable performance combined with low energy consumption. Developers benefit from an extensive set of resources and tools available in the RISC-V ecosystem to facilitate the integration and deployment of this processor across diverse use cases. The RISC-V CPU IP N Class demonstrates excellent scalability, allowing for configuration that aligns with the specific demands of IoT devices and embedded systems. Whether for implementing sophisticated sensor data processing or managing communication protocols within a smart device, the N Class provides the foundation necessary for developing innovative and efficient solutions.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

Wormhole

Wormhole is a high-efficiency processor designed to handle intensive AI processing tasks. Featuring an advanced architecture, it significantly accelerates AI workload execution, making it a key component for developers looking to optimize their AI applications. Wormhole supports an expansive range of AI models and frameworks, enabling seamless adaptation and deployment across various platforms. The processor’s architecture is characterized by high core counts and integrated system interfaces that facilitate rapid data movement and processing. This ensures that Wormhole can handle both single and multi-user environments effectively, especially in scenarios that demand extensive computational resources. The seamless connectivity supports vast memory pooling and distributed processing, enhancing AI application performance and scalability. Wormhole’s full integration with Tenstorrent’s open-source ecosystem further amplifies its utility, providing developers with the tools to fully leverage the processor’s capabilities. This integration facilitates optimized ML workflows and supports continuous enhancement through community contributions, making Wormhole a forward-thinking solution for cutting-edge AI development.

Tenstorrent
TSMC
16nm, 28nm
AI Processor, CPU, CXL, D2D, Interlaken, IoT Processor, Multiprocessor / DSP, Network on Chip, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

2D FFT

The 2D FFT core is engineered to deliver fast processing for two-dimensional FFT computations, essential in image and video processing applications. By utilizing both internal and external memory effectively, this core is capable of handling large data sets typical in medical imaging or aerial surveillance systems. This core leverages Dillon Engineering’s ParaCore Architect utility to maximize flexibility and efficiency. It takes advantage of a two-engine design, where data can flow between stages without interruption, ensuring high throughput and minimal memory delays. Such a robust setup is vital for applications where swift processing of extensive data grids is crucial. The architecture is structured to provide consistent, high-quality transform computations that are essential in applications where accuracy and speed are non-negotiable. The 2D FFT core, with its advanced design parameters, supports the varied demands of modern imaging technology, providing a reliable tool for developers and engineers working within these sectors.
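
The two-engine structure described above mirrors the standard row-column decomposition of a 2-D FFT: one pass of 1-D FFTs along the rows, then a second pass along the columns. The numpy check below demonstrates the decomposition itself; it is a software reference, not the Dillon Engineering core.

```python
# Row-column decomposition of a 2-D FFT: a 1-D FFT pass over rows followed by
# a 1-D FFT pass over columns equals the direct 2-D transform.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 256))            # e.g. one image tile

rows_done = np.fft.fft(x, axis=1)              # stage 1: FFT along each row
two_pass = np.fft.fft(rows_done, axis=0)       # stage 2: FFT along each column

assert np.allclose(two_pass, np.fft.fft2(x))   # matches the direct 2-D FFT
print("row-column decomposition matches np.fft.fft2")
```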

Dillon Engineering, Inc.
GLOBALFOUNDRIES, TSMC
40nm
2D / 3D, GPU, Image Conversion, Multiprocessor / DSP, PLL, Processor Core Independent, Vision Processor, Wireless Processor
View Details

aiWare

aiWare represents aiMotive's advanced hardware intellectual property core for automotive neural network acceleration, pushing boundaries in efficiency and scalability. This neural processing unit (NPU) is tailored to meet the rigorous demands of automotive AI inference, providing robust support for various AI workloads, including CNNs, LSTMs, and RNNs. By achieving up to 256 Effective TOPS and remarkable scalability, aiWare caters to a wide array of applications, from edge processors in sensors to centralized high-performance modules.

The design of aiWare is particularly focused on enhancing efficiency in neural network operations, achieving up to 98% efficiency across diverse automotive applications. It features an innovative dataflow architecture, ensuring minimal external memory bandwidth usage while maximizing in-chip data processing. This reduces power consumption and enhances performance, making it highly adaptable for deployment in resource-critical environments.

Additionally, aiWare is embedded with comprehensive tools like the aiWare Studio SDK, which streamlines the neural network optimization and iteration process without requiring extensive NPU code adjustments. This ensures that aiWare can deliver optimal performance while minimizing development timelines by allowing for early performance estimations even before target hardware testing. Its integration into ASIL-B or higher certified solutions underscores aiWare's capability to power the most demanding safety applications in the automotive domain.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Andes Technology
Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell
View Details

SAKURA-II AI Accelerator

SAKURA-II is an advanced AI accelerator recognized for its efficiency and adaptability. It is specifically designed for edge applications that require rapid, real-time AI inference with minimal delay. Capable of processing expansive generative AI models such as Llama 2 and Stable Diffusion within an 8W power envelope, this accelerator supports a wide range of applications from vision to language processing. Its enhanced memory bandwidth and substantial DRAM capacity ensure its suitability for handling complex AI workloads, including large-scale language and vision models. The SAKURA-II platform also features robust power management, allowing it to achieve high efficiency during operations.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across variegated hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
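
The quoted ~75% memory reduction is roughly what moving from 16-bit weights to a 4–5-bit format yields. The quick check below assumes a 7B-parameter model and an average of about 4.5 bits per weight for a Q4_K-style format; both figures are assumptions for illustration, since the listing does not state a model size.

```python
# Rough check of the memory saving from 16-bit weights to ~4.5-bit quantization.
# Model size and bits-per-weight are illustrative assumptions.

params = 7e9                            # e.g. a 7B-parameter model (assumed)
fp16_gb = params * 16 / 8 / 1e9         # 14.0 GB of weights at 16 bits
q4k_gb = params * 4.5 / 8 / 1e9         # ~3.9 GB at ~4.5 bits per weight

reduction = 1 - q4k_gb / fp16_gb
print(f"FP16: {fp16_gb:.1f} GB, ~4.5-bit: {q4k_gb:.1f} GB, "
      f"reduction ≈ {reduction:.0%}")   # ≈ 72%, in line with the quoted ~75%
```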

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator by T-Head is an advanced semiconductor technology designed to accelerate AI computations and machine learning tasks. This accelerator is specifically optimized for high-performance inference, offering substantial improvements in processing times for deep learning applications. Its architecture is developed to leverage parallel computing capabilities, making it highly suitable for tasks that require fast and efficient data handling. This AI accelerator supports a broad spectrum of machine learning frameworks, ensuring compatibility with various AI algorithms. It is equipped with specialized processing units and a high-throughput memory interface, allowing it to handle large datasets with minimal latency. The Hanguang 800 is particularly effective in environments where rapid inferencing and real-time data processing are essential, such as in smart cities and autonomous driving. With its robust design and multi-faceted processing abilities, the Hanguang 800 Accelerator empowers industries to enhance their AI and machine learning deployments. Its capability to deliver swift computation and inference results ensures it is a valuable asset for companies looking to stay at the forefront of technological advancement in AI applications.

T-Head
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

Titanium Ti375 - High-Density, Low-Power FPGA

The Titanium Ti375 FPGA is a high-density, low-power solution featuring Efinix’s Quantum® compute fabric. This state-of-the-art FPGA is equipped with a range of advanced features including a hardened RISC-V block, SerDes transceiver, and an LPDDR4 DRAM controller. It is designed to meet the demands of applications requiring high computational efficiency and low power consumption, making it ideal for rapid application development and deployment. This FPGA offers exceptional processing capabilities and flexibility, helping to reduce design complexity while optimizing performance for data-intensive applications. Its small package footprint is suitable for highly integrated systems, providing seamless compliance with existing protocols such as MIPI D-PHY. This combination of features makes it suitable for use in edge computing devices, advanced automotive systems, and next-generation IoT applications. Additionally, the Titanium Ti375 allows developers to exploit its high-speed I/O capabilities, facilitating robust peripheral interfacing and data transfer. The device also benefits from bitstream authentication and encryption to secure the intellectual property embedded within. As part of its wide-ranging applicability, it suits industrial environments that require solid reliability and long-term product lifecycles.

Efinix, Inc.
GLOBALFOUNDRIES
90nm
Audio Processor, Content Protection Software, Cryptography Software Library, Embedded Memories, Embedded Security Modules, PLL, Processor Core Independent, Processor Cores, SDRAM Controller
View Details

SiFive Intelligence X280

The Intelligence X280 is engineered to provide extensive capabilities for artificial intelligence and machine learning applications, emphasizing a software-first design approach. This high-performance processor supports vector and matrix computations, making it adept at handling the demanding workloads typical in AI-driven environments. With an extensive ALU and integrated VFPU capabilities, the X280 delivers superior data processing power. Capable of supporting complex AI tasks, the X280 processor leverages SiFive's advanced vector architecture to allow for high-speed data manipulation and precision. The core supports extensive vector lengths and offers compatibility with various machine learning frameworks, facilitating seamless deployment in both embedded and edge AI applications. The Intelligence family, represented by the X280, offers solutions that are not only scalable but are customizable to particular workload specifications. With high-bandwidth interfaces for connecting custom engines, this processor is built to evolve alongside AI's progressive requirements, ensuring relevance in rapidly changing technology landscapes.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

Topaz FPGAs - Volume Production Ready

The Topaz FPGA family by Efinix is crafted for high-performance, cost-efficient production volumes. Topaz FPGAs combine an advanced architecture with a low-power, high-volume design, suitable for mainstream applications. These devices integrate seamlessly into systems requiring robust protocol support, including PCIe Gen3, LVDS, and MIPI, making them ideal for machine vision, industrial automation, and wireless communications. These FPGAs are designed to pack more logic into a compact area, allowing for enhanced innovation and feature addition. The architecture facilitates seamless migration to higher performance Titanium FPGAs, making Topaz a flexible and future-proof choice for developers. With support for various BGAs, these units are easy to integrate, thus enhancing system design efficiency. Topaz FPGAs ensure product longevity and a stable supply chain, integral for applications characterized by long life cycles. This ensures systems maintain high efficiency and functionality over extended periods, aligning with Efinix’s commitment to offering durable and reliable semiconductor solutions for diverse market needs.

Efinix, Inc.
Samsung
28nm
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, Embedded Memories, Processor Core Independent, Processor Cores, USB, V-by-One
View Details

Ultra-Low-Power 64-Bit RISC-V Core

Micro Magic offers a state-of-the-art 64-bit RISC-V core known for its ultra-low power consumption, clocking in at just 10mW when operating at 1GHz. This processor harnesses advanced design techniques that allow it to achieve high performance while maintaining low operational voltages, optimizing energy efficiency. This processor stands out for its capability to deliver impressive processing speeds, reaching up to 5GHz under optimal conditions. It is designed with power conservation in mind, making it ideal for applications where energy efficiency is critical without sacrificing processing capability. The core is part of Micro Magic’s commitment to pushing the boundaries of low-power processing technology, making it suitable for a variety of high-speed computing tasks. Its design is particularly advantageous in environments demanding swift data processing and minimal power use, reaffirming Micro Magic’s reputation for pioneering efficient silicon solutions.

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

SiFive Essential

SiFive's Essential family of processor cores is designed to offer flexible and scalable performance for embedded applications and IoT devices. These cores provide a wide range of custom configurations that cater to specific power and area requirements across various markets. From minimal configuration microcontrollers to more complex, Linux-capable processors, the Essential family is geared to meet diverse needs while maintaining high efficiency. The Essential lineup includes 2-Series, 6-Series, and 7-Series cores, each offering different levels of scalability and performance efficiency. The 2-Series, for instance, focuses on power optimization, making it ideal for energy-constrained environments. The 6-Series and 7-Series expand these capabilities with richer feature sets, supporting more advanced applications with scalable infrastructure. Engineered for maximum configurability, SiFive Essential cores are equipped with robust debugging and tracing capabilities. They are customizable to optimize integration within System-on-Chip (SoC) applications, ensuring reliable and secure processing across a wide range of technologies. This ability to tailor the core designs ensures that developers can achieve a seamless balance between performance and energy consumption.

SiFive, Inc.
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Network Protocol Accelerator Platform

The Network Protocol Accelerator Platform (NPAP) by Missing Link Electronics is engineered to significantly accelerate network protocol processing. The platform leverages MLE's patented and patent-pending technologies to speed data transmission within FPGAs, achieving rates of up to 100 Gbps. NPAP offloads protocol processing from the host, providing a robust and efficient path to higher networking throughput. It supports multiple high-speed connections and a variety of network protocols, and can manage large volumes of data effectively. The design reduces latency and improves data throughput, making it an ideal choice for network-intensive applications. MLE's expertise in integrating high-performance networking capabilities into FPGA environments comes to the forefront with this product, giving users a dependable tool for optimizing their network infrastructures.
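To put the 100 Gbps figure in perspective, a back-of-the-envelope packet-rate estimate at a 1500-byte payload (ignoring Ethernet framing overhead, which would lower the number slightly) is:

$$\frac{100 \times 10^{9}\ \text{bit/s}}{1500 \times 8\ \text{bit/packet}} \approx 8.3 \times 10^{6}\ \text{packets/s}$$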

Missing Link Electronics
AMBA AHB / APB/ AXI, Cell / Packet, Ethernet, MIL-STD-1553, Multiprocessor / DSP, RapidIO, Safe Ethernet, SATA, USB, V-by-One
View Details

Veyron V1 CPU

The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for performance and efficiency, it handles a wide variety of tasks with precision. Built on the open RISC-V architecture, the Veyron V1 is easily integrated into custom high-performance solutions and is intended to support next-generation data center architectures with seamless scalability across applications. The CPU is crafted to compete with ARM and x86 data center CPUs, offering class-leading performance with added flexibility for bespoke integrations.

Ventana Micro Systems
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Cores
View Details

eSi-3264

The eSi-3264 sits at the top of the eSi-RISC portfolio: a 32/64-bit processor with SIMD extensions aimed at high-performance requirements. Designed for applications demanding digital signal processing functionality, the processor keeps silicon usage minimal while ensuring exceedingly low power consumption. With a pipeline capable of dual and quad multiply-accumulate (MAC) operations, the eSi-3264 significantly benefits applications in audio processing, sensor control, and touch interfacing. Built-in IEEE 754 single- and double-precision floating-point operations provide comprehensive data processing capabilities across computationally intensive domains. The processor offers configurable caches and a memory management unit to sustain performance when accessing off-chip memory. Its robust instruction set, optional custom instructions, and user-privilege modes ensure full control in secure execution environments, supporting diverse operational requirements with high resource efficiency.
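The kind of kernel a dual- or quad-MAC datapath is built to accelerate is the classic fixed-point FIR inner loop sketched below; this is plain illustrative C, not eSi-RISC code or a vendor API.

```c
#include <stddef.h>

/* Fixed-point FIR filter inner loop: one multiply-accumulate per tap.
 * A dual- or quad-MAC SIMD pipeline can retire several of these
 * products per cycle, which is where the speed-up comes from. */
long fir_sample(const short *coeff, const short *delay, size_t taps) {
    long acc = 0;
    for (size_t i = 0; i < taps; i++)
        acc += (long)coeff[i] * delay[i];
    return acc;
}
```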

eSi-RISC
All Foundries
16nm, 130nm, 180nm
CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

eSi-ADAS

The eSi-ADAS Radar IP Suite and Co-processor Engine is at the forefront of automotive and unmanned systems, enhancing radar detection and processing capabilities. It leverages cutting-edge signal processing technologies to provide accurate and rapid situational awareness, crucial for modern vehicles and aerial drones. With its comprehensive offering of radar algorithms, eSi-ADAS supports both traditional automotive radar applications and emerging unmanned aerial vehicle (UAV) platforms. This suite is crafted to meet the complex demands of real-time data processing and simultaneous multi-target tracking in dense environments, key for advanced driver-assistance systems. The co-processor engine within eSi-ADAS is highly efficient, designed to operate alongside existing vehicle systems with minimal additional power consumption. This suite is adaptable, supporting a wide range of vehicle architectures and operational scenarios, from urban driving to cross-country navigation.

EnSilica
AI Processor, CAN XL, CAN-FD, Content Protection Software, Flash Controller, Multiprocessor / DSP, Processor Core Independent, Security Processor, Security Protocol Accelerators
View Details

Tensix Neo

Tensix Neo represents the next evolution in AI processing, offering robust capabilities for handling modern AI challenges. Its design focuses on maximizing performance while maintaining efficiency, a crucial aspect in AI and machine learning environments. Tensix Neo facilitates advanced computation across multiple frameworks, supporting a range of AI applications. Featuring a strategic blend of core architecture and integrated memory, Tensix Neo excels in both processing speed and capacity, essential for handling comprehensive AI workloads. Its architecture supports multi-threaded operations, optimizing performance for parallel computing scenarios, which are common in AI tasks. Tensix Neo's seamless connection with Tenstorrent's open-source software environment ensures that developers can quickly adapt it to their specific needs. This interconnectivity not only boosts operational efficiency but also supports continuous improvements and feature expansions through community contributions, positioning Tensix Neo as a versatile solution in the landscape of AI technology.

Tenstorrent
TSMC
20nm, 22nm
AI Processor, CPU, DSP Core, IoT Processor, Multiprocessor / DSP, Network on Chip, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

GNSS VHDL Library

The GNSS VHDL Library is a cornerstone offering from GNSS Sensor Ltd, engineered to provide a potent solution for integrating Global Navigation Satellite System functionality on-chip. The library is highly configurable, allowing developers to incorporate GPS, GLONASS, and Galileo support into digital designs with minimal effort. Designed to be largely independent of specific CPU platforms, the GNSS VHDL Library stands out for its flexibility: a single configuration file adapts it to different hardware environments, ensuring broad compatibility and ease of implementation. Whether for research or commercial use, the library enables rapid prototyping of reliable GNSS systems, providing the essential building blocks for precise navigation. Integrating fast search engines and configurable signal processing capabilities, the library scales across platforms, making it a valuable component for industries requiring high-precision navigation technology. Its architecture supports both 32-bit SPARC-V8 and 64-bit RISC-V system-on-chips, highlighting its adaptability.

GNSS Sensor Ltd
AMBA AHB / APB/ AXI, Amplifier, Bluetooth, CAN, GPS, Interrupt Controller, MIL-STD-1553, MIPI, Multi-Protocol PHY, Processor Core Dependent, UWB, Wireless USB
View Details

Time-Triggered Protocol

The Time-Triggered Protocol (TTP) is an advanced communication protocol designed to enable high-reliability data transmission in embedded systems. It is widely used in mission-critical environments such as aerospace and automotive industries, where it supports deterministic message delivery. By ensuring precise time coordination across various control units, TTP helps enhance system stability and predictability, which are essential for real-time operations. TTP operates on a time-triggered architecture that divides time into fixed-length intervals, known as communication slots. These slots are assigned to specific tasks, enabling precise scheduling of messages and eliminating the possibility of data collision. This deterministic approach is crucial for systems that require high levels of safety and fault tolerance, allowing them to operate effectively under stringent conditions. Moreover, TTP supports fault isolation and recovery mechanisms that significantly improve system reliability. Its ability to detect and manage faults without operator intervention is key in maintaining continuous system operations. Deployment is also simplified by its modular structure, which allows seamless integration into existing networks.
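A minimal sketch of the time-triggered idea follows: the communication round is divided into fixed-length slots, each statically owned by one node, so transmissions can never collide. The slot duration, slot count, and node IDs below are invented for illustration and do not reflect TTTech's implementation or API.

```c
#include <stdint.h>
#include <stdio.h>

#define SLOT_LEN_US 250u   /* fixed slot duration (illustrative value)  */
#define NUM_SLOTS   4u     /* slots per TDMA round (illustrative value) */

/* Static schedule: each slot is owned by exactly one sending node. */
static const uint8_t slot_owner[NUM_SLOTS] = { 0, 1, 2, 3 };

/* Map the (globally synchronized) time to the node allowed to transmit. */
static uint8_t current_sender(uint64_t time_us) {
    return slot_owner[(time_us / SLOT_LEN_US) % NUM_SLOTS];
}

int main(void) {
    /* Walk through two full rounds of the schedule. */
    for (uint64_t t = 0; t < 2 * NUM_SLOTS * SLOT_LEN_US; t += SLOT_LEN_US)
        printf("t=%5llu us -> node %u transmits\n",
               (unsigned long long)t, current_sender(t));
    return 0;
}
```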

TTTech Computertechnik AG
AMBA AHB / APB/ AXI, CAN, CAN XL, CAN-FD, Ethernet, FlexRay, MIPI, Processor Core Dependent, Safe Ethernet, Temperature Sensor
View Details

Zhenyue 510 SSD Controller

The Zhenyue 510 SSD Controller is a high-performance, enterprise-grade controller providing robust management for SSD storage solutions. It is engineered to deliver I/O throughput of up to 3,400K IOPS and a data transfer rate of up to 14 GB/s. This performance is underpinned by T-Head's proprietary low-density parity-check (LDPC) error correction, enhancing reliability and data integrity. Built on T-Head's low-latency architecture, the Zhenyue 510 offers swift read and write operations, crucial for applications demanding fast data processing. It supports flexible NAND flash interfacing, making it adaptable to multiple generations of flash memory and keeping the device viable as storage standards evolve. Targeted at online transaction processing, large-scale data management, and software-defined storage, the Zhenyue 510's capabilities make it a cornerstone for organizations needing efficient data storage. The combination of innovative design, top-tier performance metrics, and adaptability positions the Zhenyue 510 as a leader among SSD controller technologies.
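The two headline numbers are roughly consistent with each other if the IOPS figure is assumed to be measured at a common 4 KiB transfer size (an assumption for illustration, not a stated specification):

$$3.4 \times 10^{6}\ \text{IOPS} \times 4096\ \text{B} \approx 13.9\ \text{GB/s} \approx 14\ \text{GB/s}$$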

T-Head
eMMC, Flash Controller, NAND Flash, NVM Express, ONFI Controller, Processor Core Dependent, RLDRAM Controller, SAS, SATA, SDRAM Controller
View Details