

Platform Level IP: Comprehensive Semiconductor Solutions

Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.

Subcategories within Platform Level IP cover a broad spectrum of integration needs:

1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.

2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.

3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.

Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.


Akida Neural Processor IP

BrainChip's Akida Neural Processor IP represents a leap in AI architecture design, tailored to provide exceptional energy efficiency and processing speed for an array of edge computing tasks. At its core, the processor mimics the synaptic activity of the human brain, efficiently executing tasks that demand high-speed computation and minimal power usage. The processor is equipped with configurable neural nodes supporting convolutional and fully-connected neural network layers. Each node accommodates a range of MAC operations, enhancing scalability from basic to complex deployment requirements. This scalability enables the development of lightweight AI solutions suited for consumer electronics as well as robust systems for industrial use. Onboard features like event-based processing and low-latency data communication significantly decrease the strain on host processors, enabling faster and more autonomous system responses. Akida's versatile functionality and ability to learn on the fly make it a cornerstone for next-generation technology solutions that aim to blend cognitive computing with practical, real-world applications.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Processor Core Independent, Vision Processor
View Details

KL730 AI SoC

The KL730 is a third-generation AI chip that integrates an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This architecture improves computational efficiency across a range of workloads, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also offers enhanced video processing capabilities, supporting 4K 60FPS output. Backed by over a decade of Kneron's ISP expertise, the KL730 stands out with its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.

Kneron
TSMC
12nm
16 Categories
View Details

Akida 2nd Generation

The second-generation Akida platform builds upon the foundation of its predecessor with enhanced computational capabilities and increased flexibility for a broader range of AI and machine learning applications. This version supports 8-bit weights and activations in addition to the flexible 4- and 1-bit operations, making it a versatile solution for high-performance AI tasks. Akida 2 introduces support for programmable activation functions and skip connections, further enhancing the efficiency of neural network operations. These capabilities are particularly advantageous for implementing sophisticated machine learning models that require complex, interconnected processing layers. The platform also features support for Spatio-Temporal and Temporal Event-Based Neural Networks, advancing its application in real-time, on-device AI scenarios. Built as a silicon-proven, fully digital neuromorphic solution, Akida 2 is designed to integrate seamlessly with various microcontrollers and application processors. Its highly configurable architecture offers post-silicon flexibility, making it an ideal choice for developers looking to tailor AI processing to specific application needs. Whether for low-latency video processing, real-time sensor data analysis, or interactive voice recognition, Akida 2 provides a robust platform for next-generation AI developments.

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

Axelera AI's Metis AIPU PCIe AI Accelerator Card is engineered to deliver top-tier inference performance for heavy computational AI workloads. This PCIe card is designed to the industry's highest standards, offering exceptional processing power in a versatile PCIe form factor, ideal for integration into various computing systems including workstations and servers.

Equipped with a quad-core Metis AI Processing Unit (AIPU), the card delivers unmatched capabilities for handling complex AI models and extensive data streams. It efficiently processes multiple camera inputs and supports independent parallel neural network operations, making it indispensable for dynamic fields such as industrial automation, surveillance, and high-performance computing.

The card's performance is significantly enhanced by the Voyager SDK, which facilitates a seamless AI model deployment experience, allowing developers to focus on model logic and innovation. It offers extensive compatibility with mainstream AI frameworks, ensuring flexibility and ease of integration across diverse use cases. With a power-efficient design, this PCIe AI Accelerator Card bridges the gap between traditional GPU solutions and today's advanced AI demands.

Axelera AI
13 Categories
View Details

Akida IP

The Akida IP is a groundbreaking neural processor designed to emulate the cognitive functions of the human brain within a compact and energy-efficient architecture. This processor is specifically built for edge computing applications, providing real-time AI processing for vision, audio, and sensor fusion tasks. The scalable neural fabric, ranging from 1 to 128 nodes, features on-chip learning capabilities, allowing devices to adapt and learn from new data with minimal external inputs, enhancing privacy and security by keeping data processing localized. Akida's unique design supports 4-, 2-, and 1-bit weight and activation operations, maximizing computational efficiency while minimizing power consumption. This flexibility in configuration, combined with a fully digital neuromorphic implementation, ensures a cost-effective and predictable design process. Akida is also equipped with event-based acceleration, drastically reducing the demands on the host CPU by facilitating efficient data handling and processing directly within the sensor network. Additionally, Akida's on-chip learning supports incremental learning techniques like one-shot and few-shot learning, making it ideal for applications that require quick adaptation to new data. These features collectively support a broad spectrum of intelligent computing tasks, including object detection and signal processing, all performed at the edge, thus eliminating the need for constant cloud connectivity.
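The low-bit arithmetic described above is easiest to see with a concrete quantizer. The sketch below is a generic uniform symmetric quantizer in NumPy, not BrainChip's actual implementation; it illustrates what storing weights at 4 or 2 bits means and why lower precision trades reconstruction accuracy for footprint.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Uniform symmetric quantization of a float tensor to signed `bits`-bit ints."""
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit, 1 for 2-bit
    scale = np.abs(w).max() / qmax        # one scale factor per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

q4, s4 = quantize_symmetric(w, 4)         # 15 representable levels
q2, s2 = quantize_symmetric(w, 2)         # 3 representable levels
err4 = np.abs(q4 * s4 - w).mean()
err2 = np.abs(q2 * s2 - w).mean()
print(f"4-bit mean error: {err4:.4f}, 2-bit mean error: {err2:.4f}")
```

As expected, the 4-bit representation reconstructs the weights far more faithfully than the 2-bit one, while both cut storage (and MAC width) dramatically versus 32-bit floats.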

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Universal Chiplet Interconnect Express (UCIe)

Universal Chiplet Interconnect Express, or UCIe, is a forward-looking interconnect technology that enables high-speed data exchanges between various chiplets. Developed to support a modular approach in chip design, UCIe enhances flexibility and scalability, allowing manufacturers to tailor systems to specific needs by integrating multiple functions into a single package. The architecture of UCIe facilitates seamless data communication, crucial in achieving high-performance levels in integrated circuits. It is designed to support multiple configurations and implementations, ensuring compatibility across different designs and maximizing interoperability. UCIe is pivotal in advancing the chiplet strategy, which is becoming increasingly important as devices require more complex and diverse functionalities. By enabling efficient and quick interchip communication, UCIe supports innovation in the semiconductor field, paving the way for the development of highly efficient and sophisticated systems.

EXTOLL GmbH
GLOBALFOUNDRIES, Samsung, TSMC, UMC
22nm, 28nm
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Network on Chip, Processor Core Independent, USB, V-by-One, VESA
View Details

Yitian 710 Processor

The Yitian 710 Processor is a groundbreaking component in processor technology, designed with cutting-edge architecture to enhance computational efficiency. This processor is tailored for cloud-native environments, offering robust support for high-demand computing tasks. It is engineered to deliver significant improvements in performance, making it an ideal choice for data centers aiming to optimize their processing power and energy efficiency. With its advanced features, the Yitian 710 stands at the forefront of processor innovation, ensuring seamless integration with diverse technology platforms and enhancing the overall computing experience across industries.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

MetaTF

MetaTF is BrainChip's premier development tool platform designed to complement its neuromorphic technology solutions. This platform is a comprehensive toolkit that empowers developers to convert and optimize standard machine learning models into formats compatible with BrainChip's Akida technology. One of its key advantages is its ability to adjust models into sparse formats, enhancing processing speed and reducing power consumption. The MetaTF framework provides an intuitive interface for integrating BrainChip’s specialized AI capabilities into existing workflows. It supports streamlined adaptation of models to ensure they are optimized for the unique characteristics of neuromorphic processing. Developers can utilize MetaTF to rapidly iterate and refine AI models, making the deployment process smoother and more efficient. By providing direct access to pre-trained models and tuning mechanisms, MetaTF allows developers to capitalize on the benefits of event-based neural processing with minimal configuration effort. This platform is crucial for advancing the application of machine learning across diverse fields such as IoT devices, healthcare technology, and smart infrastructure.
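MetaTF's conversion to sparse formats rests on the observation that most weights in a trained network contribute little. The following magnitude-pruning sketch is a hypothetical NumPy illustration (not the MetaTF API) of how inducing sparsity shrinks the multiply-accumulate work an event-based processor must perform.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k - 1] if k > 0 else -np.inf
    return np.where(np.abs(w) <= threshold, 0.0, w)

rng = np.random.default_rng(1)
w = rng.standard_normal((128, 128))
w_sparse = magnitude_prune(w, sparsity=0.8)

actual_sparsity = (w_sparse == 0).mean()
# Only the surviving weights contribute multiply-accumulates
macs_dense = w.size
macs_sparse = int((w_sparse != 0).sum())
print(f"sparsity: {actual_sparsity:.2f}, MACs: {macs_dense} -> {macs_sparse}")
```

At 80% sparsity, a processor that skips zero weights does roughly a fifth of the dense MAC work, which is where the speed and power gains come from.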

BrainChip
AI Processor, Coprocessor, Processor Core Independent, Vision Processor
View Details

Chimera GPNPU

Quadric's Chimera GPNPU is an adaptable processor core designed to respond efficiently to the demand for AI-driven computations across multiple application domains. Offering up to 864 TOPS, this licensable core seamlessly integrates into system-on-chip designs needing robust inference performance. By maintaining compatibility with all forms of AI models, including cutting-edge large language models and vision transformers, it ensures long-term viability and adaptability to emerging AI methodologies. Unlike conventional architectures, the Chimera GPNPU excels by permitting complete workload management within a singular execution environment, which is vital in avoiding the cumbersome and resource-intensive partitioning of tasks seen in heterogeneous processor setups. By facilitating a unified execution of matrix, vector, and control code, the Chimera platform elevates software development ease, and substantially improves code maintainability and debugging processes. In addition to high adaptability, the Chimera GPNPU capitalizes on Quadric's proprietary Compiler infrastructure, which allows developers to transition rapidly from model conception to execution. It transforms AI workflows by optimizing memory utilization and minimizing power expenditure through smart data storage strategies. As AI models grow increasingly complex, the Chimera GPNPU stands out for its foresight and capability to unify AI and DSP tasks under one adaptable and programmable platform.

Quadric
16 Categories
View Details

Veyron V2 CPU

The Veyron V2 CPU is Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available both as IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Its modern architectural design is fully compliant with the RISC-V RVA23 profile and emphasizes high instructions per clock (IPC) within a power-efficient microarchitecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

xcore.ai

xcore.ai by XMOS is a groundbreaking solution designed to bring intelligent functionality to the forefront of semiconductor applications. It enables powerful real-time execution of AI, DSP, and control functionalities, all on a single, programmable chip. The flexibility of its architecture allows developers to integrate various computational tasks efficiently, making it a fitting choice for projects ranging from smart audio devices to automated industrial systems. With xcore.ai, XMOS provides the technology foundation necessary for swift deployment and scalable application across different sectors, delivering high performance in demanding environments.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI provides an exceptional balance of performance and size, perfectly suited for edge AI applications. Designed for high-performance tasks, this module is powered by a single Metis AI Processing Unit (AIPU), which offers cutting-edge inference capabilities. With this M.2 module, developers can easily integrate AI processing power into compact devices.

This module accommodates demanding AI workloads, enabling applications to perform complex computations efficiently. Thanks to its low power consumption and versatile integration capabilities, it opens new possibilities for edge devices that require robust AI processing power. The Metis AIPU M.2 module supports a wide range of AI models and pipelines, facilitated by Axelera's Voyager SDK software platform, which ensures seamless deployment and optimization of AI models.

The module's versatile design allows for streamlined concurrent multi-model processing, significantly boosting a device's AI capabilities without the need for external data centers. Additionally, it supports advanced quantization techniques, providing users with increased prediction accuracy for high-stakes applications.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CAN, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, VGA, Vision Processor, WMV
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is a comprehensive toolset designed to streamline the development and deployment of neuromorphic AI applications. Leveraging a PyTorch-integrated environment, Talamo simplifies the creation of powerful AI models for deployment on the Spiking Neural Processor. It provides developers with a user-friendly workflow, reducing the complexity usually associated with spiking neural networks. This SDK facilitates the construction of end-to-end application pipelines through a familiar PyTorch framework. By grounding development in this standard workflow, Talamo removes the need for deep expertise in spiking neural networks, offering pre-built models that are ready to use. The SDK also includes capabilities for compiling and mapping trained models onto the processor's hardware, ensuring efficient integration and utilization of computing resources. Moreover, Talamo supports an architecture simulator which allows developers to emulate hardware performance during the design phase. This feature enables rapid prototyping and iterative design, which is crucial for optimizing applications for performance and power efficiency. Thus, Talamo not only empowers developers to build sophisticated AI solutions but also ensures these solutions are practical for deployment across various devices and platforms.
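To make the spiking-network target concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python with NumPy, the basic unit such processors execute. This is a textbook sketch with arbitrary constants, not Talamo or Innatera code.

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron; returns spike times (step indices)."""
    v = v_reset
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)        # membrane leaks toward 0, integrates input
        if v >= v_thresh:                  # threshold crossing emits a spike event
            spikes.append(t)
            v = v_reset                    # membrane potential resets after firing
    return spikes

# Constant drive strong enough to make the neuron fire periodically
current = np.full(100, 0.08)
spike_times = lif_neuron(current)
print(f"{len(spike_times)} spikes at steps {spike_times}")
```

The neuron communicates only through these sparse spike events, which is why spiking hardware can stay idle, and draw little power, between events.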

Innatera Nanosystems
All Foundries
All Process Nodes
AI Processor, Content Protection Software, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

aiWare

The aiWare Neural Processing Unit (NPU) is an advanced hardware solution engineered for the automotive sector, highly regarded for its efficiency in neural network acceleration tailored to automated driving. It is designed to handle a broad scope of AI applications, including complex neural network models such as CNNs and RNNs, and scales across performance tiers from L2 driver assistance to more demanding L4 systems. The aiWare hardware IP achieves up to 98% efficiency across a range of automotive neural networks. It supports the extensive sensor configurations typical of automotive contexts, maintaining reliable performance under rigorous conditions validated by ISO 26262 ASIL B certification. aiWare is not only power-efficient but designed with a scalable architecture providing up to 1024 TOPS, ensuring it meets high-performance processing requirements. Furthermore, aiWare is crafted to facilitate integration into safety-critical environments, offering highly deterministic operation. It minimizes external memory dependencies through an innovative dataflow approach, maximizing on-chip memory utilization and minimizing system power. With extensive documentation for integration and customization, aiWare stands out as a crucial component for OEMs and Tier 1s looking to optimize advanced driver-assist functionality.
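The quoted efficiency and peak-TOPS figures combine into a simple effective-throughput estimate: delivered compute is peak compute times utilization. The sketch below uses the figures from the description plus hypothetical lower utilizations for comparison; it is illustrative arithmetic, not aiMotive benchmark data.

```python
def effective_tops(peak_tops, utilization):
    """Effective throughput = peak compute x fraction of cycles doing useful MACs."""
    return peak_tops * utilization

peak = 1024                     # TOPS, top-end aiWare configuration per the text
for util in (0.98, 0.70, 0.40):
    print(f"utilization {util:.0%}: {effective_tops(peak, util):.0f} effective TOPS")
```

This is why utilization matters more than headline TOPS: a 1024-TOPS part at 40% utilization delivers less than half the work of the same part at 98%.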

aiMotive
12 Categories
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator by EdgeCortix is an advanced processor designed for energy-efficient, real-time AI inferencing. It supports complex generative AI models such as Llama 2 and Stable Diffusion with an impressive power envelope of just 8 watts, making it ideal for applications requiring swift, on-the-fly Batch=1 AI processing. While maintaining critical performance metrics, it can simultaneously run multiple deep neural network models, facilitated by its unique DNA core. The SAKURA-II stands out with its high utilization of AI compute resources, robust memory bandwidth, and sizable DRAM capacity options of up to 32GB, all in a compact form factor. With market-leading energy efficiency, the SAKURA-II supports diverse edge AI applications, from vision and language to audio, thanks to hardware-accelerated arbitrary activation functions and advanced power management features. Designed for ARM and other platforms, the SAKURA-II can be easily integrated into existing systems for deploying AI models and leveraging low power for demanding workloads. EdgeCortix's AI Accelerator excels with innovative features like sparse computing to optimize DRAM bandwidth and real-time data streaming for Batch=1 operations, ensuring fast and efficient AI computations. It offers unmatched adaptability in power management, enabling ultra-high efficiency modes for processing complex AI tasks while maintaining high precision and low latency operations.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, the first to support Int4 precision and transformer networks. This design delivers exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex-A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. The SoC handles both high- and low-light conditions well and is suited to diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins RaiderChip's strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
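The tokens-per-bandwidth claim follows from a standard observation: batch-1 LLM decoding must stream every weight from memory for each generated token, so memory bandwidth sets a hard throughput ceiling. The back-of-envelope sketch below uses hypothetical figures (a 3B-parameter model, single-channel LPDDR4 at 12.8 GB/s), not RaiderChip benchmark numbers.

```python
def max_tokens_per_sec(model_params_e9, bits_per_weight, bandwidth_gb_s):
    """Bandwidth-bound ceiling for batch-1 decoding: every generated token
    streams the full weight set from memory once."""
    model_gb = model_params_e9 * bits_per_weight / 8   # weight footprint in GB
    return bandwidth_gb_s / model_gb

# Hypothetical: a 3B-parameter model at 4-bit on single-channel LPDDR4
ceiling = max_tokens_per_sec(3, 4, 12.8)
print(f"{ceiling:.1f} tokens/s ceiling")
```

Halving bits per weight doubles this ceiling at fixed bandwidth, which is exactly why 4-bit quantization on modest LPDDR4 can be competitive with far wider memory systems.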

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is a high-performance AI processor developed to meet the complex demands of artificial intelligence workloads. This accelerator is engineered with cutting-edge AI processing capabilities, enabling rapid data analysis and machine learning model inference. Designed for flexibility, the Hanguang 800 delivers superior computation speed and energy efficiency, making it an optimal choice for AI applications in a variety of sectors, from data centers to edge computing. By supporting high-volume data throughput, it enables organizations to achieve significant advantages in speed and efficiency, facilitating the deployment of intelligent solutions.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud dependencies. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
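The quoted 75% memory reduction is what moving weights from 16-bit to 4-bit-class formats yields arithmetically. A sketch with a hypothetical 8B-parameter model (K/V cache and activations ignored; note that real Q4_K formats spend slightly more than 4 bits per weight due to per-block scales):

```python
def weight_bytes(params, bits):
    """Storage for model weights alone, ignoring K/V cache and activations."""
    return params * bits / 8

params = 8e9                               # hypothetical 8B-parameter model
fp16 = weight_bytes(params, 16)
q4 = weight_bytes(params, 4)               # idealized 4-bit-class quantization
reduction = 1 - q4 / fp16
print(f"FP16: {fp16/1e9:.0f} GB, 4-bit: {q4/1e9:.0f} GB, reduction: {reduction:.0%}")
```

Shrinking the footprint also speeds up decoding, since fewer bytes must stream from memory per token, which is consistent with the speed gain quoted above.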

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core offered by Micro Magic is engineered to cater to high-performance applications while maintaining a low power profile. Operating at just 10mW at 1GHz, this core highlights Micro Magic's commitment to energy-efficient design without compromising on speed. Leveraging design techniques that allow operation at lower voltages, the core achieves remarkable performance metrics, making it suitable for advanced computing needs. Under optimal conditions the core can reach 5GHz, showcasing its ability to handle demanding processing tasks. This makes it particularly valuable for applications where both speed and power efficiency are critical, such as portable and embedded systems. Micro Magic's implementation supports seamless integration into various computing infrastructures, accommodating diverse requirements of modern technology solutions. Moreover, the architectural design harnesses the strengths of RISC-V's open and flexible standards, ensuring that users benefit from both adaptability and performance. As part of Micro Magic's standout offerings, this core is poised to make significant impacts in high-demand environments, providing a blend of economy, speed, and reliability.
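The coexistence of a 10mW 1GHz operating point and a 5GHz peak point follows from the classic CMOS dynamic-power relation P ≈ C·V²·f: dropping both voltage and frequency cuts power superlinearly. The sketch below uses a hypothetical effective switched capacitance chosen so the low-voltage point lands near the quoted 10mW; it is illustrative physics, not Micro Magic's measured data.

```python
def dynamic_power(c_eff_farads, v_volts, f_hz):
    """Classic CMOS dynamic power: P = C_eff * V^2 * f."""
    return c_eff_farads * v_volts**2 * f_hz

C = 2e-11                                   # hypothetical effective capacitance
low = dynamic_power(C, 0.7, 1e9)            # low-voltage point at 1 GHz
high = dynamic_power(C, 1.1, 5e9)           # nominal-voltage point at 5 GHz
print(f"low: {low*1e3:.1f} mW, high: {high*1e3:.1f} mW, ratio: {high/low:.1f}x")
```

With these assumed numbers, the 5GHz point burns roughly an order of magnitude more power than the 1GHz low-voltage point, which is why near-threshold operation is the key to the 10mW figure.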

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

NuLink Die-to-Die PHY for Standard Packaging

The NuLink Die-to-Die PHY for Standard Packaging by Eliyan offers an innovative solution for high-performance interconnects between dies on the same package. The technology significantly boosts bandwidth and energy efficiency while using industry-standard organic/laminate substrates to simplify design and reduce costs. Its implementation removes the need for more expensive silicon interposers or silicon bridges while maintaining exceptional signal integrity and compact form factors. With conventional bump pitches of 100um to 130um, these PHY units support industry standards such as UCIe, BoW, UMI, and SBD, delivering a versatile platform for a wide array of applications. This flexibility meets the rigorous demands of data-centric, performance-oriented computing, with optimal performance at advanced process nodes of 5nm and below. Eliyan's NuLink PHY also delivers synchronous unidirectional and bidirectional communication, achieving data rates up to 64 Gbps. Its design supports 32 transmit and receive lanes for robust data management in complex systems, making it an ideal solution for today's and tomorrow's data-heavy applications.

Eliyan
TSMC
3nm, 4nm, 5nm
AMBA AHB / APB/ AXI, CXL, D2D, DDR, MIPI, Network on Chip, Processor Core Dependent, V-by-One
View Details

Time-Triggered Ethernet

TTTech's Time-Triggered Ethernet (TTEthernet) is a breakthrough communication technology that combines the reliability of traditional Ethernet with the precision of time-triggered protocols. Designed to meet stringent safety requirements, this IP is fundamental in environments where fail-safe operation is mandatory, such as human spaceflight, nuclear facilities, and other high-risk settings. TTEthernet integrates seamlessly with existing Ethernet infrastructure while providing deterministic control over data transmission times, allowing for real-time application support. Its primary advantage lies in supporting triple-redundant networks, which ensure dual fault tolerance, an essential feature exemplified by its use on NASA's Orion spacecraft. The integrity and precision offered by Time-Triggered Ethernet make it ideal for implementing ECSS engineering standards in space applications. It not only permits robust redundancy and high bandwidth (exceeding 10 Gbps) but also supports interoperability with various commercial off-the-shelf components, making it a versatile solution for complex network architectures.
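Deterministic transmission in a time-triggered network comes from a static schedule: each critical frame owns a fixed, non-overlapping window inside a repeating cluster cycle. A highly simplified first-fit placement, illustrative only and not TTTech's scheduler:

```python
def build_tt_schedule(frames, cycle_us):
    """Assign each (name, duration_us) frame a fixed transmit window
    inside a repeating cluster cycle (toy first-fit placement)."""
    schedule, cursor = [], 0
    for name, duration_us in frames:
        if cursor + duration_us > cycle_us:
            raise ValueError("cluster cycle overbooked")
        schedule.append((name, cursor, cursor + duration_us))
        cursor += duration_us
    return schedule

# Three frames share a 100 us cycle; each window repeats every cycle,
# so worst-case latency and jitter are known at design time.
sched = build_tt_schedule([("sensor", 10), ("actuator", 5), ("status", 20)], cycle_us=100)
```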

TTTech Computertechnik AG
Cell / Packet, Ethernet, FlexRay, IEEE1588, LIN, MIL-STD-1553, MIPI, Processor Core Independent, Safe Ethernet
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Crafted to deliver significant power savings, the Tianqiao-70 is a low-power RISC-V CPU that excels in commercial-grade scenarios. This 64-bit CPU core is primarily designed for applications where power efficiency is critical, such as mobile devices and computationally intensive IoT solutions. The core's architecture is specifically optimized to perform under stringent power budgets without compromising on the processing power needed for complex tasks. It provides an efficient solution for scenarios that demand reliable performance while maintaining a low energy footprint. Through its refined design, the Tianqiao-70 supports a broad spectrum of applications, including personal computing, machine learning, and mobile communications. Its versatility and power-awareness make it a preferred choice for developers focused on sustainable and scalable computing architectures.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

NMP-750

The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

Ceva-SensPro2 - Vision AI DSP

The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

RISC-V CPU IP N Class

The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator revolutionizes computing with its Intelligent Compute Architecture (ICA), delivering unparalleled performance and efficiency for HPC and AI applications. This innovative product leverages real-time adaptability, enabling it to optimize hardware configurations dynamically to match the specific demands of various software workloads. Its standout feature is the elimination of domain-specific languages, offering a universal solution for scientific and technical computing. Equipped with a robust developer toolchain that supports popular languages like C, C++, FORTRAN, and OpenMP, the Maverick-2 seamlessly integrates into existing workflows. This minimizes the need for code rewrites while maximizing developer productivity. By providing extensive support for emerging technologies such as CUDA and HIP/ROCm, Maverick-2 ensures that it remains a viable and potent solution for current and future computing challenges. Built on TSMC's advanced 5nm process, the accelerator incorporates HBM3E memory and high-bandwidth PCIe Gen 5 interfaces, supporting demanding computations with remarkable efficiency. The Maverick-2 achieves a significant power performance advantage, making it ideal for data centers and research facilities aiming for greater sustainability without sacrificing computational power.

Next Silicon Ltd.
TSMC
5nm
11 Categories
View Details

RapidGPT - AI-Driven EDA Tool

RapidGPT is a next-generation electronic design automation tool powered by AI. Designed for those in the hardware engineering field, it allows for a seamless transition from ideas to physical hardware without the usual complexities of traditional design tools. The interface is highly intuitive, engaging users with natural language interaction to enhance productivity and reduce the time required for design iterations.

Enhancing the entire design process, RapidGPT begins with concept development and guides users through to the final stages of bitstream or GDSII generation. This tool effectively acts as a co-pilot for engineers, allowing them to easily incorporate third-party IPs, making it adaptable for various project requirements. This adaptability is paramount for industries where speed and precision are of the essence.

PrimisAI has integrated novel features such as AutoReview™, which provides automated HDL audits; AutoComment™, which generates AI-driven comments for HDL files; and AutoDoc™, which helps create comprehensive project documentation effortlessly. These features collectively make RapidGPT not only a design tool but also a comprehensive project management suite.

The effectiveness of RapidGPT is made evident in its robust support for various design complexities, providing a scalable solution that meets specific user demands from individual developers to large engineering teams seeking enterprise-grade capabilities.

PrimisAI
AMBA AHB / APB/ AXI, CPU, Ethernet, HDLC, Processor Core Independent
View Details

Digital PreDistortion (DPD) Solution

Digital Predistortion (DPD) is a sophisticated technology crafted to optimize the power efficiency of RF power amplifiers. The flagship product, FlexDPD, presents a complete, adaptable sub-system that can be customized to any ASIC or FPGA/SoC platform. Thanks to its scalability, it is compatible with various device vendors. Designed for high performance, this DPD solution significantly boosts RF efficiencies by counteracting signal distortion, ensuring clear and effective transmission. The core of the DPD solution lies in its adaptability to a broad range of systems including 5G, multi-carrier platforms, and O-RAN frameworks. It's built to handle transmission bandwidths exceeding 1 GHz, making it a versatile and future-proof technology. This capability not only enhances system robustness but also offers a seamless integration pathway for next-generation communication standards. Additionally, Systems4Silicon’s DPD solution is field-tested, ensuring reliability in real-world applications. The solution is particularly beneficial for projects that demand high signal integrity and efficiency, providing a tangible advantage in competitive markets. Its compatibility with both ASIC and FPGA implementations offers flexibility and choice to partners, significantly reducing development time and cost.
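The core idea of predistortion is to apply the inverse of the amplifier's nonlinearity before the signal reaches it, so the two distortions cancel. A minimal memoryless third-order sketch, a textbook illustration rather than the FlexDPD algorithm:

```python
import numpy as np

def pa_model(x):
    """Toy memoryless power-amplifier nonlinearity (gain compression)."""
    return x - 0.2 * x**3

def predistort(x, c3=0.2):
    """Third-order inverse: pre-expand the signal so the PA's
    compression cancels to first order."""
    return x + c3 * x**3

x = np.linspace(-0.5, 0.5, 101)          # normalized drive levels
y_raw = pa_model(x)                      # distorted output, no DPD
y_dpd = pa_model(predistort(x))          # output with predistortion
# Residual error |y_dpd - x| is much smaller than |y_raw - x|.
```

Production DPD systems extend this idea with memory polynomials and continuous feedback adaptation, but the cancellation principle is the same.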

Systems4Silicon
3GPP-5G, CAN-FD, Coder/Decoder, Ethernet, HDLC, MIL-STD-1553, Modulation/Demodulation, Multiprocessor / DSP, PLL, RapidIO
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series is renowned for integrating flexibility and performance scalability within a RISC-V framework. These cores are designed to cater to various application demands, from general-purpose computing to specialized tasks requiring high processing capability. The BK series supports customization that optimizes performance, power, and area based on different application scenarios. One notable feature of the BK Core Series is its ability to be tailored using Codasip Studio, which enables architects to modify microarchitectures and instruction sets efficiently. This customization is supported by a robust set of pre-verified options, ensuring quality and reliability across applications. The BK cores also boast energy efficiency, making them suitable for both power-sensitive and performance-oriented applications. Another advantage of the BK Core Series is its compatibility with a broad range of industry-standard tools and interfaces, which simplifies integration into existing systems and accelerates time to market. The series also emphasizes secure and safe design, aligning with industry standards for functional safety and security, thereby allowing integration into safety-critical environments.

Codasip
AI Processor, Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

NPU

The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

H.264 FPGA Encoder and CODEC Micro Footprint Cores

The H.264 FPGA Encoder and CODEC Micro Footprint Cores are versatile, ITAR-compliant solutions providing high-performance video compression tailored for FPGAs. These H.264 cores leverage industry-leading technology to offer 1080p60 H.264 Baseline support in a compact design, making them among the fastest and smallest FPGA cores available. Available configurations include an encoder, a full CODEC, and an I-Frame-only encoder, and custom pixel depths and resolutions are supported, so the IP adapts to varied video processing needs. Designed with precision, these cores deliver significant latency improvements, achieving 1ms latency at 1080p30. This capability enhances real-time video processing and eases integration with existing electronic systems. Licensing is flexible, with a cost-effective evaluation license to accommodate different project scopes. These strengths support diverse applications in fields like surveillance, broadcasting, and multimedia, and the cores integrate seamlessly into a variety of platforms, including challenging and sophisticated FPGA applications, all while keeping development timelines and budgets in focus.

A2e Technologies
AI Processor, AMBA AHB / APB/ AXI, Arbiter, Audio Controller, DVB, H.264, H.265, HDMI, Multiprocessor / DSP, Other, TICO, USB, Wireless Processor
View Details

RISC-V Core IP

The RISC-V Core IP developed by AheadComputing Inc. stands out in the field of 64-bit application processors. Engineered to deliver exceptional per-core performance, the processor is designed to maximize instructions-per-cycle (IPC) efficiency. AheadComputing's RISC-V Core IP is continuously refined to address the growing demands of high-performance computing applications. Its innovative architecture allows seamless execution of complex algorithms at high speed and efficiency, which is crucial for applications that require fast data processing and real-time computation. By integrating advanced power management techniques, the RISC-V Core IP ensures energy efficiency without sacrificing performance, making it suitable for a wide range of electronic devices. Anticipating future computing needs, the core incorporates state-of-the-art features that support scalability and adaptability, ensuring the IP remains relevant as technology evolves and providing a solid foundation for next-generation computing solutions. Overall, it embodies AheadComputing's commitment to innovation and performance excellence.

AheadComputing Inc.
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

ChipJuice

ChipJuice is a sophisticated tool designed for reverse engineering of integrated circuits (ICs), which plays a vital role in digital forensics and hardware security assessments. The tool allows users to delve into the internal architecture of digital cores, analyzing and extracting detailed layouts such as netlists and HDL files from electronic images of chips. Aimed at providing comprehensive insights, ChipJuice supports a range of applications from security assessments to technological intelligence and digital IP infringement investigations. Engineered for ease of use, ChipJuice is user-friendly and integrates advanced algorithms enabling high-performance processing on standard developer machines. Its design caters to various IC types—microcontrollers, microprocessors, FPGAs, SoCs—regardless of their architecture, size, or materials (like Aluminum or Copper). ChipJuice's versatility allows users to handle both complex and standard ICs, making it a go-to resource for laboratories, researchers, and governmental entities involved in security evaluations. One standout feature of ChipJuice is the "Automated Standard Cell Research," wherein once a standard cell is identified, its occurrences are automatically cataloged and can be quickly reused for studying other chips. This systematizes the reverse engineering workflow, significantly speeding up the analysis by building upon past examinations. ChipJuice epitomizes Texplained's commitment to simplifying the complexities of hardware exploration, delivering precise and actionable insights into the ICs' security framework.

Texplained
All Foundries
All Process Nodes
AMBA AHB / APB/ AXI, NVM Express, Processor Core Independent
View Details

Digital Radio (GDR)

The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.

GIRD Systems, Inc.
3GPP-5G, 3GPP-LTE, 802.11, Coder/Decoder, CPRI, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Independent
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is an advanced microcontroller engineered for highly efficient always-on sensing tasks. Integrating a low-power spiking neural network engine with a RISC-V processor core, the T1 provides a compact solution for rapid sensor data processing. Its design supports next-generation AI applications and signal processing while maintaining a minimal power footprint. The processor excels in scenarios requiring both high power efficiency and fast response. By employing a tightly-looped spiking neural network algorithm, the T1 can execute complex pattern recognition and signal processing tasks directly on-device. This autonomy enables battery-powered devices to operate intelligently and independently of cloud-based services, ideal for portable or remote applications. A notable feature includes its low-power operation, making it suitable for use in portable devices like wearables and IoT-enabled gadgets. Embedded with a RISC-V CPU and 384KB of SRAM, the T1 can interface with a variety of sensors through diverse connectivity options, enhancing its versatility in different environments.
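The spiking approach differs from conventional neural inference in that neurons integrate input over time and emit discrete events only when a threshold is crossed, which is what keeps always-on sensing cheap. A toy leaky integrate-and-fire neuron shows the principle (illustrative only, not Innatera's engine):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron: membrane potential decays
    each step, accumulates input, and emits a spike (then resets)
    whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # leaky integration
        if v >= threshold:          # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# Spikes occur only when enough input has accumulated, so quiet
# sensor streams produce almost no activity (and little switching power).
print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))
```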

Innatera Nanosystems
UMC
28nm
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

Ncore Cache Coherent Interconnect

Ncore Cache Coherent Interconnect is designed to tackle the multifaceted challenges in multicore SoC systems by introducing heterogeneous coherence and efficient cache management. This NoC IP optimizes performance by ensuring high throughput and reliable data transmission across multiple cores, making it indispensable for sophisticated computing tasks. Leveraging advanced cache coherency, Ncore maintains data integrity, crucial for maintaining system stability and efficiency in operations involving heavy computational loads. With its ISO26262 support, it caters to automotive and industrial applications requiring high reliability and safety standards. This interconnect technology pairs well with diverse processor architectures and supports an array of protocols, providing seamless integration into existing systems. It enables a coherent and connected multicore environment, enhancing the performance of high-stakes applications across various industry verticals, from automotive to advanced computing environments.

Arteris
15 Categories

eSi-3200

The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with an expansive, configurable instruction set. Capabilities such as 64-bit multiply-accumulate operations and fixed-point complex multiplications cater effectively to signal-processing tasks like FFTs and FIR filters. It also supports SIMD and single-precision floating-point operations, coupled with efficient power-management features, enhancing its utility across diverse embedded applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores

GSHARK

GSHARK is part of the TAKUMI line of GPU IPs, known for its compact size and its ability to enrich display graphics in embedded systems. Developed for devices such as digital cameras, the IP has an extensive reliability record, with over a hundred million units shipped. Its proprietary architecture delivers high performance with low power usage and minimal CPU load, enabling the high-quality graphics rendering typical of PCs and smartphones.

TAKUMI Corporation
2D / 3D, GPU, Processor Core Independent

NMP-350

The NMP-350 is an endpoint accelerator designed for the lowest power and cost in its class, targeting the automotive, AIoT/sensor, and wearable markets with applications such as driver authentication and health monitoring. It delivers up to 1 TOPS with 1 MB of local memory and is controlled by a 32-bit RISC-V or Arm Cortex-M CPU, supporting multiple use cases for integrating AI capabilities into devices. Its energy-efficient architecture suits Industry 4.0 applications where predictive maintenance is crucial, while its compact design allows integration into systems that need substantial computational power in a minimal footprint. Support for multiple data inputs over AXI4 interfaces facilitates machine automation and intelligent data processing. The NMP-350 reflects AiM Future's focus on efficient AI building blocks for smart devices that must manage resources carefully, combining high performance with low energy consumption for AI-enabled consumer technology.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

AI Inference Platform

Designed for AI-specific needs, SEMIFIVE's AI Inference Platform integrates advanced hardware and software, matched with AI acceleration features, to optimize performance and efficiency under demanding AI workloads. The platform supports scalable AI models and delivers strong processing capability for neural-network inference, with a focus on maximizing throughput for the real-time processing and decision-making required by machine-learning and data-analytics applications. An extensive suite of development tools and libraries accelerates design cycles and improves overall system performance, while state-of-the-art caching mechanisms and optimized data flow let the platform handle large datasets efficiently.

SEMIFIVE
Samsung
5nm, 12nm, 14nm
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor