

Platform Level IP: Comprehensive Semiconductor Solutions

Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions that are fundamental to the design and efficiency of semiconductor devices. This category includes various IP blocks and cores tailored for enhancing system-level performance, whether in consumer electronics, automotive systems, or networking applications. Suitable for both embedded control and advanced data processing tasks, Platform Level IP encompasses versatile components necessary for building sophisticated, multicore systems and other complex designs.

Subcategories within Platform Level IP cover a broad spectrum of integration needs:

1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.

2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.

3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.

Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.


KL730 AI SoC

The KL730 is a third-generation AI chip built around an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This architecture improves computational efficiency across a range of workloads, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also offers enhanced video processing, supporting 4K output at 60 FPS. Built on more than a decade of ISP expertise, the KL730 stands out for its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It targets markets such as intelligent security, autonomous vehicles, video conferencing, and industrial camera systems.

Kneron
TSMC
12nm
16 Categories
View Details

Akida 2nd Generation

The Akida 2nd Generation represents a leap forward in AI processing, improving on its predecessor with greater flexibility and efficiency. This advanced neural processor core is tailored for applications demanding real-time response and ultra-low power consumption, making it ideal for compact and battery-operated devices. It supports 8-, 4-, and 1-bit weights and activations, giving developers the versatility to trade off performance against power consumption for specific application needs. Its architecture is fully digital and silicon-proven, ensuring reliable deployment across diverse hardware setups. With features such as programmable activation functions and support for sophisticated neural network models, the Akida 2nd Generation enables a broad spectrum of AI tasks. From object detection in cameras to sophisticated audio sensing, this iteration of the Akida processor is built to handle the most demanding edge applications while sustaining BrainChip's hallmark efficiency in processing power per watt.
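As a rough illustration of what those bit-width options trade off, the sketch below applies generic symmetric uniform quantization to a few weights. This is an illustrative example only, not BrainChip's actual quantization scheme or tooling: fewer bits shrink weight storage but coarsen the representable values.

```python
# Illustrative sketch (NOT BrainChip's toolchain): symmetric uniform
# quantization of float weights to signed n-bit integers, the kind of
# precision/power trade-off that 8-, 4-, and 1-bit modes expose.

def quantize(weights, bits):
    """Map float weights to signed n-bit integers with one shared scale."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit, 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

w = [0.9, -0.45, 0.1, -0.02]
q8, s8 = quantize(w, 8)   # fine-grained: small rounding error
q4, s4 = quantize(w, 4)   # coarser codes, 4x fewer weight bits
```

Lower bit widths shrink memory and MAC energy per weight at the cost of larger rounding error, which is why a per-application choice of precision matters.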

BrainChip
11 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

Addressing the need for high-performance AI processing, the Metis AIPU PCIe AI Accelerator Card from Axelera AI offers an outstanding blend of speed, efficiency, and power. Designed to boost AI workloads significantly, this PCIe card leverages the prowess of the Metis AI Processing Unit (AIPU) to deliver unparalleled AI inference capabilities for enterprise and industrial applications. The card excels in handling complex AI models and large-scale data processing tasks, significantly enhancing the efficiency of computational tasks within various edge settings. The Metis AIPU embedded within the PCIe card delivers high TOPS (tera operations per second), allowing it to execute multiple AI tasks concurrently with remarkable speed and precision. This makes it exceptionally suitable for applications such as video analytics, autonomous driving simulations, and real-time data processing in industrial environments. The card's robust architecture reduces the load on general-purpose processors by offloading AI tasks, resulting in optimized system performance and lower energy consumption. With easy integration capabilities supported by the state-of-the-art Voyager SDK, the Metis AIPU PCIe AI Accelerator Card ensures seamless deployment of AI models across various platforms. The SDK facilitates efficient model optimization and tuning, supporting a wide range of neural network models and enhancing overall system capabilities. Enterprises leveraging this card can see significant improvements in their AI processing efficiency, leading to faster, smarter, and more efficient operations across different sectors.

Axelera AI
13 Categories
View Details

Akida IP

The Akida IP is an advanced processor core designed to mimic the efficient processing characteristics of the human brain. Inspired by neuromorphic engineering principles, it delivers real-time AI performance while maintaining a low power profile. The architecture of the Akida IP is sophisticated, allowing seamless integration into existing systems without the need for continuous external computation. Equipped with capabilities for processing vision, audio, and sensor data, the Akida IP stands out by being able to handle complex AI tasks directly on the device. This is done by utilizing a flexible mesh of nodes that efficiently distribute cognitive computing tasks, enabling a scalable approach to machine learning applications. Each node supports hundreds of MAC operations and can be configured to adapt to various computational requirements, making it a versatile choice for AI-centric endeavors. Moreover, the Akida IP is particularly beneficial for edge applications where low latency, high efficiency, and security are paramount. With capabilities for event-based processing and on-chip learning, it enhances response times and reduces data transfer needs, thereby bolstering device autonomy. This solidifies its position as a leading solution for embedding AI into devices across multiple industries.

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Yitian 710 Processor

The Yitian 710 Processor is a landmark server chip released by T-Head Semiconductor, representing a breakthrough in high-performance computing. This chip is designed with cutting-edge architecture that utilizes advanced Armv9 structure, accommodating a range of demanding applications. Engineered by T-Head's dedicated research team, Yitian 710 integrates high efficiency and bandwidth properties into a unique 2.5D package, housing two dies and a staggering 60 billion transistors. The Yitian 710 encompasses 128 Armv9 high-performance cores, each equipped with 64KB L1 instruction cache, 64KB L1 data cache, and 1MB L2 cache, further amplified by a collective on-chip system cache of 128MB. These configurations enable optimal data processing and retrieval speeds, making it suitable for data-intensive tasks. Furthermore, the memory subsystem stands out with its 8-channel DDR5 support, reaching peak bandwidths of 281GB/s. In terms of connectivity, the Yitian 710's I/O system includes 96 PCIe 5.0 channels with a bidirectional theoretical total bandwidth of 768GB/s, streamlining high-speed data transfer critical for server operations. Its architecture is not only poised to meet the current demands of data centers and cloud services but also adaptable for future advancements in AI inference and multimedia processing tasks.
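The quoted bandwidth figures can be reproduced with back-of-envelope arithmetic. The DDR5-4400 data rate below is an assumption (the listing does not state the DIMM speed grade); the PCIe figure uses the raw 32 GT/s signalling rate without encoding overhead:

```python
# Back-of-envelope check of the Yitian 710 bandwidth figures quoted above.
# Assumptions: DDR5-4400 channels (64-bit, 8 bytes/transfer) and raw
# PCIe 5.0 signalling of 32 GT/s per lane, 8 transfer bits per byte.

ddr5_mts = 4400                 # mega-transfers/s per DDR5-4400 channel
bytes_per_transfer = 8          # 64-bit channel width
channels = 8
ddr_gbps = ddr5_mts * bytes_per_transfer * channels / 1000
# -> 281.6 GB/s, matching the ~281 GB/s peak quoted above

pcie_gts = 32                   # PCIe 5.0 raw rate, GT/s per lane
lanes = 96
directions = 2                  # bidirectional total
pcie_gbps = pcie_gts / 8 * lanes * directions
# -> 768.0 GB/s, matching the quoted bidirectional theoretical total
```

Both results line up with the listed numbers, which suggests the page quotes raw theoretical peaks rather than sustained, encoding-adjusted throughput.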

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

Universal Chiplet Interconnect Express (UCIe)

The Universal Chiplet Interconnect Express (UCIe) by EXTOLL is a cutting-edge interconnect framework designed to revolutionize chip-to-chip communication within heterogeneous systems. This product exemplifies the shift towards chiplet architecture, a modular approach enabling enhanced performance and flexibility in semiconductor designs. UCIe offers an open and customizable platform that supports a wide range of technology nodes, particularly excelling in the 12nm to 28nm range. This adaptability ensures it can meet the diverse needs of modern semiconductor applications, providing a bridge that enhances integration across various chiplet components. Such capabilities make it ideal for applications requiring high bandwidth and low latency. The design of UCIe focuses on minimizing power consumption while maximizing data throughput, aligning with EXTOLL’s objective of delivering eco-efficient technology. It empowers manufacturers to forge robust connections between chiplets, allowing optimized performance and scalability in data-intensive environments like data centers and advanced consumer electronics.

EXTOLL GmbH
GLOBALFOUNDRIES
22nm, 28nm, 28nm SLP
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Network on Chip, Processor Core Dependent, Processor Core Independent, USB, V-by-One, VESA
View Details

Veyron V2 CPU

The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with the RISC-V RVA23 specification, with features like high instructions per clock (IPC) and a power-efficient architecture. Comprising multiple core clusters, this CPU delivers superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

aiWare

aiWare is a high-performance NPU designed to meet the rigorous demands of automotive AI inference, providing a scalable solution for ADAS and AD applications. This hardware IP core is engineered to handle a wide array of AI workloads, including advanced neural network structures such as CNNs, LSTMs, and RNNs. By combining cutting-edge efficiency and scalability, aiWare delivers industry-leading neural processing power tailored to automotive-grade specifications.

The NPU's architecture emphasizes hardware determinism and carries ISO 26262 ASIL-B certification, ensuring that aiWare meets stringent automotive safety standards. Its efficient design supports up to 256 effective TOPS per core and can scale to thousands of TOPS through multicore integration while keeping power consumption low. aiWare's system-level optimizations reduce reliance on external memory by leveraging local memory for data management, boosting performance efficiency across varied input data sizes and complexities.

aiWare's development toolkit, aiWare Studio, is distinguished by its ability to optimize neural network execution without manual intervention by software engineers. This lets AI engineers focus on refining NNs for production, significantly accelerating iteration cycles. Coupled with aiMotive's aiDrive software suite, aiWare provides an integrated environment for creating highly efficient automotive AI applications, ensuring seamless integration and rapid deployment across multiple vehicle platforms.

aiMotive
12 Categories
View Details

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

Chimera GPNPU

Chimera GPNPU is engineered to revolutionize AI/ML computational capabilities on single-core architectures. It efficiently handles matrix, vector, and scalar code, unifying AI inference and traditional C++ processing under one roof. By alleviating the need for partitioning AI workloads between different processors, it streamlines software development and drastically speeds up AI model adaptation and integration. Ideal for SoC designs, the Chimera GPNPU champions an architecture that is both versatile and powerful, handling complex parallel workloads with a single unified binary. This configuration not only boosts software developer productivity but also ensures an enduring flexibility capable of accommodating novel AI model architectures on the horizon. The architectural fabric of the Chimera GPNPU seamlessly blends the high matrix performance of NPUs with C++ programmability found in traditional processors. This core is delivered in a synthesizable RTL form, with scalability options ranging from a single-core to multi-cluster designs to meet various performance benchmarks. As a testament to its adaptability, the Chimera GPNPU can run any AI/ML graph from numerous high-demand application areas such as automotive, mobile, and home digital appliances. Developers seeking optimization in inference performance will find the Chimera GPNPU a pivotal tool in maintaining cutting-edge product offerings. With its focus on simplifying hardware design, optimizing power consumption, and enhancing programmer ease, this processor ensures a sustainable and efficient path for future AI/ML developments.

Quadric
TSMC
1000nm
17 Categories
View Details

SAKURA-II AI Accelerator

SAKURA-II AI Accelerator represents EdgeCortix's latest advancement in edge AI processing, offering unparalleled energy efficiency and extensive capabilities for generative AI tasks. This accelerator is designed to manage demanding AI models, including Llama 2, Stable Diffusion, DETR, and ViT, within a slim power envelope of about 8W. With capabilities extending to multi-billion parameter models, SAKURA-II meets a wide range of edge applications in vision, language, and audio. The SAKURA-II's architecture maximizes AI compute efficiency, delivering more than twice the utilization of competitive solutions. It boasts remarkable DRAM bandwidth, essential for large language and vision models, while maintaining low power consumption. The hardware supports real-time Batch=1 processing, demonstrating its edge in performance even in constrained environments, making it a choice solution for diverse industrial AI applications. With 60 TOPS (INT8) and 30 TFLOPS (BF16) in performance metrics, this accelerator is built to exceed expectations in demanding conditions. It features robust memory configurations supporting up to 32GB of DRAM, ideal for processing intricate AI workloads. By leveraging sparse computing techniques, SAKURA-II optimizes its memory and bandwidth usage effectively, ensuring reliable performance across all deployed applications.
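The headline numbers above imply a straightforward efficiency figure. The arithmetic below simply divides the vendor's peak ratings by the stated ~8W envelope, so it should be read as an upper bound rather than measured workload performance:

```python
# Efficiency implied by the SAKURA-II figures quoted above.
# These are vendor peak ratings; real workloads will land lower.

int8_tops = 60       # peak INT8 throughput, TOPS
bf16_tflops = 30     # peak BF16 throughput, TFLOPS
power_w = 8          # "slim power envelope of about 8W"

int8_tops_per_watt = int8_tops / power_w      # 7.5 INT8 TOPS/W peak
bf16_tflops_per_watt = bf16_tflops / power_w  # 3.75 BF16 TFLOPS/W peak
```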

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Jotunn8 AI Accelerator

The Jotunn8 AI Accelerator is engineered for lightning-fast AI inference at unprecedented scale. It is designed to meet the demands of modern data centers by providing exceptional throughput, low latency, and optimized energy efficiency. The Jotunn8 outperforms traditional setups by allowing large-scale deployment of trained models, ensuring robust performance while reducing operational costs. Its capabilities make it ideal for real-time applications such as chatbots, fraud detection, and advanced search algorithms. What sets the Jotunn8 apart is its adaptability to various AI algorithms, including reasoning and generative models, alongside agentic AI frameworks. This seamless integration achieves near-theoretical performance, allowing the chip to excel in applications that require logical rigor and creative processing. With a focus on minimizing carbon footprint, the Jotunn8 is meticulously designed to enhance both performance per watt and overall sustainability. The Jotunn8 supports massive memory handling with HBM capability, promoting incredibly high data throughput that aligns with the needs of demanding AI processes. Its architecture is purpose-built for speed, efficiency, and the ability to scale with technological advances, providing a solid foundation for AI infrastructure looking to keep pace with evolving computational demands.

VSORA
TSMC
20nm
AI Processor, DSP Core, Processor Core Dependent
View Details

xcore.ai

xcore.ai is XMOS Semiconductor's innovative programmable chip designed for advanced AI, DSP, and I/O applications. It enables developers to create highly efficient systems without the complexity typical of multi-chip solutions, offering capabilities that integrate AI inference, DSP tasks, and I/O control seamlessly. The chip architecture boasts parallel processing and ultra-low latency, making it ideal for demanding tasks in robotics, automotive systems, and smart consumer devices. It provides the toolset to deploy complex algorithms efficiently while maintaining robust real-time performance. With xcore.ai, system designers can leverage a flexible platform that supports the rapid prototyping and development of intelligent applications. Its performance allows for seamless execution of tasks such as voice recognition and processing, industrial automation, and sensor data integration. The adaptable nature of xcore.ai makes it a versatile solution for managing various inputs and outputs simultaneously, while maintaining high levels of precision and reliability. In automotive and industrial applications, xcore.ai supports real-time control and monitoring tasks, contributing to smarter, safer systems. For consumer electronics, it enhances user experience by enabling responsive voice interfaces and high-definition audio processing. The chip's architecture reduces the need for external components, thus simplifying design and reducing overall costs, paving the way for innovative solutions where technology meets efficiency and scalability.

XMOS Semiconductor
24 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module by Axelera AI is a compact and powerful solution designed for AI inference at the edge. This module delivers remarkable performance, comparable to that of a PCIe card, all while fitting into the streamlined M.2 form factor. Ideal for demanding AI applications that require substantial computational power, the module enhances processing efficiency while minimizing power usage. With its robust infrastructure, it is geared toward integrating into applications that demand high throughput and low latency, making it a perfect fit for intelligent vision applications and real-time analytics. The AIPU, or Artificial Intelligence Processing Unit, at the core of this module provides industry-leading performance by offloading AI workloads from traditional CPU or GPU setups, allowing for dedicated AI computation that is faster and more energy-efficient. This not only boosts the capabilities of the host systems but also drastically reduces the overall energy consumption. The module supports a wide range of AI applications, from facial recognition and security systems to advanced industrial automation processes. By utilizing Axelera AI’s innovative software solutions, such as the Voyager SDK, the Metis AIPU M.2 Accelerator Module enables seamless integration and full utilization of AI models and applications. The SDK offers enhancements like compatibility with various industry tools and frameworks, thus ensuring a smooth deployment process and quick time-to-market for advanced AI systems. This product represents Axelera AI’s commitment to revolutionizing edge computing with streamlined, effective AI acceleration solutions.

Axelera AI
14 Categories
View Details

Talamo SDK

The Talamo SDK from Innatera serves as a comprehensive software development toolkit designed to maximize the capabilities of its Spiking Neural Processor (SNP) lineup. Tailored for developers and engineers, Talamo offers in-depth access to configure and deploy neuromorphic processing solutions effectively. The SDK supports the development of applications that utilize Spiking Neural Networks (SNNs) for diverse sensory processing tasks. Talamo provides a user-friendly interface that simplifies the integration of neural processing capabilities into a wide range of devices and systems. By leveraging the toolkit, developers can customize applications for specific use cases such as real-time audio analysis, touch-free interactions, and biometric data processing. The SDK comes with pre-built models and a model zoo, which helps in rapidly prototyping and deploying sensor-driven solutions. This SDK stands out by offering enhanced tools for developing low-latency, energy-efficient applications. By harnessing the temporal processing strength of SNNs, Talamo allows for the robust development of applications that can operate under strict power and performance constraints, enabling the creation of intelligent systems that can autonomously process data in real-time.

Innatera Nanosystems
AI Processor, Content Protection Software, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. 
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
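The "tokens generated per unit of memory bandwidth" bottleneck mentioned above follows from a simple first-order model: during autoregressive decoding, every generated token must stream the full set of weights from memory, so the memory interface caps the token rate. The concrete numbers in the sketch (model size, bit width, LPDDR4 bandwidth) are illustrative assumptions, not RaiderChip specifications:

```python
# First-order model of memory-bandwidth-bound LLM decoding:
#   tokens/s <= memory_bandwidth / model_size_in_bytes
# All concrete numbers below are illustrative assumptions.

def peak_tokens_per_s(params_billion, bits_per_weight, bandwidth_gb_s):
    """Upper bound on decode rate when weight streaming is the bottleneck."""
    model_gb = params_billion * bits_per_weight / 8   # weight bytes, in GB
    return bandwidth_gb_s / model_gb

# e.g. a hypothetical 3B-parameter model at 4-bit quantization on
# ~8 GB/s of LPDDR4 bandwidth:
rate = peak_tokens_per_s(3, 4, 8)    # ~5.3 tokens/s upper bound
```

This also shows why 4-bit quantization matters here: halving or quartering bytes per weight raises the achievable token rate by the same factor on the same memory.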

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K to enhance AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud dependencies. Its design combines superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters such as model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
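The quoted 75% memory reduction is consistent with moving from 16-bit weights to roughly 4-bit quantized weights. The check below idealizes Q4_K as exactly 4 bits per weight (real Q4_K-style formats carry a small per-block scale overhead, closer to ~4.5 bits/weight), so treat it as an approximation:

```python
# Sanity check on the quoted "75% reduction in memory footprint":
# 16-bit -> ~4-bit weights shrinks weight storage by a factor of 4.
# Q4_K is idealized here as exactly 4 bits/weight (real formats add
# small per-block scale/metadata overhead).

fp16_bits = 16
q4_bits = 4
reduction = 1 - q4_bits / fp16_bits      # 0.75 -> 75% smaller
```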

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Ceva-SensPro2 - Vision AI DSP

The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
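The MAC counts above map to the quoted TOPS figures through the usual convention that one MAC counts as two operations (multiply plus accumulate) per cycle. The ~1 GHz clock used below is an assumption for illustration, not a Ceva specification:

```python
# Relating MAC count to throughput:  TOPS = MACs * 2 ops * clock (GHz) / 1000
# The 1.0 GHz clock is an assumed value for illustration only.

def tops(macs, clock_ghz):
    """Peak throughput in TOPS for a given MAC array and clock."""
    return macs * 2 * clock_ghz / 1000

sp1000 = tops(1024, 1.0)   # ~2.05 TOPS, in line with the quoted 2 TOPS
sp100 = tops(128, 0.8)     # ~0.2 TOPS, in line with the Ceva-SP100 figure
```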

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator by T-Head Semiconductor is a powerful AI acceleration chip designed to enhance machine learning tasks. It excels in providing the computational power necessary for intensive AI workloads, effectively reducing processing times for large-scale data frameworks. This makes it an ideal choice for organizations aiming to infuse AI capabilities into their operations with maximum efficiency. Built with an emphasis on speed and performance, the Hanguang 800 is optimized for applications requiring vast amounts of data crunching. It supports a diverse array of AI models and workloads, ensuring flexibility and robust performance across varying use cases. This accelerates the deployment of AI applications in sectors such as autonomous driving, natural language processing, and real-time data analysis. The Hanguang 800's architecture is complemented by proprietary algorithms that enhance processing throughput, competing against traditional processors by providing significant gains in efficiency. This accelerator is indicative of T-Head's commitment to advancing AI technologies and highlights their capability to cater to specialized industry needs through innovative semiconductor developments.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, which is the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SoC is optimized for both high- and low-light conditions and is well suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

Time-Triggered Ethernet

Time-Triggered Ethernet (TTEthernet) is a pioneering development by TTTech that offers deterministic Ethernet capabilities for safety-critical applications. This technology supports real-time communication between network nodes while maintaining the standard Ethernet infrastructure. TTEthernet enables reliable data delivery, with built-in mechanisms for fault tolerance that are vital for domains such as aviation, industrial automation, and space missions. One of the key aspects of TTEthernet is its ability to provide triple-redundant communication, ensuring network reliability even in the case of multiple failures. Licensed for significant projects such as NASA's Orion spacecraft, TTEthernet demonstrates its efficacy in environments that require dual fault-tolerance. As part of the ECSS engineering standard, the protocol supports human spaceflight standards and integrates seamlessly into space-based and terrestrial networks. The application of TTEthernet spans multiple domains due to its robust nature and compliance with industry standards. It is particularly esteemed in markets that emphasize precise time synchronization and high availability. By using TTEthernet, companies can secure communications in networks without compromising on the speed and flexibility inherent to Ethernet-based systems.
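The dual fault-tolerance claim rests on sending each frame over three redundant channels. The sketch below is a generic illustration of that idea, not TTTech's actual redundancy-management algorithm: lost or CRC-failed copies are modeled as `None`, and the receiver delivers the most common surviving payload, so correct delivery survives up to two channel failures.

```python
from collections import Counter

# Illustrative triple-redundancy receiver (not TTTech's implementation).
# Each frame arrives as up to three copies; corrupt/lost copies are None.
def deliver(copies):
    valid = [c for c in copies if c is not None]
    if not valid:
        return None                          # all three channels failed
    payload, _ = Counter(valid).most_common(1)[0]
    return payload

print(deliver([b"frame", b"frame", None]))   # one channel down -> b'frame'
print(deliver([b"frame", None, None]))       # dual fault -> b'frame'
```

Note the prerequisite: faults must be detectable (e.g. by CRC) so that a single surviving valid copy is trustworthy; undetected corruption would require voting among multiple valid copies.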

TTTech Computertechnik AG
Cell / Packet, Error Correction/Detection, Ethernet, FlexRay, IEEE1588, LIN, MIL-STD-1553, MIPI, Processor Core Independent, Safe Ethernet
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is a highly efficient design that operates with remarkably low power consumption, requiring only 10mW at 1GHz. This core exemplifies Micro Magic’s commitment to power efficiency, as it integrates advanced techniques to maintain high performance even at lower voltages. The core is engineered for applications where energy conservation is crucial, making it ideal for modern, power-sensitive devices. The architectural design of this RISC-V core utilizes innovative technology to ensure high-speed processing capabilities while minimizing power draw. This balance is achieved through precise engineering and the use of state-of-the-art design methodologies that reduce operational overhead without compromising performance. As a result, this core is particularly suited for applications in portable electronics, IoT devices, and other areas where low-power operation is a necessity. Micro Magic's experience in developing high-speed, low-power solutions is evident in this core's design, ensuring that it delivers reliable performance under various operational conditions. The Ultra-Low-Power 64-Bit RISC-V Core represents a significant advancement in processor efficiency, providing a robust solution for designers looking to enhance their products' capabilities while maintaining a low power footprint.

Micro Magic, Inc.
TSMC
28nm
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator represents the pinnacle of Next Silicon's innovative approach to computational resources. This state-of-the-art accelerator leverages the Intelligent Compute Architecture for software-defined adaptability, enabling it to autonomously tailor its real-time operations across various HPC and AI workloads. By optimizing performance using insights gained through real-time telemetry, Maverick-2 ensures superior computational efficiency and reduced power consumption, making it an ideal choice for demanding computational environments.

Maverick-2 brings transformative performance enhancements to large-scale scientific research and data-heavy industries by dispensing with the need for codebase modifications or specialized software stacks. It supports a wide range of familiar development tools and frameworks, such as C/C++, FORTRAN, and Kokkos, simplifying the integration process for developers and reducing time-to-discovery significantly.

Engineered with advanced features like high bandwidth memory (HBM3E) and built on TSMC's 5nm process technology, this accelerator provides not only unmatched adaptability but also an energy-efficient, eco-friendly computing solution. Whether embedded in single-die PCIe cards or dual-die OCP Accelerator Modules, the Maverick-2 is positioned as a future-proof solution capable of evolving with technological advancements in AI and HPC.

Next Silicon Ltd.
TSMC
5nm
11 Categories
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

NuLink Die-to-Die PHY for Standard Packaging

The NuLink Die-to-Die PHY for Standard Packaging by Eliyan is engineered to facilitate superior die-to-die interconnectivity on standard organic/laminate package substrates. This innovative PHY IP supports key industry standards such as UCIe and BoW, and includes proprietary technologies like UMI and SBD. The NuLink PHY delivers leading performance and power efficiency, comparable to advanced packaging technologies, but at a fraction of the cost. It features configurations with up to 64 data lanes, supporting a data rate per lane of up to 64Gbps, making it ideal for applications demanding high bandwidth and low latency. The implementation enhances system design while reducing the necessary area and thermal load, which significantly eases integration into existing hardware ecosystems.
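The quoted lane configuration implies the aggregate bandwidth directly. A quick check (raw line rate only; any protocol or coding overhead is not specified in the listing):

```python
# Aggregate raw bandwidth from the quoted maximums:
# up to 64 lanes, up to 64 Gbps per lane.
lanes, gbps_per_lane = 64, 64
total_gbps = lanes * gbps_per_lane
print(total_gbps, "Gbps =", total_gbps / 1000, "Tbps")  # 4096 Gbps ~ 4.1 Tbps
```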

Eliyan
TSMC
3nm, 10nm, 16nm
AMBA AHB / APB/ AXI, CXL, D2D, DDR, MIPI, Network on Chip, Processor Core Dependent, V-by-One
View Details

WiseEye2 AI Solution

The WiseEye2 AI Solution by Himax is a highly efficient processor tailored for AI applications, combining an ultra-low power CMOS image sensor with the HX6538 microcontroller. Designed for battery-powered devices requiring continuous operation, it significantly lowers power consumption while boosting performance. This processor leverages the Arm Cortex M55 CPU and Ethos U55 NPU to enhance inference speed and energy efficiency substantially, allowing execution of complex models with precision. Perfectly suited for applications like user-presence detection in laptops, WiseEye2 heightens security through sophisticated facial recognition and adaptive privacy settings. It automatically wakes or locks the device based on user proximity, thereby conserving power and safeguarding sensitive information. WiseEye2’s power management and neural processing units ensure constant device readiness, augmenting AI capabilities ranging from occupancy detection to smart security. This reflects Himax's dedication to providing versatile AI solutions that anticipate and respond to user needs seamlessly. Equipped with sensor fusion and sophisticated security engines, WiseEye2 maintains privacy and operational efficiency, marking a significant step forward in the integration of AI into consumer electronics, especially where energy conservation is pivotal.

Himax Technologies, Inc.
AI Processor, Embedded Security Modules, Processor Core Independent, Vision Processor
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

RISC-V CPU IP N Class

The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

StarFive's Tianqiao-70 is engineered to deliver superior performance in a power-efficient package. This 64-bit RISC-V CPU core is designed for commercial-grade applications, where consistent and reliable performance is mandatory, yet energy consumption must be minimized. The core's architecture integrates low power design principles without compromising its ability to execute complex instructions efficiently. It is particularly suited for mobile applications, desktop clients, and intelligent gadgets requiring sustained battery life. The Tianqiao-70's design focuses on extending the operational life of devices by ensuring minimal power draw during both active and idle states. It supports an array of advanced features that cater to the latest computational demands. As an ideal solution for devices that combine portability with intensive processing demands, the Tianqiao-70 offers an optimal balance of performance and energy conservation. Its capability to adapt to various operating environments makes it a versatile option for developers looking to maximize efficiency and functionality.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Ncore Cache Coherent Interconnect

The Ncore Cache Coherent Interconnect is designed to tackle the complexities inherent in multicore SoC environments. By maintaining coherence across heterogeneous cores, it enables efficient data sharing and optimizes cache use. This in turn enhances the throughput of the system, ensuring reliable performance with reduced latency. The architecture supports a wide range of cores, making it a versatile option for many applications in high-performance computing. With Ncore, designers can address the challenges of maintaining data consistency across different processor cores without incurring significant power or performance penalties. The interconnect's capability to handle multicore scenarios means it is perfectly suited for advanced computing solutions where data integrity and speed are paramount. Additionally, its configuration options allow customization to meet specific project needs, maintaining flexibility in design applications. Its efficiency in multi-threading environments, coupled with robust data handling, marks it as a crucial component in designing state-of-the-art SoCs. By supporting high data throughput, Ncore keeps pace with the demands of modern processing needs, ensuring seamless integration and operation across a variety of sectors.
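To make concrete what "maintaining coherence across heterogeneous cores" involves, here is a textbook directory-based invalidation sketch. This is a generic illustration of the problem a coherent interconnect solves, not Arteris's Ncore protocol: when one core writes a cache line, other cores holding copies must be invalidated so no core reads stale data.

```python
# Generic directory-based coherence sketch (illustrative only,
# not the Ncore protocol).
class Directory:
    def __init__(self):
        self.sharers = {}            # line address -> set of core ids

    def read(self, core, addr):
        # Reading a line registers the core as a sharer.
        self.sharers.setdefault(addr, set()).add(core)

    def write(self, core, addr):
        # Writing invalidates all other sharers; writer becomes sole owner.
        stale = self.sharers.get(addr, set()) - {core}
        self.sharers[addr] = {core}
        return stale                 # cores whose copies must be invalidated

d = Directory()
d.read(0, 0x1000)
d.read(1, 0x1000)
print(d.write(1, 0x1000))            # -> {0}: core 0's copy is invalidated
```

The hardware challenge Ncore targets is doing this bookkeeping across many heterogeneous cores without the invalidation traffic becoming a latency or power bottleneck.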

Arteris
15 Categories
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series offers versatile, low-power, and high-performance solutions tailored for various embedded applications. These cores ensure efficiency and reliability by incorporating RISC-V compliance and are verified through advanced methodologies. Known for their adaptability, these cores can cater to applications needing robust performance while maintaining stringent power and area requirements.

Codasip
AI Processor, Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Digital PreDistortion (DPD) Solution

Systems4Silicon's DPD solution enhances power efficiency in RF power amplifiers by using advanced predistortion techniques. This technology is part of a comprehensive subsystem known as FlexDPD, which is adaptive and scalable, independent of any particular hardware platform. It supports multiple radio standards, including 5G and O-RAN, and is ready for deployment on either ASICs or FPGA platforms. Engineered for field performance, it offers a perfect balance of reliability and adaptability across numerous applications, meeting broad technical requirements.

Systems4Silicon
3GPP-5G, 3GPP-LTE, CAN-FD, Coder/Decoder, Ethernet, HDLC, MIL-STD-1553, Modulation/Demodulation, Multiprocessor / DSP, PLL, RapidIO
View Details

RapidGPT - AI-Driven EDA Tool

RapidGPT by PrimisAI is a revolutionary AI-based tool that transforms the landscape of Electronic Design Automation (EDA). Using generative AI, RapidGPT facilitates a seamless transition from traditional design methods to a more dynamic and intuitive process. This tool is characterized by its ability to interpret natural language inputs, enabling hardware designers to communicate design intentions effortlessly and effectively. Through RapidGPT, engineers gain access to a powerful code assistant that simplifies the conversion of ideas into fully realized Verilog code. By integrating third-party semiconductor IP seamlessly, the tool extends beyond basic design needs to offer a comprehensive framework for accelerating development times. RapidGPT further distinguishes itself by guiding users through the entire design lifecycle, from initial concepts to complete bitstream and GDSII stages, thus redefining productivity in hardware design. With RapidGPT, PrimisAI supports a wide spectrum of interactions and is trusted by numerous companies, underscoring its reliability and impact in the field. The tool's ability to enhance productivity and reduce time-to-market makes it a preferred choice for engineers aiming to combine efficiency with innovation in their projects. Easy to integrate into existing workflows, RapidGPT sets new standards in EDA, empowering users with an unparalleled interface and experience.

PrimisAI
AMBA AHB / APB/ AXI, CPU, Ethernet, HDLC, Processor Core Independent
View Details

NMP-750

The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details

AI Inference Platform

The SEMIFIVE AI Inference Platform is engineered to facilitate rapid development and deployment of AI inference solutions within custom silicon environments. Utilizing seamless integration with silicon-proven IPs, this platform delivers a high-performance framework optimized for AI and machine learning tasks. By providing a strategic advantage in cost reduction and efficiency, the platform decreases time-to-market challenges through pre-configured model layers and extensive IP libraries tailored for AI applications. It also offers enhanced scalability through its support for various computational and network configurations, making it adaptable to both high-volume and specialized market segments. This platform supports complex AI workloads on scalable AI engines, ensuring optimized performance in data-intensive operations. The integration of advanced processors and memory solutions within the platform further enhances processing efficiency, positioning it as an ideal solution for enterprises focusing on breakthroughs in AI technologies.

SEMIFIVE
AI Processor, Cell / Packet, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

NPU

The Neural Processing Unit (NPU) from OPENEDGES is geared towards advancing AI applications, providing a dedicated processing unit for neural network computations. Engineered to alleviate the computational load on CPUs and GPUs, this NPU optimizes AI workloads, enhancing deep learning tasks and inference processes. Capable of accelerating neural network inference, the NPU supports various machine learning frameworks and is compatible with industry-standard AI models. Its architecture focuses on delivering high throughput for deep learning operations while maintaining low power consumption, making it suitable for a range of applications from mobile devices to data centers. This NPU integrates seamlessly with existing AI frameworks, supporting scalability and flexibility in design. Its dedicated resource management ensures swift data processing and execution, translating into superior AI performance and efficiency across a multitude of application scenarios.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

Digital Radio (GDR)

The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.

GIRD Systems, Inc.
3GPP-5G, 3GPP-LTE, 802.11, Coder/Decoder, CPRI, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Independent
View Details

H.264 FPGA Encoder and CODEC Micro Footprint Cores

A2e's H.264 FPGA Encoder and CODEC Micro Footprint Cores provide a customizable solution targeting FPGAs. Known for its small size and rapid execution, the core supports 1080p60 H.264 Baseline with a single core, making it one of the industry's swiftest and most efficient FPGA offerings. The core is compliant with ITAR, offering options to adjust pixel depths and resolutions according to specific needs. Its high-performance capability includes a latency of just 1ms at 1080p30, which is crucial for applications demanding rapid processing speeds. This licensable core is ideal for developers needing robust video compression capabilities in a compact form factor. The H.264 cores can be finely tuned to meet unique project specifications, enabling developers to implement varied pixel resolutions and depths, further enhancing the core's versatility for different application requirements. With a licensable evaluation option available, prospective users can explore the core's functionalities before opting for full integration. This flexibility makes it suitable for projects demanding customizable compression solutions without the burden of full-scale initial commitment. Furthermore, A2e provides comprehensive integration and custom design services, allowing these cores to be seamlessly absorbed into existing systems or developed into new solutions. This support ensures minimized risk and accelerated project timelines, allowing developers to focus on innovation and efficiency in their video-centric applications.
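To put the quoted 1 ms latency at 1080p30 in context: at 30 fps a full frame period is about 33.3 ms, so the encoder's delay is a small fraction of one frame time, consistent with sub-frame (slice- or line-based) encoding rather than whole-frame buffering. The interpretation of the figure as sub-frame encoding is our inference, not a claim from the listing.

```python
# Compare the quoted encoder latency against one frame period at 30 fps.
fps = 30
frame_time_ms = 1000 / fps           # ~33.3 ms per frame
latency_ms = 1.0                     # quoted figure at 1080p30

print(f"frame period: {frame_time_ms:.1f} ms")
print(f"latency as a fraction of one frame: {latency_ms / frame_time_ms:.1%}")
```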

A2e Technologies
AI Processor, AMBA AHB / APB/ AXI, Arbiter, Audio Controller, DVB, GPU, H.264, H.265, HDMI, Multiprocessor / DSP, Other, TICO, USB, Wireless Processor
View Details

RISC-V Core IP

The RISC-V Core from AheadComputing is a state-of-the-art application processor, designed to drive next-generation computing solutions. Built on an open-source architecture, this processor core emphasizes enhanced instruction per cycle (IPC) performance, setting the stage for highly efficient computing capabilities. As part of the company's commitment to delivering world-leading performance, the RISC-V Core provides a reliable backbone for advanced computing tasks across various applications. This core's design harnesses the power of 64-bit architecture, providing significant improvements in data handling and processing speed. The focus on 64-bit processing facilitates better computational tasks, ensuring robust performance in data-intensive applications. With AheadComputing's emphasis on superior compute solutions, the RISC-V Core exemplifies their commitment to power, performance, and flexibility. As a versatile computing component, the RISC-V Core suits a range of applications from consumer electronics to enterprise-level computing. It is designed to integrate seamlessly into diverse systems, meeting complex computational demands with finesse. This core stands out in the industry, underpinned by AheadComputing's dedication to pushing the boundaries of what a processor can achieve.

AheadComputing Inc.
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

Spiking Neural Processor T1 - Ultra-Low-Power Microcontroller for Sensing

The Spiking Neural Processor T1 is an ultra-low power processor developed specifically for enhancing sensor capabilities at the edge. By leveraging advanced Spiking Neural Networks (SNNs), the T1 efficiently deciphers patterns in sensor data with minimal latency and power usage. This processor is especially beneficial in real-time applications, such as audio recognition, where it can discern speech from audio inputs with sub-millisecond latency and within a strict power budget, typically under 1mW. Its mixed-signal neuromorphic architecture ensures that pattern recognition functions can be continually executed without draining resources. In terms of processing capabilities, the T1 resembles a dedicated engine for sensor tasks, offering functionalities like signal conditioning, filtering, and classification independent of the main application processor. This means tasks traditionally handled by general-purpose processors can now be offloaded to the T1, conserving energy and enhancing performance in always-on scenarios. Such functionality is crucial for pervasive sensing tasks across a range of industries. With an architecture that balances power and performance impeccably, the T1 is prepared for diverse applications spanning from audio interfaces to the rapid deployment of radar-based touch-free interactions. Moreover, it supports presence detection systems, activity recognition in wearables, and on-device ECG processing, showcasing its versatility across various technological landscapes.
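The basic building block of the spiking networks the T1 runs is the leaky integrate-and-fire neuron. The sketch below is the standard textbook model, shown for illustration only; it is not Innatera's mixed-signal architecture. The membrane potential leaks over time and the neuron emits a spike, then resets, whenever accumulated input crosses a threshold, which is what makes SNNs event-driven and power-frugal on sparse sensor data.

```python
# Generic leaky integrate-and-fire (LIF) neuron -- illustrative model,
# not Innatera's implementation.
def lif(inputs, leak=0.9, threshold=1.0):
    """Return a spike train (0/1 per timestep) for a sequence of inputs."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x             # membrane potential leaks, then integrates
        if v >= threshold:
            spikes.append(1)
            v = 0.0                  # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # -> [0, 0, 1, 0, 1]
```

Because the neuron only produces output events when its threshold is crossed, downstream computation is driven by spikes rather than by a fixed sample clock, which is the property that enables sub-milliwatt always-on sensing.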

Innatera Nanosystems
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

Tyr AI Processor Family

The Tyr AI Processor Family revolutionizes edge AI by executing data processing directly where data is generated, instead of relying on cloud solutions. This empowers industries with real-time decision-making capabilities by bringing intelligence closer to devices, machines, and sensors. The Tyr processors integrate cutting-edge AI capabilities into compact, efficient designs, achieving data-center-class performance at far lower power. The edge processors in the Tyr line offer reduced latency and enhanced privacy, making them suitable for autonomous vehicles, smart factories, and other real-time applications demanding immediate, secure insights. They feature robust local data processing, minimizing reliance on cloud services, which lowers costs and improves compliance with privacy standards. With a focus on multi-modal input handling and sustainability, the Tyr processors provide balanced compute power, memory utilization, and intelligent features suited to highly dynamic and bandwidth-restricted environments. Built on RISC-V cores, they support versatile AI model deployment across edge devices, adapting readily to the latest technological advances and market demands.

VSORA
TSMC
20nm
AI Processor, DSP Core, Processor Core Dependent, Processor Cores
View Details

Codasip L-Series DSP Core

The Codasip L-Series DSP Core is designed to handle demanding signal processing tasks, offering an exemplary balance of computational power and energy efficiency. This DSP core is particularly suitable for applications involving audio processing and sensor data fusion, where performance is paramount. Codasip enriches this product with their extensive experience in RISC-V architectures, ensuring robust and optimized performance.

Codasip
AI Processor, Audio Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Cores
View Details

Time-Triggered Protocol

The Time-Triggered Protocol (TTP) is a cornerstone of TTTech's offerings, designed for high-reliability environments such as aviation. TTP ensures precise synchronization and communication between systems, leveraging a time-controlled approach to data exchange. This makes it particularly suitable for safety-critical applications where timing and order of operations are paramount. The protocol minimizes risks associated with communication errors, thus enhancing operational reliability and determinism. TTP is deployed in various platforms, providing the foundation for time-deterministic operations necessary for complex systems. Whether in avionics or in industries requiring strict adherence to real-time data processing, TTP adapts to the specific demands of each application. By using this protocol, industries can achieve dependable execution of interconnected systems, promoting increased safety and reliability. In particular, TTP's influence extends into integrated circuits where certifiable IP cores are essential, ensuring compliance with stringent industry standards such as RTCA DO-254. Ongoing developments in TTP also include tools and methodologies that facilitate verification and qualification, ensuring that all system components communicate effectively and as intended across all operating conditions.
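The time-controlled exchange described above can be sketched as a static TDMA round: each node transmits only in its pre-assigned slot, so bus access is fully deterministic and known at design time. The following is a simplified illustration of that scheduling idea, not the certified TTP implementation; the slot length and node names are assumptions.

```python
# Illustrative sketch of the time-triggered idea behind protocols like
# TTP (not the certified TTP implementation): each node owns one fixed
# slot per TDMA round, so the transmission schedule is deterministic.
SLOT_US = 250  # assumed slot length in microseconds

def build_round(nodes):
    """Assign each node one slot per round: (node, start_us, end_us)."""
    return [(n, i * SLOT_US, (i + 1) * SLOT_US) for i, n in enumerate(nodes)]

def owner_at(schedule, t_us):
    """Return the node allowed to transmit at global time t_us."""
    round_len = len(schedule) * SLOT_US
    t = t_us % round_len          # rounds repeat cyclically
    return schedule[t // SLOT_US][0]

sched = build_round(["ECU-A", "ECU-B", "ECU-C"])
print(owner_at(sched, 600))  # → ECU-C (600 µs falls in the third slot)
```

Because every node can compute the same schedule from the same global time base, no arbitration traffic is needed on the bus; this is what gives time-triggered systems their determinism and makes them amenable to certification.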

TTTech Computertechnik AG
AMBA AHB / APB/ AXI, CAN, CAN XL, CAN-FD, Cell / Packet, Error Correction/Detection, Ethernet, FlexRay, LIN, MIPI, Processor Core Dependent, Safe Ethernet, Temperature Sensor
View Details

eSi-3200

The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details