
CPU Semiconductor IP for Advanced Processing Solutions

The CPU, or Central Processing Unit, is the central component of a computer system, acting as the brain that executes instructions and processes data. This category of CPU semiconductor IP offers a diverse selection of processor IP that enables the development of highly efficient and powerful processors for a wide array of applications, from consumer electronics to industrial systems. Semiconductor IPs in this category are designed to meet the needs of modern computing, offering adaptable and scalable solutions for different technology nodes and design requirements.

These CPU semiconductor IPs provide the core functionalities required for the development of processors capable of handling complex computations and multitasking operations. Whether you're developing systems for mobile devices, personal computers, or embedded systems, our IPs offer optimized solutions that cater to the varying demands of power consumption, processing speed, and operational efficiency. This ensures that you can deliver cutting-edge products that meet the market's evolving demands.

Within the CPU semiconductor IP category, you'll find a range of products including RISC (Reduced Instruction Set Computer) processors, multi-core processors, and customizable processor cores among others. Each product is designed to integrate seamlessly with other system components, offering enhanced compatibility and flexibility in system design. These IP solutions are developed with the latest architectural advancements and technological improvements to support next-generation computing needs.

Selecting the right CPU semiconductor IP is crucial for achieving your target performance and efficiency. Our offerings are meticulously curated to provide comprehensive solutions that are robust, reliable, and capable of supporting diverse computing applications. Explore our CPU semiconductor IP portfolio to find the components that will empower your innovative designs and propel your products to the forefront of technology.

All semiconductor IP: 201 IPs available

Akida Neural Processor IP

The Akida Neural Processor IP by BrainChip is a versatile AI solution that melds neural processing capabilities with a scalable digital architecture, delivering high performance with minimal power consumption. At its core, this processor is engineered using principles from neuromorphic computing to address the demands of AI workloads with precision and speed. By exploiting sparsity in data, weights, and activations, the Akida Neural Processor minimizes unnecessary computation, making it especially suitable for AI applications that demand real-time processing with low latency. It provides a flexible solution for implementing neural networks of varying complexity and is adaptable to a wide array of use cases, from audio processing to visual recognition. The IP core’s configurable framework supports the execution of complex neural models on edge devices, effectively running sophisticated neural algorithms such as Convolutional Neural Networks (CNNs) without the need for a companion processor. This standalone operation reduces dependency on external CPUs, driving down power consumption and liberating devices from constant network connections.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Vision Processor
View Details

Akida 2nd Generation

The Akida 2nd Generation processor further advances BrainChip's AI capabilities with enhanced programmability and efficiency for complex neural network operations. Building on the principles of its predecessor, this generation is optimized for 8-, 4-, and 1-bit weights and activations, offering more robust activation functions and support for advanced temporal and spatial neural networks. A standout feature of the Akida 2nd Generation is its enhanced on-chip learning capability, which enables one-shot and few-shot learning and significantly boosts the system's ability to adapt to new tasks without extensive reprogramming. Its architecture supports more sophisticated machine learning models such as Convolutional Neural Networks (CNNs) and Spatio-Temporal Event-Based Neural Networks, optimizing them for energy-efficient application at the edge. The processor's design reduces the necessity for host CPU involvement, thus minimizing communication overhead and conserving energy. This makes it particularly suitable for real-time data processing applications where quick and efficient data handling is crucial. With event-based hardware that accelerates processing, the Akida 2nd Generation is designed for scalability, providing flexible solutions across a wide range of AI-driven tasks.

BrainChip
AI Processor, CPU, Digital Video Broadcast, GPU, Input/Output Controller, IoT Processor, Multiprocessor / DSP, Network on Chip, Security Protocol Accelerators, Vision Processor
View Details

KL730 AI SoC

The KL730 AI SoC is an advanced powerhouse, utilizing third-generation NPU architecture to deliver up to 8 TOPS of efficient computing. This architecture excels in both CNN and transformer applications, optimizing DDR bandwidth usage. Its robust video processing features include 4K 60FPS video output, with exceptional performance in noise reduction, dynamic range, and low-light scenarios. With versatile application support ranging from intelligent security to autonomous driving, the KL730 stands out by delivering exceptional processing capabilities.

Kneron
TSMC
28nm
2D / 3D, A/D Converter, AI Processor, Amplifier, Audio Interfaces, Camera Interface, Clock Generator, CPU, CSC, GPU, Image Conversion, JPEG, USB, VGA, Vision Processor
View Details

Metis AIPU PCIe AI Accelerator Card

The Metis AIPU PCIe AI Accelerator Card by Axelera AI is designed for developers seeking top-tier performance in vision applications. Powered by a single Metis AIPU, this PCIe card delivers up to 214 TOPS, handling demanding AI tasks with ease. It is well-suited for high-performance AI inference, featuring two configurations: 4GB and 16GB memory options. The card benefits from the Voyager SDK, which enhances the developer experience by simplifying the deployment of applications and extending the card's capabilities. This accelerator PCIe card is engineered to run multiple AI models and support numerous parallel neural networks, enabling significant processing power for advanced AI applications. The Metis PCIe card performs at an industry-leading level, achieving up to 3,200 frames per second for ResNet-50 tasks and offering exceptional scalability. This makes it an excellent choice for applications demanding high throughput and low latency, particularly in computer vision fields.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

Yitian 710 Processor

The Yitian 710 Processor is T-Head's flagship ARM-based server chip and represents the pinnacle of the company's technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance. The processor is built using 2.5D packaging, integrating two dies and boasting a substantial 60 billion transistors. The core of the Yitian 710 consists of 128 high-performance Armv9 CPU cores, each accompanied by advanced memory configurations that streamline instruction and data caching. Each core integrates 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, supplemented by a robust 128MB system-level cache on the chip. To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281GB/s. Its I/O subsystem is equally formidable, featuring 96 PCIe 5.0 lanes capable of achieving bidirectional bandwidth of up to 768GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI operations.
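As a rough sanity check on the 281GB/s figure quoted above, peak DDR5 bandwidth can be estimated from channel count, transfer rate, and channel width. The transfer rate and 64-bit channel width in the sketch below are illustrative assumptions, not published Yitian 710 specifications.

```python
# Rough estimate of peak multi-channel DDR5 bandwidth (illustrative assumptions).
channels = 8               # 8-channel DDR5 system, per the description
transfer_rate_mts = 4400   # assumed DDR5-4400 (megatransfers per second)
channel_width_bytes = 8    # assumed 64-bit (8-byte) data path per channel

peak_gb_s = channels * transfer_rate_mts * channel_width_bytes / 1000
print(f"Estimated peak bandwidth: {peak_gb_s:.1f} GB/s")  # ~281.6 GB/s
```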

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

xcore.ai

xcore.ai stands as a cutting-edge processor that brings sophisticated intelligence, connectivity, and computation capabilities to a broad range of smart products. Designed to deliver optimal performance for applications in consumer electronics, industrial control, and automotive markets, it efficiently handles complex processing tasks with low power consumption and rapid execution speeds. This processor facilitates seamless integration of AI capabilities, enhancing voice processing, audio interfacing, and real-time analytics functions. It supports various interfacing options to accommodate different peripheral and sensor connections, thus providing flexibility in design and deployment across multiple platforms. Moreover, the xcore.ai ensures robust performance in environments requiring precise control and high data throughput. Its compatibility with a wide array of software tools and libraries enables developers to swiftly create and iterate applications, reducing the time-to-market and optimizing the design workflows.

XMOS Semiconductor
21 Categories
View Details

Veyron V2 CPU

Veyron V2 represents the next generation of Ventana's high-performance RISC-V CPUs. It significantly enhances compute capabilities over its predecessor and is designed specifically for data center, automotive, and edge deployments. The CPU complies with the RISC-V RVA23 profile, making it a powerful alternative to the latest ARM and x86 counterparts in similar domains. Focusing on seamless integration, the Veyron V2 offers clean, portable RTL with a standardized interface, optimizing its use in custom SoCs with high core counts. With a robust 512-bit vector unit, it efficiently supports workloads requiring both INT8 and BF16 precision, making it highly suitable for AI and ML applications. The Veyron V2 is adept at handling cloud-native and virtualized workloads thanks to its full architectural virtualization support. Its architectural advancements deliver significant performance-per-watt improvements, and advanced cache and virtualization features ensure a secure and reliable computing environment. The Veyron V2 is available as both standalone IP and a complete hardware platform, facilitating diverse integration pathways for customers aiming to harness Ventana’s innovative RISC-V solutions.

Ventana Micro Systems
TSMC
16nm, 28nm
AI Processor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

The Tianqiao-70 is a low-power RISC-V CPU designed for commercial-grade applications where power efficiency is paramount. Suitable for mobile and desktop applications, artificial intelligence, as well as various other technology sectors, this processor excels in maintaining high performance while minimizing power consumption. Its design offers great adaptability to meet the requirements of different operational environments.

StarFive Technology
AI Processor, CPU, Multiprocessor / DSP, Processor Cores
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating the need for external CPUs or an internet connection. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, GenAI v1's efficient memory usage reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, supports adoption across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins the company's strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
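The emphasis on tokens per unit of memory bandwidth reflects a general property of LLM decoding: each generated token requires streaming the model weights from memory, so bandwidth sets a hard ceiling on token rate. The sketch below illustrates that ceiling with hypothetical numbers (a 3-billion-parameter model at 4-bit weights and a single LPDDR4 channel); none of these values are RaiderChip specifications or measurements.

```python
# Upper bound on decode throughput for a memory-bandwidth-limited LLM accelerator.
# All numbers are illustrative assumptions, not GenAI v1 figures.
params = 3e9                  # hypothetical 3B-parameter model
bits_per_weight = 4           # 4-bit quantization, as described
weight_bytes = params * bits_per_weight / 8   # ~1.5 GB of weights

memory_bandwidth_gb_s = 17.0  # assumed LPDDR4-4266 x32 channel (~17 GB/s)

# Each generated token must read (roughly) the full weight set once.
max_tokens_per_s = memory_bandwidth_gb_s * 1e9 / weight_bytes
print(f"Bandwidth-bound ceiling: ~{max_tokens_per_s:.1f} tokens/s")  # ~11 tokens/s
```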

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Chimera GPNPU

Chimera GPNPU provides a groundbreaking architecture, melding the efficiency of neural processing units with the flexibility and programmability of processors. It supports a full range of AI and machine learning workloads autonomously, eliminating the need for supplementary CPUs or GPUs. The processor is future-ready, equipped to handle new and emerging AI models with ease thanks to its C++ programmability. What makes Chimera stand out is its ability to manage a diverse array of workloads within a single processor framework that combines matrix, vector, and scalar operations. This harmonization ensures maximum performance for applications across market sectors such as automotive, mobile devices, and network edge systems. These capabilities streamline the AI development process and facilitate high-performance inference, crucial for modern device ecosystems. The architecture is fully synthesizable, allowing it to be implemented in any process technology, from current to advanced nodes, and tuned to desired performance targets. The adoption of a hybrid Von Neumann and 2D SIMD matrix design supports a broad suite of DSP operations, providing a comprehensive toolkit for complex graph and AI-related processing.

Quadric
15 Categories
View Details

Jotunn8 AI Accelerator

The Jotunn8 is heralded as the world's most efficient AI inference chip, designed to maximize AI model deployment with lightning-fast speeds and scalability. This powerhouse is crafted to operate efficiently within modern data centers, balancing critical factors such as high throughput, low latency, and optimized power use, all while maintaining a sustainable infrastructure. With the Jotunn8, AI investments reach their full potential through high-performance inference solutions that significantly reduce operational costs while committing to environmental sustainability. Its ultra-low latency is crucial for real-time applications such as chatbots and fraud detection systems. Not only does it deliver the high throughput needed for demanding services like recommendation engines, but it also proves cost-efficient, lowering the cost per inference that is crucial for businesses operating at large scale. Additionally, the Jotunn8 offers strong performance-per-watt efficiency, a major factor given that power is a significant operational expense and a driver of the carbon footprint. By implementing the Jotunn8, businesses can ensure their AI models deliver maximum impact while staying competitive in the growing real-time AI services market. This chip lays a new foundation for scalable AI, enabling organizations to optimize their infrastructures without compromising on performance.

VSORA
13 Categories
View Details

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches, making it well suited to designs that use slower on-chip memories such as eFlash. The core not only supports an MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

Akida IP

The Akida IP platform is a revolutionary neural processor inspired by the workings of the human brain to achieve unparalleled cognitive capabilities and energy efficiency. This self-contained neural processor utilizes a scalable architecture that can be configured from 1 to 128 nodes, each capable of supporting 128 MAC operations. It allows for the execution of complex neural network operations with minimal power and latency, making it ideal for edge AI applications in vision, audio, and sensor fusion. The Akida IP supports multiple data formats including 4-, 2-, and 1-bit weights and activations, enabling the seamless execution of various neural networks across multiple layers. Its convolutional and fully-connected neural processors can perform multi-layered executions independently of a host CPU, enhancing flexibility in diverse applications. Additionally, its event-based hardware acceleration significantly reduces computation and communication loads, preserving host CPU resources and optimizing overall system efficiency. Silicon-proven, the Akida platform provides a cost-effective and secure solution due to its on-chip learning capabilities, supporting one-shot and few-shot learning methods. By maintaining sensitive data on-chip, the system offers improved security and privacy. Its extensive configurability ensures adaptability for post-silicon applications, making Akida an intelligent and scalable choice for developers. It is especially suited for implementations that require real-time processing and sophisticated AI functionalities at the edge.
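Since the description quantifies the fabric as 1 to 128 nodes with 128 MACs each, a back-of-the-envelope peak throughput can be derived from the node count and clock rate. The clock frequency below, and the assumption that each MAC completes one operation per cycle, are illustrative only and are not BrainChip figures.

```python
# Back-of-the-envelope peak throughput for a configurable MAC-array NPU.
# Clock rate and one-MAC-per-cycle behaviour are assumptions for illustration.
nodes = 128                # maximum configuration, per the description
macs_per_node = 128
clock_hz = 300e6           # hypothetical 300 MHz clock

peak_macs_per_s = nodes * macs_per_node * clock_hz
peak_ops_per_s = 2 * peak_macs_per_s   # counting multiply + add as two ops
print(f"Peak: {peak_ops_per_s / 1e12:.1f} TOPS (dense, before sparsity savings)")
```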

BrainChip
AI Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Vision Processor
View Details

KL520 AI SoC

The KL520 AI SoC introduces edge AI with efficiency in size and power, setting a market standard for such technologies. Featuring dual Arm Cortex-M4 CPUs, it serves as a versatile AI co-processor, supporting an array of smart devices. It’s designed for compatibility with various sensor technologies, enabling powerful 3D sensing capabilities.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Receiver/Transmitter, Vision Processor
View Details

RV12 RISC-V Processor

The RV12 RISC-V Processor is a highly adaptable single-core CPU that adheres to the RV32I and RV64I specifications of the RISC-V instruction set, aimed at the embedded systems market. This processor supports a variety of standard and custom configurations, making it suitable for diverse application needs. Its inherent flexibility allows it to be implemented efficiently in both FPGA and ASIC environments, ensuring that it meets the performance and resource constraints typical of embedded applications. Designed with an emphasis on configurability, the RV12 Processor can be tailored to include only the necessary components, optimizing both area and power consumption. It comes with comprehensive documentation and verification testbenches, providing a complete solution for developers looking to integrate a RISC-V CPU into their design. Whether for educational purposes or commercial deployment, the RV12 stands out for its robust design and adaptability, making it an ideal choice for modern embedded system solutions.

Roa Logic BV
AI Processor, CPU, Cryptography Software Library, IoT Processor, Microcontroller, Processor Cores
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architectures and advanced data processing, making it particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it employs a symmetric multiprocessing framework that integrates up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator from EdgeCortix is a sophisticated solution designed to propel generative AI to new frontiers with impressive energy efficiency. This advanced accelerator provides high performance with great flexibility for a wide variety of applications, leveraging EdgeCortix's dedicated Dynamic Neural Accelerator architecture. SAKURA-II is optimized for real-time, low-latency AI inference at the edge, tackling demanding generative AI tasks efficiently in constrained environments. The accelerator delivers up to 60 TOPS (tera operations per second) of INT8 performance, allowing it to run large, complex models such as Llama 2 and Stable Diffusion effectively. It supports applications across vision, language, audio, and beyond, utilizing robust DRAM capabilities and enhanced data throughput. This allows it to outperform other solutions while maintaining a low power consumption profile, typically around 8 watts. Designed for integration into small silicon areas, SAKURA-II caters to the needs of highly efficient AI models, providing dynamic capabilities to meet the stringent requirements of next-gen applications. The SAKURA-II AI Accelerator thus stands out as a top choice for developers seeking seamless deployment of cutting-edge AI applications at the edge, underscoring EdgeCortix's leadership in energy-efficient AI processing.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

KL630 AI SoC

With cutting-edge NPU architecture, the KL630 AI SoC pushes the boundaries of performance efficiency and low energy consumption. It stands as a pioneering solution supporting Int4 precision and transformer neural networks, offering noteworthy performance for diverse applications. Anchored by an ARM Cortex A5 CPU, it boasts compute efficiency and energy savings, making it ideal for various edge devices.

Kneron
TSMC
28nm
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, USB, VGA, Vision Processor
View Details

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

aiWare

aiWare is engineered as a high-performance neural processing unit tailored for automotive AI applications, delivering exceptional power efficiency and computational capability across a broad spectrum of neural network tasks. Its design centers on achieving the utmost efficiency in AI inference, providing flexibility and scalability for various levels of autonomous driving, from basic L2 assistance systems to complex L4 self-driving operations. The aiWare architecture exemplifies leading-edge NPU efficiency, reaching up to 98% across diverse neural network workloads such as CNNs and RNNs, making it a premier choice for AI tasks in the automotive sector. It offers an industry-leading 1024 TOPS capability, making it suitable for the multi-sensor and multi-camera setups required by advanced autonomous vehicle systems. The NPU's hardware determinism helps it meet ISO 26262 ASIL-B requirements, ensuring it satisfies the rigorous safety specifications essential in automotive applications. Incorporating an easy-to-integrate RTL design and a comprehensive SDK, aiWare simplifies system integration and accelerates development timelines for automotive manufacturers. Its highly optimized dataflow and minimal external memory traffic significantly enhance system power economy, providing crucial benefits in reducing operational costs for deployed automotive AI solutions. In short, aiWare gives OEMs the capabilities needed to handle modern automotive workloads while imposing minimal system constraints.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, FlexRay, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

eSi-1600

The eSi-1600 is a 16-bit CPU core designed for cost-sensitive and power-efficient applications. It delivers performance comparable to that of 32-bit CPUs while maintaining a system cost comparable to 8-bit processors. This IP is particularly well-suited for control applications with limited memory resources, and it demonstrates excellent compatibility with mature mixed-signal technologies.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, Microcontroller, Processor Cores
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is engineered to propel artificial intelligence tasks to new heights with its cutting-edge architecture. This accelerator enhances machine learning tasks by speeding up neural network processing, making it a key player in the burgeoning AI sector. Its innovative design is optimized for low latency and high throughput, facilitating real-time AI application performance and enabling advanced machine learning model deployments. Harnessing an extensive array of computing cores, the Hanguang 800 provides parallel processing capabilities that significantly reduce inference times for large-scale AI models. Its application scope covers diverse sectors, including autonomous driving, smart city infrastructure, and intelligent robotics, underscoring its versatility and adaptability. Built with energy efficiency in mind, this AI accelerator prioritizes minimal power consumption, making it ideal for data centers looking to maximize computational power without overextending their energy footprint. By integrating seamlessly with existing frameworks, the Hanguang 800 offers a ready-to-deploy solution for enterprises seeking to enhance their AI-driven services and operations.

T-Head Semiconductor
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores are all built on RISC-V technology. The processor family is divided into several series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring distinct architectural advances. For instance, the Compact Series specializes in compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors can be customized through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

SiFive Intelligence X280

The SiFive Intelligence X280 is designed to address the burgeoning needs of AI and machine learning at the edge. Emphasizing a software-first methodology, this family of processors is crafted to offer scalable vector and matrix compute capabilities. By integrating broad vector processing features and high-bandwidth interfaces, it can adapt to the ever-evolving landscape of AI workloads, providing both high performance and efficient scalability. Built on the RISC-V foundation, the X280 features comprehensive vector compute engines that cater to modern AI demands, making it a powerful tool for edge computing applications where space and energy efficiency are critical. Its versatility allows it to seamlessly manage diverse AI tasks, from low-latency inferences to complex machine learning models, thanks to its support for RISC-V Vector Extensions (RVV). The X280 family is particularly robust for applications requiring rapid AI deployment and adaptation like IoT devices and smart infrastructure. Through extensive compatibility with machine learning frameworks such as TensorFlow Lite, it ensures ease of deployment, enhanced by its focus on energy-efficient inference solutions and support for legacy systems, making it a comprehensive solution for future AI technologies.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

KL530 AI SoC

The KL530 is Kneron's state-of-the-art AI chip, featuring a unique NPU architecture and early market support for INT4 precision and transformer networks. Designed for higher efficiency, it offers lower power consumption while maintaining robust performance. The chip supports various AI models and configurations, making it adaptable across AIoT and other technology landscapes.

Kneron
TSMC
28nm
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, Peripheral Controller, Vision Processor
View Details

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB of local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interfaces ensure robust interconnection and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

SCR9 Processor Core

Poised to deliver exceptional performance in advanced applications, the SCR9 processor core epitomizes modern processing standards with its 12-stage dual-issue out-of-order pipeline and hypervisor support. Its inclusion of a vector processing unit (VPU) positions it as essential for high-performance computing tasks that require extensive parallel data processing. Suitable for high-demand environments such as enterprise data systems, AI workloads, and computationally intensive mobile applications, the SCR9 core is tailored to address high-throughput demands while maintaining reliability and accuracy. With support for symmetric multiprocessing (SMP) of up to 16 cores, this core stands as a configurable powerhouse, enabling developers to maximize processing efficiency and throughput. The SCR9's capabilities are bolstered by Syntacore’s dedication to supporting developers with comprehensive tools and documentation, ensuring efficient design and implementation. Through its blend of sophisticated features and support infrastructure, the SCR9 processor core paves the way for advancing technological innovation across numerous fields, establishing itself as a robust solution in the rapidly evolving landscape of high-performance computing.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

eSi-3200

The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic is engineered to deliver exceptional energy efficiency while maintaining high performance. This core is specifically designed to operate at 1GHz while consuming a mere 10mW of power, making it ideal for today's power-conscious applications. Utilizing advanced design techniques, this processor achieves its high performance at lower voltages, ensuring reduced power consumption without sacrificing speed. Constructed with a focus on optimizing processing capabilities, this RISC-V core is built to cater to demanding environments where energy efficiency is critical. Whether used as a standalone processor or integrated into larger systems, its low power requirements and robust performance make it highly versatile. This core also supports scalable processing with its architecture, accommodating a broad spectrum of applications from IoT devices to performance-intensive computing tasks, aligning with industry standards for modern electronic products.
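The 1GHz-at-10mW operating point quoted above can be restated as energy per clock cycle, a convenient way to compare ultra-low-power cores; the division below simply recasts the figures already given.

```python
# Energy per cycle implied by the quoted operating point (1 GHz at 10 mW).
power_w = 10e-3       # 10 mW
frequency_hz = 1e9    # 1 GHz

energy_per_cycle_j = power_w / frequency_hz
print(f"~{energy_per_cycle_j * 1e12:.0f} pJ per clock cycle")  # 10 pJ/cycle
```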

Micro Magic, Inc.
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, without reliance on external networks or cloud services. Its design pairs strong computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters such as model scale, inference speed, and power consumption to exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
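The quoted 75% memory reduction is consistent with moving from 16-bit to roughly 4-bit weights; the arithmetic below shows the relationship, assuming a 16-bit floating-point baseline (the baseline is an assumption, since it is not stated above).

```python
# Memory-footprint reduction from weight quantization, assuming a 16-bit baseline.
baseline_bits = 16     # assumed FP16/BF16 weights before quantization
quantized_bits = 4     # ~4-bit formats such as Q4_K, as described

reduction = 1 - quantized_bits / baseline_bits
print(f"Footprint reduction: {reduction:.0%}")  # 75%
```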

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Y180

The Y180 is a CPU-focused IP core that efficiently replicates the functionality of the Zilog Z180 CPU, comprising around 8k gates. This implementation showcases Systemyde’s commitment to detail, ensuring a consistent and reliable performance within a minimized footprint. With a core dedicated to sustaining traditional CPU operations, the Y180 is notably small yet potent, suiting designs requiring streamlined CPU cores. It remains resilient in environments that demand traditional computing interfaces, providing a dependable platform for basic process tasks. Its silicon-proven design attests to its dependability and functionality across various implementations. As a go-to standard, the Y180 supports standard CPU applications seamlessly, acting as an accessible solution for Zilog’s architectural compatibility.

Systemyde International Corp.
CPU, IoT Processor, Microcontroller, Processor Cores
View Details

C100 IoT Control and Interconnection Chip

The Chipchain C100 is a pioneering solution in IoT applications, providing a highly integrated single-chip design that focuses on low power consumption without compromising performance. Its design incorporates a powerful 32-bit RISC-V CPU which can reach speeds up to 1.5GHz. This processing power ensures efficient and capable computing for diverse IoT applications. This chip stands out with its comprehensive integrated features including embedded RAM and ROM, making it efficient in both processing and computing tasks. Additionally, the C100 comes with integrated Wi-Fi and multiple interfaces for transmission, broadening its application potential significantly. Other notable features of the C100 include an ADC, LDO, and a temperature sensor, enabling it to handle a wide array of IoT tasks more seamlessly. With considerations for security and stability, the Chipchain C100 facilitates easier and faster development in IoT applications, proving itself as a versatile component in smart devices like security systems, home automation products, and wearable technology.

Shenzhen Chipchain Technologies Co., Ltd.
TSMC
7nm LPP, 16nm, 28nm
20 Categories
View Details

SiFive Essential

The SiFive Essential family stands out as a versatile solution, delivering a wide range of pre-defined embedded CPU cores suitable for a variety of industrial applications. Whether you're designing for minimal area and power consumption or maximum feature capabilities, Essential offers configurations that adapt to diverse industrial needs. From compact microcontrollers to rich OS-compatible CPUs, Essential supports 32-bit and 64-bit pipelines, ensuring an optimal balance between performance and efficiency. This flexibility is enhanced by advanced tracing and debugging features, robust SoC security through WorldGuard support, and a broad array of interface options for seamless SoC integration. These comprehensive support mechanisms assure developers of maximum adaptability and accelerated integration within their designs, whether in IoT devices or control plane applications. SiFive Essential’s power efficiency and adaptability make it particularly suited for deploying customizable solutions in embedded applications. Whether the requirement is for intense computational capacity or low-power, battery-efficient tasks, Essential cores help accelerate time-to-market while offering robust performance in compact form factors, emphasizing scalable and secure solutions for a variety of applications.

SiFive, Inc.
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

SCR1 Microcontroller Core

The SCR1 microcontroller core is a compact, open-source offering designed for deeply embedded applications. It operates with a 4-stage in-order pipeline, ensuring efficient processing in space-constrained environments. Notably, it supports configurations that cater to various industrial needs, making it an ideal solution for projects requiring small form factors without compromising on power efficiency. This core is particularly effective for Internet of Things (IoT) devices and sensor hubs, where low power consumption and high reliability are critical. Its silicon-proven design further attests to its robustness, guaranteeing seamless integration into diverse operational settings. Delivering exceptional performance within constrained resources, the SCR1 stands as a versatile option for industries looking to leverage RISC-V's capabilities in microcontroller applications. Key features of the SCR1 include its ability to function within deeply embedded networks, addressing the needs of sectors like industrial automation and home automation. The in-order pipeline architecture of the SCR1 microcontroller provides predictable performance and straightforward debugging, ideal for critical applications requiring stability and efficiency. Its capability to pair with a variety of software tools enhances usability, offering designers a flexible platform for intricate embedded systems. Moreover, the SCR1 microcontroller benefits from community-driven development, ensuring continuous improvements and updates. This collaborative advancement fosters innovation, facilitating the deployment of advanced features while maintaining low energy requirements. As technology evolution demands more efficient solutions, the SCR1 continues to adapt, contributing significantly to the expanding RISC-V ecosystem. Increasingly indispensable, it offers a sustainable, cost-effective solution for manufacturers aiming to implement cutting-edge technology in their products.

Syntacore
Building Blocks, CPU, Microcontroller, Processor Cores
View Details

SCR7 Application Core

Pushing the envelope of application processing, the SCR7 application core integrates a 12-stage dual-issue out-of-order pipeline for high-performance computing tasks. It is equipped with advanced cache coherency and a robust memory subsystem, ideal for modern applications demanding exceptional compute power and scalability. This application core serves large-scale computing environments, addressing needs within sectors such as data centers, enterprise solutions, and AI-enhanced applications. Supporting symmetric multiprocessing (SMP) with configurations of up to eight cores, the SCR7 ensures smooth and simultaneous execution of complex tasks, significantly improving throughput and system efficiency. Syntacore complements this architecture with a rich toolkit that facilitates development across diverse platforms, enhancing its adaptability to specific commercial needs. The SCR7 embodies the future of application processing with its ability to integrate seamlessly into existing infrastructures while delivering strong results rooted in efficient architectural design and robust support.

Syntacore
AI Processor, CPU, IoT Processor, Microcontroller, Processor Core Independent, Processor Cores
View Details

RAIV General Purpose GPU

The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

RISC-V CPU IP N Class

Nuclei's RISC-V CPU IP N Class is engineered with a 32-bit architecture specifically targeting microcontroller and AIoT applications. Tailored for high performance, it offers exceptional configurability, allowing integration into diverse system environments by selecting only the necessary features. The N Class series is part of Nuclei's robust coding framework, built with Verilog for enhanced readability and optimized for debugging and performance-power-area (PPA) considerations. This IP ensures scalability through support for RISC-V extensions including B, K, P, and V, as well as the flexibility of user-defined instruction extensions. Nuclei addresses comprehensive security through information security solutions like TEE and physical security packages. Meanwhile, its safety functionalities align with standards such as ASIL-B and ASIL-D, vital for applications demanding high safety protocols. The N Class is further supported by a wide range of ecosystem resources, facilitating seamless integration into various industrial applications. In summary, the N Class IP not only provides powerful performance capabilities but is also structured to accommodate a broad range of applications while adhering to necessary safety and security frameworks. Its user-friendly customization makes it particularly suitable for applications in rapidly evolving fields such as AIoT.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

KL720 AI SoC

Optimized for performance per watt, the KL720 AI SoC is a formidable choice for high-end applications demanding power efficiency. It supports extensive real-world use cases such as smart TVs and AI glasses, featuring a powerful architecture designed for seamless 4K video and complex AI processes, including facial recognition and gaming interfaces.

Kneron
TSMC
28nm
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

eSi-3264

The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 represents a significant leap in neuromorphic microcontroller technology, blending ultra-low power consumption with advanced spiking neural network capabilities. This microcontroller stands as a complete solution for processing sensor data with unprecedented efficiency and speed, bringing intelligence directly to the sensor. Incorporating a nimble RISC-V processor core alongside its spiking neural network engine, the T1 is engineered for seamless integration into next-generation AI applications. Within a tightly constrained power envelope, it excels at signal processing tasks that are crucial for battery-operated, latency-sensitive devices. The T1's architecture allows for fast, sub-1mW pattern recognition, enabling real-time sensory data processing akin to the human brain's capabilities. This microcontroller facilitates complex event-driven processing with remarkable efficiency, reducing the burden on application processors by offloading sensor data processing tasks. It is an enabler of groundbreaking developments in wearables, ambient intelligence, and smart devices, particularly in scenarios where power and response time are critical constraints. With flexible interface support, including QSPI, I2C, UART, and more, the T1 is designed for easy integration into existing systems. Its compact package size further enhances its suitability for embedded applications, while its comprehensive Evaluation Kit (EVK) supports developers in accelerating application development. The EVK provides extensive performance profiling tools, enabling the exploration of the T1's multifaceted processing capabilities. Overall, the T1 stands at the forefront of bringing brain-inspired intelligence to the edge, setting a new standard for smart sensor technology.

Innatera Nanosystems
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

Veyron V1 CPU

The Veyron V1 is a high-performance RISC-V CPU aimed at data centers and similar applications that require robust computing power. It integrates with a variety of chiplets and IP cores, making it a versatile choice for companies building customized solutions, and it is designed to compete with x86 and ARM counterparts while porting cleanly across process nodes. The V1 benefits from Ventana's work on RISC-V, which centers on an extensible architecture that facilitates domain-specific acceleration. With applicability stretching from hyperscale computing to the edge, the Veyron V1 supports extensive instruction sets for high-throughput operation and offers leading-edge chiplet interfaces, opening opportunities for rapid productization and cost-effective deployment. Ventana's emphasis on open standards keeps the Veyron V1 adaptable for businesses pursuing bespoke solutions. Its compatibility with system IP and its availability in multiple platform formats, including chiplets, let businesses leverage the latest advances in RISC-V, while the ecosystem around the Veyron series supports modern software frameworks and cross-platform integration.

Ventana Micro Systems
TSMC
10nm, 16nm
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

NeuroMosAIc Studio

NeuroMosAIc Studio is a software platform designed to maximize AI processor utilization through model conversion, mapping, simulation, and profiling. It optimizes Edge AI models for specific application needs, providing precision analysis, network compression, and quantization tools that streamline deployment across diverse hardware. The platform also supports combining multiple AI functions and performing training at the edge. With tools such as the NMP Compiler and Simulator, developers can optimize at different stages of the flow, from quantization through training, and tailor models to the target hardware. Its edge-training support and end-to-end optimization capabilities make it a robust toolkit for developers aiming to extract maximum performance from AI hardware in dynamic environments.
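As background for the quantization tooling mentioned above, the sketch below shows what a generic symmetric int8 post-training quantization step does to a weight tensor. It illustrates the concept only; it is not the NeuroMosAIc Studio API, which this listing does not document.

    #include <stdint.h>
    #include <stddef.h>
    #include <math.h>

    /* Generic symmetric per-tensor int8 quantization of a weight array.
     * This is the textbook operation a post-training quantization tool
     * performs. Returns the scale so inference can dequantize as q[i] * scale. */
    float quantize_int8(const float *w, int8_t *q, size_t n)
    {
        float max_abs = 0.0f;
        for (size_t i = 0; i < n; i++)
            if (fabsf(w[i]) > max_abs)
                max_abs = fabsf(w[i]);

        float scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
        for (size_t i = 0; i < n; i++) {
            float r = roundf(w[i] / scale);
            if (r >  127.0f) r =  127.0f;   /* clamp to int8 range */
            if (r < -128.0f) r = -128.0f;
            q[i] = (int8_t)r;
        }
        return scale;
    }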

AiM Future
AI Processor, CPU, IoT Processor
View Details

SCR3 Microcontroller Core

The SCR3 microcontroller core is an efficient platform for a broad range of embedded applications and handles both 32-bit and 64-bit operation. Supporting up to four cores in a symmetric multiprocessing (SMP) configuration, it suits applications that demand extra computational power and multitasking. Its 5-stage in-order pipeline, combined with privilege-mode support, lets it manage multiple tasks smoothly while maintaining operational integrity, which makes it well suited to domains such as industrial control and automotive, where precision and reliability are paramount. A memory protection unit (MPU) and L1 and L2 caches raise data-processing rates and overall system performance while keeping the core energy efficient, an essential factor for demanding embedded systems. A prominent feature of the SCR3 is its configurability: it can be tailored to specific project requirements, from simple embedded devices to complex sensor networks, and comprehensive documentation and development toolkits simplify integration, helping designers build robust, scalable solutions. Continued innovation and customization potential keep the SCR3 a pivotal building block for harnessing RISC-V architectures.
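To give a flavour of what MPU configuration involves on a RISC-V core, the sketch below sets up one protected region using the standard RISC-V PMP CSRs. It assumes a PMP-style programming model purely for illustration; the SCR3's own MPU interface is not documented in this listing and may differ.

    #include <stdint.h>

    /* Sketch of memory-protection setup on a RISC-V core using the standard
     * PMP CSRs (pmpaddr0/1, pmpcfg0). Treat this as an illustration of the
     * concept, not Syntacore's interface. PMP rules apply to U/S-mode
     * accesses unless the lock bit is also set. */
    #define PMP_R      0x01u   /* readable           */
    #define PMP_W      0x02u   /* writable           */
    #define PMP_A_TOR  0x08u   /* top-of-range match */

    static void protect_rw_noexec(uintptr_t base, uintptr_t top)
    {
        /* Entry 1 in TOR mode covers [base, top): read/write, no execute. */
        __asm__ volatile("csrw pmpaddr0, %0" :: "r"(base >> 2));
        __asm__ volatile("csrw pmpaddr1, %0" :: "r"(top  >> 2));
        uint32_t cfg = (uint32_t)(PMP_R | PMP_W | PMP_A_TOR) << 8;  /* config byte of entry 1 */
        __asm__ volatile("csrw pmpcfg0, %0" :: "r"(cfg));
    }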

Syntacore
Building Blocks, CPU, DSP Core, Microcontroller, Processor Cores
View Details

eSi-1650

The eSi-1650 is a compact, low-power 16-bit CPU core with an instruction cache, making it an ideal choice for mature process nodes that rely on OTP or Flash program memory. By omitting large on-chip RAMs, the core keeps power and area to a minimum, while the instruction cache lets the CPU run at its maximum clock frequency rather than being held back by OTP/Flash access times.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, Microcontroller, Processor Cores
View Details

ReRAM Memory

CrossBar's ReRAM Memory technology introduces a new approach to non-volatile memory that moves past the limitations of traditional memory solutions. ReRAM, or Resistive RAM, distinguishes itself through a simple cell architecture that scales below 10nm and integrates with existing logic processes in a single foundry flow. It consumes roughly 1/20th the energy of traditional flash while offering dramatically improved endurance and performance. Its scalability supports high-density applications, including 3D stacking that can place terabytes of storage on-chip. ReRAM delivers low latency and high-speed operation, making it well suited to workloads that need rapid data access, such as data centers and IoT devices, and it is offered as hard macros or architectural licenses depending on customer needs. Another key benefit is enhanced security, which matters in applications from automotive to secure computing; low power consumption combined with high data integrity positions ReRAM as a pivotal technology for future-proofing data storage. It has proven to be a secure alternative to flash, with operational characteristics that address the diverse needs of contemporary electronic and computing environments.

CrossBar Inc.
TSMC
350nm
CPU, Embedded Memories, Embedded Security Modules, Flash Controller, Mobile SDR Controller, NAND Flash, SDRAM Controller, Security Processor, SRAM Controller, Standard cell
View Details