Processor cores are fundamental components in central processing units (CPUs) and systems-on-chip (SoCs) for a myriad of digital devices ranging from personal computers and smartphones to more specialized equipment like embedded systems. Within the category of Processor Cores, you'll find a diverse selection of semiconductor IPs tailored to meet the varying demands of speed, power efficiency, and processing capability required by today's technology-driven world.
Our Processor Cores category provides an extensive library of semiconductor IPs, enabling designers to integrate powerful, efficient, and scalable cores into their projects. These IPs are essential for firms aiming to innovate and achieve a competitive edge within the fast-evolving tech landscape. Whether you're developing high-performance computing solutions or aiming for energy-efficient mobile gadgets, our processor core IP offerings are designed to support a wide range of architectures, from single-core microcontrollers to multi-core, multi-threaded processors.
One of the primary uses of processor core IPs is to define the architecture and functions of a core within a chip. These IPs provide the blueprint for building custom processors that can handle specific applications efficiently. They cover a broad spectrum of processing needs, including general-purpose processing, digital signal processing, and application-specific processing tasks. This flexibility allows developers to choose IPs that align perfectly with their product specifications, ensuring optimal performance and power usage.
In our Processor Cores category, you'll discover IPs suited for creating processors that power everything from wearables and IoT devices to servers and network infrastructure hardware. By leveraging these semiconductor IPs, businesses can significantly reduce time-to-market, lower development costs, and ensure that their products remain at the forefront of technology innovation. Each IP in this category is crafted to meet industry standards, providing robust solutions that integrate seamlessly into various technological environments.
The Metis AIPU PCIe AI Accelerator Card by Axelera AI is designed for developers seeking top-tier performance in vision applications. Powered by a single Metis AIPU, this PCIe card delivers up to 214 TOPS, handling demanding AI tasks with ease. It is well-suited for high-performance AI inference, featuring two configurations: 4GB and 16GB memory options. The card benefits from the Voyager SDK, which enhances the developer experience by simplifying the deployment of applications and extending the card's capabilities. This accelerator PCIe card is engineered to run multiple AI models and support numerous parallel neural networks, enabling significant processing power for advanced AI applications. The Metis PCIe card performs at an industry-leading level, achieving up to 3,200 frames per second for ResNet-50 tasks and offering exceptional scalability. This makes it an excellent choice for applications demanding high throughput and low latency, particularly in computer vision fields.
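As a rough, vendor-independent sanity check, the quoted ResNet-50 throughput can be related to the card's peak compute. The ~8.2 GOPs-per-inference cost assumed below is a commonly cited value for ResNet-50 at 224x224 resolution, not an Axelera specification.

```python
# Sanity check relating the quoted 3,200 fps ResNet-50 figure to the 214 TOPS
# peak. The ~8.2 GOPs-per-inference cost of ResNet-50 at 224x224 is an assumed,
# commonly cited value, not an Axelera number.

peak_tops = 214
fps = 3200
ops_per_inference = 8.2e9

sustained_tops = fps * ops_per_inference / 1e12
print(f"sustained compute    : {sustained_tops:.1f} TOPS")
print(f"implied share of peak: {sustained_tops / peak_tops:.1%}")
# ~26 TOPS sustained, i.e. roughly 12% of peak under these assumptions -- a
# typical gap between peak TOPS and end-to-end vision throughput.
```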
Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers tailor the exact dimensions of logic, DSP, and memory needed for their applications, making it an ideal choice for areas like AI, ML, 5G wireless, and more. Speedcore eFPGA can significantly reduce system costs, power requirements, and board space while maintaining flexibility by embedding only the necessary features into production. This IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs. The Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into various semiconductor projects.
The Yitian 710 Processor is T-Head's flagship ARM-based server chip and a showcase of the company's technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance. The processor is built using a 2.5D packaging method, integrating two dies and a substantial 60 billion transistors. At its core are 128 high-performance Armv9 CPU cores, each paired with a well-provisioned cache hierarchy: 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, supplemented by a 128MB system-level cache on the chip. To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281GB/s. Its I/O subsystem is equally formidable, featuring 96 PCIe 5.0 lanes with aggregate bidirectional bandwidth of up to 768GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI workloads.
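For readers who want to see where the headline bandwidth numbers come from, the sketch below reconstructs them from first principles. The per-channel DDR5 data rate is an assumption; only the 281GB/s and 768GB/s totals come from the listing above.

```python
# Rough arithmetic behind the quoted bandwidth figures (assumptions noted inline).

# Memory: the quoted 281GB/s over 8 DDR5 channels implies ~35GB/s per channel,
# i.e. roughly DDR5-4400 (4400 MT/s x 8 bytes). The per-channel data rate is an
# assumption; only the 281GB/s total comes from the listing.
channels = 8
mega_transfers = 4400e6          # assumed MT/s per channel
bytes_per_transfer = 8           # 64-bit DDR5 channel
ddr_bw = channels * mega_transfers * bytes_per_transfer / 1e9
print(f"DDR5 peak bandwidth: {ddr_bw:.0f} GB/s")            # ~281-282 GB/s

# I/O: PCIe 5.0 signals at 32 GT/s per lane, i.e. 4 GB/s raw per direction.
lanes = 96
raw_gb_per_lane_dir = 32 / 8
pcie_bw = lanes * raw_gb_per_lane_dir * 2                   # both directions
print(f"PCIe 5.0 aggregate bandwidth: {pcie_bw:.0f} GB/s")  # 768 GB/s
# usable throughput is a few percent lower after 128b/130b encoding
```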
xcore.ai stands as a cutting-edge processor that brings sophisticated intelligence, connectivity, and computation capabilities to a broad range of smart products. Designed to deliver optimal performance for applications in consumer electronics, industrial control, and automotive markets, it efficiently handles complex processing tasks with low power consumption and rapid execution speeds. This processor facilitates seamless integration of AI capabilities, enhancing voice processing, audio interfacing, and real-time analytics functions. It supports various interfacing options to accommodate different peripheral and sensor connections, thus providing flexibility in design and deployment across multiple platforms. Moreover, the xcore.ai ensures robust performance in environments requiring precise control and high data throughput. Its compatibility with a wide array of software tools and libraries enables developers to swiftly create and iterate applications, reducing the time-to-market and optimizing the design workflows.
Veyron V2 represents the next generation of Ventana's high-performance RISC-V CPU. It significantly enhances compute capabilities over its predecessor, designed specifically for data center, automotive, and edge deployment scenarios. This CPU maintains compatibility with the RVA23 RISC-V specification, making it a powerful alternative to the latest ARM and x86 counterparts within similar domains. Focusing on seamless integration, the Veyron V2 offers clean, portable RTL implementations with a standardized interface, optimizing its use for custom SoCs with high-core counts. With a robust 512-bit vector unit, it efficiently supports workloads requiring both INT8 and BF16 precision, making it highly suitable for AI and ML applications. The Veyron V2 is adept in handling cloud-native and virtualized workloads due to its full architectural virtualization support. The architectural advancements offer significant performance-per-watt improvements, and advanced cache and virtualization features ensure a secure and reliable computing environment. The Veyron V2 is available as both a standalone IP and a complete hardware platform, facilitating diverse integration pathways for customers aiming to harness Ventana’s innovative RISC-V solutions.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
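The tokens-per-unit-bandwidth argument can be made concrete with a small back-of-the-envelope model: autoregressive decoding streams the quantized weights once per generated token, so memory bandwidth sets the ceiling on token rate. The model size and LPDDR4 bandwidth used below are illustrative assumptions, not RaiderChip figures.

```python
# Minimal sketch of why tokens-per-unit-bandwidth is the figure of merit for
# edge LLM inference: each generated token streams (roughly) the full set of
# quantized weights from memory, so DRAM bandwidth caps the token rate.
# Model size and LPDDR4 bandwidth below are illustrative assumptions.

params = 3e9                   # e.g. a Llama 3.2 3B-class model (assumption)
bits_per_weight = 4.5          # ~4-bit quantization plus scales/zero-points
model_bytes = params * bits_per_weight / 8

lpddr4_bw = 25.6e9             # assumed LPDDR4 bandwidth in bytes/s

tokens_per_s = lpddr4_bw / model_bytes
print(f"quantized weights      : {model_bytes / 1e9:.2f} GB")
print(f"bandwidth-bound ceiling: ~{tokens_per_s:.0f} tokens/s")
```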
The Tianqiao-70 is a low-power RISC-V CPU designed for commercial-grade applications where power efficiency is paramount. Suitable for mobile and desktop applications, artificial intelligence, as well as various other technology sectors, this processor excels in maintaining high performance while minimizing power consumption. Its design offers great adaptability to meet the requirements of different operational environments.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
Chimera GPNPU provides a groundbreaking architecture, melding the efficiency of neural processing units with the flexibility and programmability of processors. It supports a full range of AI and machine learning workloads autonomously, eliminating the need for supplementary CPUs or GPUs. The processor is future-ready, equipped to handle new and emerging AI models with ease, thanks to its C++ programmability. What makes Chimera stand out is its ability to manage a diverse array of workloads within a singular processor framework that combines matrix, vector, and scalar operations. This harmonization ensures maximum performance for applications across various market sectors, such as automotive, mobile devices, and network edge systems. These capabilities are designed to streamline the AI development process and facilitate high-performance inference tasks, crucial for modern device ecosystems. The architecture is fully synthesizable, allowing it to be implemented in any process technology, from current to advanced nodes, adjusting to desired performance targets. The adoption of a hybrid von Neumann and 2D SIMD matrix design supports a broad suite of DSP operations, providing a comprehensive toolkit for complex graph and AI-related processing.
The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured using the TSMC 7nm FinFET process, these FPGAs are equipped with a pioneering 2D network-on-chip architecture and a series of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-paced GDDR6 memory, 400G Ethernet, and PCI Express Gen5 ports. This 2D network-on-chip connects various interfaces to upward of 80 access points in the FPGA fabric, enabling ASIC-like performance, yet retaining complete programmability. The product encourages users to start with the VectorPath accelerator card which houses the Speedster7t FPGA. This family offers robust tools for applications such as 5G infrastructure, computational storage, and test and measurement.
The RV12 RISC-V Processor is a highly adaptable single-core CPU that adheres to the RV32I and RV64I specifications of the RISC-V instruction set, aimed at the embedded systems market. This processor supports a variety of standard and custom configurations, making it suitable for diverse application needs. Its inherent flexibility allows it to be implemented efficiently in both FPGA and ASIC environments, ensuring that it meets the performance and resource constraints typical of embedded applications. Designed with an emphasis on configurability, the RV12 Processor can be tailored to include only the necessary components, optimizing both area and power consumption. It comes with comprehensive documentation and verification testbenches, providing a complete solution for developers looking to integrate a RISC-V CPU into their design. Whether for educational purposes or commercial deployment, the RV12 stands out for its robust design and adaptability, making it an ideal choice for modern embedded system solutions.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The eSi-1600 is a 16-bit CPU core designed for cost-sensitive and power-efficient applications. It offers performance comparable to 32-bit CPUs while keeping system cost closer to that of an 8-bit processor. This IP is particularly well-suited for control applications with limited memory resources, and it works well with mature mixed-signal technologies.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
Poised to deliver exceptional performance in advanced applications, the SCR9 processor core epitomizes modern processing standards with its 12-stage dual-issue out-of-order pipeline and hypervisor support. Its inclusion of a vector processing unit (VPU) positions it as essential for high-performance computing tasks that require extensive parallel data processing. Suitable for high-demand environments such as enterprise data systems, AI workloads, and computationally intensive mobile applications, the SCR9 core is tailored to address high-throughput demands while maintaining reliability and accuracy. With support for symmetric multiprocessing (SMP) of up to 16 cores, this core stands as a configurable powerhouse, enabling developers to maximize processing efficiency and throughput. The SCR9's capabilities are bolstered by Syntacore’s dedication to supporting developers with comprehensive tools and documentation, ensuring efficient design and implementation. Through its blend of sophisticated features and support infrastructure, the SCR9 processor core paves the way for advancing technological innovation across numerous fields, establishing itself as a robust solution in the rapidly evolving landscape of high-performance computing.
The SiFive Intelligence X280 is designed to address the burgeoning needs of AI and machine learning at the edge. Emphasizing a software-first methodology, this family of processors is crafted to offer scalable vector and matrix compute capabilities. By integrating broad vector processing features and high-bandwidth interfaces, it can adapt to the ever-evolving landscape of AI workloads, providing both high performance and efficient scalability. Built on the RISC-V foundation, the X280 features comprehensive vector compute engines that cater to modern AI demands, making it a powerful tool for edge computing applications where space and energy efficiency are critical. Its versatility allows it to seamlessly manage diverse AI tasks, from low-latency inferences to complex machine learning models, thanks to its support for RISC-V Vector Extensions (RVV). The X280 family is particularly robust for applications requiring rapid AI deployment and adaptation like IoT devices and smart infrastructure. Through extensive compatibility with machine learning frameworks such as TensorFlow Lite, it ensures ease of deployment, enhanced by its focus on energy-efficient inference solutions and support for legacy systems, making it a comprehensive solution for future AI technologies.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic is engineered to deliver exceptional energy efficiency while maintaining high performance. This core is specifically designed to operate at 1GHz while consuming a mere 10mW of power, making it ideal for today's power-conscious applications. Utilizing advanced design techniques, this processor achieves its high performance at lower voltages, ensuring reduced power consumption without sacrificing speed. Constructed with a focus on optimizing processing capabilities, this RISC-V core is built to cater to demanding environments where energy efficiency is critical. Whether used as a standalone processor or integrated into larger systems, its low power requirements and robust performance make it highly versatile. This core also supports scalable processing with its architecture, accommodating a broad spectrum of applications from IoT devices to performance-intensive computing tasks, aligning with industry standards for modern electronic products.
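The quoted 1GHz-at-10mW operating point translates directly into energy per cycle, as the short sketch below shows; the battery capacity used for the runtime estimate is an arbitrary illustrative assumption.

```python
# Direct arithmetic on the quoted operating point: 1 GHz at 10 mW.
power_w = 10e-3
freq_hz = 1e9

energy_per_cycle = power_w / freq_hz
print(f"energy per cycle: {energy_per_cycle * 1e12:.0f} pJ")      # 10 pJ/cycle

# Illustrative battery estimate (the 1 Wh capacity is an assumption, and the
# figure ignores everything in the system except the core itself).
battery_wh = 1.0
print(f"core-only runtime on 1 Wh: {battery_wh / power_w:.0f} h") # 100 h
```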
The Y180 is a CPU IP core that replicates the functionality of the Zilog Z180 CPU in roughly 8k gates. The implementation reflects Systemyde's attention to detail, delivering consistent, reliable performance within a minimal footprint. Small yet capable, the Y180 suits designs that need a streamlined CPU core and remains dependable in systems built around traditional computing interfaces. Its silicon-proven design attests to its reliability across a variety of implementations, making the Y180 an accessible route to Zilog architectural compatibility for standard CPU applications.
The SiFive Essential family stands out as a versatile solution, delivering a wide range of pre-defined embedded CPU cores suitable for a variety of industrial applications. Whether you're designing for minimal area and power consumption or maximum feature capabilities, Essential offers configurations that adapt to diverse industrial needs. From compact microcontrollers to rich OS-compatible CPUs, Essential supports 32-bit and 64-bit pipelines, ensuring an optimal balance between performance and efficiency. This flexibility is enhanced by advanced tracing and debugging features, robust SoC security through WorldGuard support, and a broad array of interface options for seamless SoC integration. These comprehensive support mechanisms assure developers of maximum adaptability and accelerated integration within their designs, whether in IoT devices or control plane applications. SiFive Essential’s power efficiency and adaptability make it particularly suited for deploying customizable solutions in embedded applications. Whether the requirement is for intense computational capacity or low-power, battery-efficient tasks, Essential cores help accelerate time-to-market while offering robust performance in compact form factors, emphasizing scalable and secure solutions for a variety of applications.
The SCR1 microcontroller core is a compact, open-source offering designed for deeply embedded applications. It operates with a 4-stage in-order pipeline, ensuring efficient processing in space-constrained environments. Notably, it supports configurations that cater to various industrial needs, making it an ideal solution for projects requiring small form factors without compromising on power efficiency. This core is particularly effective for Internet of Things (IoT) devices and sensor hubs, where low power consumption and high reliability are critical. Its silicon-proven design further attests to its robustness, guaranteeing seamless integration into diverse operational settings. Delivering exceptional performance within constrained resources, the SCR1 stands as a versatile option for industries looking to leverage RISC-V's capabilities in microcontroller applications. Key features of the SCR1 include its ability to function within deeply embedded networks, addressing the needs of sectors like industrial automation and home automation. The in-order pipeline architecture of the SCR1 microcontroller provides predictable performance and straightforward debugging, ideal for critical applications requiring stability and efficiency. Its capability to pair with a variety of software tools enhances usability, offering designers a flexible platform for intricate embedded systems. Moreover, the SCR1 microcontroller benefits from community-driven development, ensuring continuous improvements and updates. This collaborative advancement fosters innovation, facilitating the deployment of advanced features while maintaining low energy requirements. As technology evolution demands more efficient solutions, the SCR1 continues to adapt, contributing significantly to the expanding RISC-V ecosystem. Increasingly indispensable, it offers a sustainable, cost-effective solution for manufacturers aiming to implement cutting-edge technology in their products.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time using advanced quantization techniques such as Q4_K and Q5_K, enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no reliance on external networks or cloud services. Its design combines strong computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters such as model scale, inference speed, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens the door to applications in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
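To see where a roughly 75% memory reduction comes from, the sketch below compares a 16-bit baseline with 4-bit-class formats; the 7B parameter count and the per-weight overhead assumed for block scales are illustrative, not RaiderChip specifications.

```python
# Where a ~75% memory reduction comes from: moving 16-bit weights to a
# 4-bit-class format such as Q4_K. The 7B parameter count and the per-weight
# overhead assumed for block scales are illustrative, not vendor figures.

def footprint_gb(params, bits_per_weight):
    return params * bits_per_weight / 8 / 1e9

params = 7e9
fp16 = footprint_gb(params, 16)
ideal_4bit = footprint_gb(params, 4)    # exactly 75% smaller than FP16
q4k_like = footprint_gb(params, 4.5)    # ~4 bits plus block scales (assumed)

print(f"FP16        : {fp16:.1f} GB")
print(f"ideal 4-bit : {ideal_4bit:.1f} GB ({1 - ideal_4bit / fp16:.0%} smaller)")
print(f"Q4_K-like   : {q4k_like:.1f} GB ({1 - q4k_like / fp16:.0%} smaller)")
```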
Pushing the envelope of application processing, the SCR7 application core integrates a 12-stage dual-issue out-of-order pipeline for high-performance computing tasks. It is equipped with advanced cache coherency and a robust memory subsystem ideal for modern applications demanding exceptional compute power and scalability. This application core serves large-scale computing environments, addressing needs within sectors such as data centers, enterprise solutions, and AI-enhanced applications. Supporting symmetric multiprocessing (SMP) with configurations up to eight cores, the SCR7 ensures smooth and simultaneous execution of complex tasks, significantly improving throughput and system efficiency. Syntacore complements this architecture with a rich toolkit that facilitates development across diverse platforms, enhancing its adaptability to specific commercial needs. The SCR7 embodies the future of application processing with its ability to seamlessly integrate into existing infrastructures while delivering superior results rooted in efficient architectural design and robust support systems.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
Nuclei's RISC-V CPU IP N Class is engineered with a 32-bit architecture specifically targeting microcontroller and AIoT applications. Tailored for high performance, it offers exceptional configurability, allowing integration into diverse system environments by selecting only the necessary features. The N Class series is part of Nuclei's robust coding framework, built with Verilog for enhanced readability and optimized for debugging and performance-power-area (PPA) considerations. This IP ensures scalability through support for RISC-V extensions including B, K, P, and V, as well as the flexibility of user-defined instruction extensions. Nuclei addresses comprehensive security through information security solutions like TEE and physical security packages. Meanwhile, its safety functionalities align with standards such as ASIL-B and ASIL-D, vital for applications demanding high safety protocols. The N Class is further supported by a wide range of ecosystem resources, facilitating seamless integration into various industrial applications. In summary, the N Class IP not only provides powerful performance capabilities but is also structured to accommodate a broad range of applications while adhering to necessary safety and security frameworks. Its user-friendly customization makes it particularly suitable for applications in rapidly evolving fields such as AIoT.
The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The SCR3 microcontroller core serves as an efficient platform for a range of embedded applications, characterized by its ability to handle both 32/64-bit constructs. Capable of supporting up to four symmetric multiprocessing (SMP) cores, this core is perfect for applications demanding enhanced computational power and multitasking abilities. It operates with a 5-stage in-order pipeline, which, coupled with privilege mode support, ensures that it can manage multiple tasks smoothly while maintaining operational integrity. Such capabilities make the SCR3 microcontroller core particularly well-suited for domains like industrial control systems and automotive applications, where precision and reliability are paramount. The inclusion of a memory protection unit (MPU) and layered L1 and L2 caches significantly boosts data processing rates, optimizing system performance. Bringing these features together, the core maintains high functionality while ensuring energy efficiency—an essential factor for high-demand embedded systems. A prominent feature of the SCR3 core is its flexibility. It can be extensively configured to match specific project requirements, from simple embedded devices to complex sensor networks. The provision of comprehensive documentation and development toolkits simplifies the integration process, supporting designers in developing robust and scalable solutions. Continued innovation and customization potential solidify the SCR3's position as a pivotal component in harnessing the power of RISC-V architectures.
The Veyron V1 is a high-performance RISC-V CPU aimed at data centers and similar applications that require robust computing power. It integrates with various chiplet and IP cores, making it a versatile choice for companies looking to create customized solutions. The Veyron V1 is designed to offer competitive performance against x86 and ARM counterparts, providing a seamless transition between different node process technologies. This CPU benefits from Ventana's innovation in RISC-V technology, where efforts are placed on providing an extensible architecture that facilitates domain-specific acceleration. With capabilities stretching from hyperscale computing to edge applications, the Veyron V1 supports extensive instruction sets for high-throughput operations. It also boasts leading-edge chiplet interfaces, opening up numerous opportunities for rapid productization and cost-effective deployment. Ventana's emphasis on open standards ensures that the Veyron V1 remains an adaptable choice for businesses aiming at bespoke solutions. Its compatibility with system IP and its provision in multiple platform formats—including chiplets—enable businesses to leverage the latest technological advancements in RISC-V. Additionally, the ecosystem surrounding the Veyron series ensures support for both modern software frameworks and cross-platform integration.
The eSi-1650 is a compact, low-power 16-bit CPU core with an integrated instruction cache, making it an ideal choice for mature process nodes that rely on OTP or Flash program memory. The cache allows the CPU to run at its maximum clock frequency rather than being throttled by slow OTP/Flash accesses, while the omission of large on-chip RAMs keeps power and area to a minimum.
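A minimal sketch of the benefit of the instruction cache, assuming illustrative Flash access times, core clock, and hit rate (none of which are eSi-1650 specifications):

```python
# Why an instruction cache helps on slow OTP/Flash: a hit is served at core
# speed, and only misses pay the Flash access time. All numbers are assumed
# for illustration (they are not eSi-1650 specifications).

flash_access_ns = 20     # assumed OTP/Flash read time
core_cycle_ns = 4        # assumed 250 MHz core clock
hit_rate = 0.95          # assumed I-cache hit rate

avg_fetch_ns = hit_rate * core_cycle_ns + (1 - hit_rate) * flash_access_ns
print(f"uncached fetch      : {flash_access_ns} ns "
      f"(caps the core near {1e3 / flash_access_ns:.0f} MHz)")
print(f"average cached fetch: {avg_fetch_ns:.1f} ns")
```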
The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications. Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments. Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
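As a hedged cross-check, the quoted MAC counts and TOPS figures are mutually consistent if each MAC is counted as two operations per cycle; the clock rates below are back-calculated from those numbers, not Ceva-stated frequencies.

```python
# Cross-check of the quoted MAC counts against the quoted TOPS figures,
# counting each MAC as two operations (multiply + accumulate). The implied
# clock rates are back-calculated, not Ceva-stated frequencies.

def implied_clock_ghz(tops, macs):
    return tops * 1e12 / (macs * 2) / 1e9

for name, macs, tops in [("Ceva-SP100", 128, 0.2), ("Ceva-SP1000", 1024, 2.0)]:
    print(f"{name}: {macs} MACs at {tops} TOPS "
          f"-> ~{implied_clock_ghz(tops, macs):.2f} GHz implied clock")
# Both quoted points are consistent with a clock in the 0.8-1.0 GHz range.
```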
A high-performance solution for microcontroller applications, the SCR6 core is created to operate efficiently within deeply embedded environments. It boasts a 12-stage dual-issue out-of-order pipeline, facilitating advanced computation by optimizing instruction scheduling and execution. This core also incorporates a highly capable floating-point unit (FPU), further enhancing its ability to handle complex numerical operations with precision. The SCR6 microcontroller core fits seamlessly into various industrial and consumer electronics applications, where high performance mixed with power efficiency is crucial. Its unique architectural composition allows it to execute tasks at minimal energy expenditure, vital for battery-operated devices or systems requiring prolonged uptime. This capability is further augmented by its high-frequency operation and data management efficiency. For developers, the SCR6 merges flexibility with simplicity, offering a customizable platform supported by Syntacore’s extensive toolkit and documentation. It addresses the growing demand for intelligent, connected devices in sectors such as IoT and automation, making it an essential component in the development of cutting-edge technology solutions that require reliable computational power within limited resource constraints.
The RISC-V Core IP by AheadComputing Inc. exemplifies cutting-edge processor technology, particularly in the realm of 64-bit application processing. Designed for superior IPC (Instructions Per Cycle) performance, this core is engineered to enhance per-core computing capabilities, catering to high-performance computing needs. It stands as a testament to AheadComputing's commitment to achieving the pinnacle of processor speed, setting new industry standards. This processor core is instrumental for various applications requiring robust processing power. It allows for seamless performance in a multitude of environments, whether in consumer electronics, enterprise solutions, or advanced computational fields. The innovation behind this IP reflects the deep expertise and forward-thinking approach of AheadComputing's experienced team. Furthermore, the RISC-V Core IP supports diverse computing needs by enabling adaptable and scalable solutions. AheadComputing leverages the open-source RISC-V architecture to offer customizable computing power, ensuring that their solutions are both versatile and future-ready. This IP is aimed at delivering efficiency and power optimization, supporting sophisticated applications with precision.
The SiFive Performance family is at the forefront of providing maximum throughput and performance across a spectrum of computing requirements, from datacenter workloads to consumer applications. These 64-bit, out-of-order cores incorporate advanced vector processing capabilities up to 256-bit, supporting a diversity of workloads including AI. The architecture spans from three to six-wide out-of-order cores, optimized for either dedicated vector engines or a balanced energy-efficient setup, making it a versatile choice for high-performance needs. Engineered for modern AI workloads, the Performance series offers a robust compute density and performance efficiency that is ideal for both mobile and stationary infrastructure. Customers can take advantage of flexible configuration options to balance power and area constraints, thanks to SiFive's state-of-the-art RISC-V solutions. The family’s cores, such as the P400, P600, and P800 Series, offer scalability from low-power tasks to demanding datacenter applications. The series is particularly adept at handling AI workloads, making it suitable for applications that demand high-speed data processing and analysis, such as internet of things (IoT) devices, network infrastructure, and high-volume consumer electronics. Customers benefit from the ability to combine various performance cores into a unified, high-performance CPU optimized for minimal power consumption, making it possible to design systems that balance performance and efficiency.
The Y51 processor implements the 8051 instruction set architecture with a 2-clock machine cycle, building efficiency into compact designs. Crafted for projects that depend on the enduring and well-understood 8051 architecture, it stays faithful to established 8051 behavior while improving performance without complicating the architecture. The Y51 is a cost-effective answer to the persistent demand for the traditional instruction set, offering direct compatibility for legacy systems. Its reliable structure and high level of integration make it a dependable choice for systems built on long-established computing conventions.
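To put the 2-clock machine cycle in context, the sketch below compares it with the classic 8051's 12-clock machine cycle at an assumed common clock frequency; the 24MHz figure is illustrative only.

```python
# Machine-cycle timing at a common (illustrative) clock: the classic 8051
# used 12 clocks per machine cycle, versus 2 clocks for this class of core.

clock_hz = 24e6   # assumed clock frequency, for illustration only

for name, clocks_per_cycle in [("classic 8051", 12), ("2-clock 8051 core", 2)]:
    cycle_us = clocks_per_cycle / clock_hz * 1e6
    rate_m = clock_hz / clocks_per_cycle / 1e6
    print(f"{name}: {cycle_us:.3f} us per machine cycle ({rate_m:.0f} M cycles/s)")
# At the same clock, the 2-clock design completes machine cycles 6x faster.
```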
Tensix Neo represents a transformative leap in enhancing AI computational efficiency, specifically designed to empower developers working on sophisticated AI networks and applications. Built around a Network-on-Chip (NoC) framework, Tensix Neo optimizes performance-per-watt, a critical factor for AI processing. It supports multiple precision formats to adapt to diverse AI workloads efficiently, allowing seamless integration with existing models and enabling scalability. Careful design ensures that Tensix Neo delivers consistent high performance across varied AI tasks, from image recognition algorithms to advanced analytics, making it an essential component in the AI development toolkit. Its capability to connect with an expanding library of AI models allows developers to leverage its full potential across multiple cutting-edge applications. This synthesis of performance and efficiency makes Tensix Neo a vital player in fields requiring high adaptability and rapid processing, such as autonomous vehicles, smart devices, and dynamic data centers. Moreover, the compatibility of Tensix Neo with Tenstorrent's other solutions underscores its importance as a flexible and powerful processing core. Designed with the contemporary developer in mind, Tensix Neo integrates seamlessly with open-source resources and tools, ensuring that developers have the support and flexibility needed to meet the challenges of tomorrow's AI solutions.
The third generation in the Rabbit series, the Rabbit 4000 integrates roughly 161k gates in a 128-pin configuration, an evolution aimed at computationally intensive applications. Its technology-independent, synthesizable Verilog design adapts readily to ASIC, FPGA, and custom targets that demand high throughput and reliability. The core ships as a comprehensive package with full silicon verification, backed by Systemyde's thorough documentation. Its architecture emphasizes straightforward integration across diverse use cases, including high-performance embedded systems, and supports the sophisticated data handling needed for complex, long-lived designs.
Designed to cater to the needs of edge computing, the Neural Network Accelerator by Gyrus AI combines performance with efficiency. Built around native graph processing, it excels at implementing neural networks, achieving 30 TOPS/W while requiring 10 to 30 times fewer clock cycles than conventional designs. Its low memory usage keeps power consumption 10-20 times lower, and its die area is 8-10 times smaller, while sustaining utilization above 80% across a range of model structures. These optimizations make it an ideal choice for applications that need a compact, high-performance solution delivering fast computation without compromising energy efficiency. Gyrus AI pairs the IP with software tools that simplify running neural networks on the accelerator, easing integration and use across applications. This combination reflects the company's broader strategy of augmenting human intelligence with AI-driven technologies across industries.
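The 30 TOPS/W figure can be turned into a power estimate for a concrete workload; the ops-per-inference and frame-rate numbers below are assumptions chosen purely to illustrate the arithmetic, not Gyrus AI benchmarks.

```python
# Turning the quoted 30 TOPS/W into a power estimate for a concrete workload.
# The ops-per-inference and frame-rate values are assumptions for illustration.

efficiency_tops_per_w = 30
ops_per_inference = 8e9        # e.g. a ResNet-50-class vision model (assumed)
frames_per_s = 60

sustained_tops = frames_per_s * ops_per_inference / 1e12
power_w = sustained_tops / efficiency_tops_per_w
print(f"sustained compute: {sustained_tops:.2f} TOPS")
print(f"estimated power  : {power_w * 1e3:.0f} mW")   # ~16 mW at this workload
```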
Designed for efficiency in microcontroller applications, the SCR4 microcontroller core offers a balance between capabilities and resource management. Its 5-stage in-order pipeline ensures robust task management, featuring privilege modes for comprehensive application control, while boasting both 32 and 64-bit support. This core stands out due to its floating-point unit (FPU) integration, enhancing its capacity for handling complex computational tasks effectively. The SCR4 is highly applicable in industrial settings, specifically in environments requiring real-time data handling and swift computational responses. Its architectural design includes memory protection and cache layers, ensuring that data processes are not only fast but also secure, mitigating risks of data loss or corruption. Whether deployed in advanced control systems or real-time monitoring devices, the SCR4 adapts flexibly to meet stringent performance benchmarks. Accompanied by complete development support through comprehensive toolkits and resources, this core reduces the time needed for deployment and testing, allowing for quicker iteration cycles in product development. The SCR4's adaptability and efficiency contribute to its practicality in a variety of applications, from automotive to IoT devices, consistently delivering high-performance outcomes tailored to the modern technological landscape.
Crafted to support Linux-based applications, the SCR5 application core is a high-efficiency core designed with a nine-stage in-order processing pipeline. Featuring an integrated MMU (Memory Management Unit), L1 and L2 caches, and maintaining cache coherency, the core exhibits robust data handling capabilities essential for powerful application processing. This application core is engineered for sectors like artificial intelligence, high-performance computing, and mobile technology, where processing power is a requisite. Its design enables effective data management and multitasking, accommodating various applications which require seamless transitions and high compute capacities. The SCR5 also supports symmetric multiprocessing (SMP) with up to four cores, offering scalable performance for increased data demands. In addition to its architectural strengths, the core is supported by Syntacore’s extensive suite of development tools, ensuring a comprehensive environment for developers to build, test, and optimize applications. Its adaptable nature and robust processing capabilities extend its use across multiple domains, establishing the SCR5 as an indispensable asset for developers seeking to drive sophisticated Linux-based systems.
The M8051EW expands upon the M8051W's impressive performance by incorporating on-chip debugging capabilities. This microcontroller core offers not only rapid execution but also integrates a JTAG debug port for compatibility with external debugging tools. Additionally, this core is designed with hardware breakpoints and instruction tracebacks, providing full read and write access across all register and memory locations. Such capabilities, together with its fast execution cycle, make it an ideal choice for designs requiring advanced debugging and real-time control.
Nuclei's RISC-V CPU IP UX Class is a cutting-edge solution designed for 64-bit computing, particularly in data center operations, network systems, and Linux environments. Engineered with Verilog, the UX Class boasts outstanding readability and is tailored for effective debugging and PPA optimization, thus streamlining its deployment in performance-centric applications. Its comprehensive configurability allows for precise system incorporation by selecting features pertinent to specific operational needs. This processor IP is fortified with extensive RISC-V extension support, enhancing its applicability in various domains. Noteworthy are its security features, including TEE support and a robust physical security package, critical for maintaining information security integrity. Additionally, its alignment with safety protocols like ASIL-B and ASIL-D underscores its reliability in environments that demand stringent safety measures. The UX Class represents Nuclei's flagship offering for enterprises requiring powerful, flexible, and secure processing capabilities. By providing essential integration into Linux and network-driven systems, the UX Class solidifies its place as a cornerstone for modern, high-performance computing infrastructure.
The Nuclei N300 Series Processor Core is a commercial RISC-V Processor Core Series designed by Nuclei System Technology for microcontroller, IoT, or other low-power applications. The N300 Series offers advanced features such as dual-issue capability, configurable instruction sets including ISA extensions, low-power management modes, and comprehensive debug support. It also includes support for ECC, TEE, and scalable local memory interfaces. Enhanced features like ETRACE and customizable instructions via the NICE interface further extend its capabilities.
EverOn is a Single Port Ultra Low Voltage SRAM IP that delivers exceptional dynamic and static power savings. It is particularly suited to the IoT and wearable device markets, where energy efficiency and performance are both critical. Silicon-proven on 40ULP bulk CMOS processes, EverOn offers up to an 80% reduction in dynamic power and 75% in static power. It operates across a broad voltage range from 0.6V to 1.21V, running at 20MHz at the ultra-low 0.6V corner and scaling to 300MHz at its upper voltage threshold. The ULV compiler supports flexible configurations, enabling tailored solutions for specific system requirements. EverOn's SMART-Assist technology allows robust operation down to retention voltages, complementing its ultra-low power profile with considerable system flexibility. Features such as memory bank subdivision and advanced sleep modes are tuned for maximum energy efficiency, making EverOn a strong fit for products that need extended operating times and long battery life.
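The scale of the dynamic power savings follows from the usual CV²f relationship for switching power; the sketch below uses only the voltage and frequency corners quoted above and makes no assumption about absolute capacitance.

```python
# Dynamic (switching) power scales roughly with C * V^2 * f; comparing the two
# quoted corners needs only the ratio, so no capacitance value is assumed.

v_low, f_low = 0.6, 20e6        # quoted: 20 MHz at 0.6 V
v_high, f_high = 1.21, 300e6    # quoted: 300 MHz at 1.21 V

ratio = (v_low**2 * f_low) / (v_high**2 * f_high)
print(f"dynamic power at 0.6 V/20 MHz is ~{ratio:.1%} of the 1.21 V/300 MHz corner")
# ~1.6%: nearly two orders of magnitude less switching power for 15x less speed.
```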
The Low Power RISC-V CPU IP from SkyeChip is designed for energy-efficient processing applications, leveraging the RISC-V RV32 instruction set. It fully supports the 'I' and 'C' extensions, with partial support for the 'M' extension, operating exclusively in machine mode to minimize complexity and enhance security. Equipped with 32 vectorized interrupts and standard debugging capabilities defined per RISC-V specifications, this CPU IP excels in environments demanding quick interrupt handling and real-time processing. This CPU is especially suitable for battery-powered and embedded systems where power efficiency is critical. Tailored for modular integration in diverse software stacks, this IP offers a flexible architecture, ensuring longevity and adaptability in rapidly evolving technological ecosystems. It stands as a formidable component in the development of next-generation smart devices and IoT applications.
The most capable member of the Rabbit series, the Rabbit 6000 comprises around 760k gates in a 292-pin configuration. It brings substantial processing power and robustness to demanding designs, and its broad configurability addresses the complexity of large hardware systems. Systemyde invests heavily in the core's adaptability and durability, and its carefully developed Verilog ensures straightforward compatibility and integration in large system infrastructures, making the Rabbit 6000 a dependable choice for sizeable digital projects that demand efficiency and reliable performance.