Processor cores are the fundamental compute elements of central processing units (CPUs) and systems-on-chip (SoCs), powering digital devices ranging from personal computers and smartphones to specialized embedded systems. Within the Processor Cores category, you'll find a diverse selection of semiconductor IPs tailored to the varying demands for speed, power efficiency, and processing capability in today's technology-driven world.
Our Processor Cores category provides an extensive library of semiconductor IPs, enabling designers to integrate powerful, efficient, and scalable cores into their projects. These IPs are essential for firms aiming to innovate and achieve a competitive edge within the fast-evolving tech landscape. Whether you're developing high-performance computing solutions or aiming for energy-efficient mobile gadgets, our processor core IP offerings are designed to support a wide range of architectures, from single-core microcontrollers to multi-core, multi-threaded processors.
One of the primary uses of processor core IPs is to define the architecture and functions of a core within a chip. These IPs provide the blueprint for building custom processors that can handle specific applications efficiently. They cover a broad spectrum of processing needs, including general-purpose processing, digital signal processing, and application-specific processing tasks. This flexibility allows developers to choose IPs that align perfectly with their product specifications, ensuring optimal performance and power usage.
In our Processor Cores category, you'll discover IPs suited for creating processors that power everything from wearables and IoT devices to servers and network infrastructure hardware. By leveraging these semiconductor IPs, businesses can significantly reduce time-to-market, lower development costs, and ensure that their products remain at the forefront of technology innovation. Each IP in this category is crafted to meet industry standards, providing robust solutions that integrate seamlessly into various technological environments.
Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card by Axelera AI offers powerful AI processing capabilities in a PCIe card format. This card is equipped with the Metis AI Processing Unit, capable of delivering up to 214 TOPS, making it ideal for intensive AI tasks and vision applications that require substantial computational power. With support for the Voyager SDK, this card ensures seamless integration and rapid deployment of AI models, helping developers leverage existing infrastructures efficiently. It's tailored for applications that demand robust AI processing like high-resolution video analysis and real-time object detection, handling complex networks with ease. Highlighted for its performance in ResNet-50 processing, which it can execute at a rate of up to 3,200 frames per second, the PCIe AI Accelerator Card perfectly meets the needs of cutting-edge AI applications. The software stack enhances the developer experience, simplifying the scaling of AI workloads while maintaining cost-effectiveness and energy efficiency for enterprise-grade solutions.
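As a rough sanity check on the quoted ResNet-50 throughput, the arithmetic below relates frame rate to model compute cost. The ~8 GOPs-per-inference figure for ResNet-50 at 224x224 resolution is a commonly cited approximation, not an Axelera number:

```python
# Back-of-envelope: compute rate implied by a published ResNet-50 frame rate.
# The 8 GOPs-per-inference figure is an approximation (~4 GMACs x 2 ops/MAC
# for ResNet-50 at 224x224), not a vendor specification.

RESNET50_OPS_PER_FRAME = 8e9   # approximate, assumption
frames_per_second = 3200       # figure quoted for the Metis PCIe card

sustained_ops = frames_per_second * RESNET50_OPS_PER_FRAME  # ops/s consumed
sustained_tops = sustained_ops / 1e12

print(f"Sustained compute at 3,200 fps: {sustained_tops:.1f} TOPS")
```

The gap between this sustained figure and the 214 TOPS peak is normal: real networks never keep every MAC busy every cycle, so peak TOPS and achieved model throughput are reported separately.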
Speedcore embedded FPGA (eFPGA) IP represents a notable advance in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers specify the exact amounts of logic, DSP, and memory resources their applications need, making it an ideal choice for areas like AI, ML, 5G wireless, and more. By embedding only the necessary features into the production device, Speedcore eFPGA can significantly reduce system cost, power requirements, and board space while retaining flexibility. The IP is programmed with the same Achronix Tool Suite used for standalone FPGAs, and the Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into a variety of semiconductor projects.
The Ventana Veyron V2 CPU represents a substantial upgrade in processing power, setting a new standard in AI and data center performance with its RISC-V architecture. Created for applications that demand intensive computing resources, the Veyron V2 excels at providing high throughput and superior scalability. It is aimed at cloud-native operations and intensive data processing tasks requiring robust, reliable compute power. The CPU is finely tuned for modern, virtualized environments, delivering server-class performance tailored to cloud-native workloads. The Veyron V2 supports a range of integration options, making it readily adaptable to custom silicon platforms and high-performance system infrastructures. Its design incorporates an IOMMU compliant with RISC-V standards, enabling seamless interoperability with third-party IPs and modules. Ventana's innovation is evident in the Veyron V2's support for heterogeneous computing configurations, allowing diverse workloads to be managed effectively. Its architecture features advanced cluster and cache infrastructures, ensuring optimal performance across large-scale deployment scenarios. With a commitment to open standards and cutting-edge technologies, the Veyron V2 is a critical asset for organizations pursuing the next level in computing performance and efficiency.
The Yitian 710 processor from T-Head represents a significant advancement in server chip technology, featuring an Arm-based architecture optimized for cloud applications. With its impressive multi-core design and high-speed memory access, this processor is engineered to handle intensive data processing tasks with efficiency and precision. It incorporates advanced fabrication techniques, offering high throughput and low latency to support next-generation cloud computing environments. Central to its design are 128 high-performance CPU cores built on the Armv9 architecture, which deliver superior computational capability. These cores are paired with substantial cache and high-speed DDR5 memory interfaces, optimizing the processor's ability to manage massive workloads effectively. These attributes make it an ideal choice for data centers looking to enhance processing speed and efficiency. In addition to its hardware prowess, the Yitian 710 is designed to deliver excellent energy efficiency. It boasts a sophisticated power management system that minimizes energy consumption without sacrificing performance, aligning with green computing trends. This combination of power, efficiency, and environmentally friendly design positions the Yitian 710 as a pivotal choice for enterprises propelling into the future of computing.
xcore.ai is a versatile and powerful processing platform designed for AIoT applications, delivering a balance of high performance and low power consumption. Crafted to bring AI processing capabilities to the edge, it integrates embedded AI, DSP, and advanced I/O functionalities, enabling quick and effective solutions for a variety of use cases. What sets xcore.ai apart is its cycle-accurate programmability and low-latency control, which improve the responsiveness and precision of the applications in which it is deployed. Tailored for smart environments, xcore.ai ensures robust and flexible computing power, suitable for consumer, industrial, and automotive markets. xcore.ai supports a wide range of functionalities, including voice and audio processing, making it ideal for developing smart interfaces such as voice-controlled devices. It also provides a framework for implementing complex algorithms and third-party applications, positioning it as a scalable solution for the growing demands of the connected world.
The Chimera GPNPU from Quadric is engineered to meet the diverse needs of modern AI applications, bridging the gap between traditional processing and advanced AI model requirements. It's a fully licensable processor, designed to deliver high AI inference performance while eliminating the complexity of traditional multi-core systems. The GPNPU boasts an exceptional ability to execute various AI models, including classical backbones, state-of-the-art transformers, and large language models, all within a single execution pipeline.

One of the core strengths of the Chimera GPNPU is its unified architecture that integrates matrix, vector, and scalar processing capabilities. This singular design approach allows developers to manage complex tasks such as AI inference and data-parallel processing without resorting to multiple tools or artificial partitioning between processors. Users can expect heightened productivity thanks to its modeless operation, which is fully programmable and efficiently executes C++ code alongside AI graph code.

In terms of versatility and application potential, the Chimera GPNPU is adaptable across different market segments. It's available in various configurations to suit specific performance needs, from single-core designs to multi-core clusters capable of delivering up to 864 TOPS. This scalability, combined with future-proof programmability, ensures that the Chimera GPNPU not only addresses current AI challenges but also accommodates the ever-evolving landscape of cognitive computing requirements.
The Metis AIPU M.2 Accelerator Module from Axelera AI is a cutting-edge solution designed to enhance AI performance directly within edge devices. Engineered to fit the M.2 form factor, this module packs powerful AI processing capabilities into a compact and efficient design, suitable for space-constrained applications. It leverages the Metis AI Processing Unit to deliver high-speed inference directly at the edge, minimizing latency and maximizing data throughput. The module is optimized for a range of computer vision tasks, making it ideal for applications like multi-channel video analytics, quality inspection, and real-time people monitoring. With its advanced architecture, the AIPU module supports a wide array of neural networks and can handle up to 24 concurrent video streams, making it incredibly versatile for industries looking to implement AI-driven solutions across various sectors. Providing compatibility with AI frameworks such as TensorFlow, PyTorch, and ONNX, the Metis AIPU integrates smoothly with existing systems to streamline AI model deployment and optimization. This not only boosts productivity but also significantly reduces time-to-market for edge AI solutions. Axelera's comprehensive software support ensures that users can achieve maximum performance from their AI models while maintaining operational efficiency.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates cleanly with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. The solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capability. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, makes GenAI v1 a strong fit for sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
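The emphasis on tokens per unit of memory bandwidth reflects a well-known property of autoregressive decoding: generating each token requires streaming the full set of model weights, so memory bandwidth sets a hard ceiling on token rate. A minimal sketch with illustrative numbers (the bandwidth and model-size values below are assumptions for the example, not RaiderChip figures):

```python
def max_tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Roofline upper bound for autoregressive decode at batch size 1:
    every token reads all weights once, so tokens/s <= bandwidth / model size."""
    return bandwidth_bytes_per_s / model_bytes

# Illustrative assumptions (not vendor figures):
llama_3b_4bit_bytes = 3e9 * 0.5   # ~3B params at 4 bits/param = ~1.5 GB
lpddr4_bandwidth = 25.6e9         # single-channel-class LPDDR4, bytes/s

bound = max_tokens_per_second(llama_3b_4bit_bytes, lpddr4_bandwidth)
print(f"Bandwidth-bound decode ceiling: ~{bound:.0f} tokens/s")
```

This is why 4-bit quantization and careful memory usage matter so much for edge inference: halving weight size roughly doubles the attainable token rate on the same memory system.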
Designed for minimal power consumption, the Tianqiao-70 is a 64-bit RISC-V CPU core that balances performance with energy savings. Targeting primarily the commercial space, it supports applications that demand low power usage without compromising performance. It stands out in mobile and desktop processing, AI workloads, and other demanding applications that require consistent yet power-efficient computing. Architected to deliver maximum throughput at minimum power draw, it is well suited to energy-critical systems. The Tianqiao-70 showcases StarFive's commitment to efficiency, enabling mobile, desktop, and AI platforms to operate within tight power budgets, and it makes a compelling choice for developers aiming to build eco-friendly products.
The RV12 RISC-V Processor is a versatile single-core microprocessor that adheres to both the RV32I and RV64I RISC-V instruction sets. Designed primarily for the embedded market, this processor features a Harvard architecture that enables simultaneous instruction and data accesses, enhancing performance in computing tasks. As part of the Roa Logic CPU family, this processor is highly configurable, allowing users to adjust its parameters to fit specific application requirements, thus making it an excellent choice for technology developers seeking efficient custom solutions.
The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured using the TSMC 7nm FinFET process, these FPGAs are equipped with a pioneering 2D network-on-chip architecture and a series of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-speed GDDR6 memory, 400G Ethernet, and PCI Express Gen5 ports. The 2D network-on-chip connects these interfaces to upward of 80 access points in the FPGA fabric, enabling ASIC-like performance while retaining complete programmability. Users can evaluate the family with the VectorPath accelerator card, which houses a Speedster7t FPGA. This family offers robust tools for applications such as 5G infrastructure, computational storage, and test and measurement.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, making it particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it builds on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The SiFive Intelligence X280 processor targets applications in machine learning and artificial intelligence, offering a high-performance, scalable architecture for emerging data workloads. As part of the Intelligence family, the X280 prioritizes a software-first methodology in processor design, addressing future ML and AI deployment needs, especially at the edge. This makes it particularly useful for scenarios requiring high computational power close to the data source. Central to its capabilities are scalable vector and matrix compute engines that can adapt to evolving workloads, thus future-proofing investments in AI infrastructure. With high-bandwidth bus interfaces and support for custom engine control, the X280 ensures seamless integration with varied system architectures, enhancing operational efficiency and throughput. By focusing on versatility and scalability, the X280 allows developers to deploy high-performance solutions without the typical constraints of more traditional platforms. It supports wide-ranging AI applications, from edge computing in IoT to advanced machine learning tasks, underpinning its role in modern and future-ready computing solutions.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, using advanced quantization formats such as Q4_K and Q5_K to enhance AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no reliance on external networks or cloud services. Its design combines strong computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring parameters like model scale, inference speed, and power consumption to exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to run multiple transformer-based models and handle confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
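The quoted 75% memory reduction is consistent with moving from 16-bit to 4-bit weights. A minimal sketch of the footprint arithmetic, assuming an illustrative 7B-parameter model and ignoring the small per-block overhead that formats like Q4_K add for scales and minimums:

```python
def weight_footprint_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage: parameters x bits per weight, ignoring
    the per-block scale/offset metadata that real K-quant formats carry."""
    return n_params * bits_per_weight / 8

n_params = 7e9                       # illustrative 7B-parameter model (assumption)
fp16 = weight_footprint_bytes(n_params, 16)
q4 = weight_footprint_bytes(n_params, 4)

reduction = 1 - q4 / fp16
print(f"FP16: {fp16 / 1e9:.0f} GB, 4-bit: {q4 / 1e9:.1f} GB, reduction: {reduction:.0%}")
```

In practice K-quant formats land slightly above 4 bits per weight once block metadata is counted, so the achieved reduction sits just under the ideal 75%.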
This core is designed for ultra-low-power applications, offering a remarkable balance of power efficiency and performance. Drawing a mere 10mW at 1GHz, it showcases Micro Magic's advanced design techniques, which enable high-speed processing at low operating voltages. The core is ideal for energy-sensitive applications where performance cannot be compromised, and its ability to scale up to 5GHz provides a formidable foundation for high-performance, low-power computing. It is a testament to Micro Magic's ability to develop cutting-edge solutions for modern semiconductor applications. The 64-bit architecture ensures robust processing capabilities, making it suitable for a wide range of uses, from IoT devices to complex computing workloads.
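A quick way to compare such claims across cores is energy per clock cycle, which is simply power divided by frequency; using the figures quoted above:

```python
# Energy per cycle from the quoted operating point (10 mW at 1 GHz).
power_watts = 10e-3     # 10 mW, as quoted
frequency_hz = 1e9      # 1 GHz, as quoted

energy_per_cycle_joules = power_watts / frequency_hz
print(f"Energy per cycle: {energy_per_cycle_joules * 1e12:.0f} pJ")
```

A 10 pJ/cycle figure is the kind of number that makes the core attractive for battery-powered and thermally constrained designs; note it says nothing about the 5GHz operating point, where voltage and power scale differently.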
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores implement the RISC-V architecture. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The eSi-1600 is a 16-bit CPU core designed for cost-sensitive and power-efficient applications. It delivers performance comparable to 32-bit CPUs while keeping system cost close to that of 8-bit processors. This IP is particularly well-suited for control applications with limited memory resources, demonstrating excellent compatibility with mature mixed-signal technologies.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive, configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIR filters. It also supports SIMD and single-precision floating-point operations, coupled with efficient power management features, enhancing its utility across diverse embedded applications.
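As a generic illustration of the workload class these features target (not eSi-specific code), a FIR filter reduces to a chain of multiply-accumulates: products of Q15 fixed-point samples and coefficients are summed in a wide accumulator, standing in here for a 64-bit MAC register, and rescaled at the end:

```python
def fir_fixed_point(samples, coeffs, frac_bits=15):
    """Direct-form FIR in Q15 fixed point. Products accumulate in a wide
    integer (Python ints are unbounded, standing in for a hardware 64-bit
    MAC register) and are rescaled back to Q15 at the end."""
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += samples[n - k] * c   # multiply-accumulate
        out.append(acc >> frac_bits)        # rescale back to Q15
    return out

# 3-tap moving average: each coefficient is ~1/3 in Q15 (32768 // 3 = 10922).
coeffs = [10922, 10922, 10922]
signal = [32768, 32768, 32768, 32768]       # constant 1.0 in Q15 (unbounded ints)
print(fir_fixed_point(signal, coeffs))
```

The wide accumulator is the key point: summing many 32-bit products without intermediate rounding is exactly what a 64-bit MAC provides in hardware.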
The SiFive Essential family of processors is renowned for its flexibility and wide applicability across embedded systems. These CPU cores are designed to meet specific market needs through pre-defined, silicon-proven configurations or through the SiFive Core Designer for custom processor builds. Available in 32-bit and 64-bit options, the Essential processors scale from microcontrollers to robust dual-issue CPUs. Widely adopted in the embedded market, the Essential series cores stand out for their scalable performance, adapting to diverse application requirements while maintaining power and area efficiency, and they have shipped in billions of units worldwide, reflecting their trusted performance and integration across industries. The SiFive Essential processors offer an optimal balance of power, area, and cost, making them suitable for a wide array of devices, from IoT and consumer electronics to industrial applications. They provide a solid foundation for products that require reliable performance at a competitive price.
Syntacore's SCR9 processor core is a state-of-the-art, high-performance design targeted at applications requiring extensive data processing across multiple domains. It features a robust 12-stage dual-issue out-of-order pipeline and is Linux-capable. Additionally, the core supports up to 16 cores, offering superior processing power and versatility. This processor includes advanced features such as a VPU (Vector Processing Unit) and hypervisor support, allowing it to manage complex computational tasks efficiently. The SCR9 is particularly well-suited for deployments in enterprise, AI, and telecommunication sectors, reinforcing its status as a key component in next-generation computing solutions.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in CPU technology. It seeks to overcome the limitations of traditional CPUs by tackling both performance inefficiencies and high energy demands. Leveraging a compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adopted without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving scheduling decisions predominantly at compile time, allowing the processor to handle workloads more effectively. As a result, standard high-level-language software runs with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
The eSi-1650 is a compact, low-power 16-bit CPU core integrating an instruction cache, making it an ideal choice for mature process nodes reliant on OTP or Flash program memory. By omitting large on-chip RAMs, the IP core optimizes power and area efficiency, and the instruction cache lets the CPU run at its maximum operating frequency rather than being limited by OTP/Flash access times.
The RISC-V Core IP developed by AheadComputing Inc. stands out in the field of 64-bit application processors. Designed to deliver exceptional per-core performance, this processor is engineered with the highest standards to maximize the Instructions Per Cycle (IPC) efficiency. AheadComputing's RISC-V Core IP is continuously refined to address the growing demands of high-performance computing applications. The innovative architecture of this core allows for seamless execution of complex algorithms while achieving superior speed and efficiency. This design is crucial for applications that require fast data processing and real-time computational capabilities. By integrating advanced power management techniques, the RISC-V Core IP ensures energy efficiency without sacrificing performance, making it suitable for a wide range of electronic devices. Anticipating future computing needs, AheadComputing's RISC-V Core IP incorporates state-of-the-art features that support scalability and adaptability. These features ensure that the IP remains relevant as technology evolves, providing a solid foundation for developing next-generation computing solutions. Overall, it embodies AheadComputing’s commitment to innovation and performance excellence.
The RISC-V CPU IP N Class is part of a comprehensive lineup offered by Nuclei, optimized for microcontroller applications. This 32-bit architecture is ideal for AIoT solutions, allowing seamless integration into innovative low-power and high-efficiency projects. As a highly configurable IP, it supports extensions for security and functional safety, catering to applications that demand reliability and adaptability. With a focus on configurability, the N Class can be tailored to specific system requirements by selecting only the necessary features, ensuring optimized performance and resource utilization. Written in robust, readable Verilog, it facilitates effective debugging and performance, power, and area (PPA) optimization. The IP also supports a Trusted Execution Environment (TEE) for enhanced security, serving a variety of IoT and embedded applications. This class offers efficient scalability, supporting several RISC-V extensions such as B, K, P, and V, while also allowing for user-defined instruction expansion. Committed to delivering a highly adaptable processor solution, the RISC-V CPU IP N Class is essential for developers aiming to implement secure and flexible embedded systems.
The Y180 is a streamlined microprocessor featuring around 8K gates, specifically designed as a clone of the Zilog Z180 CPU. Tailored for applications requiring compact size and efficient performance, this CPU-only design is a suitable replacement for legacy systems needing updates with reliable components. As a lightweight processor, the Y180 supports fundamental computational tasks while maintaining compatibility with existing infrastructure designed for Z180 systems. Its straightforward design facilitates easy integration, making it desirable for projects that call for simplicity and low power usage. Given its specialized purpose, the Y180 serves niche markets that need to update older technology cost-effectively without compromising performance. It leverages Systemyde's depth of microprocessor design expertise to deliver practicality and adaptability.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
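The quoted MAC counts and TOPS figures are related by simple arithmetic. As a hedged back-of-envelope sketch (assuming each MAC contributes two operations, a multiply and an accumulate, per cycle; the derived clock rates are illustrative, not vendor specifications):

```python
# Back-of-envelope: clock rate implied by a MAC array's TOPS rating.
# Assumes 2 operations (multiply + accumulate) per MAC per cycle --
# an illustrative convention, not a Ceva-published figure.

def implied_clock_ghz(tops: float, macs: int, ops_per_mac: int = 2) -> float:
    """Clock (GHz) needed for `macs` units to sustain `tops` tera-ops/s."""
    return tops * 1e12 / (macs * ops_per_mac) / 1e9

# Ceva-SP100: 128 8-bit MACs rated at 0.2 TOPS
f_sp100 = implied_clock_ghz(0.2, 128)    # ~0.78 GHz
# Ceva-SP1000: 1024 8-bit MACs rated at 2 TOPS
f_sp1000 = implied_clock_ghz(2.0, 1024)  # ~0.98 GHz
print(f"SP100 ~ {f_sp100:.2f} GHz, SP1000 ~ {f_sp1000:.2f} GHz")
```

Both ratings imply sub-GHz clocks under this convention, which is consistent with DSP cores aimed at power-constrained devices.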
The SCR6 stands out with a high-performance 12-stage dual-issue out-of-order pipeline coupled with an advanced FPU. As a microcontroller core, it boasts excellent computational capabilities while maintaining efficient power consumption. This attribute makes it highly suitable for deeply embedded applications that demand a fine balance between performance and energy efficiency. Its architecture is particularly favored in sectors such as sensor fusion, control systems, and network devices. The SCR6 is emblematic of Syntacore's engineering prowess, integrating high-level functionalities into compact design footprints.
The Veyron V1 CPU from Ventana Micro Systems is an industry-leading processor designed to deliver unparalleled performance for data-intensive applications. This RISC-V based CPU is crafted to meet the needs of modern data centers and enterprises, offering a sophisticated balance of power efficiency and computational capabilities. The Veyron V1 is engineered to handle complex workloads with its advanced architecture that competes favorably against current industry standards. Incorporating the latest innovations in chiplet technology, the Veyron V1 boasts exceptional scalability, allowing it to seamlessly integrate into diverse computing environments. Whether employed in a high-performance cloud server or an enterprise data center, this CPU is optimized to provide a consistent, robust performance across various applications. Its architecture supports scalable, modular designs, making it suitable for custom SoC implementations, thereby enabling faster time-to-market for new products. The Veyron V1’s compatibility with RISC-V open standards ensures versatility and adaptability, providing enterprises the freedom to innovate without the constraints of proprietary technologies. It includes support for essential system IP and interfaces, facilitating easy integration across different technology platforms. With a focus on extensible instruction sets, the Veyron V1 allows customized performance optimizations tailored to specific user needs, making it an essential tool in the arsenal of modern computing solutions.
The SCR1 is an open-source microcontroller core optimized for deeply embedded applications. It features a 4-stage in-order pipeline, ensuring efficient processing of instructions. Designed for compactness and resource optimization, the SCR1 is an excellent choice for applications requiring simplicity and functionality within constrained environments. Its open-source nature and configurability make it adaptable to a wide range of use cases, from consumer electronics to industrial devices. This core is part of Syntacore's commitment to providing highly efficient, silicon-proven solutions.
The SCR3 is an efficient microcontroller core offered by Syntacore, designed to meet the needs of both industrial and consumer applications. It introduces a 5-stage in-order pipeline and includes privilege modes along with an MPU, as well as L1 and L2 cache support for improved performance. This core balances processing efficiency with power consumption, making it suitable for use cases that require reliable performance within space and energy constraints. The SCR3's versatility extends across various sectors, including IoT, automotive, and networking.
The SCR7 application core is a Linux-capable, high-performance processing unit equipped with a 12-stage dual-issue out-of-order pipeline. Designed for sophisticated computational tasks, it delivers superior performance through its SMP support, accommodating up to 8 cores. The core's comprehensive memory architecture, which includes cache coherency, makes it ideal for demanding sectors like AI and high-performance computing. The SCR7 is configured to excel in data-heavy environments, efficiently handling complex operations with lower power consumption, aligning with global trends in AI and enterprise data processing.
The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications. Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments. Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.
TT-Ascalon™ is a versatile RISC-V CPU core developed by Tenstorrent, emphasizing the utility of open standards to meet a diverse array of computing needs. Built to be highly configurable, TT-Ascalon™ allows for the inclusion of 2 to 8 cores per cluster complemented by a customizable L2 cache. This architecture caters to clients seeking a tailored processing solution without the limitations tied to proprietary systems. With support for CHI.E and AXI5-LITE interfaces, TT-Ascalon™ ensures robust connectivity while maintaining system integrity and performance density. Its security capabilities are built on standard RISC-V security primitives, providing a reliable and trusted environment for operations involving sensitive data. Tenstorrent's engineering prowess, evident in TT-Ascalon™, has been shaped by experienced personnel from renowned tech giants. This IP is meant to align with various performance targets, suited for complex computational tasks that demand flexibility and efficiency in design.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed- and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications requiring DSP functionality, it delivers this capability with a minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
The Nuclei N300 Series Processor Core is a commercial RISC-V Processor Core Series designed by Nuclei System Technology for microcontroller, IoT, or other low-power applications. The N300 Series offers advanced features such as dual-issue capability, configurable instruction sets including ISA extensions, low-power management modes, and comprehensive debug support. It also includes support for ECC, TEE, and scalable local memory interfaces. Enhanced features like ETRACE and customizable instructions via the NICE interface further extend its capabilities.
The SiFive Performance family of processors is designed to offer top-tier performance and throughput across a range of sizes and power profiles. These cores provide highly efficient RISC-V scalar and vector computing capabilities, tailored for an optimal balance that delivers industry-leading results. With options for high-performance 64-bit out-of-order scalar engines and optional vector compute engines, the Performance series ensures customers get the maximum capabilities in computational power. Incorporating a robust architecture, these processors support extensive hardware capabilities, including full support for the RVA23 profile and configurable vector processing options that maximize computing efficiency. The SiFive Performance series has cores that cater to various needs, whether for general-purpose computing or applications requiring extensive parallel processing capabilities. SiFive's architecture allows for scalability and customization, bridging the gap between high-demand computational tasks and power efficiency. It is meticulously designed to meet the rigorous demands of modern and future computing applications, ensuring that both enterprise and consumer electronics can leverage the power of RISC-V computing. This makes it an ideal choice for developers seeking to push the boundaries of processing capabilities.
Tensix Neo is an AI-focused semiconductor solution from Tenstorrent that capitalizes on the robustness of RISC-V architecture. This IP is crafted to enhance the efficiency of both AI training and inference processes, making it a vital tool for entities needing scalable AI solutions without hefty power demands. With Tensix Neo, developers can rest assured of the silicon-proven reliability that backs its architecture, facilitating a smooth integration into existing AI platforms. The IP embraces the flexibility and customization needed for advanced AI workloads, optimizing resources and yielding results with high performance per watt. As the demand for adaptable AI solutions grows, Tensix Neo offers a future-proof platform that can accommodate rapid advancements and complex deployments in machine learning applications. By providing developers with tested and verified infrastructure, Tensix Neo stands as a benchmark in AI IP development.
The Maverick-2 Intelligent Compute Accelerator (ICA) is a groundbreaking innovation by Next Silicon Ltd. This architecture introduces a novel software-defined approach that adapts in real-time to optimize computational tasks, breaking the traditional constraints of CPUs and GPUs. By dynamically learning and accelerating critical code segments, Maverick-2 ensures enhanced efficiency and performance for high-performance computing (HPC), artificial intelligence (AI), and vector databases. Designers have developed the Maverick-2 to support a wide range of common programming languages, including C/C++, FORTRAN, OpenMP, and Kokkos, facilitating an effortless porting process. This robust toolchain reduces time-intensive application porting, allowing for a significant cut in development time while maximizing scientific output and insights. Developers can enjoy seamless integration into their existing workflows without needing new proprietary software stacks. A standout feature of this intelligent architecture is its ability to adjust hardware configurations on-the-fly, optimizing power efficiency and overall performance. With an emphasis on sustainable innovation, the Maverick-2 offers a performance-per-watt advantage that exceeds traditional GPU and high-end CPU solutions by over fourfold, making it a cost-effective and environmentally friendly choice for modern data centers and research facilities.
The AON1100 offers a sophisticated AI solution for voice and sensor applications, marked by a remarkable power usage of less than 260μW during processing yet maintaining high levels of accuracy in environments with sub-0dB SNR. It is a leading option for always-on devices, providing effective solutions for contexts requiring constant machine listening ability.

This AI chip excels in processing real-world acoustic and sensor data efficiently, delivering up to 90% accuracy by employing advanced signal processing techniques. The AON1100's low power requirements make it an excellent choice for battery-operated devices, ensuring sustainable functionality through efficient power consumption over extended operational periods.

The scalability of the AON1100 allows it to be adapted for various applications, including smart homes and automotive settings. Its integration within broader AI platform strategies enhances intelligent data collection and contextual understanding capabilities, delivering transformative impacts on device interactivity and user experience.
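To put the sub-260 µW figure in perspective, a rough battery-life estimate can be sketched. The battery parameters below (a CR2032 coin cell at 225 mAh, 3.0 V nominal) are illustrative assumptions, and a real design must also budget for the rest of the system, not just the chip:

```python
# Hedged sketch: continuous runtime on a coin cell at the AON1100's
# quoted <260 uW processing draw. Battery figures are assumptions
# (CR2032-class cell: 225 mAh at 3.0 V nominal), not vendor data.

def runtime_days(capacity_mah: float, voltage_v: float, draw_uw: float) -> float:
    """Continuous runtime in days for a given battery and average draw."""
    energy_wh = capacity_mah / 1000 * voltage_v   # watt-hours available
    hours = energy_wh / (draw_uw * 1e-6)          # hours at constant draw
    return hours / 24

days = runtime_days(225, 3.0, 260)
print(f"~{days:.0f} days on the chip's budget alone")  # roughly 108 days
```

Even on a coin cell, the chip-level budget alone stretches to months, which is what makes "always-on" listening plausible for battery-operated devices.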
EverOn represents sureCore's commitment to providing low power solutions with its Single Port Ultra Low Voltage (ULV) SRAM, silicon-proven on the 40ULP BULK CMOS process. This IP achieves up to 80% savings in dynamic power consumption and an impressive 75% reduction in static power, suitable for modern applications needing broad voltage operation. EverOn distinguishes itself with a wide operating range from 0.6V to 1.21V, offering adaptability for Internet of Things (IoT) applications and wearable technology demanding extreme power optimization. It offers a remarkable 20MHz operating frequency at its minimum operating voltage of 0.6V, successfully scaling to over 300MHz at 1.21V, thus balancing power efficiency and performance. Supporting synchronous single port SRAM designs with extensive memory capacity ranging from 8Kbytes to 576Kbytes, EverOn incorporates sureCore's "SMART-Assist" technology. It ensures robust operation right down to retention voltages, while its advanced architectural features, like bank subdivision with enhanced sleep modes, deliver flexibility that is critical for optimizing battery life in various operational contexts.
The RISC-V CPU IP NS Class is specifically engineered for security-focused applications, including fintech mobile payments and IoT security. This architecture supports a variety of security protocols, making it ideal for systems that require robust data protection and secure transaction handling. It is designed to manage sensitive information efficiently, supporting comprehensive information-security solutions with strong cryptographic capabilities. This IP is built with RISC-V's flexible extensions, ensuring files and communication streams maintain confidentiality and integrity in diverse operational scenarios. Robust by design, the NS Class caters to sectors such as IoT, where data protection is paramount, making it a trusted choice for developers seeking to build stringent security measures into their solutions. With options for extending functionality and increasing resilience through user-defined instructions, the NS Class remains adaptable for future security requirements.
The NoISA Processor is an innovative microprocessor designed by Hotwright Inc. to overcome the limitations of traditional instruction set architectures. Unlike standard processors, which rely on a fixed ALU, register file, and hardware controller, the NoISA Processor utilizes the Hotstate machine, an advanced microcoded algorithmic state machine. This technology allows for runtime reprogramming and flexibility, making it highly suitable for various applications where space, power efficiency, and adaptability are paramount. With the NoISA Processor, users can achieve significant performance improvements without the limitations imposed by fixed instruction sets. It's particularly advantageous in IoT and edge computing scenarios, offering enhanced efficiency compared to conventional softcore CPUs while maintaining lower energy consumption. Moreover, this processor is ideal for rapidly creating small, programmable state machines and systolic arrays. Its unique architecture permits behavior modification through microcode, rather than altering the FPGA, thus offering unprecedented flexibility and power in adapting to specific technological needs.
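The core idea behind a microcoded algorithmic state machine can be illustrated in miniature. This is a hedged, purely conceptual sketch (not Hotwright's design; all names and table fields are hypothetical): behavior lives in a microcode table, so editing the table reprograms the machine without touching the datapath:

```python
# Conceptual toy model of a microcoded state machine: each (state, condition)
# entry selects an ALU action and a next state. Changing `ucode` changes
# behavior at "runtime" -- no change to the datapath (the `ops` dict).
# All names here are hypothetical illustrations, not Hotwright APIs.

def run(microcode, inputs, state=0, acc=0):
    """Step the machine over `inputs`, returning the accumulator."""
    ops = {"add": lambda a, x: a + x,
           "sub": lambda a, x: a - x,
           "nop": lambda a, x: a}
    for x in inputs:
        op, nxt = microcode[(state, x >= 0)]  # micro-op: (action, next_state)
        acc = ops[op](acc, x)
        state = nxt
    return acc

# Microcode: state 0 adds inputs; a negative input is subtracted and
# moves the machine to state 1, where further negatives are ignored.
ucode = {(0, True): ("add", 0), (0, False): ("sub", 1),
         (1, True): ("add", 0), (1, False): ("nop", 1)}
print(run(ucode, [3, -2, 5]))  # prints 10
```

Swapping in a different `ucode` table yields a different machine, which is the essence of the "reprogram via microcode, not via FPGA rebuild" claim in the listing.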
The Y51 microprocessor implements the 8051 Instruction Set Architecture with a 2-clock machine cycle, streamlining processing for devices adhering to this widely recognized standard. By adopting this efficient architecture, the Y51 ensures compatibility with legacy systems while optimizing performance outcomes. Focused on simplicity and efficiency, the Y51 is ideal for applications within embedded systems where adherence to the 8051 architecture is required. It provides easy system integration due to its standard compliance, making it a proven choice for developers maintaining or upgrading applications built on older technology. System architects benefit from the Y51's reliability and processing strength, underscoring Systemyde's commitment to delivering microprocessors that meet traditional demands without sacrificing integration ease or flexibility.
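The 2-clock machine cycle is significant because the original Intel 8051 used 12 clocks per machine cycle, so at the same clock frequency a 2-clock design completes machine cycles six times faster. A quick worked comparison (the 24 MHz clock below is an illustrative figure, not a Systemyde benchmark):

```python
# Throughput implication of a 2-clock vs. the classic 12-clock 8051
# machine cycle, at the same clock frequency. 24 MHz is illustrative.

def machine_cycles_per_sec(clock_hz: float, clocks_per_cycle: int) -> float:
    """Machine cycles per second for a given clock and cycle length."""
    return clock_hz / clocks_per_cycle

classic = machine_cycles_per_sec(24e6, 12)  # classic 8051: 2 million/s
two_clk = machine_cycles_per_sec(24e6, 2)   # 2-clock core: 12 million/s
print(two_clk / classic)  # prints 6.0
```

This is why 2-clock 8051-compatible cores can run legacy firmware noticeably faster without any change to the code or the external clock.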
Rabbit 4000 is a powerful member of the Rabbit microprocessor family, designed to deliver significant computational power and versatility in design. With 161K gates and 128 pins, this microprocessor supports more complex systems that require efficient processing capabilities combined with extensive peripheral support. This model is suitable for applications requiring superior performance and flexibility, making it particularly advantageous for use in high-demand environments such as industrial automation and sophisticated control systems. Its design is silicon-proven, reinforcing Systemyde's reputation for reliability and excellence in microprocessor engineering. The Rabbit 4000 offers enhanced features, including an upgraded instruction set that allows for greater processing power and efficiency. Its robust configuration supports the integration of multiple peripherals, ensuring that it meets the comprehensive needs of modern technology landscapes. This makes it a trusted choice for developers working with intricate and demanding applications.
The SCR4 combines efficiency with advanced features, offering a 5-stage in-order pipeline, an FPU (Floating Point Unit), and an MPU (Memory Protection Unit), alongside comprehensive cache support. This microcontroller core is tailored for applications necessitating precision and enhanced computational performance. Its architecture enables effective power management, making it ideal for embedded systems with strict energy budgets. By supporting a range of markets like industrial automation and sensor networks, the SCR4 core exemplifies Syntacore's adaptability and forward-thinking approach.