The CPU, or Central Processing Unit, is the central component of computer systems, acting as the brain that executes instructions and processes data. Our category of CPU semiconductor IPs offers a diverse selection of intellectual properties that enable the development of highly efficient and powerful processors for a wide array of applications, from consumer electronics to industrial systems. Semiconductor IPs in this category are designed to meet the needs of modern computing, offering adaptable and scalable solutions for different technology nodes and design requirements.
These CPU semiconductor IPs provide the core functionalities required for the development of processors capable of handling complex computations and multitasking operations. Whether you're developing systems for mobile devices, personal computers, or embedded systems, our IPs offer optimized solutions that cater to the varying demands of power consumption, processing speed, and operational efficiency. This ensures that you can deliver cutting-edge products that meet the market's evolving demands.
Within the CPU semiconductor IP category, you'll find a range of products including RISC (Reduced Instruction Set Computer) processors, multi-core processors, and customizable processor cores among others. Each product is designed to integrate seamlessly with other system components, offering enhanced compatibility and flexibility in system design. These IP solutions are developed with the latest architectural advancements and technological improvements to support next-generation computing needs.
Selecting the right CPU semiconductor IP is crucial for achieving target performance and efficiency in your applications. Our offerings are meticulously curated to provide comprehensive solutions that are robust, reliable, and capable of supporting diverse computing applications. Explore our CPU semiconductor IP portfolio to find the perfect components that will empower your innovative designs and propel your products into the forefront of technology.
The KL730 AI SoC is a state-of-the-art chip incorporating Kneron's third-generation reconfigurable NPU architecture, delivering computational power of up to 8 TOPS. Its architecture is optimized for the latest CNN models and performs exceptionally well in transformer-based applications, substantially reducing DDR bandwidth requirements. It also supports advanced video processing, handling 4K 60 FPS output with image-quality features such as noise reduction and wide dynamic range. Applications range from intelligent security systems to autonomous vehicles and commercial robotics.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
The Tianqiao-70 is engineered for ultra-low power consumption while maintaining robust computational capabilities. This commercial-grade 64-bit RISC-V CPU core presents an ideal choice for scenarios demanding minimal power usage without conceding performance. It is primarily designed for emerging mobile applications and devices, providing both economic and environmental benefits. Its architecture prioritizes low energy profiles, making it perfect for a wide range of applications, including mobile computing, desktop devices, and intelligent IoT systems. The Tianqiao-70 fits well into environments where conserving battery life is a priority, ensuring that devices remain operational for extended periods without needing frequent charging. The core maintains a focus on energy efficiency, yet it supports comprehensive computing functions to address the needs of modern, power-sensitive applications. This makes it a flexible component in constructing a diverse array of SoC solutions and meeting specific market demands for sustainability and performance.
The Metis M.2 AI accelerator module from Axelera AI is a cutting-edge solution for embedded AI applications. Designed for high-performance AI inference, this card boasts a single quad-core Metis AIPU that delivers industry-leading performance. With dedicated 1 GB DRAM memory, it operates efficiently within compact form factors like the NGFF M.2 socket. This capability unlocks tremendous potential for a range of AI-driven vision applications, offering seamless integration and heightened processing power.
The AX45MP is engineered as a high-performance processor supporting multicore architectures and advanced data processing, particularly suited to applications requiring extensive computational efficiency. Part of the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. This technology integrates readily with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by embedding them directly in hardware, eliminating the need for an external host processor or an internet connection. With its ability to run complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves high efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. The design maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capability. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance against hardware cost. Its compatibility with a wide range of transformer-based models, including proprietary modifications, positions GenAI v1 well in sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
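The GenAI v1 description above hinges on tokens generated per unit of memory bandwidth: when LLM decoding is memory-bound, every generated token must stream the full set of weights from DRAM, so bandwidth divided by model size gives a throughput ceiling. The sketch below illustrates that relationship; the model size and LPDDR4 bandwidth figures are illustrative assumptions, not RaiderChip specifications.

```python
def tokens_per_second(mem_bandwidth_gbps: float,
                      n_params_billion: float,
                      bits_per_weight: int) -> float:
    """Rough ceiling on decode throughput for a memory-bound LLM:
    each generated token streams every weight once, so
    tokens/s <= bandwidth / model-size-in-bytes."""
    model_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return mem_bandwidth_gbps * 1e9 / model_bytes

# Illustrative: a hypothetical 3B-parameter model on 25.6 GB/s of LPDDR4.
fp16 = tokens_per_second(25.6, 3.0, 16)   # 16-bit weights
q4 = tokens_per_second(25.6, 3.0, 4)      # 4-bit quantized weights
# 4-bit quantization quadruples the bandwidth-limited throughput ceiling.
```

This is why 4-bit quantization and frugal memory usage matter more here than raw compute: on the same memory system, narrower weights directly multiply the attainable token rate.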
Designed for entry-level server-class applications, the SCR9 is a 64-bit RISC-V processor core that comes equipped with cutting-edge features, such as an out-of-order superscalar pipeline, making it apt for processing-intensive environments. It supports both single and double-precision floating-point operations adhering to IEEE standards, which ensure precise computation results. This processor core is tailored for high-performance computing needs, with a focus on AI and ML, as well as conventional data processing tasks. It integrates an advanced interrupt system featuring APLIC configurations, enabling responsive operations even under heavy workloads. SCR9 supports up to 16 cores in a multi-cluster arrangement, each utilizing coherent multi-level caches to maintain rapid data processing and management. The comprehensive development package for SCR9 includes ready-to-deploy toolchains and simulators that expedite software development, particularly within Linux environments. The core is well-suited for deployment in entry-level server markets and data-intensive applications, with robust support for virtualization and heterogeneous architectures.
Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.
The RV12 is a versatile, single-issue RISC-V compliant processor core, designed for the embedded market. With compliance to both RV32I and RV64I specifications, this core is part of Roa Logic's 32/64-bit CPU offerings. Featuring a Harvard architecture, it efficiently handles simultaneous instruction and data memory operations. The architecture is enhanced with an optimizing folded 4-stage pipeline, maximizing the overlap of execution with memory access to reduce latency and boost throughput. Flexibility is a cornerstone of the RV12 processor, offering numerous configuration options to tailor performance and efficiency. Users can select optional components such as branch prediction units, instruction and data caches, and a debug unit. This configurability allows designers to balance trade-offs between speed, power consumption, and area, optimizing the core for specific applications. The processor core supports a variety of standard software tools and comes with a full suite of development resources, including support for the Eclipse Integrated Development Environment (IDE) and GNU toolchain. The RV12 design emphasizes a small silicon footprint and power-efficient operation, making it ideal for a wide range of embedded applications.
As the SoC that placed Kneron on the map, the KL520 AI SoC continues to enable sophisticated edge AI processing. It integrates dual ARM Cortex M4 CPUs, ideally serving as an AI co-processor for products like smart home systems and electronic devices. It supports an array of 3D sensor technologies including structured light and time-of-flight cameras, which broadens its application in devices striving for autonomous functionalities. Particularly noteworthy is its ability to maximize power savings, making it feasible to power some devices on low-voltage battery setups for extended operational periods. This combination of size and power efficiency has seen the chip integrated into numerous consumer product lines.
The NMP-350 is a cutting-edge endpoint accelerator designed to minimize power usage and cost, ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span driver authentication, predictive maintenance, and health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces, making it a versatile component for various industrial applications. Developed as a low-power solution, the NMP-350 suits applications requiring efficient processing power without inflated energy consumption, and it is especially valuable in mobile and battery-operated devices, where every watt conserved extends the product's operational longevity. Its adaptability across applications, coupled with its cost-efficiency, makes it a practical choice for developers incorporating AI capabilities into next-generation designs without compromising on economy or efficiency.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The Yitian 710 Processor is an advanced Arm-based server chip developed by T-Head, designed to meet the extensive demands of modern data centers and enterprise applications. This processor boasts 128 high-performance Armv9 CPU cores, each coupled with robust caches, ensuring superior processing speeds and efficiency. With a 2.5D packaging technology, the Yitian 710 integrates multiple dies into a single unit, facilitating enhanced computational capability and energy efficiency. One of the key features of the Yitian 710 is its memory subsystem, which supports up to 8 channels of DDR5 memory, achieving a peak bandwidth of 281 GB/s. This configuration guarantees rapid data access and processing, crucial for high-throughput computing environments. Additionally, the processor is equipped with 96 PCIe 5.0 lanes, offering a dual-direction bandwidth of 768 GB/s, enabling seamless connectivity with peripheral devices and boosting system performance overall. The Yitian 710 Processor is meticulously crafted for applications in cloud services, big data analytics, and AI inference, providing organizations with a robust platform for their computing needs. By combining high core count, extensive memory support, and advanced I/O capabilities, the Yitian 710 stands as a cornerstone for deploying powerful, scalable, and energy-efficient data processing solutions.
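The Yitian 710's quoted 281 GB/s peak is consistent with standard DRAM bandwidth arithmetic: transfer rate times bus width times channel count. The sketch below reproduces the figure under the assumption of DDR5-4400 with a 64-bit (8-byte) bus per channel; the specific transfer rate is an assumption inferred from the quoted total, not taken from T-Head documentation.

```python
def ddr_peak_bandwidth_gbs(mt_per_s: float, bus_bytes: int, channels: int) -> float:
    """Peak DRAM bandwidth in GB/s: transfers/s x bytes per transfer x channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Assuming DDR5-4400 (4400 MT/s) on a 64-bit bus across 8 channels:
peak = ddr_peak_bandwidth_gbs(4400, 8, 8)  # 281.6 GB/s, matching the quoted peak
```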
The Chimera GPNPU by Quadric redefines AI computing on devices by combining processor flexibility with NPU efficiency. Tailored for on-device AI, it tackles significant machine learning inference challenges faced by SoC developers. This licensable processor scales massively, offering performance from 1 to 864 TOPS. One of its standout features is the ability to execute matrix, vector, and scalar code in a single pipeline, essentially merging the functionalities of NPUs, DSPs, and CPUs into a single core. Developers can easily incorporate new ML networks such as vision transformers and large language models without the typical overhead of partitioning tasks across multiple processors. The Chimera GPNPU is entirely code-driven, empowering developers to optimize their models throughout a device's lifecycle. Its architecture allows for future-proof flexibility, handling newer AI workloads as they emerge without necessitating hardware changes. In terms of memory efficiency, the Chimera architecture is notable for its compiler-driven DMA management and support for multiple levels of data storage. Its rich instruction set optimizes both 8-bit integer operations and complex DSP tasks, providing full support for C++ coded projects. Furthermore, the Chimera GPNPU integrates AXI interfaces for efficient memory handling and configurable L2 memory to minimize off-chip access, crucial for maintaining low power dissipation.
The eSi-3250 32-bit RISC processor core excels in applications needing efficient caching structures and high-performance computation, thanks to its support for both instruction and data caches. This core targets applications where slower memory technologies or higher core/bus clock ratios exist, by leveraging configurable caches which reduce power consumption and boost performance. This advanced processor design integrates a wide range of arithmetic capabilities, supporting IEEE-754 floating-point functions and 32-bit SIMD operations to facilitate complex data processing. It uses an optional memory management unit (MMU) for virtual memory implementation and memory protection, enhancing its functional safety in various operating environments.
The KL630 AI SoC represents Kneron's sophisticated approach to AI processing, boasting an architecture that accommodates Int4 precision and transformers, making it incredibly adept in delivering performance efficiency alongside energy conservation. This chip shines in contexts demanding high computational intensity such as city surveillance and autonomous operation. It sports an ARM Cortex A5 CPU and a specialized NPU with 1 eTOPS computational power at Int4 precision. Suitable for running diverse AI applications, the KL630 is optimized for seamless operation in edge AI devices, providing comprehensive support for industry-standard AI frameworks and displaying superior image processing capabilities.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores are uniformly built on RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The eSi-3200 is a versatile 32-bit RISC processor core that combines low power usage with high performance, ideal for embedded control applications using on-chip memory. Its structure supports a wide range of computational tasks with a modified-Harvard architecture that allows simultaneous instruction and data fetching. This design facilitates deterministic performance, making it perfect for real-time control. The eSi-3200 processor supports extensive arithmetic operations, offering optional IEEE-754 floating-point units for both single-precision and SIMD instructions which optimize parallel data processing. Its compatibility with AMBA AXI or AHB interconnects ensures easy integration into various systems.
The eSi-1600 is a highly efficient 16-bit RISC processor core designed for applications that require low power and cost-effective solutions. Despite its 16-bit architecture, it offers performance akin to pricier 32-bit processors, making it an ideal choice for controlling functions in mature mixed-signal processes. The eSi-1600 is renowned for its power efficiency, running applications in fewer clock cycles compared to traditional 8-bit CPUs. Its instruction set includes 92 basic instructions and the capability for 74 user-defined ones, enhancing its adaptability. With support for a wide range of peripherals through AMBA AHB and APB buses, this core is versatile for various integration needs.
The xcore.ai platform from XMOS is engineered to revolutionize the scope of intelligent IoT by offering a powerful yet cost-efficient solution that combines high-performance AI processing with flexible I/O and DSP capabilities. At its heart, xcore.ai boasts a multi-threaded architecture with 16 logical cores divided across two processor tiles, each equipped with substantial SRAM and a vector processing unit. This setup ensures seamless execution of integer and floating-point operations while facilitating high-speed communication between multiple xcore.ai systems, allowing for scalable deployments in varied applications. One of the standout features of xcore.ai is its software-defined I/O, enabling deterministic processing and precise timing accuracy, which is crucial for time-sensitive applications. It integrates embedded PHYs for various interfaces such as MIPI, USB, and LPDDR, enhancing its adaptability in meeting custom application needs. The device's clock frequency can be adjusted to optimize power consumption, affirming its cost-effectiveness for IoT solutions demanding high efficiency. The platform's DSP and AI performances are equally impressive. The 32-bit floating-point pipeline can deliver up to 1600 MFLOPS with additional block floating point capabilities, accommodating complex arithmetic computations and FFT operations essential for audio and vision processing. Its AI performance reaches peaks of 51.2 GMACC/s for 8-bit operations, maintaining substantial throughput even under intensive AI workloads, making xcore.ai an ideal candidate for AI-enhanced IoT device creation.
The NaviSoC by ChipCraft is a highly integrated GNSS system-on-chip (SoC) designed to bring navigation technologies to a single die. Combining a GNSS receiver with an application processor, the NaviSoC delivers unmatched precision in a dependable, scalable, and cost-effective package. Designed for minimal energy consumption, it caters to cutting-edge applications in location-based services (LBS), the Internet of Things (IoT), and autonomous systems like UAVs and drones. This innovative product facilitates a wide range of customizations, adaptable to varied market needs. Whether the application involves precise lane-level navigation or asset tracking and management, the NaviSoC meets and exceeds market expectations by offering enhanced security and reliability, essential for synchronization and smart agricultural processes. Its compact design, which maintains high efficiency and flexibility, ensures that clients can tailor their systems to exact specifications without compromise. NaviSoC stands as a testament to ChipCraft's pioneering approach to GNSS technologies.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interface support ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
The SCR7 is a 64-bit RISC-V application core crafted to meet high-performance demands of applications requiring powerful data processing. Featuring a sophisticated dual-issue pipeline with out-of-order execution, it enhances computational efficiency across varied tasks. The core is equipped with a robust floating-point unit and supports extensive RISC-V ISA extensions for advanced computing capabilities. SCR7's memory system includes L1 to L3 caches, with options for expansive up to 16MB L3 caching, ensuring data availability and integrity in demanding environments. Its multicore architecture supports up to eight cores, facilitating intensive computational tasks across industries such as AI and machine learning. Ideal for high-performance computing and big data applications, the SCR7 leverages its advanced interrupt systems and intelligent memory management for seamless operation. Comprehensive development resources, from simulators to SDKs, augment its integration across Linux-based systems, accelerating project development timelines.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The Y180 is a streamlined microprocessor design, incorporating approximately 8K gates and serving primarily as a CPU clone of the Zilog Z180. It caters to applications requiring efficient, compact processing power without extensive resource demands. Its design is particularly apt for systems that benefit from Z80 architecture compatibility, ensuring effortless integration and functionality within a variety of technological landscapes.
The Avispado core is a 64-bit in-order RISC-V processor that provides an excellent balance of performance and power efficiency. With a focus on energy-conscious designs, Avispado facilitates the development of machine learning applications and is prime for environments with limited silicon resources. It leverages Semidynamics' innovative Gazzillion Misses™ technology to address challenges with sparse tensor weights, enhancing energy efficiency and operational performance for AI tasks. Structured to support multiprocessor configurations, Avispado is integral in systems requiring cache coherence and high memory throughput. It is particularly suitable for setups aimed at recommendation systems due to its ability to manage numerous outstanding memory requests, thanks to its advanced memory interface architectures. Integration with Semidynamics' Vector Unit enriches its offering, allowing dense computations and providing optimal performance in handling vector tasks. The ability to engage with Linux-ready environments and support for RISC-V Vector Specification 1.0 ensures that Avispado integrates seamlessly into existing frameworks, fostering innovative applications in fields like data centers and beyond.
The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support the next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing the same class-leading performance with added flexibility for bespoke integrations.
The eSi-3264 is a cutting-edge 32/64-bit processor core that incorporates SIMD DSP extensions, making it suitable for applications requiring both efficient data parallelism and minimal silicon footprint. Designed for high-accuracy DSP tasks, this processor's multifunctional capabilities target audio processing, sensor hubs, and complex arithmetic operations. The eSi-3264 processor supports sizeable instruction and data caches, which significantly enhance system performance when accessing slower external memory sources. With dual and quad MAC operations that include 64-bit accumulation, it enhances DSP execution, applying 8, 16, and 32-bit SIMD instructions for real-time data handling and minimizing CPU load.
The Dynamic Neural Accelerator (DNA) II offers a groundbreaking approach to enhancing edge AI performance. This neural network architecture core stands out due to its runtime reconfigurable architecture that allows for efficient interconnections between compute components. DNA II supports both convolutional and transformer network applications, accommodating an extensive array of edge AI functions. By leveraging scalable performance, it makes itself a valuable asset in the development of systems-on-chip (SoC) solutions. DNA II is spearheaded by EdgeCortix's patented data path architecture, focusing on technical optimization to maximize available computing resources. This architecture uniquely allows DNA II to maintain low power consumption while flexibly adapting to various task demands across diverse AI models. Its higher utilization rates and faster processing set it apart from traditional IP core solutions, addressing industry demands for more efficient and effective AI processing. In concert with the MERA software stack, DNA II optimally sequences computation tasks and resource distribution, further refining efficiency and effectiveness in processing neural networks. This integration of hardware and software not only aids in reducing on-chip memory bandwidth usage but also enhances the parallel processing ability of the system, catering to the intricate needs of modern AI computing environments.
Topaz FPGAs are designed for high-volume production applications where cost efficiency, compact form factor, and energy efficiency are paramount. These FPGAs integrate a set of commonly used features and protocols, such as MIPI, Ethernet, and PCIe Gen3, making them ideal for use in machine vision, robotics, and consumer electronics. With logic densities ranging from 52,160 to 326,080 logic elements, Topaz FPGAs provide versatile support for complex applications while keeping power consumption low.

The advanced Quantum™ compute fabric in Topaz allows for effective packing of logic in XLR cells, which enhances the scope for innovation and design flexibility. These FPGAs excel in applications requiring substantial computational resources without a hefty power draw, ensuring broad adaptability across various use cases. Topaz's integration capabilities allow for straightforward system expansion, enabling seamless scaling of operations from R&D phases to full production.

The Topaz FPGA family is engineered to cater to extended product life cycles, which is crucial for industries like automotive and industrial automation where long-term system stability is essential. With multiple package options, including small QFP packages for reduced BoM costs, Topaz FPGAs provide an economically attractive option while ensuring support for high-speed data applications. Efinix's commitment to maintaining a stable product supply until at least 2045 assures partners of sustained innovation and reliability.
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, using advanced quantization schemes such as Q4_K and Q5_K to enhance AI inference efficiency, especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning standalone, without dependence on external networks or cloud services. Its design combines high computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring parameters such as model scale, inference speed, and power consumption to exacting user specifications. GenAI v1-Q also addresses a crucial industry need with its ability to run multiple transformer-based models and handle confidential data securely on-premises, opening doors to sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
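To make the memory-footprint claim concrete, here is a back-of-envelope sketch of how low-bit quantization shrinks model storage. The 7B parameter count and the ~4.5 effective bits per weight (block-wise K-quant formats store per-block scales on top of the raw 4-bit values) are illustrative assumptions, not RaiderChip specifications:

```python
# Rough weight-storage estimate for a quantized LLM.
# Bits-per-weight values are approximate: Q4_K-style formats add
# per-block scale/offset metadata to the raw 4-bit weights.

def model_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in bytes."""
    return n_params * bits_per_weight / 8

params = 7e9                     # hypothetical 7B-parameter model

fp16 = model_bytes(params, 16)   # 16-bit floating-point baseline
q4 = model_bytes(params, 4.5)    # ~4.5 effective bits with block scales

print(f"FP16: {fp16 / 1e9:.1f} GB")          # 14.0 GB
print(f"Q4-style: {q4 / 1e9:.1f} GB")        # 3.9 GB
print(f"Reduction: {1 - q4 / fp16:.0%}")     # 72%
```

The ~72% reduction from this crude estimate is in the same ballpark as the 75% figure quoted above; the exact saving depends on the quantization format's metadata overhead.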
NeuroMosAIc Studio is a comprehensive software platform designed to maximize AI processor utilization through intuitive model conversion, mapping, simulation, and profiling. The suite supports edge AI models by optimizing them for specific application needs, offering precision analysis, network compression, and quantization tools that streamline deployment across diverse hardware setups. The platform is notably adept at integrating multiple AI functions and facilitating edge training. With tools such as the NMP Compiler and Simulator, developers can optimize at every stage, from quantization through training, tuning models to their target hardware. NeuroMosAIc Studio is particularly valuable for its edge training support and comprehensive optimization capabilities, paving the way for efficient AI deployment across sectors, and it offers a robust toolkit for developers aiming to extract maximum performance from hardware in dynamic environments.
The Tyr Superchip is engineered to tackle the most daunting computational challenges in edge AI, autonomous driving, and decentralized AIoT applications. It merges AI and DSP functionalities into a single, unified processing unit capable of real-time data management and processing. This all-encompassing chip solution handles the vast amounts of sensor data required for full autonomous driving and supports rapid AI computing at the edge. One of the key challenges it addresses is providing massive compute power combined with low-latency output, achieving an energy efficiency and speed that traditional architectures cannot. Tyr chips are built to stringent functional-safety standards, being ISO 26262 and ASIL-D ready, making them well suited to the critical requirements of automotive systems. Designed with high programmability, the Tyr Superchip accommodates the fast-evolving needs of AI algorithms and supports modern software-defined vehicles. Its low power consumption, under 50W even for demanding tasks, paired with a small silicon footprint, ensures it meets eco-friendly demands while staying cost-effective. VSORA's Superchip promises unmatched efficiency in processing real-time data streams. By providing both power and processing agility, it effectively supports the future of mobility and AI-driven automation, reinforcing VSORA's position as a forward-thinking leader in semiconductor technology.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic, Inc. is engineered to operate efficiently with minimal power consumption, making it a standout solution for high-performance applications. The processor core can run at an impressive 5GHz yet consumes only 10mW at 1GHz, delivering exceptional performance while keeping power usage to a minimum. Ideal for scenarios where energy efficiency is crucial, it leverages advanced design techniques to reduce operating voltage while sustaining high-speed processing. This RISC-V core suits a wide array of applications, from IoT devices to complex computing systems, and maintains performance even at lower supply voltages, a critical feature in sectors that prioritize energy savings and sustainability. The core's architecture is fully configurable, catering to diverse design needs across different technological fields. In addition to its energy-efficient design, the core offers robust computational capabilities, making it a competitive choice for companies implementing high-speed, low-power processing in their product lines, and underscoring Micro Magic's commitment to top-tier semiconductor solutions.
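The relationship between voltage, frequency, and power quoted above can be illustrated with the first-order CMOS dynamic-power model, P = C·V²·f. The supply voltages below are hypothetical values chosen for illustration; only the 10mW-at-1GHz operating point comes from the description:

```python
# First-order CMOS dynamic power model: P = C_eff * V^2 * f.
# Supply voltages here are assumptions for illustration, not
# Micro Magic's actual design figures.

def dynamic_power(c_eff: float, vdd: float, freq: float) -> float:
    """Dynamic switching power in watts."""
    return c_eff * vdd**2 * freq

# Calibrate effective capacitance from the quoted 10 mW @ 1 GHz point,
# assuming a hypothetical 0.35 V supply at that frequency.
vdd_low, f_low, p_low = 0.35, 1e9, 10e-3
c_eff = p_low / (vdd_low**2 * f_low)

# Scaling to 5 GHz at an assumed higher supply (0.8 V):
p_high = dynamic_power(c_eff, 0.8, 5e9)
print(f"{p_high * 1e3:.0f} mW")  # power grows superlinearly with V and f
```

The point of the sketch is the quadratic voltage term: lowering V_dd is the dominant lever for power savings, which is why voltage-reduction design techniques matter so much in a core like this.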
The RISC-V Hardware-Assisted Verification by Bluespec is designed to expedite the verification process for RISC-V cores. This platform supports both ISA and system-level testing, adding robust features such as verifying standard and custom ISA extensions along with accelerators. Moreover, it offers scalable access through the AWS cloud, making verification available anytime and anywhere. This tool aligns with the needs of modern developers, ensuring thorough testing within a flexible and accessible framework.
SCR1 is an open-source, silicon-proven microcontroller core tailored for deeply embedded applications. This 32-bit RISC-V core supports the standard ISA with optional extensions for multiplication, division, and compressed instructions. The design comprises a simple in-order 4-stage pipeline and provides efficient interrupt handling through an integrated programmable interrupt controller (IPIC). It connects seamlessly with various interfaces, including AXI4, AHB-Lite, and JTAG, enhancing its adaptability across different systems. The SCR1 core features a Tightly-Coupled Memory (TCM) subsystem supporting up to 64KB, up to 16 interrupt lines, and a range of performance monitoring tools, making it ideal for IoT, control systems, and smart card applications. Pre-configured software development tools, including Eclipse IDE and Visual Studio Code plugins, complement the core, enabling developers to quickly deploy applications tailored to SCR1's architecture. Additionally, SCR1 ships with a rich suite of documentation and a pre-configured FPGA-based SDK, ensuring a smooth transition from development to implementation. Its open-source license ensures flexibility for commercial and educational use, making it a versatile choice for a wide range of projects.
The Hanguang 800 AI Accelerator by T-Head is an advanced semiconductor technology designed to accelerate AI computations and machine learning tasks. This accelerator is specifically optimized for high-performance inference, offering substantial improvements in processing times for deep learning applications. Its architecture is developed to leverage parallel computing capabilities, making it highly suitable for tasks that require fast and efficient data handling. This AI accelerator supports a broad spectrum of machine learning frameworks, ensuring compatibility with various AI algorithms. It is equipped with specialized processing units and a high-throughput memory interface, allowing it to handle large datasets with minimal latency. The Hanguang 800 is particularly effective in environments where rapid inferencing and real-time data processing are essential, such as in smart cities and autonomous driving. With its robust design and multi-faceted processing abilities, the Hanguang 800 Accelerator empowers industries to enhance their AI and machine learning deployments. Its capability to deliver swift computation and inference results ensures it is a valuable asset for companies looking to stay at the forefront of technological advancement in AI applications.
aiWare stands out as a premier hardware IP for high-performance neural processing, tailored for complex automotive AI applications. By offering exceptional efficiency and scalability, aiWare empowers automotive systems to harness the full power of neural networks across a wide variety of functions, from Advanced Driver Assistance Systems (ADAS) to fully autonomous driving platforms. It boasts an innovative architecture optimized for both performance and energy efficiency, making it capable of handling the rigorous demands of next-generation AI workloads. The aiWare hardware features an NPU designed to achieve up to 256 Effective Tera Operations Per Second (TOPS), delivering high performance at significantly lower power. This is made possible through a thoughtfully engineered dataflow and memory architecture that minimizes the need for external memory bandwidth, thus enhancing processing speed and reducing energy consumption. The design ensures that aiWare can operate efficiently across a broad range of conditions, maintaining its edge in both small and large-scale applications. A key advantage of aiWare is its compatibility with aiMotive's aiDrive software, facilitating seamless integration and optimizing neural network configurations for automotive production environments. aiWare's development emphasizes strong support for AI algorithms, ensuring robust performance in diverse applications, from edge processing in sensor nodes to high central computational capacity. This makes aiWare a critical component in deploying advanced, scalable automotive AI solutions, designed specifically to meet the safety and performance standards required in modern vehicles.
The SCR6 is a high-performance microcontroller core optimized for demanding embedded applications requiring substantial computational power. Its out-of-order 12-stage pipeline, complemented by a superscalar architecture, enhances processing speeds, making it ideal for real-time systems. Supporting a wide range of RISC-V ISA extensions, including cryptography and bit manipulation, SCR6 caters to secure and efficient data operations. The SCR6's memory subsystem is robust, featuring dual-level caches augmented with an L3 network-on-chip option. This rich memory architecture, along with efficient interrupt processing via APLIC units, ensures smooth high-speed data throughput in intensive applications. The core supports heterogeneous multicore configurations, enhancing parallel task execution. Designed for industrial and IoT environments, SCR6 comes with extensive development support. Its toolkit includes simulations, FPGA-based SDKs, and integration resources, facilitated through industry-standard interfaces, ensuring rapid development cycles and application deployment.
Designed for low-power applications, the eSi-1650 16-bit processor IP core includes an instruction cache, improving performance in systems that use OTP or Flash for program memory. The core offers a low gate count, comparable to many 8-bit cores, while the cache allows it to run at higher speeds than the program memory alone would permit. Its instruction set is robust, featuring a wide range of arithmetic and optional application-specific instructions; these complete work in fewer cycles, enabling lower clock speeds and hence lower power consumption.
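The benefit of putting an instruction cache in front of slow OTP/Flash can be sketched with the standard average-memory-access-time (AMAT) formula. The cycle counts and miss rate below are hypothetical, not eSi-1650 specifications:

```python
# Why an instruction cache helps with slow OTP/Flash program memory:
# AMAT = hit_time + miss_rate * miss_penalty.
# All cycle counts here are illustrative assumptions.

def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time in cycles."""
    return hit_time + miss_rate * miss_penalty

flash_access = 5.0             # assumed cycles per fetch straight from Flash
cached = amat(1.0, 0.05, 5.0)  # 1-cycle hit, assumed 5% miss rate

print(cached)                  # 1.25 cycles per fetch on average
print(flash_access / cached)   # 4.0x effective fetch speedup
```

Equivalently, the core can hit a target fetch rate at a lower clock than an uncached design would need, which is where the power saving comes from.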
The SAKURA-II AI accelerator is designed specifically to address the energy-efficiency and processing demands of edge AI applications, delivering top-tier performance in a compact, low-power silicon architecture. Its key advantage is the ability to handle vision and generative AI applications with high efficiency, thanks to the integrated Dynamic Neural Accelerator (DNA) core, whose run-time reconfigurability supports multiple neural network models simultaneously, adapting in real time without compromising speed or accuracy. Addressing the demands of modern AI applications, SAKURA-II manages models with billions of parameters, such as Llama 2 and Stable Diffusion, within a power envelope of just 8W, and supports large memory bandwidth and DRAM capacity for smooth handling of complex workloads. Its multiple form factors, including modules and cards, allow versatile system integration and rapid development, significantly shortening time-to-market for AI solutions. EdgeCortix has engineered SAKURA-II to offer up to 4x the DRAM bandwidth of comparable accelerators, crucial for low-latency operation and for executing large-scale AI workloads such as language and vision models, and its architecture promises higher AI compute utilization than traditional solutions, delivering significant energy-efficiency advantages.
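Why DRAM bandwidth matters so much for large language models can be shown with a simple bandwidth-bound decode estimate: each generated token touches roughly every weight once, so token rate is capped by bandwidth divided by model size. The bandwidth figures and the 8-bit model size below are illustrative assumptions, not SAKURA-II specifications:

```python
# Bandwidth-bound LLM decode estimate:
# tokens/s <= DRAM bandwidth / model size in bytes.
# All numbers are hypothetical, for illustration only.

def tokens_per_second(bandwidth_gbs: float, model_gb: float) -> float:
    """Upper bound on decode rate when memory-bound."""
    return bandwidth_gbs / model_gb

model_gb = 7.0  # e.g. a 7B-parameter model at 8-bit weights

high_bw = tokens_per_second(68.0, model_gb)  # assumed 68 GB/s part
low_bw = tokens_per_second(17.0, model_gb)   # a 4x-lower-bandwidth part

print(f"{high_bw:.1f} vs {low_bw:.1f} tokens/s")
```

Under this model, a 4x bandwidth advantage translates directly into a 4x ceiling on decode throughput, regardless of raw compute.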
Emphasizing energy efficiency and processing power, the KL530 AI SoC is equipped with a newly developed NPU architecture, making it one of the first chips to adopt INT4 precision commercially. It offers greater computing capacity with lower energy consumption than its predecessors, making it ideal for IoT and AIoT scenarios. Built around an ARM Cortex-M4 CPU, the chip delivers comprehensive image processing performance and efficient multimedia codecs. Its ISP leverages AI-based enhancements for superior image quality at low operating power, keeping it competitive in fields such as robotics and smart appliances.
The KL720 AI SoC stands out for its excellent performance-to-power ratio, designed for real-world applications where such efficiency is critical. Delivering nearly 0.9 TOPS per watt, the chip marks a significant advance in Kneron's edge AI capabilities. The KL720 is well suited to high-performance devices such as cutting-edge IP cameras, smart TVs, and AI-driven consumer electronics. Its architecture, based on an ARM Cortex-M4 CPU, facilitates high-quality image and video processing, from 4K imaging to natural language processing, advancing the capabilities of devices that need rigorous computational work without excessive power draw.
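TOPS per watt is simply sustained throughput divided by power draw; a quick sketch shows how the quoted figure relates to concrete operating points. The 1.8 TOPS / 2 W pairing below is a hypothetical example consistent with the ~0.9 TOPS/W ratio, not a published KL720 operating point:

```python
# TOPS/W as a figure of merit for edge AI silicon.
# The operating point below is an assumed example.

def tops_per_watt(tops: float, watts: float) -> float:
    """Throughput efficiency: tera-operations per second per watt."""
    return tops / watts

print(tops_per_watt(1.8, 2.0))  # 0.9, matching the quoted ratio
```

The same ratio lets system designers budget backwards: a device with a 1 W thermal allowance for the accelerator could sustain roughly 0.9 TOPS at this efficiency.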
The SCR4 core is a high-performance, area-efficient RISC-V processor with floating-point computation capabilities. Targeting mobile and industrial applications, it supports both single and double precision, adhering to IEEE 754-2008 standards. Its instruction set is complete with advanced extensions, including atomic and cryptography functions for secure and efficient operations. With a powerful 5-stage in-order pipeline and a dedicated FPU, the SCR4 can handle complex mathematical tasks swiftly. Its memory architecture features both L1 and L2 caches, alongside a TCM unit, enabling rapid data access and management essential in real-time environments. Incorporating a robust branch prediction unit and support for multicore setups, the SCR4 excels in environments demanding synchronized computing tasks across multiple processors. It’s supported by comprehensive development kits and detailed documentation to expedite the design and implementation processes across diverse platforms.
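The practical difference between the single and double precision the SCR4's FPU supports comes down to significand width: IEEE 754-2008 binary32 carries 24 significand bits (about 7 decimal digits), binary64 carries 53 (about 16). A short sketch using Python's `struct` module (Python floats are doubles; packing through a 32-bit format shows the single-precision rounding):

```python
# IEEE 754-2008 single vs double precision, demonstrated by
# round-tripping a double through a 32-bit representation.
import struct

def to_single(x: float) -> float:
    """Round a value through 32-bit single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 0.1
print(f"{x:.17f}")             # double keeps ~16 significant digits
print(f"{to_single(x):.17f}")  # single rounds after ~7 digits
print(to_single(x) == x)       # False: precision was lost
```

Values exactly representable in 24 bits (such as 0.5) survive the round trip unchanged, which is why workloads that fit single precision can use the narrower, faster format safely.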
The SiFive Essential family offers a highly customizable set of processor IPs suitable for a range of applications, from embedded microcontrollers to full-fledged Linux-capable designs. This family presents the flexibility to tailor power, area, and performance metrics to specific market needs, ensuring that designers can optimize their solutions for diverse applications. The Essential lineup is structured for easy adaptability, featuring scalable microarchitectures that cater to every stage of product development. From lightweight, power-efficient processors optimized for IoT devices to more robust configurations designed for real-time control and processing, SiFive Essential processors cover a broad spectrum of use cases. Key features include advanced trace and debug capabilities and an open, scalable platform enhancing the overall security of SoC designs. With its comprehensive customization options, the Essential family suits designers who need to strike a balance between performance and power efficiency. This versatility positions the SiFive Essential series as a cornerstone of quality RISC-V solutions, allowing for innovation without compromising customizability or scalability.
The Spiking Neural Processor T1 is an innovative ultra-low power microcontroller designed for always-on sensing applications, bringing intelligence directly to the sensor edge. This processor utilizes the processing power of spiking neural networks, combined with a nimble RISC-V processor core, to form a singular chip solution. Its design supports next-generation AI and signal processing capabilities, all while operating within a very narrow power envelope, crucial for battery-powered and latency-sensitive devices. This microcontroller's architecture supports advanced on-chip signal processing capabilities that include both Spiking Neural Networks (SNNs) and Deep Neural Networks (DNNs). These processing capabilities enable rapid pattern recognition and data processing similar to how the human brain functions. Notably, it operates efficiently under sub-milliwatt power consumption and offers fast response times, making it an ideal choice for devices such as wearables and other portable electronics that require continuous operation without significant energy draw. The T1 is also equipped with diverse interface options, such as QSPI, I2C, UART, JTAG, GPIO, and a front-end ADC, contained within a compact 2.16mm x 3mm, 35-pin WLCSP package. The device boosts applications by enabling them to execute with incredible efficiency and minimal power, allowing for direct connection and interaction with multiple sensor types, including audio and image sensors, radar, and inertial units for comprehensive data analysis and interaction.
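The core idea behind the spiking neural networks the T1 runs can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron: membrane potential integrates incoming current with leakage, and the neuron emits a discrete spike when a threshold is crossed. This is a generic textbook sketch of the SNN principle, not Innatera's proprietary neuron model or parameters:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of
# a spiking neural network. Leak factor and threshold are arbitrary
# illustrative values.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate input current; emit a spike (1) on threshold crossing."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i      # leaky integration of input current
        if v >= threshold:
            spikes.append(1)  # fire...
            v = 0.0           # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A sustained input burst drives the neuron over threshold:
print(lif_run([0.4, 0.4, 0.4, 0.0, 0.0]))  # [0, 0, 1, 0, 0]
```

Because computation happens only when spikes occur, activity (and therefore energy) scales with how eventful the sensor data is, which is the basis for the sub-milliwatt always-on operation described above.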
The C100 from Chipchain is a highly integrated, low-power single-chip solution tailored for IoT applications. Featuring an advanced 32-bit RISC-V CPU running at up to 1.5GHz, it includes embedded RAM and ROM for efficient processing and computational tasks. The chip's core strength lies in its multifunctional nature, integrating Wi-Fi, various transmission interfaces, an ADC, an LDO, and temperature sensors, enabling streamlined and rapid application development. Designed for the burgeoning IoT market, the C100 enables simple, fast, and adaptive deployment across a wide array of sectors including security, healthcare, smart home devices, and digital entertainment. Its integrated feature set meets the rigorous demands of modern IoT applications, characterized by high integration and reliability, while its focus on energy efficiency, compact size, and secure operation makes it a complete IoT solution for building robust ecosystems. This comprehensive integration gives IoT developers a significant advantage, allowing them to build solutions that are not only high-performing but also sustainable and safe for users.
Wormhole is a versatile communication system designed to enhance data flow within complex computational architectures. By employing state-of-the-art connectivity solutions, it enables efficient data exchange, critical for high-speed processing and low-latency communication. This technology is essential for maintaining optimal performance in environments demanding seamless data integration. Wormhole's ability to manage significant data loads with minimal latency makes it particularly suitable for applications requiring real-time data processing and transfer. Its integration into existing systems can enhance overall efficiency, fostering a more responsive computational environment. This makes it an invaluable asset for sectors undergoing digital transformation. The adaptability of Wormhole to various technological requirements ensures it remains relevant across diverse industry applications. This flexibility means that it can scale with ongoing technological advancements, cementing its role as a cornerstone in the evolving landscape of high-speed data communications.
The Codasip RISC-V BK Core Series is engineered to deliver flexibility and adaptability for a variety of embedded applications. These cores are designed to be low-power, offering an excellent balance of performance and energy efficiency. The series provides a spectrum of configurations, allowing developers to customize them to align with unique project requirements, ensuring each processor operates at peak efficiency for its specific use case. The cores are RISC-V compliant and adhere to stringent industry standards for quality, making them a reliable choice for sensitive applications.
The M8051EW expands upon the M8051W's performance by incorporating on-chip debugging capabilities. This microcontroller core offers not only rapid execution but also a JTAG debug port for compatibility with external debugging tools. The core provides hardware breakpoints and instruction trace, with full read and write access to all register and memory locations. Such capabilities, together with its fast execution cycle, make it an ideal choice for designs requiring advanced debugging and real-time control.