The CPU, or Central Processing Unit, is the central component of computer systems, acting as the brain that executes instructions and processes data. Our category of CPU semiconductor IPs offers a diverse selection of intellectual properties that enable the development of highly efficient and powerful processors for a wide array of applications, from consumer electronics to industrial systems. Semiconductor IPs in this category are designed to meet the needs of modern computing, offering adaptable and scalable solutions for different technology nodes and design requirements.
These CPU semiconductor IPs provide the core functionalities required for the development of processors capable of handling complex computations and multitasking operations. Whether you're developing systems for mobile devices, personal computers, or embedded systems, our IPs offer optimized solutions that cater to the varying demands of power consumption, processing speed, and operational efficiency. This ensures that you can deliver cutting-edge products that meet the market's evolving demands.
Within the CPU semiconductor IP category, you'll find a range of products including RISC (Reduced Instruction Set Computer) processors, multi-core processors, and customizable processor cores among others. Each product is designed to integrate seamlessly with other system components, offering enhanced compatibility and flexibility in system design. These IP solutions are developed with the latest architectural advancements and technological improvements to support next-generation computing needs.
Selecting the right CPU semiconductor IP is crucial for achieving target performance and efficiency in your applications. Our offerings are meticulously curated to provide comprehensive solutions that are robust, reliable, and capable of supporting diverse computing applications. Explore our CPU semiconductor IP portfolio to find the perfect components that will empower your innovative designs and propel your products into the forefront of technology.
BrainChip's Akida Neural Processor IP is a groundbreaking development in neuromorphic processing, designed to mimic the human brain in interpreting sensory inputs. By implementing an event-based architecture, it processes only the critical data at the point of acquisition, achieving unparalleled performance with significantly reduced power consumption. This architecture enables on-chip learning, reducing dependency on cloud processing, thus enhancing privacy and security.

The Akida Neural Processor IP supports incremental learning and high-speed inference across a vast range of applications, making it highly versatile. It is structured to handle data sparsity effectively, which cuts down on operations substantially, leading to considerable improvements in efficiency and responsiveness. The processor's scalability and compact design allow for wide deployment, from minimal-node setups for ultra-low power operations to more extensive configurations for handling complex tasks.

Importantly, the Akida processor uses a fully customizable AI neural processor that leverages event-based processing and an on-chip mesh network for seamless communication. The technology also features support for hybrid quantized weights and provides robust tools for integration, including fully synthesizable RTL IP packages, hardware-based event processing, and on-chip learning capabilities.
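The claim that sparsity cuts down on operations can be illustrated with a toy calculation (a generic sketch, not Akida's actual implementation): in an event-based layer, only non-zero inputs trigger multiply-accumulates, so the work scales with the number of events rather than the input size.

```python
import random

def dense_macs(x, out_features):
    # A conventional dense layer performs a multiply-accumulate per input element.
    return len(x) * out_features

def event_macs(x, out_features):
    # An event-based layer performs MACs only for non-zero (event) inputs.
    return sum(1 for v in x if v != 0.0) * out_features

random.seed(0)
# ~80% of inputs are zero, mimicking the sparsity of event-driven sensor data.
x = [random.gauss(0, 1) if random.random() > 0.8 else 0.0 for _ in range(1024)]

print(dense_macs(x, 256))   # 262144 MACs regardless of input content
print(event_macs(x, 256))   # roughly a fifth as many, tracking the event count
```

At 80% sparsity the event-based count is about one fifth of the dense count, which is the effect the architecture exploits in hardware.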
The Akida 2nd Generation is an evolution of BrainChip's innovative neural processor technology. It builds upon its predecessor's strengths by delivering even greater efficiency and a broader range of applications. The processor maintains an event-based architecture that optimizes performance and power consumption, providing rapid response times suitable for edge AI applications that prioritize speed and privacy.

This next-generation processor enhances accuracy with support for 8-bit quantization, which allows for finer-grained processing capabilities and more robust AI model implementations. Furthermore, it offers extensive scalability, supporting configurations from a few nodes for low-power needs to many nodes for handling more complex cognitive tasks. As with the previous version, its architecture is inherently cloud-independent, enabling inference and learning directly on the device.

Akida 2nd Generation continues to push the boundaries of AI processing at the edge by offering enhanced processing capabilities, making it ideal for applications demanding high accuracy and efficiency, such as automotive safety systems, consumer electronics, and industrial monitoring.
The KL730 is a sophisticated AI System on Chip (SoC) that embodies Kneron's third-generation reconfigurable NPU architecture. This SoC delivers a substantial 8 TOPS of computing power, designed to efficiently handle CNN network architectures and transformer applications. Its innovative NPU architecture significantly optimizes DDR bandwidth, providing powerful video processing capabilities, including supporting 4K resolution at 60 FPS. Furthermore, the KL730 demonstrates formidable performance in noise reduction and low-light imaging, positioning it as a versatile solution for intelligent security, video conferencing, and autonomous applications.
The Yitian 710 Processor is an advanced Arm-based server chip developed by T-Head, designed to meet the extensive demands of modern data centers and enterprise applications. This processor boasts 128 high-performance Armv9 CPU cores, each coupled with robust caches, ensuring superior processing speeds and efficiency. With a 2.5D packaging technology, the Yitian 710 integrates multiple dies into a single unit, facilitating enhanced computational capability and energy efficiency. One of the key features of the Yitian 710 is its memory subsystem, which supports up to 8 channels of DDR5 memory, achieving a peak bandwidth of 281 GB/s. This configuration guarantees rapid data access and processing, crucial for high-throughput computing environments. Additionally, the processor is equipped with 96 PCIe 5.0 lanes, offering a dual-direction bandwidth of 768 GB/s, enabling seamless connectivity with peripheral devices and boosting system performance overall. The Yitian 710 Processor is meticulously crafted for applications in cloud services, big data analytics, and AI inference, providing organizations with a robust platform for their computing needs. By combining high core count, extensive memory support, and advanced I/O capabilities, the Yitian 710 stands as a cornerstone for deploying powerful, scalable, and energy-efficient data processing solutions.
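The quoted bandwidth figures can be sanity-checked with a back-of-the-envelope calculation. The DDR5 transfer rate below is an assumption inferred from the 281 GB/s figure, and the PCIe total uses the raw line rate before encoding overhead:

```python
# Peak DDR5 bandwidth: channels x transfer rate (MT/s) x bus width (bytes).
ddr_channels = 8
ddr_mt_per_s = 4400        # assuming DDR5-4400; consistent with the quoted 281 GB/s
ddr_bus_bytes = 8          # 64-bit data bus per channel
ddr_gb_per_s = ddr_channels * ddr_mt_per_s * ddr_bus_bytes / 1000
print(ddr_gb_per_s)        # 281.6 GB/s peak

# PCIe 5.0 signals at 32 GT/s per lane; the 768 GB/s figure corresponds to the
# raw rate (32/8 = 4 GB/s per lane per direction), before 128b/130b encoding.
pcie_lanes = 96
pcie_lane_gb_per_s = 32 / 8
pcie_bidir_gb_per_s = 2 * pcie_lanes * pcie_lane_gb_per_s
print(pcie_bidir_gb_per_s) # 768.0 GB/s aggregate across both directions
```

Both results line up with the figures quoted above, which suggests the spec sheet counts both PCIe directions and ignores encoding overhead.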
The Metis AIPU M.2 Accelerator Module is designed for edge AI applications that demand high-performance inference capabilities. This module integrates a single Metis AI Processing Unit (AIPU), providing an excellent solution for AI acceleration within constrained devices. Its capability to handle high-speed data processing with limited power consumption makes it an optimal choice for applications requiring efficiency and precision. With 1GB of dedicated DRAM memory, it seamlessly supports a wide array of AI pipelines, ensuring rapid integration and deployment. The design of the Metis AIPU M.2 module is centered around maximizing performance without excessive energy consumption, making it suitable for diverse applications such as real-time video analytics and multi-camera processing. Its compact form factor eases incorporation into various devices, delivering robust performance for AI tasks without the heat or power trade-offs typically associated with such systems. Engineered to meet current AI demands efficiently, the M.2 module is supported by the Voyager SDK, which simplifies the integration process. This comprehensive software suite empowers developers to build and optimize AI models directly on the Metis platform, facilitating a significant reduction in time-to-market for innovative solutions.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
Designed for extremely low-power environments, the Tianqiao-70 RISC-V CPU core emphasizes energy efficiency while maintaining sufficient computational strength for commercial applications. It serves scenarios where low power consumption is critical, such as mobile devices, desktop applications, AI, and autonomous systems. This model caters to the requirements of energy-conscious markets, facilitating operations that demand efficiency and performance within minimal power budgets.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the execution of large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
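Why tokens per unit of memory bandwidth matters can be sketched with a simple estimate: autoregressive decoding reads every weight once per generated token, so throughput on a memory-bound design is roughly bandwidth divided by model size. All numbers below are illustrative assumptions, not RaiderChip's published figures:

```python
# Memory-bound estimate of LLM decode throughput; every number is an assumption.
params = 3e9                     # e.g. a Llama 3.2 3B-class model
bits_per_weight = 4              # 4-bit quantization, as described above
weight_bytes = params * bits_per_weight / 8   # ~1.5 GB of weights read per token

lpddr4_gb_per_s = 12.8           # assumed dual-channel LPDDR4-3200 bandwidth
tokens_per_s = lpddr4_gb_per_s * 1e9 / weight_bytes
print(round(tokens_per_s, 1))    # ~8.5 tokens/s upper bound under these assumptions
```

The same formula shows why 4-bit quantization matters so much: halving the bits per weight doubles the token ceiling for the same memory bandwidth, which is the lever a bandwidth-efficient accelerator exploits.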
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The Chimera GPNPU from Quadric is designed as a general-purpose neural processing unit intended to meet a broad range of demands in machine learning inference applications. It is engineered to perform both matrix and vector operations along with scalar code within a single execution pipeline, which offers significant flexibility and efficiency across various computational tasks. This product achieves up to 864 Tera Operations per Second (TOPs), making it suitable for intensive applications including automotive safety systems. Notably, the GPNPU simplifies system-on-chip (SoC) hardware integration by consolidating hardware functions into one processor core. This unification reduces complexity in system design tasks, enhances memory usage profiling, and optimizes power consumption when compared to systems involving multiple heterogeneous cores such as NPUs and DSPs. Additionally, its single-core setup enables developers to efficiently compile and execute diverse workloads, improving performance tuning and reducing development time. The architecture of the Chimera GPNPU supports state-of-the-art models with its Forward Programming Interface that facilitates easy adaptation to changes, allowing support for new network models and neural network operators. It’s an ideal solution for products requiring a mix of traditional digital signal processing and AI inference like radar and lidar signal processing, showcasing a rare blend of programming simplicity and long-term flexibility. This capability future-proofs devices, expanding their lifespan significantly in a rapidly evolving tech landscape.
Designed for entry-level server-class applications, the SCR9 is a 64-bit RISC-V processor core that comes equipped with cutting-edge features, such as an out-of-order superscalar pipeline, making it apt for processing-intensive environments. It supports both single and double-precision floating-point operations adhering to IEEE standards, which ensure precise computation results. This processor core is tailored for high-performance computing needs, with a focus on AI and ML, as well as conventional data processing tasks. It integrates an advanced interrupt system featuring APLIC configurations, enabling responsive operations even under heavy workloads. SCR9 supports up to 16 cores in a multi-cluster arrangement, each utilizing coherent multi-level caches to maintain rapid data processing and management. The comprehensive development package for SCR9 includes ready-to-deploy toolchains and simulators that expedite software development, particularly within Linux environments. The core is well-suited for deployment in entry-level server markets and data-intensive applications, with robust support for virtualization and heterogeneous architectures.
The KL520 was Kneron's first foray into AI SoCs, characterized by its small size and energy efficiency. This chip integrates a dual ARM Cortex M4 CPU architecture, which can function both as a host processor and as a supportive AI co-processor for diverse edge devices. Ideal for smart devices such as door locks and cameras, it is compatible with various 3D sensor technologies, offering a balance of compact design and high performance. As a result, this SoC has been adopted by multiple products in the smart home and security sectors.
The RV12 RISC-V Processor from Roa Logic is a highly versatile CPU designed for embedded applications. It complies with the RV32I and RV64I specifications of the RISC-V instruction set, supporting single-core configurations. The RV12 processor is renowned for its configurability, allowing it to be tailored to specific application requirements. It implements a Harvard architecture, which enables concurrent access to both instruction and data memory, optimizing performance and efficiency. Roa Logic's RV12 processor is part of their broader portfolio of 32/64-bit CPU solutions that leverage the open-source RISC-V instruction set. This architecture is favored for its simplicity and scalability, making it ideal for various embedded systems. The processor is equipped with an optimizing feature set that enhances its processing capabilities, ensuring it meets the rigorous demands of modern applications. Incorporating the RV12 processor into projects is streamlined thanks to its comprehensive support documentation and available test benches. These resources facilitate smooth integration into larger systems, providing developers with a reliable foundation for building advanced embedded systems. Its design is a testament to Roa Logic's commitment to delivering high-performance, adaptable IP solutions to the semiconductor industry.
The KL630 chip stands out with its pioneering NPU architecture, making it the industry's first to support Int4 precision alongside transformer networks. This unique capability enables it to achieve exceptional computational efficiency and low energy consumption, suitable for a wide variety of applications. The chip incorporates an ARM Cortex A5 CPU, providing robust support for all major AI frameworks and delivering superior ISP capabilities for handling low light conditions and HDR applications, making it ideal for security, automotive, and smart city uses.
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances, with applications spanning driver authentication, predictive maintenance, and health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces, making it a versatile component for various industrial applications. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption; it is especially valuable for mobile and battery-operated devices, where every watt conserved extends the operational longevity of the product. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers looking to incorporate AI capabilities into their designs without compromising on economy or efficiency.
The eSi-3250 is a high-performance 32-bit RISC IP processor, optimized for ASIC or FPGA implementations where slower internal or external memories demand effective caching. Its configurable instruction and data caches make the core well suited to systems where the CPU-to-bus clock ratio exceeds one. The eSi-3250 integrates separate instruction and data caches, each configurable in size and associativity to balance performance against power consumption. An optional memory management unit provides memory protection and support for virtual memory, accommodating sophisticated system requirements. The processor's expansive instruction set, with a multitude of optional additional instruction types and addressing modes, equips it for intensive computational tasks. Built-in debug features support efficient system analysis and troubleshooting, solidifying the eSi-3250's position as a favored choice for high-throughput, low-power applications across a spectrum of technology processes.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnection and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The eSi-1600 is a compact, low-power, and cost-effective processor core specifically engineered for integration into both ASIC and FPGA designs. It delivers performance comparable to costlier 32-bit processors while maintaining an affordability akin to 8-bit options, making it suited for control tasks within mature mixed-signal environments requiring less than 64kB memory. Despite its 16-bit design, it achieves notable power savings by executing applications in fewer clock cycles, reducing the need for high-frequency operations and enabling faster power-down states. Boasting a versatile instruction set, the eSi-1600 encompasses both general-purpose and optional custom functions, enhancing flexibility for specialized computations. Innovative architectural features like a 5-stage pipeline facilitate high clock speeds even in older technologies. This processor supports intricate arithmetic operations including multiply-accumulate and division, along with diverse bit manipulation instructions beneficial for efficient data handling and algorithm execution. Moreover, its ability to intermix 16- and 32-bit instructions increases code density, optimizing both performance and power efficiency. The eSi-1600 supports various operating modes and privileges via an optional memory protection unit, providing secure execution for multiple applications. Comprehensive debugging support assists in effective program diagnosis and optimization. This processor core is thoroughly validated across technological processes and delivered as a Verilog RTL IP core, illustrating its adaptability, reliability, and readiness for broad deployment.
The Dynamic Neural Accelerator II (DNA-II) is a highly efficient and versatile IP specifically engineered for optimizing AI workloads at the edge. Its unique architecture allows runtime reconfiguration of interconnects among computing units, which facilitates improved parallel processing and efficiency. DNA-II supports a broad array of networks, including convolutional and transformer networks, making it an ideal choice for numerous edge applications. Its design emphasizes low power consumption while maintaining high computational performance. By utilizing a dynamic data path architecture, DNA-II sets a new benchmark for IP cores aimed at enhancing AI processing capabilities.
xcore.ai is a powerful platform tailored for the intelligent IoT market, offering unmatched flexibility and performance. It boasts a unique multi-threaded micro-architecture that provides low-latency and deterministic performance, perfect for smart applications. Each xcore.ai contains 16 logical cores distributed across two multi-threaded processor tiles, each equipped with 512kB of SRAM and capable of both integer and floating-point operations. The integrated interprocessor communication allows high-speed data exchange, ensuring ultimate scalability across multiple xcore.ai SoCs within a unified development environment.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
SCR1 is an open-source and silicon-proven microcontroller core, tailored for deeply embedded applications. This 32-bit RISC-V core supports the standard ISA with optional extensions for multiplication, division, and compressed instructions. The design comprises a simple in-order 4-stage pipeline, providing efficient interrupt handling with an IPIC unit. It connects seamlessly with various interfaces, including AXI4, AHB-Lite, and JTAG, enhancing its adaptability across different systems. The SCR1 core boasts a Tightly-Coupled Memory (TCM) subsystem supporting up to 64KB. It features up to 16 interrupt lines and a range of performance monitoring tools, making it ideal for IoT, control systems, and smart card applications. Pre-configured software development tools, including IDEs like Eclipse and Visual Studio Code plugins, complement the core, enabling developers to quickly deploy applications tailored to SCR1’s architecture. Additionally, SCR1 comes packaged with a rich suite of documentation and a pre-configured FPGA-based SDK, ensuring a smooth transition from development to implementation. Its open-source license ensures flexibility for commercial and educational use, making it a versatile choice for a wide range of projects.
The Y180 is a compact CPU-only design, serving as a clone of the Zilog Z180 CPU, and involves approximately 8K gates. It caters to applications that require compatibility with the Zilog architecture and prefer a minimalistic yet effective microprocessor implementation.
The eSi-3200 represents the mid-tier solution in the eSi-RISC family, bringing a high degree of versatility and performance to embedded control systems. This 32-bit processor is designed for scenarios demanding greater computational capability or extended address spaces without compromising power efficiency, making it well suited to applications with on-chip memory. Engineered without a cache, the eSi-3200 delivers the deterministic performance essential for real-time applications. It leverages a modified-Harvard architecture allowing concurrent instruction and data fetches, maximizing throughput. With a 5-stage pipeline, the processor achieves high clock frequencies suitable for time-critical operations, enhancing responsiveness and efficiency. The comprehensive instruction set encompasses core arithmetic functions, including advanced IEEE-754 single-precision floating-point operations, which cater to data-intensive and mathematically demanding applications. Designed for flexibility, it can accommodate optional custom instructions tailored to specific processing needs, offering a well-balanced solution for versatile embedded applications. Delivered as a Verilog RTL IP core, it ensures platform compatibility, simplifying integration into diverse silicon nodes.
The KL530 is built with an advanced heterogeneous AI chip architecture, designed to enhance computing efficiency while reducing power usage. Notably, it is recognized as the first in the market to support INT4 precision and transformers for commercial applications. The chip, featuring a low-power ARM Cortex M4 CPU, delivers impressive performance with 1 TOPS@INT4 computing power, providing up to 70% higher processing efficiency compared to INT8 architectures. Its integrated smart ISP optimizes image quality, supporting AI models like CNNs and RNNs, suitable for IoT and AIoT ecosystems.
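To make the INT4 claim concrete, here is a minimal sketch of a generic symmetric quantization scheme (illustrative only, not necessarily Kneron's): weights are mapped onto 16 signed levels, quartering storage and bandwidth relative to FP32 at the cost of a bounded rounding error.

```python
def quantize_int4(weights):
    """Symmetric per-tensor quantization to signed 4-bit values in [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate weights from the 4-bit codes.
    return [v * scale for v in q]

w = [0.42, -1.3, 0.07, 0.9, -0.55, 1.3]   # hypothetical weight values
q, s = quantize_int4(w)
print(q)   # every entry fits in 4 signed bits
print(max(abs(a - b) for a, b in zip(w, dequantize(q, s))))  # error <= scale/2
```

The reconstruction error never exceeds half a quantization step, which is why low-bit precision can preserve accuracy for suitably trained models while cutting memory traffic fourfold versus FP32.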
The RISC-V CPU IP N Class is designed to cater to the needs of 32-bit microcontroller units (MCUs) and AIoT (Artificial Intelligence of Things) applications. It is engineered to provide a balance of performance and power efficiency, making it suitable for a range of general computing needs. With its adaptable architecture, the N Class processor allows for customization, enabling developers to configure the core to meet specific application requirements while minimizing unnecessary overhead. Incorporating the RISC-V open standard, the N Class delivers robust functional features, supporting both security and functional safety needs. This processor core is ideal for applications that require reliable performance combined with low energy consumption. Developers benefit from an extensive set of resources and tools available in the RISC-V ecosystem to facilitate the integration and deployment of this processor across diverse use cases. The RISC-V CPU IP N Class demonstrates excellent scalability, allowing for configuration that aligns with the specific demands of IoT devices and embedded systems. Whether for implementing sophisticated sensor data processing or managing communication protocols within a smart device, the N Class provides the foundation necessary for developing innovative and efficient solutions.
Wormhole is a high-efficiency processor designed to handle intensive AI processing tasks. Featuring an advanced architecture, it significantly accelerates AI workload execution, making it a key component for developers looking to optimize their AI applications. Wormhole supports an expansive range of AI models and frameworks, enabling seamless adaptation and deployment across various platforms. The processor’s architecture is characterized by high core counts and integrated system interfaces that facilitate rapid data movement and processing. This ensures that Wormhole can handle both single and multi-user environments effectively, especially in scenarios that demand extensive computational resources. The seamless connectivity supports vast memory pooling and distributed processing, enhancing AI application performance and scalability. Wormhole’s full integration with Tenstorrent’s open-source ecosystem further amplifies its utility, providing developers with the tools to fully leverage the processor’s capabilities. This integration facilitates optimized ML workflows and supports continuous enhancement through community contributions, making Wormhole a forward-thinking solution for cutting-edge AI development.
aiWare represents aiMotive's advanced hardware intellectual property core for automotive neural network acceleration, pushing boundaries in efficiency and scalability. This neural processing unit (NPU) is tailored to meet the rigorous demands of automotive AI inference, providing robust support for various AI workloads, including CNNs, LSTMs, and RNNs. By achieving up to 256 Effective TOPS and remarkable scalability, aiWare caters to a wide array of applications, from edge processors in sensors to centralized high-performance modules.

The design of aiWare is particularly focused on enhancing efficiency in neural network operations, achieving up to 98% efficiency across diverse automotive applications. It features an innovative dataflow architecture, ensuring minimal external memory bandwidth usage while maximizing in-chip data processing. This reduces power consumption and enhances performance, making it highly adaptable for deployment in resource-critical environments.

Additionally, aiWare is embedded with comprehensive tools like the aiWare Studio SDK, which streamlines the neural network optimization and iteration process without requiring extensive NPU code adjustments. This ensures that aiWare can deliver optimal performance while minimizing development timelines by allowing for early performance estimations even before target hardware testing. Its integration into ASIL-B or higher certified solutions underscores aiWare's capability to power the most demanding safety applications in the automotive domain.
The C100 is designed to enhance IoT connectivity and performance through a highly integrated architecture. Built around a robust 32-bit RISC-V CPU running at up to 1.5 GHz, the chip offers processing power well suited to IoT applications. Embedded RAM and ROM support efficient data handling and computation. A key feature of the C100 is its integrated Wi-Fi and range of transmission interfaces, which broaden its utility across diverse IoT environments, while an on-chip ADC, LDO, and temperature sensor let devices operate reliably across a wide range of conditions. Low power consumption is central to the design, extending operating time between charges or battery replacements and reducing maintenance. This makes the C100 well suited to secure smart home systems, interactive toys, and healthcare devices.
SAKURA-II is an advanced AI accelerator recognized for its efficiency and adaptability. It is specifically designed for edge applications that require rapid, real-time AI inference with minimal delay. Capable of processing expansive generative AI models such as Llama 2 and Stable Diffusion within an 8W power envelope, this accelerator supports a wide range of applications from vision to language processing. Its enhanced memory bandwidth and substantial DRAM capacity ensure its suitability for handling complex AI workloads, including large-scale language and vision models. The SAKURA-II platform also features robust power management, allowing it to achieve high efficiency during operations.
The SCR7 is a 64-bit RISC-V application core crafted for applications with demanding data-processing requirements. Its dual-issue pipeline with out-of-order execution raises computational efficiency across varied tasks, and the core includes a robust floating-point unit along with extensive RISC-V ISA extensions for advanced computing. The memory system spans L1 to L3 caches, with options for up to 16 MB of L3 cache, ensuring data availability and integrity in demanding environments. A multicore architecture scales to eight cores, supporting intensive computational tasks in fields such as AI and machine learning. Ideal for high-performance computing and big data applications, the SCR7 combines an advanced interrupt system with intelligent memory management for seamless operation. Comprehensive development resources, from simulators to SDKs, ease integration into Linux-based systems and shorten project development timelines.
The Hanguang 800 AI Accelerator by T-Head is an advanced semiconductor technology designed to accelerate AI computations and machine learning tasks. This accelerator is specifically optimized for high-performance inference, offering substantial improvements in processing times for deep learning applications. Its architecture is developed to leverage parallel computing capabilities, making it highly suitable for tasks that require fast and efficient data handling. This AI accelerator supports a broad spectrum of machine learning frameworks, ensuring compatibility with various AI algorithms. It is equipped with specialized processing units and a high-throughput memory interface, allowing it to handle large datasets with minimal latency. The Hanguang 800 is particularly effective in environments where rapid inferencing and real-time data processing are essential, such as in smart cities and autonomous driving. With its robust design and multi-faceted processing abilities, the Hanguang 800 Accelerator empowers industries to enhance their AI and machine learning deployments. Its capability to deliver swift computation and inference results ensures it is a valuable asset for companies looking to stay at the forefront of technological advancement in AI applications.
The GenAI v1-Q from RaiderChip focuses on quantized AI operations, significantly reducing memory requirements while maintaining impressive precision and speed. The accelerator executes large language models in real time using advanced quantization schemes such as Q4_K and Q5_K, improving AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers bring advanced AI capabilities to smaller, less powerful devices without sacrificing operational quality, an advantage for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. Unlike conventional AI solutions, the GenAI v1-Q operates independently, with no dependence on external networks or cloud services. Its design pairs strong computational performance with scalability, adapting across varied hardware platforms including FPGAs and ASIC implementations; this flexibility allows performance parameters such as model scale, inference speed, and power consumption to be tailored to user requirements. GenAI v1-Q also addresses the need to run multiple transformer-based models and handle confidential data securely on-premises, opening applications in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to AI solutions that are both environmentally sustainable and economically viable.
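To see why 4-bit quantization yields roughly the memory reduction quoted above, the back-of-envelope arithmetic below compares FP16 weight storage against a Q4_K-style scheme. The 7B-parameter model size and the 4.5 effective bits per weight (block-wise schemes store scale metadata alongside the 4-bit values) are illustrative assumptions, not vendor-published figures:

```python
def model_bytes(n_params: float, bits_per_param: float) -> float:
    """Approximate weight-storage size in bytes."""
    return n_params * bits_per_param / 8

params = 7e9                       # 7B parameters (illustrative)
fp16 = model_bytes(params, 16)     # half-precision baseline
q4k = model_bytes(params, 4.5)     # ~4.5 effective bits per weight

reduction = 1 - q4k / fp16         # fraction of memory saved
print(f"FP16: {fp16 / 1e9:.1f} GB, quantized: {q4k / 1e9:.1f} GB, "
      f"saved: {reduction:.0%}")
# FP16: 14.0 GB, quantized: 3.9 GB, saved: 72%
```

The ~72% saving under these assumptions is in the same ballpark as the 75% footprint reduction claimed for the accelerator.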
The Intelligence X280 is engineered to provide extensive capabilities for artificial intelligence and machine learning applications, emphasizing a software-first design approach. This high-performance processor supports vector and matrix computations, making it adept at handling the demanding workloads typical in AI-driven environments. With an extensive ALU and integrated VFPU capabilities, the X280 delivers superior data processing power. Capable of supporting complex AI tasks, the X280 processor leverages SiFive's advanced vector architecture to allow for high-speed data manipulation and precision. The core supports extensive vector lengths and offers compatibility with various machine learning frameworks, facilitating seamless deployment in both embedded and edge AI applications. The Intelligence family, represented by the X280, offers solutions that are not only scalable but are customizable to particular workload specifications. With high-bandwidth interfaces for connecting custom engines, this processor is built to evolve alongside AI's progressive requirements, ensuring relevance in rapidly changing technology landscapes.
NeuroMosAIc Studio is a comprehensive software platform designed to maximize AI processor utilization through intuitive model conversion, mapping, simulation, and profiling. This advanced software suite supports Edge AI models by optimizing them for specific application needs. It offers precision analysis, network compression, and quantization tools to streamline the process of deploying AI models across diverse hardware setups. The platform is notably adept at integrating multiple AI functions and facilitating edge training processes. With tools like the NMP Compiler and Simulator, it allows developers to optimize functions at different stages, from quantization to training. The Studio's versatility is crucial for developers seeking to enhance AI solutions through customized model adjustments and optimization, ensuring high performance across AI systems. NeuroMosAIc Studio is particularly valuable for its edge training support and comprehensive optimization capabilities, paving the way for efficient AI deployment in various sectors. It offers a robust toolkit for AI model developers aiming to extract the maximum performance from hardware in dynamic environments.
The Topaz FPGA family by Efinix is crafted for high-performance, cost-efficient production volumes. Topaz FPGAs combine an advanced architecture with a low-power, high-volume design, suitable for mainstream applications. These devices integrate seamlessly into systems requiring robust protocol support, including PCIe Gen3, LVDS, and MIPI, making them ideal for machine vision, industrial automation, and wireless communications. These FPGAs are designed to pack more logic into a compact area, allowing for enhanced innovation and feature addition. The architecture facilitates seamless migration to higher performance Titanium FPGAs, making Topaz a flexible and future-proof choice for developers. With support for various BGAs, these units are easy to integrate, thus enhancing system design efficiency. Topaz FPGAs ensure product longevity and a stable supply chain, integral for applications characterized by long life cycles. This ensures systems maintain high efficiency and functionality over extended periods, aligning with Efinix’s commitment to offering durable and reliable semiconductor solutions for diverse market needs.
Micro Magic offers a state-of-the-art 64-bit RISC-V core known for ultra-low power consumption, drawing just 10mW when operating at 1GHz. Advanced design techniques let it sustain high performance at low operating voltages, optimizing energy efficiency, and under optimal conditions the core can reach clock speeds of up to 5GHz. Designed with power conservation in mind, it is ideal for applications where energy efficiency is critical but processing capability cannot be sacrificed. The core is part of Micro Magic's commitment to pushing the boundaries of low-power processing technology, making it suitable for a variety of high-speed computing tasks, particularly in environments demanding swift data processing with minimal power use.
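The headline figure above implies a striking energy budget per clock cycle. A quick sketch of the conversion (using only the 10mW / 1GHz numbers quoted in the description; no other operating points are assumed):

```python
def energy_per_cycle_pj(power_mw: float, freq_ghz: float) -> float:
    """Energy per clock cycle in picojoules: P (W) / f (Hz), scaled to pJ."""
    return (power_mw * 1e-3) / (freq_ghz * 1e9) * 1e12

# 10 mW at 1 GHz works out to 10 pJ per cycle.
print(energy_per_cycle_pj(10, 1.0))  # 10.0
```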
SiFive's Essential family of processor cores is designed to offer flexible and scalable performance for embedded applications and IoT devices. These cores provide a wide range of custom configurations that cater to specific power and area requirements across various markets. From minimal configuration microcontrollers to more complex, Linux-capable processors, the Essential family is geared to meet diverse needs while maintaining high efficiency. The Essential lineup includes 2-Series, 6-Series, and 7-Series cores, each offering different levels of scalability and performance efficiency. The 2-Series, for instance, focuses on power optimization, making it ideal for energy-constrained environments. The 6-Series and 7-Series expand these capabilities with richer feature sets, supporting more advanced applications with scalable infrastructure. Engineered for maximum configurability, SiFive Essential cores are equipped with robust debugging and tracing capabilities. They are customizable to optimize integration within System-on-Chip (SoC) applications, ensuring reliable and secure processing across a wide range of technologies. This ability to tailor the core designs ensures that developers can achieve a seamless balance between performance and energy consumption.
The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support the next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing the same class-leading performance with added flexibility for bespoke integrations.
The eSi-3264 sits at the top of the eSi-RISC portfolio: a 32/64-bit processor with SIMD extensions for high-performance requirements. Designed for applications demanding digital signal processing functionality, it minimizes silicon usage while keeping power consumption exceedingly low. A pipeline capable of dual and quad multiply-accumulate operations benefits applications in audio processing, sensor control, and touch interfacing, while built-in IEEE-754 single- and double-precision floating-point operations extend its versatility across computationally intensive domains. The processor offers configurable caches and a memory management unit to sustain performance when accessing off-chip memory. A robust instruction repertoire, optional custom operations, and user-privilege modes provide full control in secure execution environments, supporting diverse operational requirements with high resource efficiency.
Tensix Neo represents the next evolution in AI processing, offering robust capabilities for handling modern AI challenges. Its design focuses on maximizing performance while maintaining efficiency, a crucial aspect in AI and machine learning environments. Tensix Neo facilitates advanced computation across multiple frameworks, supporting a range of AI applications. Featuring a strategic blend of core architecture and integrated memory, Tensix Neo excels in both processing speed and capacity, essential for handling comprehensive AI workloads. Its architecture supports multi-threaded operations, optimizing performance for parallel computing scenarios, which are common in AI tasks. Tensix Neo's seamless connection with Tenstorrent's open-source software environment ensures that developers can quickly adapt it to their specific needs. This interconnectivity not only boosts operational efficiency but also supports continuous improvements and feature expansions through community contributions, positioning Tensix Neo as a versatile solution in the landscape of AI technology.
The Spiking Neural Processor T1 is a microcontroller tailored for ultra-low-power applications demanding high-performance pattern recognition at the sensor edge. It features an advanced neuromorphic architecture that leverages spiking neural network engines combined with RISC-V core capabilities. This architecture allows for sub-milliwatt power dissipation and sub-millisecond latency, enabling the processor to conduct real-time analysis and identification of embedded patterns in sensor data while operating in always-on scenarios. Additionally, the T1 provides diverse interfaces, making it adaptable for use with various sensor types.
The KL720 is engineered for high efficiency, achieving up to 0.9 TOPS per Watt, setting it apart in the edge AI marketplace. Designed for real-world scenarios where power efficiency is paramount, this chip supports high-end IP cameras, smart TVs, and AI-enabled devices like glasses and headsets. Its ARM Cortex M4 CPU facilitates the processing of complex tasks like 4K image handling, full HD video, and 3D sensing, making it versatile for applications that include gaming and AI-assisted interactions.
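The 0.9 TOPS/W figure above translates directly into a power-vs-compute trade-off for the target devices. A tiny illustrative helper (the 2W budget in the example is a hypothetical value chosen for illustration, not a spec of the KL720):

```python
def tops_for_budget(watts: float, tops_per_watt: float = 0.9) -> float:
    """Compute throughput (TOPS) available within a given power budget."""
    return watts * tops_per_watt

def watts_for_tops(tops: float, tops_per_watt: float = 0.9) -> float:
    """Power (W) needed to sustain a given throughput."""
    return tops / tops_per_watt

# e.g. a hypothetical 2 W envelope at 0.9 TOPS/W yields 1.8 TOPS
print(tops_for_budget(2.0))  # 1.8
```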
Designed for high-performance, cost-effective processing, the eSi-1650 is a 16-bit CPU core that adds an instruction cache to improve power and area efficiency. The core is tailored to mature process nodes that use OTP or Flash for program memory: the cache removes the dependence on large on-chip shadow RAMs and lets the CPU run at maximum frequency regardless of OTP/Flash speed limitations. Because frequently executed code runs from the instruction cache, memory fetch time drops and power is conserved. A comprehensive instruction set with optional custom operations handles complex computations and data manipulation, while the RISC architecture executes applications in fewer clock cycles, either increasing throughput or extending time spent in low-power states. The eSi-1650 features a 5-stage pipeline supporting complex bit and arithmetic instructions, multiprocessor configurations, and high code density through sophisticated instruction encoding. Advanced hardware debugging capabilities ease troubleshooting, and an optional Memory Protection Unit separates user and kernel spaces, contributing to system security and robustness. Delivered in Verilog RTL format, it maps to diverse process technologies, making it a versatile option for various embedded applications.
The RAIV General Purpose GPU (GPGPU) combines versatility with cutting-edge data processing and graphics acceleration. It serves as an enabling technology for sectors central to the fourth industrial revolution, such as autonomous driving, IoT, virtual and augmented reality (VR/AR), and sophisticated data centers, allowing these industries to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Its advanced architectural design excels at managing substantial computational loads, essential for AI-driven processes and complex data analytics, and its adaptability suits a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU delivers smoother, more responsive application workflows. Its design emphasizes integrative solutions that raise performance without compromising power efficiency, meeting the high demands of today's tech ecosystems. As such, the RAIV stands out both as a tool for improved graphical experiences and as a significant driver of innovation across tech-centric industries worldwide.
The SCR6 is a high-performance microcontroller core optimized for demanding embedded applications requiring substantial computational power. Its out-of-order 12-stage pipeline, complemented by a superscalar architecture, enhances processing speeds, making it ideal for real-time systems. Supporting a wide range of RISC-V ISA extensions, including cryptography and bit manipulation, SCR6 caters to secure and efficient data operations. The SCR6's memory subsystem is robust, featuring dual-level caches augmented with an L3 network-on-chip option. This rich memory architecture, along with efficient interrupt processing via APLIC units, ensures smooth high-speed data throughput in intensive applications. The core supports heterogeneous multicore configurations, enhancing parallel task execution. Designed for industrial and IoT environments, SCR6 comes with extensive development support. Its toolkit includes simulations, FPGA-based SDKs, and integration resources, facilitated through industry-standard interfaces, ensuring rapid development cycles and application deployment.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.