Processor Core Dependent
In the realm of semiconductor IP, the Processor Core Dependent category encompasses intellectual property designed specifically to enhance and support processor cores. These IPs are tailored to work in harmony with the cores they accompany, optimizing performance and adding value by reducing time-to-market and improving efficiency in modern integrated circuits. The category is crucial for customizing and adapting processors to specific application needs, addressing both performance optimization and system complexity management.
Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.
Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.
In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining a competitive edge cannot be overstated.
The Metis AIPU PCIe AI Accelerator Card offers exceptional performance for AI workloads demanding significant computational capacity. It is powered by a single Metis AIPU and delivers up to 214 TOPS, catering to high-demand applications such as computer vision and real-time image processing. The card is integrated with the Voyager SDK, providing developers with a powerful yet user-friendly software environment for deploying complex AI applications seamlessly. Designed for efficiency, this accelerator card stands out by providing cutting-edge performance without the excessive power requirements typical of data center equipment. It achieves remarkable speed and accuracy, making it an ideal solution for tasks requiring fast data processing and high inference throughput. The PCIe card supports a wide range of AI application scenarios, from enhancing existing infrastructure capabilities to integrating with new, dynamic systems. Its utility in various industrial settings is bolstered by its compatibility with the suite of state-of-the-art neural networks provided in the Axelera AI ecosystem.
The CXL 3.1 Switch by Panmnesia is a high-tech solution designed to manage diverse CXL devices within a cache-coherent system, minimizing latency through its proprietary low-latency CXL IP. This switch supports a scalable and flexible architecture, offering multi-level switching and port-based routing capabilities that allow expansive system configurations to meet various application demands. It is engineered to connect system devices such as CPUs, GPUs, and memory modules, ideal for constructing large-scale systems tailored to specific needs.
The Metis AIPU M.2 Accelerator Module is designed for edge AI applications that demand high-performance inference capabilities. The module integrates a single Metis AI Processing Unit (AIPU), providing an excellent solution for AI acceleration within constrained devices. Its capability to handle high-speed data processing with limited power consumption makes it an optimal choice for applications requiring efficiency and precision. With 1GB of dedicated DRAM memory, it seamlessly supports a wide array of AI pipelines, ensuring rapid integration and deployment. The design of the Metis AIPU M.2 module is centered around maximizing performance without excessive energy consumption, making it suitable for diverse applications such as real-time video analytics and multi-camera processing. Its compact form factor eases incorporation into various devices, delivering robust performance for AI tasks without the heat or power trade-offs typically associated with such systems. Engineered to meet current AI demands efficiently, the M.2 module is supported by the Voyager SDK, which simplifies the integration process. This comprehensive software suite empowers developers to build and optimize AI models directly on the Metis platform, facilitating a significant reduction in time-to-market for innovative solutions.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for the automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. Three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It is designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the edge. The technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The GenAI v1 NPU streamlines the execution of large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. The design maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Its adept memory usage also reduces dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capability.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that lets users balance performance against hardware cost. Compatibility with a wide range of transformer-based models, including proprietary modifications, secures GenAI v1's placement across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. By supporting both vanilla and quantized AI models at the computation speeds real-time applications demand, and by prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
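The memory-bandwidth claim can be made concrete with a back-of-envelope calculation: in memory-bound LLM decoding, every generated token requires streaming the full set of weights from DRAM, so bandwidth divided by model size bounds the token rate. The sketch below is a generic illustration; the parameter count and LPDDR4 bandwidth figures are assumptions, not RaiderChip specifications.

```python
# Back-of-envelope: decode speed of a memory-bound LLM is capped by how fast
# the weights can be streamed from DRAM once per generated token.
# The figures below are illustrative assumptions, not vendor specifications.

def max_tokens_per_second(n_params: float, bits_per_param: int, dram_gbps: float) -> float:
    """Upper bound on tokens/s when every token reads all weights once."""
    model_bytes = n_params * bits_per_param / 8
    return dram_gbps * 1e9 / model_bytes

# Example: a 3B-parameter Llama-class model, 4-bit quantized, on LPDDR4
# assumed to sustain ~15 GB/s of effective bandwidth.
print(max_tokens_per_second(3e9, 4, 15.0))   # ~10 tokens/s ceiling
```

Squeezing more tokens out of the same bandwidth, as the description claims, means either shrinking the bytes read per token (quantization) or raising effective bandwidth utilization.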
The NuLink Die-to-Die PHY is a state-of-the-art IP solution designed to facilitate efficient die-to-die communication on standard organic/laminate packaging. It supports multiple industry standards, including UCIe and Bunch of Wires (BoW) protocols, and features advanced bidirectional signaling capabilities to enhance data transfer rates. The NuLink technology enables exceptional performance, power economy, and reduced area footprint, which elevates its utility in AI applications and complex chiplet systems. A unique feature of this PHY is its simultaneous bidirectional signaling (SBD), which allows data to be sent and received at the same time on the same physical line, effectively doubling the available bandwidth. This capacity is crucial for applications needing high interconnect performance, such as AI training or inference workloads, without requiring advanced packaging techniques like silicon interposers. The PHY's design supports 64 data lanes configured for optimal placement and bump map layout. With a focus on power efficiency, the NuLink achieves competitive performance metrics even in standard packaging, making it particularly suitable for high-density systems-in-package solutions.
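To see why simultaneous bidirectional signaling matters, consider the simple bandwidth arithmetic below. The 64-lane count comes from the description above; the per-lane signaling rate is an assumed illustrative figure, not a published NuLink specification.

```python
# Illustrative arithmetic only: simultaneous bidirectional (SBD) signaling
# carries traffic both ways on each wire at once, doubling usable bandwidth
# versus a unidirectional lane. The per-lane rate is an assumption.

def link_bandwidth_gbps(lanes: int, gbps_per_lane: float, sbd: bool) -> float:
    """Aggregate link bandwidth; SBD counts both directions on every lane."""
    directions = 2 if sbd else 1
    return lanes * gbps_per_lane * directions

print(link_bandwidth_gbps(64, 16.0, sbd=False))  # 1024 Gb/s, single direction
print(link_bandwidth_gbps(64, 16.0, sbd=True))   # 2048 Gb/s, both directions at once
```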
Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The Jotunn8 represents a leap in AI inference technology, delivering unmatched efficiency for modern data centers. The chip is engineered to manage AI model deployments with lightning-fast execution, minimal cost, and high scalability. It ensures optimal performance by balancing high throughput and low latency while remaining extremely power-efficient, which significantly lowers operational costs and supports sustainable infrastructures. The Jotunn8 is designed to unlock the full capacity of AI investments by providing a high-performance platform that enhances the delivery and impact of AI models across applications. It is particularly suitable for real-time applications such as chatbots, fraud detection, and search engines, where ultra-low latency and very high throughput are critical. Power efficiency is a major emphasis of the Jotunn8: optimizing performance per watt keeps energy, a substantial operational expense, under control. Its architecture allows for flexible memory allocation, ensuring seamless adaptability across varied applications and providing a robust foundation for scalable AI operations. This solution is aimed at enhancing business competitiveness by supporting large-scale model deployment and infrastructure optimization.
The Chimera GPNPU from Quadric is designed as a general-purpose neural processing unit intended to meet a broad range of demands in machine learning inference applications. It is engineered to perform matrix and vector operations along with scalar code within a single execution pipeline, which offers significant flexibility and efficiency across various computational tasks. The product achieves up to 864 Tera Operations per Second (TOPS), making it suitable for intensive applications including automotive safety systems. Notably, the GPNPU simplifies system-on-chip (SoC) hardware integration by consolidating hardware functions into one processor core. This unification reduces complexity in system design tasks, enhances memory usage profiling, and optimizes power consumption compared to systems involving multiple heterogeneous cores such as NPUs and DSPs. Additionally, its single-core setup enables developers to efficiently compile and execute diverse workloads, improving performance tuning and reducing development time. The architecture of the Chimera GPNPU supports state-of-the-art models with its Forward Programming Interface, which facilitates easy adaptation to changes and allows support for new network models and neural network operators. It is an ideal solution for products requiring a mix of traditional digital signal processing and AI inference, such as radar and lidar signal processing, showcasing a rare blend of programming simplicity and long-term flexibility. This capability future-proofs devices, significantly extending their lifespan in a rapidly evolving tech landscape.
Designed for entry-level server-class applications, the SCR9 is a 64-bit RISC-V processor core that comes equipped with cutting-edge features, such as an out-of-order superscalar pipeline, making it apt for processing-intensive environments. It supports both single and double-precision floating-point operations adhering to IEEE standards, which ensure precise computation results. This processor core is tailored for high-performance computing needs, with a focus on AI and ML, as well as conventional data processing tasks. It integrates an advanced interrupt system featuring APLIC configurations, enabling responsive operations even under heavy workloads. SCR9 supports up to 16 cores in a multi-cluster arrangement, each utilizing coherent multi-level caches to maintain rapid data processing and management. The comprehensive development package for SCR9 includes ready-to-deploy toolchains and simulators that expedite software development, particularly within Linux environments. The core is well-suited for deployment in entry-level server markets and data-intensive applications, with robust support for virtualization and heterogeneous architectures.
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances, with applications spanning driver authentication, predictive maintenance, and health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces, making it a versatile component for various industrial applications. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption; it is crucial for mobile and battery-operated devices, where every watt conserved extends the product's operational longevity. Aligned with modern demands for eco-friendly and cost-effective technologies, its adaptability across applications and its cost-efficiency make the NMP-350 an indispensable tool for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interfaces ensure robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
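For quick comparison, the NMP family figures quoted in these descriptions can be tabulated; the selection helper below is only an illustrative way to pick the smallest part meeting a given TOPS budget, not a vendor tool.

```python
# Spec table taken from the NMP-350/550/750 descriptions on this page.
# All three parts expose three AXI4 interfaces.

NMP_FAMILY = {
    "NMP-350": {"tops": 1,  "local_mem_mb": 1,  "cpu": "RISC-V / Arm Cortex-M"},
    "NMP-550": {"tops": 6,  "local_mem_mb": 6,  "cpu": "RISC-V / Arm Cortex-M/A"},
    "NMP-750": {"tops": 16, "local_mem_mb": 16, "cpu": "RISC-V / Arm Cortex-R/A"},
}

def smallest_part(required_tops: float) -> str:
    """Return the lowest-TOPS NMP part that still meets the requirement."""
    for name, spec in sorted(NMP_FAMILY.items(), key=lambda kv: kv[1]["tops"]):
        if spec["tops"] >= required_tops:
            return name
    raise ValueError("workload exceeds the NMP family range")

print(smallest_part(4))   # NMP-550
```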
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. All cores employ the AndeStar V5 instruction set architecture, built on the RISC-V standard. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through the Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The Dynamic Neural Accelerator II (DNA-II) is a highly efficient and versatile IP specifically engineered for optimizing AI workloads at the edge. Its unique architecture allows runtime reconfiguration of interconnects among computing units, which facilitates improved parallel processing and efficiency. DNA-II supports a broad array of networks, including convolutional and transformer networks, making it an ideal choice for numerous edge applications. Its design emphasizes low power consumption while maintaining high computational performance. By utilizing a dynamic data path architecture, DNA-II sets a new benchmark for IP cores aimed at enhancing AI processing capabilities.
xcore.ai is a powerful platform tailored for the intelligent IoT market, offering unmatched flexibility and performance. It boasts a unique multi-threaded micro-architecture that provides low-latency and deterministic performance, perfect for smart applications. Each xcore.ai contains 16 logical cores distributed across two multi-threaded processor tiles, each equipped with 512kB of SRAM and capable of both integer and floating-point operations. The integrated interprocessor communication allows high-speed data exchange, ensuring ultimate scalability across multiple xcore.ai SoCs within a unified development environment.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The Nerve IIoT Platform is a comprehensive solution for machine builders, offering cloud-managed edge computing capabilities. This innovative platform delivers high levels of openness, security, flexibility, and real-time data handling, enabling businesses to embark on their digital transformation journeys. Nerve's architecture allows for seamless integration with a variety of hardware devices, from basic gateways to advanced IPCs, ensuring scalability and operational efficiency across different industrial settings. Nerve facilitates the collection, processing, and analysis of machine data in real-time, which is crucial for optimizing production and enhancing operational efficiency. By providing robust remote management functionalities, businesses can efficiently handle device operations and application deployments from any location. This capacity to manage data flows between the factory floor and the cloud transitions enterprises into a new era of digital management, thereby minimizing costs and maximizing productivity. The platform also supports multiple cloud environments, empowering businesses to select their preferred cloud service while maintaining operational continuity. With its secure, IEC 62443-4-1 certified infrastructure, Nerve ensures that both data and applications remain protected from cyber threats. Its integration of open technologies, such as Docker and virtual machines, further facilitates rapid implementation and prototyping, enabling businesses to adapt swiftly to ever-changing demands.
Wormhole is a high-efficiency processor designed to handle intensive AI processing tasks. Featuring an advanced architecture, it significantly accelerates AI workload execution, making it a key component for developers looking to optimize their AI applications. Wormhole supports an expansive range of AI models and frameworks, enabling seamless adaptation and deployment across various platforms. The processor’s architecture is characterized by high core counts and integrated system interfaces that facilitate rapid data movement and processing. This ensures that Wormhole can handle both single and multi-user environments effectively, especially in scenarios that demand extensive computational resources. The seamless connectivity supports vast memory pooling and distributed processing, enhancing AI application performance and scalability. Wormhole’s full integration with Tenstorrent’s open-source ecosystem further amplifies its utility, providing developers with the tools to fully leverage the processor’s capabilities. This integration facilitates optimized ML workflows and supports continuous enhancement through community contributions, making Wormhole a forward-thinking solution for cutting-edge AI development.
aiWare represents aiMotive's advanced hardware intellectual property core for automotive neural network acceleration, pushing boundaries in efficiency and scalability. This neural processing unit (NPU) is tailored to meet the rigorous demands of automotive AI inference, providing robust support for various AI workloads, including CNNs, LSTMs, and RNNs. By achieving up to 256 Effective TOPS and remarkable scalability, aiWare caters to a wide array of applications, from edge processors in sensors to centralized high-performance modules.

The design of aiWare is particularly focused on enhancing efficiency in neural network operations, achieving up to 98% efficiency across diverse automotive applications. It features an innovative dataflow architecture, ensuring minimal external memory bandwidth usage while maximizing in-chip data processing. This reduces power consumption and enhances performance, making it highly adaptable for deployment in resource-critical environments.

Additionally, aiWare is embedded with comprehensive tools like the aiWare Studio SDK, which streamlines the neural network optimization and iteration process without requiring extensive NPU code adjustments. This ensures that aiWare can deliver optimal performance while minimizing development timelines by allowing for early performance estimations even before target hardware testing. Its integration into ASIL-B or higher certified solutions underscores aiWare's capability to power the most demanding safety applications in the automotive domain.
The RISC-V CPU IP N Class is designed to cater to the needs of 32-bit microcontroller units (MCUs) and AIoT (Artificial Intelligence of Things) applications. It is engineered to provide a balance of performance and power efficiency, making it suitable for a range of general computing needs. With its adaptable architecture, the N Class processor allows for customization, enabling developers to configure the core to meet specific application requirements while minimizing unnecessary overhead. Incorporating the RISC-V open standard, the N Class delivers robust functional features, supporting both security and functional safety needs. This processor core is ideal for applications that require reliable performance combined with low energy consumption. Developers benefit from an extensive set of resources and tools available in the RISC-V ecosystem to facilitate the integration and deployment of this processor across diverse use cases. The RISC-V CPU IP N Class demonstrates excellent scalability, allowing for configuration that aligns with the specific demands of IoT devices and embedded systems. Whether for implementing sophisticated sensor data processing or managing communication protocols within a smart device, the N Class provides the foundation necessary for developing innovative and efficient solutions.
SAKURA-II is an advanced AI accelerator recognized for its efficiency and adaptability. It is specifically designed for edge applications that require rapid, real-time AI inference with minimal delay. Capable of processing expansive generative AI models such as Llama 2 and Stable Diffusion within an 8W power envelope, this accelerator supports a wide range of applications from vision to language processing. Its enhanced memory bandwidth and substantial DRAM capacity ensure its suitability for handling complex AI workloads, including large-scale language and vision models. The SAKURA-II platform also features robust power management, allowing it to achieve high efficiency during operations.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud dependencies. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
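The quoted 75% memory reduction is easy to verify: moving weights from 16-bit floating point to roughly 4-bit quantized form (as in Q4_K-style schemes) cuts weight storage by a factor of four. The parameter count below is an assumption chosen only to make the arithmetic concrete.

```python
# Worked check of the 75% footprint claim: 16-bit -> ~4-bit weights is a 4x
# reduction in weight storage. The 3B parameter count is illustrative only.

def weight_bytes(n_params: float, bits: float) -> float:
    """Storage needed for model weights at a given bit width."""
    return n_params * bits / 8

fp16 = weight_bytes(3e9, 16)   # 6.0 GB
q4   = weight_bytes(3e9, 4)    # 1.5 GB
print(1 - q4 / fp16)           # 0.75 -> the quoted 75% reduction
```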
The Hanguang 800 AI Accelerator by T-Head is an advanced semiconductor technology designed to accelerate AI computations and machine learning tasks. This accelerator is specifically optimized for high-performance inference, offering substantial improvements in processing times for deep learning applications. Its architecture is developed to leverage parallel computing capabilities, making it highly suitable for tasks that require fast and efficient data handling. This AI accelerator supports a broad spectrum of machine learning frameworks, ensuring compatibility with various AI algorithms. It is equipped with specialized processing units and a high-throughput memory interface, allowing it to handle large datasets with minimal latency. The Hanguang 800 is particularly effective in environments where rapid inferencing and real-time data processing are essential, such as in smart cities and autonomous driving. With its robust design and multi-faceted processing abilities, the Hanguang 800 Accelerator empowers industries to enhance their AI and machine learning deployments. Its capability to deliver swift computation and inference results ensures it is a valuable asset for companies looking to stay at the forefront of technological advancement in AI applications.
The Intelligence X280 is engineered to provide extensive capabilities for artificial intelligence and machine learning applications, emphasizing a software-first design approach. This high-performance processor supports vector and matrix computations, making it adept at handling the demanding workloads typical in AI-driven environments. With an extensive ALU and integrated VFPU capabilities, the X280 delivers superior data processing power. Capable of supporting complex AI tasks, the X280 processor leverages SiFive's advanced vector architecture to allow for high-speed data manipulation and precision. The core supports extensive vector lengths and offers compatibility with various machine learning frameworks, facilitating seamless deployment in both embedded and edge AI applications. The Intelligence family, represented by the X280, offers solutions that are not only scalable but are customizable to particular workload specifications. With high-bandwidth interfaces for connecting custom engines, this processor is built to evolve alongside AI's progressive requirements, ensuring relevance in rapidly changing technology landscapes.
SiFive's Essential family of processor cores is designed to offer flexible and scalable performance for embedded applications and IoT devices. These cores provide a wide range of custom configurations that cater to specific power and area requirements across various markets. From minimal configuration microcontrollers to more complex, Linux-capable processors, the Essential family is geared to meet diverse needs while maintaining high efficiency. The Essential lineup includes 2-Series, 6-Series, and 7-Series cores, each offering different levels of scalability and performance efficiency. The 2-Series, for instance, focuses on power optimization, making it ideal for energy-constrained environments. The 6-Series and 7-Series expand these capabilities with richer feature sets, supporting more advanced applications with scalable infrastructure. Engineered for maximum configurability, SiFive Essential cores are equipped with robust debugging and tracing capabilities. They are customizable to optimize integration within System-on-Chip (SoC) applications, ensuring reliable and secure processing across a wide range of technologies. This ability to tailor the core designs ensures that developers can achieve a seamless balance between performance and energy consumption.
Tensix Neo represents the next evolution in AI processing, offering robust capabilities for handling modern AI challenges. Its design focuses on maximizing performance while maintaining efficiency, a crucial aspect in AI and machine learning environments. Tensix Neo facilitates advanced computation across multiple frameworks, supporting a range of AI applications. Featuring a strategic blend of core architecture and integrated memory, Tensix Neo excels in both processing speed and capacity, essential for handling comprehensive AI workloads. Its architecture supports multi-threaded operations, optimizing performance for parallel computing scenarios, which are common in AI tasks. Tensix Neo's seamless connection with Tenstorrent's open-source software environment ensures that developers can quickly adapt it to their specific needs. This interconnectivity not only boosts operational efficiency but also supports continuous improvements and feature expansions through community contributions, positioning Tensix Neo as a versatile solution in the landscape of AI technology.
The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing the RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing class-leading performance with added flexibility for bespoke integrations.
The GNSS VHDL Library is a cornerstone offering from GNSS Sensor Ltd, engineered to provide a potent solution for those integrating Global Navigation Satellite System functionalities. This library is lauded for its configurability, allowing developers to harness the power of satellite navigation on-chip efficiently. It facilitates the incorporation of GPS, GLONASS, and Galileo systems into digital designs with minimum fuss. Designed to be largely independent from specific CPU platforms, the GNSS VHDL Library stands out for its flexibility. It employs a single configuration file to adapt to different hardware environments, ensuring broad compatibility and ease of implementation. Whether for research or commercial application, this library allows for rapid prototyping of reliable GNSS systems, providing essential building blocks for precise navigation capabilities. Integrating fast search engines and offering configurable signal processing capabilities, the library supports scalability across platforms, making it a crucial component for industries requiring high-precision navigation technology. Its architecture supports both 32-bit SPARC-V8 and 64-bit RISC-V system-on-chips, highlighting its adaptability and cutting-edge design.
The Time-Triggered Protocol (TTP) is an advanced communication protocol designed to enable high-reliability data transmission in embedded systems. It is widely used in mission-critical environments such as aerospace and automotive industries, where it supports deterministic message delivery. By ensuring precise time coordination across various control units, TTP helps enhance system stability and predictability, which are essential for real-time operations. TTP operates on a time-triggered architecture that divides time into fixed-length intervals, known as communication slots. These slots are assigned to specific tasks, enabling precise scheduling of messages and eliminating the possibility of data collision. This deterministic approach is crucial for systems that require high levels of safety and fault tolerance, allowing them to operate effectively under stringent conditions. Moreover, TTP supports fault isolation and recovery mechanisms that significantly improve system reliability. Its ability to detect and manage faults without operator intervention is key in maintaining continuous system operations. Deployment is also simplified by its modular structure, which allows seamless integration into existing networks.
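The core of the time-triggered approach can be sketched in a few lines: the timeline is divided into fixed-length slots, each statically assigned to one node, so the transmitter at any instant is a pure function of the global clock and collisions cannot occur. The slot length and node names below are invented for illustration; real TTP schedules are produced by dedicated, certified design tools.

```python
# Minimal sketch of time-triggered (TDMA) arbitration: slot ownership is
# determined entirely by the clock, never by runtime contention.
# SLOT_US and the node names are illustrative assumptions.

SLOT_US = 250                                    # fixed slot length in microseconds
SCHEDULE = ["ECU-A", "ECU-B", "ECU-C", "ECU-A"]  # one statically planned TDMA round

def owner_at(time_us: int) -> str:
    """Which node may transmit at a given time: a pure function of the clock."""
    slot_index = (time_us // SLOT_US) % len(SCHEDULE)
    return SCHEDULE[slot_index]

print(owner_at(0))     # ECU-A
print(owner_at(600))   # ECU-C (third slot of the round)
```

Because every node computes the same schedule from the same synchronized clock, a node that transmits outside its slot is immediately identifiable, which is the basis for the fault isolation described above.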
The Zhenyue 510 SSD Controller is a high-performance enterprise-grade controller providing robust management for SSD storage solutions. It is engineered to deliver exceptional I/O throughput of up to 3400K IOPS and a data transfer rate reaching 14 GByte/s. This remarkable performance is achieved through the integration of T-Head's proprietary low-density parity-check (LDPC) error correction algorithms, enhancing reliability and data integrity. Equipped with T-Head's low-latency architecture, the Zhenyue 510 offers swift read and write operations, crucial for applications demanding fast data processing capabilities. It supports flexible NAND flash interfacing, which makes it adaptable to multiple generations of flash memory technologies. This flexibility ensures that the device remains a viable solution as storage standards evolve. Targeted at applications such as online transactions, large-scale data management, and software-defined storage systems, the Zhenyue 510's advanced capabilities make it a cornerstone for organizations needing seamless and efficient data storage solutions. The combination of innovative design, top-tier performance metrics, and adaptability positions the Zhenyue 510 as a leader in SSD controller technologies.
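A quick sanity check on the headline numbers: IOPS and sequential throughput are normally measured under different access patterns, but dividing one by the other shows the two figures are mutually consistent around a roughly 4 KB block size.

```python
# Relating the two headline figures from the description above.
iops = 3400e3              # 3400K IOPS
throughput_bytes = 14e9    # 14 GByte/s
print(throughput_bytes / iops)   # ~4118 bytes per I/O, i.e. ~4 KB blocks
```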
The iniDSP is a high-performance 16-bit fixed-point Digital Signal Processor (DSP) built for system-on-chip applications. Its architecture, inspired by the CD2450A design from Clarkspur Inc., provides exceptional processing power with minimal energy consumption, making it well-suited for both consumer electronics like hearing aids and more complex control systems in industrial settings. This DSP core offers enhanced computational capabilities, including a 16x16 multiplier with a 40-bit accumulator, which allows for precise and rapid signal processing tasks. Its flexible and fully synchronous design ensures compatibility with a variety of systems and technologies, supporting seamless integration into existing infrastructures or new developments. The iniDSP core is accompanied by comprehensive software tools, including an assembler, linker, and debugger, providing a solid foundation for developers to implement complex algorithms efficiently. The design’s emphasis on low power consumption, combined with robust high-performance operations, makes it an attractive choice for a broad range of applications ranging from audio processing to adaptive control systems.
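The value of the 40-bit accumulator is the guard headroom it provides above the 32-bit product of a 16x16 multiply, letting hundreds of products accumulate before overflow handling is needed. The sketch below models one saturating multiply-accumulate step; the saturation bounds are the usual two's-complement limits, assumed here rather than taken from a datasheet.

```python
# One 16x16 multiply-accumulate step with a 40-bit saturating accumulator.
# Bounds are standard two's-complement limits for 40 bits (an assumption).

ACC_MAX = 2**39 - 1
ACC_MIN = -2**39

def mac(acc: int, a: int, b: int) -> int:
    """Accumulate the product of two signed 16-bit operands, saturating at 40 bits."""
    assert -2**15 <= a < 2**15 and -2**15 <= b < 2**15
    acc += a * b
    return max(ACC_MIN, min(ACC_MAX, acc))

acc = 0
for a, b in [(32767, 32767)] * 300:   # 300 worst-case positive products
    acc = mac(acc, a, b)
print(acc)                            # still well inside the 40-bit range
```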
The RegSpec tool from Dyumnin Semiconductors is a sophisticated code generation solution designed to create comprehensive CCSR codes from various input formats including SystemRDL, IP-XACT, CSV, Excel, XML, and JSON. This tool not only outputs Verilog RTL, System Verilog UVM code, and SystemC header files but also generates documentation in multiple formats such as HTML, PDF, and Word. Unlike traditional CSR code generators, RegSpec covers intricate scenarios involving synchronization across multiple clock domains, hardware handshakes, and interrupt setups, which typically require manual coding. It aids designers by offering full support for complex CCSR features, potentially reducing the design cycle time and improving accuracy. For verification purposes, RegSpec generates UVM-compatible code, enabling seamless integration into your verification environment. It also supports RALF file format generation, which aligns with VMM methodologies, thus broadening its applicability across various verification frameworks. In terms of system design, the tool extends its capabilities by generating standard C/C++ headers essential for firmware access and creating SystemC models for comprehensive system simulations. Furthermore, RegSpec ensures compatibility and interoperability with existing industry tools through import and export functionalities in SystemRDL and IP-XACT formats. The tool's versatility is highlighted by its ability to handle custom data formats, offering robust flexibility for designers working in unique environments. Overall, RegSpec is an indispensable asset for those looking to streamline their register design processes with enhanced automation and reduced manual effort.
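To make the register-generation idea concrete, here is a deliberately tiny sketch of the concept: turning a tabular register description into Verilog declarations. It is not RegSpec itself; the CSV layout and naming are invented for the example, and RegSpec's actual inputs, outputs, and feature coverage are far richer.

```python
# Toy illustration of CSV-driven register code generation (not RegSpec).
# The CSV columns (name, offset, width) are invented for this sketch.

import csv
import io

CSV_SPEC = """name,offset,width
CTRL,0x00,32
STATUS,0x04,16
"""

def emit_verilog(spec: str) -> str:
    """Emit one Verilog register declaration per CSV row."""
    lines = []
    for row in csv.DictReader(io.StringIO(spec)):
        width = int(row["width"])
        lines.append(f"reg [{width - 1}:0] {row['name'].lower()}_q;  // offset {row['offset']}")
    return "\n".join(lines)

print(emit_verilog(CSV_SPEC))
```

A real generator layers field-level attributes, clock-domain crossings, and UVM models on top of this basic mapping, which is exactly the manual work RegSpec is described as automating.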
ISPido is a fully configurable RTL image signal processing pipeline, adhering to AMBA AXI4 standards and configured through the AXI4-Lite protocol for seamless integration with systems such as RISC-V. This advanced pipeline supports a variety of image processing functions, including defective pixel correction, color filter interpolation using the Malvar-Cutler algorithm, and auto-white balance, among others. Designed to handle resolutions up to 7680x7680, ISPido provides compatibility for both 4K and 8K video systems, with support for 8-, 10-, or 12-bit depth inputs. Each module within the pipeline can be fine-tuned to fit specific requirements, making it a versatile choice for adapting to various imaging needs. The architecture's compatibility with flexible standards ensures robust performance and adaptability in diverse applications, from consumer electronics to professional-grade imaging solutions. Through its compact design, ISPido optimizes area and energy efficiency, providing high-quality image processing while keeping hardware demands low. This makes it suitable for battery-operated devices where power efficiency is crucial, without sacrificing the processing power needed for high-resolution outputs.
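Among the stages listed, auto-white balance is the easiest to illustrate. The gray-world method below is one standard formulation; ISPido's exact algorithm is not documented here, so treat this as a generic sketch with made-up channel averages.

```python
# Gray-world auto-white balance: scale R and B so the average scene color
# becomes neutral gray (equal channel means). Channel averages are made up.

def gray_world_gains(mean_r: float, mean_g: float, mean_b: float) -> tuple:
    """Per-channel gains that neutralize the average color, G as reference."""
    return mean_g / mean_r, 1.0, mean_g / mean_b

gains = gray_world_gains(mean_r=98.0, mean_g=120.0, mean_b=140.0)
print(gains)   # R boosted (~1.22), G unchanged, B attenuated (~0.86)
```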
The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge. Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows. The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today’s tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.
The Codasip RISC-V BK Core Series is designed to offer highly performant solutions suitable for a range of tasks from embedded applications to more demanding compute environments. By leveraging the RISC-V architecture, the BK Core Series provides a balance of power efficiency and processing capability, which is ideal for IoT edge applications and sensor controllers. The series is built around the philosophy of flexibility, allowing for modifications and enhancements to meet specific application requirements, including the integration of custom instructions to accommodate special workloads. This series also supports functional safety and security measures as outlined by industry standards, ensuring a robust foundation for critical applications.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
ISPido on VIP Board is a customized runtime solution tailored for Lattice Semiconductors’ Video Interface Platform (VIP) board. This setup enables real-time image processing and provides flexibility for both automated configuration and manual control through a menu interface. Users can adjust settings via histogram readings, select gamma tables, and apply convolutional filters to achieve optimal image quality. Equipped with key components like the CrossLink VIP input bridge board and ECP5 VIP Processor with ECP5-85 FPGA, this solution supports dual image sensors to produce a 1920x1080p HDMI output. The platform enables dynamic runtime calibration, providing users with interface options for active parameter adjustments, ensuring that image settings are fine-tuned for various applications. This system is particularly advantageous for developers and engineers looking to integrate sophisticated image processing capabilities into their devices. Its runtime flexibility and comprehensive set of features make it a valuable tool for prototyping and deploying scalable imaging solutions.
Dyumnin Semiconductors' RISC-V SoC is a robust solution built around a 64-bit quad-core server-class RISC-V CPU, designed to meet advanced computing demands. This chip is modular, allowing for the inclusion of various subsystems tailored to specific applications. It integrates a sophisticated AI/ML subsystem that features an AI accelerator tightly coupled with a TensorFlow unit, streamlining AI operations and enhancing their efficiency. The SoC supports a multimedia subsystem equipped with IP for HDMI, Display Port, and MIPI, as well as camera and graphic accelerators for comprehensive multimedia processing capabilities. Additionally, the memory subsystem includes interfaces for DDR, MMC, ONFI, NorFlash, and SD/SDIO, ensuring compatibility with a wide range of memory technologies available in the market. This versatility makes it a suitable choice for devices requiring robust data storage and retrieval capabilities. To address automotive and communication needs, the chip's automotive subsystem provides connectivity through CAN, CAN-FD, and SafeSPI IPs, while the communication subsystem supports popular protocols like PCIe, Ethernet, USB, SPI, I2C, and UART. The configurable nature of this SoC allows for the adaptation of its capabilities to meet specific end-user requirements, making it a highly flexible tool for diverse applications.
Designed for high-performance computing environments, the RISC-V CPU IP UX Class incorporates a 64-bit architecture enriched with MMU capabilities, making it an excellent choice for Linux-based applications within data centers and network infrastructures. This class of processors is optimized to meet the demanding requirements of modern computing systems, where throughput and reliability are critical. The UX Class supports advanced features like multi-core designs, which enable it to efficiently manage parallel processing tasks. This capability allows for significant performance improvements in applications where simultaneous process execution is desired. Moreover, the UX Class adheres to the RISC-V open architecture, promoting flexibility and innovation among developers who require customized, high-performance processor cores. Accompanied by an extensive ecosystem, the UX Class provides developers with a wealth of resources needed to maximize the processor's capabilities. From toolchains to development kits, these resources streamline the deployment process, allowing for the quick adaptation and integration of UX Class processors into existing and new systems alike. The UX Class is instrumental in advancing the development of data-centric applications and infrastructures.
The SiFive Performance family is an embodiment of high-efficiency computing, tailored to deliver maximum throughput across various applications. Designed with a 64-bit out-of-order architecture, these processors are equipped with up to 256-bit vector support, making them proficient in handling complex data and multimedia processing tasks critical for data centers and AI applications. The Performance cores range from 3-wide to 6-wide out-of-order models, capable of integrating up to two vector engines dedicated to AI workload optimizations. This setup provides an excellent balance of energy efficiency and computing power, supporting diverse applications ranging from web servers and network storage to consumer electronics requiring smart capabilities. Focused on maximizing performance while minimizing power usage, the Performance family allows developers to customize and optimize processing capabilities to match specific use-cases. This adaptability, combined with high efficiency, renders the Performance line a fitting choice for modern computational tasks that demand both high throughput and energy conservation.
The RV32EC_P2 core is a streamlined 2-stage pipeline RISC-V processor core aimed at small, low-power embedded applications. This processor core is designed to run only trusted firmware and can be implemented in both ASIC and FPGA-based design flows. It is compliant with RISC-V User-Level ISA V2.2, incorporating standard compressed instructions to minimize code size and optional integer multiplication and division instructions for flexibility. With a simple machine-mode privileged architecture, it supports direct physical memory addressing, along with an external interrupt controller for expanded interrupt handling. The core also integrates tightly-coupled memory interfaces and a low-power idle state option, making it highly adaptable for various low-energy applications.
The iCan PicoPop® is a sophisticated System on Module (SOM) based on Xilinx's Zynq UltraScale+. This miniaturized module is pivotal in applications requiring high-performance processing, such as video signal processing in aerospace systems. It serves as the backbone for complex embedded systems, ensuring reliable and efficient operation in demanding environments.
The Tyr AI Processor Family is designed around versatile programmability and high performance for AI and general-purpose processing. It consists of variants such as Tyr4, Tyr2, and Tyr1, each offering a unique performance profile optimized for different operational scales. These processors are fully programmable and support high-level programming throughout, ensuring they meet diverse computing needs with precision. Each member of the Tyr family features distinct core configurations tailored for specific throughput and performance needs. The top-tier Tyr4 boasts 8 cores with a peak capability of 1600 TFLOPS when leveraging FP8 tensor cores, making it suitable for demanding AI tasks. Tyr2 and Tyr1 scale down these resources to 4 and 2 cores, respectively, achieving proportional efficiency and power savings. All models incorporate substantial on-chip memory, optimizing data handling and execution efficiency without compromising on power use. Moreover, the Tyr processors adapt AI processes automatically on a layer-by-layer basis to enhance implementation efficiency. This adaptability, combined with their near-theory performance levels, renders them ideal for high-throughput AI workloads that require flexible execution and dependable scalability.
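Taking the proportional scaling in the description literally, the per-core contribution can be inferred and the smaller parts' peaks estimated. These are inferences from the Tyr4 figure above, not published specifications for Tyr2 and Tyr1.

```python
# If Tyr4's 8 cores deliver 1600 TFLOPS (FP8) and scaling is proportional,
# each core contributes 200 TFLOPS. The smaller parts' peaks follow directly.

CORE_TFLOPS_FP8 = 1600 / 8   # 200 TFLOPS per core, inferred from Tyr4

for name, cores in [("Tyr4", 8), ("Tyr2", 4), ("Tyr1", 2)]:
    print(name, cores * CORE_TFLOPS_FP8)   # 1600.0 / 800.0 / 400.0 TFLOPS (estimates)
```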
VisualSim Architect is an advanced modeling and simulation tool designed for system engineers to explore and analyze performance, power, and functionality of electronic systems. This platform supports a multi-domain model of computation that is capable of simulating a wide range of devices including processors, memory storage, wireless systems, and semiconductor buses. Utilizing an XML database, VisualSim Architect allows for flexible model creation, easy integration across distributed systems, and supports real-time adjustments and batch processing for comprehensive system analysis. The platform boasts extensive libraries for various types of components such as hardware, software, resource management, and traffic control, each designed to streamline model construction and enable thorough exploration across diverse applications. Users benefit from the ability to examine internal logics, manage buffers, and accurately model functionalities to ensure all components meet industry specifications. These IP blocks can be customized and adjusted in real time to fit specific project requirements. VisualSim Architect is equipped with robust reporting features that provide essential insights into system utilization, delay metrics, and advanced cache performance analyses. This tool is designed to be user-friendly, offering a graphical environment for model construction and validation. The software is compatible with major operating systems including Windows, Linux, and Mac OS X, empowering users to leverage its capabilities irrespective of their technical environment.
Bluespec's Portable RISC-V Cores offer a versatile and adaptable solution for developers seeking cross-platform compatibility with support for FPGAs from Achronix, Xilinx, Lattice, and Microsemi. These cores come with support for operating systems like Linux and FreeRTOS, providing developers with a seamless and open-source toolset for application development. By leveraging Bluespec’s extensive compatibility and open-source frameworks, developers can benefit from efficient, versatile RISC-V application deployment.
ASIC North's Sensor Interface Derivatives are specialized circuits designed to enhance sensor systems by improving their performance and integration into broader network architectures. These derivatives offer reliable operation across a wide range of applications, ensuring robust performance even in challenging environments. With a focus on customizability, the designs are adaptable to meet specific client needs, facilitating the development of efficient sensor networks.
The SoC Platform by SEMIFIVE facilitates the rapid development of custom silicon chips, optimized for specific applications through the use of domain-specific architectures. Paired with a pool of pre-verified IPs, it lowers the cost, mitigates risks, and speeds up the development timeline compared to traditional methods. This platform effortlessly supports a multitude of applications by providing silicon-proven infrastructure. Supporting various process technologies, this platform integrates seamlessly with existing design methodologies, offering flexibility and the possibility to fine-tune specifications according to application needs. The core of the platform's design philosophy focuses on maximizing reusability and minimizing engineering overhead, key for reducing time-to-market. Designed for simplicity and comprehensiveness, the SoC Platform offers tools and models that ensure quality and reduce integration complexity, from architecture and physical design to software support. As an end-to-end solution, it stands out as a reliable partner for enterprises aiming to bring innovative products to market efficiently and effectively.