Processor Core Dependent
In the realm of semiconductor IP, the Processor Core Dependent category encompasses a variety of intellectual properties specifically designed to enhance and support processor cores. These IPs are tailored to work in harmony with processor cores to optimize their performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. This category is crucial for the customization and adaptation of processors to meet specific application needs, addressing both performance optimization and system complexity management.
Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.
Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.
In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining competitive edge cannot be overstated.
Addressing the need for high-performance AI processing, the Metis AIPU PCIe AI Accelerator Card from Axelera AI offers an outstanding blend of speed, efficiency, and power. Designed to boost AI workloads significantly, this PCIe card leverages the prowess of the Metis AI Processing Unit (AIPU) to deliver unparalleled AI inference capabilities for enterprise and industrial applications. The card excels in handling complex AI models and large-scale data processing tasks, significantly enhancing the efficiency of computational tasks within various edge settings. The Metis AIPU embedded within the PCIe card delivers high TOPS (tera operations per second), allowing it to execute multiple AI tasks concurrently with remarkable speed and precision. This makes it exceptionally suitable for applications such as video analytics, autonomous driving simulations, and real-time data processing in industrial environments. The card's robust architecture reduces the load on general-purpose processors by offloading AI tasks, resulting in optimized system performance and lower energy consumption. With easy integration capabilities supported by the state-of-the-art Voyager SDK, the Metis AIPU PCIe AI Accelerator Card ensures seamless deployment of AI models across various platforms. The SDK facilitates efficient model optimization and tuning, supporting a wide range of neural network models and enhancing overall system capabilities. Enterprises leveraging this card can see significant improvements in their AI processing efficiency, leading to faster, smarter, and more efficient operations across different sectors.
The Universal Chiplet Interconnect Express (UCIe) by EXTOLL is a cutting-edge interconnect framework designed to revolutionize chip-to-chip communication within heterogeneous systems. This product exemplifies the shift towards chiplet architecture, a modular approach enabling enhanced performance and flexibility in semiconductor designs. UCIe offers an open and customizable platform that supports a wide range of technology nodes, particularly excelling in the 12nm to 28nm range. This adaptability ensures it can meet the diverse needs of modern semiconductor applications, providing a bridge that enhances integration across various chiplet components. Such capabilities make it ideal for applications requiring high bandwidth and low latency. The design of UCIe focuses on minimizing power consumption while maximizing data throughput, aligning with EXTOLL’s objective of delivering eco-efficient technology. It empowers manufacturers to forge robust connections between chiplets, allowing optimized performance and scalability in data-intensive environments like data centers and advanced consumer electronics.
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with RISC-V RVA23 specifications, showcasing features like high Instruction Per Clock (IPC) and power-efficient architectures. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
aiWare is a high-performance NPU designed to meet the rigorous demands of automotive AI inference, providing a scalable solution for ADAS and AD applications. This hardware IP core is engineered to handle a wide array of AI workloads, including the most advanced neural network structures like CNNs, LSTMs, and RNNs. By integrating cutting-edge efficiency and scalability, aiWare delivers industry-leading neural processing power tailored to automotive-grade specifications.

The NPU's architecture emphasizes hardware determinism and offers ISO 26262 ASIL-B certification, ensuring that aiWare meets stringent automotive safety standards. Its efficient design supports up to 256 effective TOPS per core and can scale to thousands of TOPS through multicore integration while minimizing power consumption. aiWare's system-level optimizations reduce reliance on external memory by leveraging local memory for data management, boosting performance efficiency across varied input data sizes and complexities.

aiWare's development toolkit, aiWare Studio, is distinguished by its innovative ability to optimize neural network execution without the need for manual intervention by software engineers. This empowers AI engineers to focus on refining NNs for production, significantly accelerating iteration cycles. Coupled with aiMotive's aiDrive software suite, aiWare provides an integrated environment for creating highly efficient automotive AI applications, ensuring seamless integration and rapid deployment across multiple vehicle platforms.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
Chimera GPNPU is engineered to revolutionize AI/ML computational capabilities on single-core architectures. It efficiently handles matrix, vector, and scalar code, unifying AI inference and traditional C++ processing under one roof. By alleviating the need for partitioning AI workloads between different processors, it streamlines software development and drastically speeds up AI model adaptation and integration. Ideal for SoC designs, the Chimera GPNPU champions an architecture that is both versatile and powerful, handling complex parallel workloads with a single unified binary. This configuration not only boosts software developer productivity but also ensures an enduring flexibility capable of accommodating novel AI model architectures on the horizon. The architectural fabric of the Chimera GPNPU seamlessly blends the high matrix performance of NPUs with C++ programmability found in traditional processors. This core is delivered in a synthesizable RTL form, with scalability options ranging from a single-core to multi-cluster designs to meet various performance benchmarks. As a testament to its adaptability, the Chimera GPNPU can run any AI/ML graph from numerous high-demand application areas such as automotive, mobile, and home digital appliances. Developers seeking optimization in inference performance will find the Chimera GPNPU a pivotal tool in maintaining cutting-edge product offerings. With its focus on simplifying hardware design, optimizing power consumption, and enhancing programmer ease, this processor ensures a sustainable and efficient path for future AI/ML developments.
SAKURA-II AI Accelerator represents EdgeCortix's latest advancement in edge AI processing, offering unparalleled energy efficiency and extensive capabilities for generative AI tasks. This accelerator is designed to manage demanding AI models, including Llama 2, Stable Diffusion, DETR, and ViT, within a slim power envelope of about 8W. With capabilities extending to multi-billion parameter models, SAKURA-II meets a wide range of edge applications in vision, language, and audio. The SAKURA-II's architecture maximizes AI compute efficiency, delivering more than twice the utilization of competitive solutions. It boasts remarkable DRAM bandwidth, essential for large language and vision models, while maintaining low power consumption. The hardware supports real-time Batch=1 processing, demonstrating its edge in performance even in constrained environments, making it a choice solution for diverse industrial AI applications. With 60 TOPS (INT8) and 30 TFLOPS (BF16) in performance metrics, this accelerator is built to exceed expectations in demanding conditions. It features robust memory configurations supporting up to 32GB of DRAM, ideal for processing intricate AI workloads. By leveraging sparse computing techniques, SAKURA-II optimizes its memory and bandwidth usage effectively, ensuring reliable performance across all deployed applications.
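The headline figures quoted above lend themselves to a quick efficiency check. The short sketch below, a minimal illustration using only the numbers from this description, derives performance-per-watt from the 60 TOPS (INT8), 30 TFLOPS (BF16), and roughly 8 W figures.

```python
# Back-of-the-envelope efficiency check using the figures quoted above:
# 60 TOPS (INT8) and 30 TFLOPS (BF16) in an ~8 W power envelope.

int8_tops = 60.0     # quoted INT8 throughput, tera-operations per second
bf16_tflops = 30.0   # quoted BF16 throughput
power_watts = 8.0    # quoted approximate power envelope

print(f"INT8 efficiency: {int8_tops / power_watts:.1f} TOPS/W")    # 7.5 TOPS/W
print(f"BF16 efficiency: {bf16_tflops / power_watts:.2f} TFLOPS/W")  # 3.75 TFLOPS/W
```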
The Jotunn8 AI Accelerator is engineered for lightning-fast AI inference at unprecedented scale. It is designed to meet the demands of modern data centers by providing exceptional throughput, low latency, and optimized energy efficiency. The Jotunn8 outperforms traditional setups by allowing large-scale deployment of trained models, ensuring robust performance while reducing operational costs. Its capabilities make it ideal for real-time applications such as chatbots, fraud detection, and advanced search algorithms. What sets the Jotunn8 apart is its adaptability to various AI algorithms, including reasoning and generative models, alongside agentic AI frameworks. This seamless integration achieves near-theoretical performance, allowing the chip to excel in applications that require logical rigor and creative processing. With a focus on minimizing carbon footprint, the Jotunn8 is meticulously designed to enhance both performance per watt and overall sustainability. The Jotunn8 supports massive memory handling with HBM capability, promoting incredibly high data throughput that aligns with the needs of demanding AI processes. Its architecture is purpose-built for speed, efficiency, and the ability to scale with technological advances, providing a solid foundation for AI infrastructure looking to keep pace with evolving computational demands.
xcore.ai is XMOS Semiconductor's innovative programmable chip designed for advanced AI, DSP, and I/O applications. It enables developers to create highly efficient systems without the complexity typical of multi-chip solutions, offering capabilities that integrate AI inference, DSP tasks, and I/O control seamlessly. The chip architecture boasts parallel processing and ultra-low latency, making it ideal for demanding tasks in robotics, automotive systems, and smart consumer devices. It provides the toolset to deploy complex algorithms efficiently while maintaining robust real-time performance. With xcore.ai, system designers can leverage a flexible platform that supports the rapid prototyping and development of intelligent applications. Its performance allows for seamless execution of tasks such as voice recognition and processing, industrial automation, and sensor data integration. The adaptable nature of xcore.ai makes it a versatile solution for managing various inputs and outputs simultaneously, while maintaining high levels of precision and reliability. In automotive and industrial applications, xcore.ai supports real-time control and monitoring tasks, contributing to smarter, safer systems. For consumer electronics, it enhances user experience by enabling responsive voice interfaces and high-definition audio processing. The chip's architecture reduces the need for external components, thus simplifying design and reducing overall costs, paving the way for innovative solutions where technology meets efficiency and scalability.
The Metis AIPU M.2 Accelerator Module by Axelera AI is a compact and powerful solution designed for AI inference at the edge. This module delivers remarkable performance, comparable to that of a PCIe card, all while fitting into the streamlined M.2 form factor. Ideal for demanding AI applications that require substantial computational power, the module enhances processing efficiency while minimizing power usage. With its robust infrastructure, it is geared toward integrating into applications that demand high throughput and low latency, making it a perfect fit for intelligent vision applications and real-time analytics. The AIPU, or Artificial Intelligence Processing Unit, at the core of this module provides industry-leading performance by offloading AI workloads from traditional CPU or GPU setups, allowing for dedicated AI computation that is faster and more energy-efficient. This not only boosts the capabilities of the host systems but also drastically reduces the overall energy consumption. The module supports a wide range of AI applications, from facial recognition and security systems to advanced industrial automation processes. By utilizing Axelera AI’s innovative software solutions, such as the Voyager SDK, the Metis AIPU M.2 Accelerator Module enables seamless integration and full utilization of AI models and applications. The SDK offers enhancements like compatibility with various industry tools and frameworks, thus ensuring a smooth deployment process and quick time-to-market for advanced AI systems. This product represents Axelera AI’s commitment to revolutionizing edge computing with streamlined, effective AI acceleration solutions.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
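Token generation in large-language-model inference is typically memory-bandwidth-bound: each generated token requires streaming essentially all model weights from memory, which is why tokens per unit of bandwidth is the metric highlighted above. The sketch below illustrates that relationship; the parameter count, quantization width, and LPDDR4 bandwidth are representative assumptions for illustration, not RaiderChip specifications.

```python
# Illustrative upper bound on decode throughput for a bandwidth-bound LLM:
# tokens/s <= memory_bandwidth / bytes_read_per_token (roughly the model size).
# All numbers below are assumed for illustration, not vendor figures.

params = 3e9            # e.g. a ~3B-parameter model (Llama 3.2 3B class)
bits_per_weight = 4     # 4-bit quantization, as described above
bandwidth_gbs = 25.6    # assumed LPDDR4 bandwidth in GB/s

model_bytes = params * bits_per_weight / 8
max_tokens_per_s = bandwidth_gbs * 1e9 / model_bytes

print(f"Model footprint: {model_bytes / 1e9:.2f} GB")            # 1.50 GB
print(f"Bandwidth-bound ceiling: {max_tokens_per_s:.1f} tokens/s")  # ~17 tokens/s
```

Halving the bits per weight doubles this ceiling, which is why quantization and memory efficiency dominate edge-inference design.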
The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, without reliance on external networks or cloud services. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference velocity, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
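The quoted 75% memory reduction is consistent with replacing 16-bit weights with 4-bit quantized ones. A minimal sketch of that arithmetic, assuming a placeholder 7B-parameter model rather than any vendor-stated figure:

```python
# Memory-footprint arithmetic behind the quoted ~75% reduction: moving
# weights from 16-bit to 4-bit storage. The parameter count is a
# placeholder for illustration, not a vendor figure.

params = 7e9                         # assumed 7B-parameter model
fp16_gb = params * 16 / 8 / 1e9      # 16-bit baseline footprint
q4_gb = params * 4 / 8 / 1e9         # 4-bit quantized (Q4-class schemes)

reduction = 1 - q4_gb / fp16_gb
print(f"FP16 footprint: {fp16_gb:.1f} GB, 4-bit footprint: {q4_gb:.2f} GB")
print(f"Reduction: {reduction:.0%}")  # -> 75%
```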
The Hanguang 800 AI Accelerator by T-Head Semiconductor is a powerful AI acceleration chip designed to enhance machine learning tasks. It excels in providing the computational power necessary for intensive AI workloads, effectively reducing processing times for large-scale data frameworks. This makes it an ideal choice for organizations aiming to infuse AI capabilities into their operations with maximum efficiency. Built with an emphasis on speed and performance, the Hanguang 800 is optimized for applications requiring vast amounts of data crunching. It supports a diverse array of AI models and workloads, ensuring flexibility and robust performance across varying use cases. This accelerates the deployment of AI applications in sectors such as autonomous driving, natural language processing, and real-time data analysis. The Hanguang 800's architecture is complemented by proprietary algorithms that enhance processing throughput, competing against traditional processors by providing significant gains in efficiency. This accelerator is indicative of T-Head's commitment to advancing AI technologies and highlights their capability to cater to specialized industry needs through innovative semiconductor developments.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The Maverick-2 Intelligent Compute Accelerator represents the pinnacle of Next Silicon's innovative approach to computational resources. This state-of-the-art accelerator leverages the Intelligent Compute Architecture for software-defined adaptability, enabling it to autonomously tailor its real-time operations across various HPC and AI workloads. By optimizing performance using insights gained through real-time telemetry, Maverick-2 ensures superior computational efficiency and reduced power consumption, making it an ideal choice for demanding computational environments.

Maverick-2 brings transformative performance enhancements to large-scale scientific research and data-heavy industries by dispensing with the need for codebase modifications or specialized software stacks. It supports a wide range of familiar development tools and frameworks, such as C/C++, FORTRAN, and Kokkos, simplifying the integration process for developers and reducing time-to-discovery significantly.

Engineered with advanced features like high bandwidth memory (HBM3E) and built on TSMC's 5nm process technology, this accelerator provides not only unmatched adaptability but also an energy-efficient, eco-friendly computing solution. Whether embedded in single-die PCIe cards or dual-die OCP Accelerator Modules, the Maverick-2 is positioned as a future-proof solution capable of evolving with technological advancements in AI and HPC.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores are built on RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The NuLink Die-to-Die PHY for Standard Packaging by Eliyan is engineered to facilitate superior die-to-die interconnectivity on standard organic/laminate package substrates. This innovative PHY IP supports key industry standards such as UCIe and BoW, and includes proprietary technologies like UMI and SBD. The NuLink PHY delivers leading performance and power efficiency, comparable to advanced packaging technologies, but at a fraction of the cost. It features configurations with up to 64 data lanes, supporting a data rate per lane of up to 64 Gbps, making it ideal for applications demanding high bandwidth and low latency. The implementation enhances system design while reducing the necessary area and thermal load, which significantly eases integration into existing hardware ecosystems.
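The quoted lane count and per-lane rate imply a simple ceiling on raw aggregate bandwidth. The sketch below works it out; it ignores encoding and protocol overhead, which the description does not quantify.

```python
# Raw aggregate bandwidth implied by the quoted configuration:
# up to 64 lanes at up to 64 Gb/s per lane. Encoding and protocol
# overhead are ignored because the description does not quantify them.

lanes = 64
gbps_per_lane = 64

aggregate_gbps = lanes * gbps_per_lane
print(f"Raw aggregate: {aggregate_gbps} Gb/s "
      f"(~{aggregate_gbps / 8 / 1000:.2f} TB/s per direction)")
# -> 4096 Gb/s, ~0.51 TB/s per direction
```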
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.
RAIV represents Siliconarts' general-purpose GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV's flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
The Codasip RISC-V BK Core Series offers versatile, low-power, and high-performance solutions tailored for various embedded applications. These cores ensure efficiency and reliability by incorporating RISC-V compliance and are verified through advanced methodologies. Known for their adaptability, these cores can cater to applications needing robust performance while maintaining stringent power and area requirements.
The NMP-750 is AiM Future's powerful edge computing accelerator designed specifically for high-performance tasks. With up to 16 TOPS of computational throughput, this accelerator is perfect for automotive, AMRs, UAVs, as well as AR/VR applications. Fitted with up to 16 MB of local memory and featuring RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports diverse data processing requirements crucial for modern technological solutions. The versatility of the NMP-750 is displayed in its ability to manage complex processes such as multi-camera stream processing and spectral efficiency management. It is also an apt choice for applications that require energy management and building automation, demonstrating exceptional potential in smart city and industrial setups. With its robust architecture, the NMP-750 ensures seamless integration into systems that need to handle large data volumes and support high-speed data transmission. This makes it ideal for applications in telecommunications and security where infrastructure resilience is paramount.
The SEMIFIVE AI Inference Platform is engineered to facilitate rapid development and deployment of AI inference solutions within custom silicon environments. Utilizing seamless integration with silicon-proven IPs, this platform delivers a high-performance framework optimized for AI and machine learning tasks. By providing a strategic advantage in cost reduction and efficiency, the platform decreases time-to-market challenges through pre-configured model layers and extensive IP libraries tailored for AI applications. It also offers enhanced scalability through its support for various computational and network configurations, making it adaptable to both high-volume and specialized market segments. This platform supports complex AI workloads on scalable AI engines, ensuring optimized performance in data-intensive operations. The integration of advanced processors and memory solutions within the platform further enhances processing efficiency, positioning it as an ideal solution for enterprises focusing on breakthroughs in AI technologies.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The Tyr AI Processor Family revolutionizes edge AI by executing data processing directly where data is generated, instead of relying on cloud solutions. This empowers industries with real-time decision-making capabilities by bringing intelligence closer to devices, machines, and sensors. The Tyr processors integrate cutting-edge AI capabilities into compact, efficient designs, achieving data-center-class performance with much lower power needs. The edge processors from the Tyr line ensure reduced latency and enhanced privacy, making them suitable for autonomous vehicles, smart factories, and other real-time applications demanding immediate, secure insights. They feature robust local data processing options, ensuring minimal reliance on cloud services, which contributes to lower costs and improved compliance with privacy standards. With a focus on multi-modal input handling and sustainability, the Tyr processors provide balanced compute power, memory utilization, and intelligent features that align with the needs of highly dynamic and bandwidth-restricted environments. Using RISC-V cores, they facilitate versatile AI model deployment across edge devices, ensuring high adaptability to the latest technological advances and market demands.
The Codasip L-Series DSP Core is designed to handle demanding signal processing tasks, offering an exemplary balance of computational power and energy efficiency. This DSP core is particularly suitable for applications involving audio processing and sensor data fusion, where performance is paramount. Codasip enriches this product with their extensive experience in RISC-V architectures, ensuring robust and optimized performance.
The Time-Triggered Protocol (TTP) is a cornerstone of TTTech's offerings, designed for high-reliability environments such as aviation. TTP ensures precise synchronization and communication between systems, leveraging a time-controlled approach to data exchange. This makes it particularly suitable for safety-critical applications where timing and order of operations are paramount. The protocol minimizes risks associated with communication errors, thus enhancing operational reliability and determinism. TTP is deployed in various platforms, providing the foundation for time-deterministic operations necessary for complex systems. Whether in avionics or in industries requiring strict adherence to real-time data processing, TTP adapts to the specific demands of each application. By using this protocol, industries can achieve dependable execution of interconnected systems, promoting increased safety and reliability. In particular, TTP's influence extends into integrated circuits where certifiable IP cores are essential, ensuring compliance with stringent industry standards such as RTCA DO-254. Ongoing developments in TTP also include tools and methodologies that facilitate verification and qualification, ensuring that all system components communicate effectively and as intended across all operating conditions.
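Time-triggered communication replaces event-driven bus arbitration with a static, globally known schedule: every node transmits only in its pre-assigned slot of a repeating round, so collisions are impossible by construction. The Python sketch below models that idea generically; the slot length, node names, and schedule are invented for illustration and are not drawn from the TTP specification, where schedules are defined offline in a message descriptor list.

```python
# Minimal model of time-triggered communication: a static TDMA round in
# which each node owns fixed slots. Slot length and schedule are invented
# for illustration; real TTP schedules are configured offline per cluster.

SLOT_US = 500  # assumed slot length in microseconds
schedule = ["node_A", "node_B", "node_C", "node_A"]  # one TDMA round

def transmitter_at(t_us: int) -> str:
    """Return which node may transmit at time t_us (clocks assumed synchronized)."""
    round_len_us = SLOT_US * len(schedule)
    slot = (t_us % round_len_us) // SLOT_US
    return schedule[slot]

for t in (0, 600, 1200, 1900):
    print(f"t={t:>4} us -> {transmitter_at(t)} transmits")
```

Because every node can compute the same table, a receiver also knows exactly when a message should arrive, which is what gives the protocol its determinism and error-containment properties.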
The ISPido on VIP Board is tailored specifically for Lattice Semiconductor's Video Interface Platform (VIP) and is designed to achieve clear and balanced real-time imaging. This ISPido variant supports automatic configuration options to provide optimal settings the moment the board is powered on. Alternatively, users can customize their settings through a menu interface, allowing for adjustments such as gamma table selection and convolutional filtering. Equipped with the CrossLink VIP Input Bridge, the board features dual Sony IMX 214 image sensors and an ECP5 VIP Processor. The ECP5-85 FPGA provides reliable processing power, with output options including HDMI in YCrCb 4:2:2 format. This flexibility ensures users have a complete, integrated solution that supports runtime calibration and serial port menu configuration, making it an extremely practical choice for real-time applications. The ISPido on VIP Board is built to facilitate seamless integration and high interoperability, making it a suitable choice for those engaged in designing complex imaging solutions. Its adaptability and high-definition support make it particularly advantageous for users seeking to implement sophisticated vision technologies in a variety of industrial applications.
The Dynamic Neural Accelerator II (DNA-II) from EdgeCortix is an advanced neural network IP core tailored for high efficiency and parallelism at the edge. Incorporating a run-time reconfigurable interconnect system between compute units, DNA-II effectively manages both convolutional and transformer workloads. This architecture ensures scalable performance beginning with 1K MACs, suitable for a wide range of SoC implementations. EdgeCortix's patented architecture significantly optimizes data paths between DNA engines, enhancing parallelism while reducing on-chip memory usage. As a core component of the SAKURA-II platform, DNA-II supports state-of-the-art generative AI models with industry-leading energy efficiency. DNA-II's design acknowledges the typical inefficiencies in IP cores, improving compute utilization and power consumption metrics substantially. By adopting innovative reconfigurable datapath technologies, EdgeCortix sets a new benchmark for low-power, high-performance edge AI applications.
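A MAC-array size such as the 1K starting point quoted above maps to peak throughput via a standard identity: peak ops/s = 2 × MACs × clock, since each MAC performs a multiply and an add per cycle. In the sketch below the clock frequency is an assumed figure for illustration, not an EdgeCortix specification.

```python
# Standard peak-throughput identity for a MAC array:
# peak ops/s = 2 * num_macs * clock_hz (one multiply + one add per MAC
# per cycle). The 1.0 GHz clock is an assumed figure, not a vendor spec.

def peak_tops(num_macs: int, clock_ghz: float) -> float:
    return 2 * num_macs * clock_ghz * 1e9 / 1e12

for macs in (1024, 4096, 16384):   # scaling up from the 1K MAC entry point
    print(f"{macs:>5} MACs @ 1.0 GHz -> {peak_tops(macs, 1.0):.2f} peak TOPS")
```

Sustained throughput is this peak times compute utilization, which is exactly the metric the reconfigurable datapath described above is designed to improve.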
Targeted at high-end applications, the SCR9 processor core boasts a 12-stage dual-issue out-of-order pipeline, adding vector processing units (VPUs) to manage intensive computational tasks. It offers hypervisor support, making it suitable for diverse enterprise-grade applications. Configured for up to 16 cores, it exhibits excellent memory management and cache coherency required for state-of-the-art computing platforms such as HPC, AI, and machine learning environments. This core embodies efficiency and performance, catering to industries that leverage high-throughput data processing.
The NMP-350 is an endpoint accelerator designed to deliver the lowest power and cost efficiency in its class. Ideal for applications such as driver authentication and health monitoring, it excels in automotive, AIoT/sensors, and wearable markets. The NMP-350 offers up to 1 TOPS performance with 1 MB of local memory, and is equipped with a RISC-V or Arm Cortex-M 32-bit CPU. It supports multiple use-cases, providing exceptional value for integrating AI capabilities into various devices. NMP-350's architectural design ensures optimal energy consumption, making it particularly suited to Industry 4.0 applications where predictive maintenance is crucial. Its compact nature allows for seamless integration into systems requiring minimal footprint yet substantial computational power. With support for multiple data inputs through AXI4 interfaces, this accelerator facilitates enhanced machine automation and intelligent data processing. This product is a testament to AiM Future's expertise in creating efficient AI solutions, providing the building blocks for smart devices that need to manage resources effectively. The combination of high performance with low energy requirements makes it a go-to choice for developers in the field of AI-enabled consumer technology.
Specially engineered for the automotive industry, the NA Class IP by Nuclei complies with the stringent ISO 26262 functional safety standard. This processor is crafted to handle complex automotive applications, offering flexibility and rigorous safety protocols necessary for mission-critical transportation technologies. Incorporating a range of functional safety features, the NA Class IP is equipped to ensure not only performance but also reliability and safety in high-stakes vehicular environments.
The RISCV SoC - Quad Core Server Class is engineered for high-performance, server-class applications requiring robust processing capabilities. Designed around the RISC-V architecture, this SoC integrates four cores to offer substantial computing power with both performance efficiency and scalability. The open RISC-V architecture allows for open-source compatibility and flexible customization, making it an excellent choice for users who demand both power and adaptability when handling demanding server workloads.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.
The TT-Ascalon™ is a high-performance RISC-V CPU designed for general-purpose control, emphasizing power and area efficiency. This processor features an out-of-order, superscalar architecture that adheres to the RISC-V RVA23 profile, co-developed with Tenstorrent's own Tensix IP for optimized performance. TT-Ascalon™ is highly scalable, suitable for various high-demand applications that benefit from robust computational capabilities. It is engineered to deliver unmatched performance while maintaining energy efficiency, making it ideal for operations that require reliability without compromising on speed.
The Prodigy Universal Processor by Tachyum is a groundbreaking innovation in the realm of computing, marked as the world's first processor that merges general-purpose computing, high-performance computing, and artificial intelligence into a single compact chip. This processor promises to revolutionize hyperscale data centers with its unprecedented processing capabilities and efficiency, pushing the boundaries of current computational power. With its superior performance per watt, Prodigy minimizes energy consumption while maximizing data processing abilities. Offering up to 21 times higher performance compared to its contemporaries, Prodigy stands out by providing a coherent multiprocessor architecture that simplifies the programming environment. It aims to overcome challenges like high power use and server underutilization, which have long plagued modern data centers. By addressing these core issues, it allows enterprises to manage workloads more effectively and sustainably. Furthermore, Prodigy's emulation platform broadens the scope of testing and evaluation, enabling developers to optimize their applications for better performance and low power consumption. With native support for the Prodigy instruction set architecture, the processor seamlessly integrates existing software packages, promising a smooth transition and robust application support. Through the integration of this versatile processor, Tachyum is leading the charge toward a sustainable technological future.
The SiFive Essential family is designed to deliver high customization for processors across varying applications, from standalone MCUs to deeply embedded systems. This family of processor cores provides a versatile solution, meeting diverse market needs with an optimal combination of power, area, and performance. Within this lineup, users can tailor processors for specific market requirements, ranging from simple MCUs to fully-featured, Linux-capable designs. With features such as high configurability, SiFive Essential processors offer flexible design points, allowing scaling from basic 2-stage pipelines to advanced dual-issue superscalar configurations. This adaptability makes SiFive Essential suitable for a wide variety of use cases in microcontrollers, IoT devices, and control plane processing. Additionally, their innovation is proven by billions of units shipped worldwide, highlighting their reliability and versatility. The Essential cores also provide advanced integration options within SoCs, enabling smooth interface and optimized performance. This includes pre-integrated trace and debug features, ensuring efficient development and deployment in diverse applications.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable precision computing tasks, facilitating the transition towards next-generation arithmetic. In the broader computing landscape, TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
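The QUIRE described above is, conceptually, a wide fixed-point accumulator that keeps long sums and dot products exact instead of rounding after every addition. The Python sketch below mimics that behavior with exact rational accumulation and contrasts it with naive float summation; it is a conceptual model only, not the posit/quire bit format.

```python
# Conceptual model of quire-style accumulation: sum exactly (no rounding
# between additions) and round once at the end, versus ordinary float
# accumulation that rounds after every add. Illustration only; this is
# not the actual posit/quire representation.

from fractions import Fraction
import random

random.seed(0)
# Values spanning many orders of magnitude, where float rounding hurts most.
values = [random.uniform(-1, 1) * 10 ** random.randint(-8, 8)
          for _ in range(100_000)]

float_sum = 0.0
for v in values:          # rounds to 53-bit precision after every addition
    float_sum += v

exact_sum = sum(Fraction(v) for v in values)   # exact, like a quire

print(f"float accumulation: {float_sum!r}")
print(f"exact accumulation: {float(exact_sum)!r}")
print(f"absolute error:     {abs(float_sum - float(exact_sum)):.3e}")
```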
The Trifecta-GPU offers cutting-edge graphics processing capabilities designed for high-efficiency computing needs. This PXIe/CPCIe module excels in handling intensive tasks across various applications, including signal processing, modular test and measurement, and electronic warfare systems. Built to deliver robust performance, it incorporates advanced GPU technology to ensure rapid data throughput and high computational capability. With a focus on versatility, the Trifecta-GPU seamlessly integrates with existing hardware setups, aiding in the enhancement of system performance through its powerful data handling capabilities. It is particularly well-suited for environments that demand precise data analysis and execution speed, such as AI and machine learning inference tasks. Its inclusion in RADX's product lineup signifies its importance in providing comprehensive solutions tailored for demanding industrial and research applications. Moreover, this module supports various applications, empowered by its substantial memory bandwidth, and possesses innovative architecture designed to optimize processing power. The Trifecta-GPU is an integral component within RADX's lineup designed to offer flexibility and power efficiency in equal measure, making it well-suited for future-tech applications that necessitate high-performance standards.
The RISC-V Processor Core provides a foundation for developing customizable, open-standard applications, making it a popular choice for modern computing needs. Benefiting from the RISC-V architecture's flexibility, this core can be tailored to meet specific processing requirements across various embedded systems. Industries dealing with complex design challenges find this open standard not only cost-effective but also powerful in fostering innovation. Optimized for efficiency, the RISC-V Processor Core enables the execution of robust software environments and applications, supporting tasks ranging from simple control functions to more demanding compute-heavy operations. This versatility extends to the seamless integration of additional custom IPs, allowing designers to enhance functionality without performance trade-offs. In high-performance computing environments, the RISC-V Processor Core is praised for its energy-efficient computing capabilities and reduced power consumption, characteristics that are vital in creating sustainable and environmentally friendly tech solutions. Its adaptability into various system-on-chip (SoC) designs makes it integral to the development of a broad spectrum of devices, from consumer electronics to industrial automation systems.
The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.
The IP Platform for Low-Power IoT is engineered to accelerate product development with highly integrated, customizable solutions specifically tailored for IoT applications. It consists of pre-validated IP platforms that serve as comprehensive building blocks for IoT devices, featuring ARM and RISC-V processor compatibility. Built for ultra-low power consumption, these platforms support smart and secure application needs, offering a scalable approach for different market requirements. Whether it's for beacons, active RFID, or connected audio devices, these platforms are ideal for various IoT applications demanding rapid development and integration. The solutions provided within this platform are not only power-efficient but also ready for AI implementation, enabling smart, AI-ready IoT systems. With FPGA evaluation mechanisms and comprehensive integration support, the IP Platform for Low-Power IoT ensures a seamless transition from concept to market-ready product.
Tailored for high efficiency, the NMP-550 accelerator advances performance in the fields of automotive, mobile, AR/VR, and more. Designed with versatility in mind, it finds applications in driver monitoring, video analytics, and security through its robust capabilities. Offering up to 6 TOPS of processing power, it includes up to 6 MB of local memory and a choice of RISC-V or Arm Cortex-M/A 32-bit CPU. In environments like drones, robotics, and medical devices, the NMP-550's enhanced computational capabilities allow for superior machine learning and AI functions. This is further supported by its ability to handle comprehensive data streams efficiently, making it ideal for tasks such as image analytics and fleet management. The NMP-550 exemplifies how AiM Future harnesses cutting-edge technology to develop powerful processors that meet contemporary demands for higher performance and integration into a multitude of smart technologies.
T-Head Semiconductor's Zhenyue 510 is a high-performance SSD controller tailored for enterprise-grade solid-state drives. This controller is intricately engineered to handle large-scale data processing with enhanced reliability and speed. It integrates innovative memory management techniques that maximize the effectiveness of storage solutions, thereby supporting modern data-driven applications in various industries. The Zhenyue 510 boasts advanced error correction mechanisms and efficient power mode management, which together ensure robust data integrity and energy efficiency. Its architecture allows for seamless integration with existing server infrastructures and supports an extensive set of storage interfaces, facilitating versatile deployment options for enterprise users. These features combine to deliver a balance of speed and dependability essential for sustaining the performance demands of high-end applications. With its focus on optimizing NAND performance, the Zhenyue 510 excels in sequencing large datasets, making it indispensable for workloads that thrive on quick data access and manipulation such as databases and real-time analytics. Its design underpins T-Head Semiconductor's mission to deliver components that not only meet but exceed the rigorous expectations of contemporary technology landscapes.
The xcore-200 chip from XMOS is a pivotal component for audio processing, delivering unrivaled performance for real-time, multichannel streaming applications. Tailored for professional and high-resolution consumer audio markets, xcore-200 facilitates complex audio processing with unparalleled precision and flexibility. This chip hosts XMOS's adept capabilities in deterministic and parallel processing, crucial for achieving deterministic, ultra-low-latency outputs in applications such as voice amplification systems, high-definition audio playback, and multipoint conferencing. Its architecture supports complex I/O operations, ensuring that all audio inputs and outputs are managed efficiently without sacrificing audio quality. The xcore-200 is crafted to handle large volumes of data effortlessly while maintaining the highest levels of integrity and clarity in audio outputs. It provides superior processing power to execute intensive tasks such as audio mixing, effects processing, and real-time equalization, crucial for both consumer electronics and professional audio gear. Moreover, xcore-200 supports a flexible integration into various systems, enhancing the functionality of audio interfaces, smart soundbars, and personalized audio solutions. It also sustains the robust performance demands needed in embedded AI implementations, thereby extending its utility beyond traditional audio systems. The xcore-200 is a testament to XMOS's dedication to pushing the boundaries of what's possible in audio engineering, blending high-end audio performance with cutting-edge processing power.
The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.
ISPido is a powerful and flexible image signal processing pipeline tailored for high-resolution image processing and tuning. It supports a comprehensive pipeline of image enhancement features such as defect correction, color filter array interpolation, and various color space conversions, all configurable via the AXI4-LITE protocol. Designed to handle input depths of 8, 10, or 12 bits, ISPido excels in processing high-definition resolutions up to 7680x7680 pixels, making it highly suitable for a variety of advanced vision applications. The architecture of ISPido is built to be highly compatible with AMBA AXI4 standards, ensuring that it can be seamlessly integrated into existing systems. Each module in the pipeline is individually configurable, allowing for extensive customization to optimize performance. Features such as auto-white balance, gamma correction, and HDR chroma resampling empower developers to produce precise and visually accurate outputs in complex environments. ISPido's modular and versatile design makes it an ideal choice for deploying in heterogeneous processing environments, ranging from low-power battery-operated devices to sophisticated vision systems capable of handling resolutions higher than 8K. This adaptability makes it a prime solution for developers working across various sectors demanding high-quality image processing.
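Of the pipeline stages listed above, gamma correction is the easiest to make concrete: it is a per-pixel power-law remap, typically implemented in hardware as a lookup table indexed by the input code. A minimal sketch for 10-bit input follows; the gamma value of 2.2 is an arbitrary example, not an ISPido default.

```python
# Gamma correction as a lookup table, the way an ISP stage typically
# implements it: precompute out = round(max * (in/max)**(1/gamma)) for
# every input code. 10-bit depth and gamma = 2.2 are example choices.

BITS = 10
MAX_CODE = (1 << BITS) - 1   # 1023 for 10-bit input
GAMMA = 2.2

lut = [round(MAX_CODE * (code / MAX_CODE) ** (1 / GAMMA))
       for code in range(MAX_CODE + 1)]

def apply_gamma(pixel: int) -> int:
    return lut[pixel]        # one table read per pixel in hardware

for p in (0, 64, 256, 512, 1023):
    print(f"in={p:>4} -> out={apply_gamma(p):>4}")
```

Runtime-selectable gamma tables, as described for ISPido, amount to swapping the contents of this LUT without touching the rest of the pipeline.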
Spectral CustomIP features silicon-proven specialty memory architectures perfect for diverse IC applications. Renowned for its wide range of memory architectures, CustomIP provides designers with options that include Binary and Ternary CAMs, multi-port memories, and cache, among others. These architectures are built on high-density, low-power designs, emphasizing performance while minimizing power usage. CustomIP, part of Spectral's Memory Development Platform, comes in source code format, enabling users to modify and extend design capabilities as necessary. CustomIP integrates SpectralTrak technology, offering PVT monitoring that dynamically adjusts memory timing in response to environmental factors. This ensures stability and high performance across various conditions. CustomIP's flexibility sees it employed in networking via SpectralTCAMs, graphics through SpectralMPorts, and low voltage applications like consumer electronics and healthcare devices with unique options like SpectralLVSRAM and SpectralHRAM. Broad configurations are available, facilitating integration into complex systems. With options for depth reaching 16K Words and data widths extending to 288 bits, the CustomIP suite supports myriad application requirements. Architectures include multiple bank setups and read/write port options, providing versatility for advanced chip designs. The platform's support of BIST, ECC, and test modes, alongside optional rights to modify, offers users a comprehensive set of tools to achieve their desired outcomes.
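A content-addressable memory inverts the usual lookup: the input is a data word and the output is the address of the matching entry, with the ternary variant adding per-bit don't-care masks. The short functional model below illustrates both binary and ternary matching; it is an illustrative sketch, not Spectral's implementation.

```python
# Functional model of CAM lookup: present a key, get back the address of
# the matching entry. A ternary (TCAM) entry carries a care-mask so some
# bits are wildcards. Illustration only; not Spectral's implementation.

def tcam_lookup(entries, key):
    """entries: list of (value, care_mask); lowest matching address wins."""
    for addr, (value, mask) in enumerate(entries):
        if (key & mask) == (value & mask):
            return addr
    return None  # no match

table = [
    (0b1010_1100, 0b1111_1111),  # all bits cared-for: binary CAM behavior
    (0b1010_0000, 0b1111_0000),  # ternary entry: matches any key 1010xxxx
]

print(tcam_lookup(table, 0b1010_1100))  # -> 0 (exact hit)
print(tcam_lookup(table, 0b1010_0111))  # -> 1 (wildcard hit)
print(tcam_lookup(table, 0b0000_0000))  # -> None (miss)
```

In hardware all entries are compared in parallel in a single cycle, which is what makes CAMs attractive for routing tables and cache tag matching.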
SiFive Performance family processors are specifically engineered to deliver outstanding performance and efficiency across a wide range of applications. These processors cater to diverse market demands, including data centers, consumer electronics, and AI-driven workloads. They feature high-performance, 64-bit out-of-order cores with optional vector engines, making them ideal for heavy-duty tasks requiring maximum throughput and scalability. The series incorporates a variety of architectural features that optimize performance and energy efficiency. It includes cores scalable from three-wide to six-wide, supporting up to 256-bit vector operations, which are particularly advantageous for AI and multimedia processing applications. This optimal balance ensures that each core offers superior compute density and power efficiency. Additionally, the SiFive Performance series emphasizes flexibility, allowing users to mix and match cores to achieve the desired balance between performance and power consumption. This makes the series a perfect fit for both performance-intensive and power-sensitive applications, enabling developers to create customized solutions tailored to their specific needs.