

Processor Core Dependent Semiconductor IPs

In the realm of semiconductor IP, the Processor Core Dependent category encompasses intellectual properties designed specifically to enhance and support processor cores. These IPs are tailored to work in concert with processor cores to optimize their performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. This category is crucial for customizing and adapting processors to meet specific application needs, addressing both performance optimization and system complexity management.

Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.

Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.

In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining competitive edge cannot be overstated.

All semiconductor IP: 131 IPs available

Metis AIPU PCIe AI Accelerator Card

Axelera AI's Metis AIPU PCIe AI Accelerator Card is designed to tackle demanding vision applications with its powerful processing capabilities. The card embeds a single Metis AIPU which can deliver up to 214 TOPS, providing the necessary throughput for concurrent processing of high-definition video streams and complex AI inference tasks. This PCIe card is supported by the Voyager SDK, which enhances the user experience by allowing easy integration into existing systems for efficient deployment of AI inference networks. It suits developers and integrators looking for an upgrade to existing infrastructure without extensive modifications, optimizing performance and accelerating AI model deployment. The card’s design prioritizes performance and efficiency, making it suitable for diverse applications across industries like security, transportation, and smart city environments. Its capacity to deliver high frames per second on popular AI models ensures it meets modern digital processing demands with reliability and precision.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Building Blocks, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV

CXL 3.1 Switch

Panmnesia's CXL 3.1 Switch is an integral component designed to facilitate high-speed, low-latency data transfers across multiple connected devices. It is architected to manage resource allocation seamlessly in AI and high-performance computing environments, supporting broad bandwidth, robust data throughput, and efficient power consumption, creating a cohesive foundation for scalable AI infrastructures. Its integration with advanced protocols ensures high system compatibility.

Panmnesia
AMBA AHB / APB / AXI, CXL, D2D, Ethernet, Fibre Channel, Gen-Z, Multiprocessor / DSP, PCI, Processor Core Dependent, Processor Core Independent, RapidIO, SAS, SATA, V-by-One

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI is engineered for applications requiring edge AI computing power in a compact form factor. Leveraging the quad-core Metis AIPU, this module provides efficient AI processing capabilities tailored for real-time analysis and data-intensive tasks in areas like computer vision. Designed to fit into standard NGFF (Next Generation Form Factor) M.2 sockets, it supports a wide range of AI models with dedicated 1GB DRAM memory for optimized performance. This module is especially suitable for systems needing enhanced image and video processing capabilities while maintaining minimal power consumption. The Metis AIPU M.2 Accelerator Module enhances computing architectures by enabling seamless integration of AI for a multitude of industrial and commercial applications. Its efficient design makes it ideal for environments where space is limited, but computational demand is high, ensuring that solutions are both powerful and cost-effective.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB / AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV

xcore.ai

The xcore.ai platform is designed to power the intelligent Internet of Things (IoT) by combining flexibility and performance efficiency. With its distinctive multi-threaded micro-architecture, it allows for low-latency and predictable performance, crucial for IoT applications. Each xcore.ai device is equipped with 16 logical cores distributed over two tiles, each with integrated 512 kB SRAM and a vector unit capable of handling both integer and floating-point operations. Communication between processors is facilitated by a robust interprocessor communication infrastructure, enabling scalability for systems requiring multiple xcore.ai SoCs. This platform supports a multitude of applications by integrating DSP, AI, and I/O processing within a cohesive development environment. For audio and voice processing needs, it offers adaptable, software-defined I/O that aligns with specific application requirements, ensuring efficient and targeted performance. The xcore.ai is also equipped for AI and machine learning tasks with a 256-bit VPU that supports 32-bit, 16-bit, and 8-bit vector operations, offering peak AI performance. The inclusion of a comprehensive development kit allows developers to explore its capabilities through ready-made solutions or custom-built applications.
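As a rough illustration of what the stated 256-bit VPU width implies, the sketch below computes how many elements fit into one vector operation at each supported element width. This is simple arithmetic derived from the figures above, not XMOS documentation, and it ignores per-cycle issue rates and pipelining.

```python
# Back-of-envelope: parallel lanes a 256-bit vector unit provides
# at each element width the description lists (illustrative only).
VPU_WIDTH_BITS = 256  # vector width stated for xcore.ai

def lanes(element_bits: int) -> int:
    """Elements processed per vector operation at a given element width."""
    return VPU_WIDTH_BITS // element_bits

for width in (32, 16, 8):
    print(f"{width}-bit elements: {lanes(width)} lanes per vector op")
```

Narrower elements trade precision for parallelism: dropping from 32-bit to 8-bit quadruples the number of lanes, which is why 8-bit operation typically yields the peak AI throughput figure.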

XMOS Semiconductor
21 Categories

Veyron V2 CPU

Veyron V2 represents the next generation of Ventana's high-performance RISC-V CPU. It significantly enhances compute capabilities over its predecessor, designed specifically for data center, automotive, and edge deployment scenarios. This CPU maintains compatibility with the RVA23 RISC-V specification, making it a powerful alternative to the latest ARM and x86 counterparts within similar domains. Focusing on seamless integration, the Veyron V2 offers clean, portable RTL implementations with a standardized interface, optimizing its use for custom SoCs with high-core counts. With a robust 512-bit vector unit, it efficiently supports workloads requiring both INT8 and BF16 precision, making it highly suitable for AI and ML applications. The Veyron V2 is adept in handling cloud-native and virtualized workloads due to its full architectural virtualization support. The architectural advancements offer significant performance-per-watt improvements, and advanced cache and virtualization features ensure a secure and reliable computing environment. The Veyron V2 is available as both a standalone IP and a complete hardware platform, facilitating diverse integration pathways for customers aiming to harness Ventana’s innovative RISC-V solutions.

Ventana Micro Systems
TSMC
16nm, 28nm
AI Processor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. 
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
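The description's central claim, that tokens generated per unit of memory bandwidth is the limiting factor, can be made concrete with a rough ceiling: during LLM decoding, each generated token must stream roughly all of the model's weights from memory, so throughput is bounded by bandwidth divided by weight size. The sketch below estimates that ceiling; the model size and LPDDR4 bandwidth figures are hypothetical examples, not RaiderChip specifications.

```python
# Rough decode-throughput ceiling for a memory-bandwidth-bound LLM:
# each token reads (approximately) all weights once, so
#   tokens/sec <= bandwidth / weight_bytes.
def max_tokens_per_sec(params_billions: float, bits_per_weight: int,
                       bandwidth_gb_s: float) -> float:
    weight_gb = params_billions * bits_per_weight / 8  # GB of weights
    return bandwidth_gb_s / weight_gb

# Hypothetical example: a 3B-parameter model at 4-bit weights
# behind a 12.8 GB/s LPDDR4 interface.
print(round(max_tokens_per_sec(3, 4, 12.8), 1))
```

The same arithmetic shows why 4-bit quantization matters here: halving bits per weight doubles the throughput ceiling for a fixed memory interface, which is the mechanism behind the efficiency claims above.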

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB / AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores

NuLink Die-to-Die PHY for Standard Packaging

The NuLink Die-to-Die PHY for Standard Packaging represents Eliyan's cornerstone technology, engineered to harness the power of standard packaging for die-to-die interconnects. This technology circumvents the limitations of advanced packaging by providing superior performance and power efficiencies traditionally associated only with high-end solutions. Designed to support multiple standards, such as UCIe and BoW, the NuLink D2D PHY is an ideal solution for applications requiring high bandwidth and low latency without the cost and complexity of silicon interposers or silicon bridges. In practical terms, the NuLink D2D PHY enables chiplets to achieve unprecedented bandwidth and power efficiency, allowing for increased flexibility in chiplet configurations. It supports a diverse range of substrates, providing advantages in thermal management, production cycle, and cost-effectiveness. The technology's ability to split a Network on Chip (NoC) across multiple chiplets, while maintaining performance integrity, makes it invaluable in ASIC designs. Eliyan's NuLink D2D PHY is particularly beneficial for systems requiring physical separation between high-performance ASICs and heat-sensitive components. By delivering interposer-like bandwidth and power in standard organic or laminate packages, this product ensures optimal system performance across varied applications, including those in AI, data processing, and high-speed computing.

Eliyan
Samsung
4nm, 7nm
AMBA AHB / APB / AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent

Jotunn8 AI Accelerator

Jotunn8 represents VSORA's pioneering leap into the world of AI Inference technology, aimed at data centers that require high-speed, cost-efficient, and scalable systems. The Jotunn8 chip is engineered to deliver trained models with unparalleled speed, minimizing latency and optimizing power usage, thereby guaranteeing that high-demand applications such as recommendation systems or large language model APIs operate at optimal efficiency. The Jotunn8 is celebrated for its near-theoretical performance, specifically designed to meet the demands of real-time services like chatbots and fraud detection. With a focus on reducing costs per inference – a critical factor for operating at massive scale – the chip ensures business viability through its power-efficient architecture, which significantly trims operational expenses and reduces carbon footprints. Innovative in its approach, the Jotunn8 supports complex AI computing needs by integrating various AI models seamlessly. It provides the foundation for scalable AI, ensuring that infrastructure can keep pace with growing consumer and business demands, and represents a robust solution that prepares businesses for the future of AI-driven applications.

VSORA
AI Processor, CPU, DSP Core, Interleaver/Deinterleaver, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor

Chimera GPNPU

The Chimera GPNPU from Quadric stands as a versatile processing unit designed to accelerate machine learning models across a wide range of applications. Uniquely integrating the strengths of neural processing units and digital signal processors, the Chimera GPNPU simplifies heterogeneous workloads by running traditional C++ code and complex AI networks such as large language models and vision transformers in a unified processor architecture. Its scalability, from 1 to 864 TOPS, allows it to meet the diverse requirements of markets including automotive and network edge computing.

A key feature of the Chimera GPNPU is its ability to handle matrix and vector operations alongside scalar control code within a single pipeline. Its fully software-driven nature enables developers to fine-tune model performance over the processor's lifecycle, adapting to evolving AI techniques without needing hardware updates. The system's design minimizes off-chip memory access, enhancing efficiency through its L2 memory management and compiler-driven optimizations.

Moreover, the Chimera GPNPU provides an extensive instruction set, finely tuned for AI inference tasks with intelligent memory management, reducing power consumption and maximizing processing efficiency. Its ability to maintain high performance with deterministic execution across various processes underlines its standing as a leading choice for AI-focused chip design.

Quadric
15 Categories

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator by EdgeCortix provides a cutting-edge solution for efficient AI processing at the edge. Engineered for optimum energy efficiency, it supports real-time Batch=1 AI inferencing and manages extensive parameter models effectively, making it ideal for complex Generative AI applications. The core of SAKURA-II, the Dynamic Neural Accelerator (DNA), is reconfigurable at runtime, which allows for simultaneous execution of multiple neural network models while maintaining high performance metrics. With its advanced neural architecture, SAKURA-II meets the challenging requirements of edge AI applications like image, text, and audio processing. This AI accelerator is distinguished by its ability to support large AI models within a low power envelope, typically operating at around 8 watts, and supports large models such as Llama 2 and Stable Diffusion. SAKURA-II modules are crafted for speedy integration into various systems, offering up to 60 TOPS of performance for INT8 operations. Additionally, its robust design allows handling of high-bandwidth memory scenarios, delivering up to 68 GB/sec of DRAM bandwidth, ensuring superior performance for large language models (LLMs) and vision applications across multiple industries. As a key component of EdgeCortix's edge AI solution platform, the SAKURA-II excels not only in computational efficiency but also in adaptability across various hardware systems like Raspberry Pi. The accelerator system includes options for both small form factor modules and PCIe cards, granting flexibility for different application needs and allowing easy deployment in space-constrained or resource-sensitive environments, thus maximizing the utility of existing infrastructures for AI tasks.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor

NMP-350

The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Technical specifications further define its role in the industry, exemplifying how it brings robust and scalable solutions to its users. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

SCR9 Processor Core

The SCR9 Processor Core represents the height of processor sophistication with a 12-stage dual-issue out-of-order pipeline complemented by a vector processing unit (VPU). This 64-bit, 16-core configuration supports hypervisor capabilities, making it a powerhouse for enterprise and high-performance applications. Designed to meet the rigorous demands of AI, ML, and computational-heavy environments, the SCR9 core delivers exceptional data throughput and processing power. Its comprehensive architecture includes robust memory and cache management, ensuring efficiency and speed in processing. This application-class core is supported by an extensive ecosystem of development tools and platforms, ensuring that developers can exploit its full potential for innovative solutions. With its focus on high efficiency and advanced capabilities, the SCR9 core is a definitive choice in fields demanding top-tier processing power.

Syntacore
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnection and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent

aiWare

aiWare represents a high-performance neural processing solution aimed at driving efficiency in AI-powered automotive applications. At its core, aiWare is designed to deliver robust inference capabilities necessary for complex neural network operations within the automotive domain. This IP features scalable performance fitting a broad spectrum of use cases, from sensor-edge processors to high-performance centralized models, spanning L2 to L4 automated driving applications. The aiWare NPU offers unrivaled efficiency and deterministic flexibility, having achieved ISO 26262 ASIL B certification, which accentuates its safety and reliability for automotive environments. It supports a multitude of advanced neural architectures, including CNNs and RNNs, empowering developers to effectively deploy AI models within constrained automotive ecosystems. Its data pathways ensure high throughput with minimal energy consumption, aligning with automotive standards for efficiency and operational dependability. Accompanied by the aiWare Studio SDK, aiWare simplifies the development process by offering an offline performance estimator that accurately predicts system performance. This tool, adopted by OEMs globally, allows developers to refine neural networks with minimal hardware requirements, significantly shortening time-to-market while preserving high-performance standards. The aiWare architecture focuses on enhancing efficiency, ensuring robust performance for applications spanning multi-modality sensing and complex data analytics.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores build on RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor

SiFive Intelligence X280

The SiFive Intelligence X280 delivers best-in-class vector processing capabilities powered by the RISC-V architecture, specifically targeting AI and ML applications. This core is designed to cater to advanced AI workloads, equipped with extensive compute capabilities that include wide vector processing units and scalable matrix computation. With its distinctive software-centric design, the X280 facilitates easy integration and offers adaptability to complex AI and ML processes. Its architecture is built to handle modern computational demands with high efficiency, thanks to its robust bandwidth and scalable execution units that accommodate evolving machine learning algorithms. Ideal for edge applications, the X280 supports sophisticated AI operations, resulting in fast and energy-efficient processing. The design flexibility ensures that the core can be optimized for a wide range of applications, promising unmatched performance scalability and intelligence in edge computing environments.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator is engineered to propel artificial intelligence tasks to new heights with its cutting-edge architecture. This accelerator enhances machine learning tasks by speeding up neural network processing, making it a key player in the burgeoning AI sector. Its innovative design is optimized for low latency and high throughput, facilitating real-time AI application performance and enabling advanced machine learning model implementations. Harnessing an extensive array of computing cores, the Hanguang 800 ensures parallel processing capabilities that significantly reduce training times for large-scale AI models. Its application scope covers diverse sectors, including autonomous driving, smart city infrastructure, and intelligent robotics, underscoring its versatility and adaptability. Built with energy efficiency in mind, this AI accelerator prioritizes minimal power consumption, making it ideal for data centers looking to maximize computational power without overextending their energy footprint. By integrating seamlessly with existing frameworks, the Hanguang 800 offers a ready-to-deploy solution for enterprises seeking to enhance their AI-driven services and operations.

T-Head Semiconductor
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores

SiFive Essential

The SiFive Essential line features a versatile range of embedded CPU cores that serve a wide variety of markets, from consumer electronics to industrial computing. With an emphasis on configurability, these cores can be tailored to meet specific operational requirements, providing a foundation for custom applications built around precise needs. Offering a scaffold for both 32-bit MCUs and 64-bit CPUs, the Essential series supports various pipeline configurations to maximize throughput and optimize power usage. Its design inherently enables scalable performance, making it suitable for applications requiring performance optimization and advanced customization. These cores uphold SiFive's tradition of high-quality innovation, delivering a robust, silicon-proven solution with billions of units already shipped globally. Ideal for diverse application scenarios, the Essential series provides a reliable, cost-effective foundation for a multitude of embedded processing projects.

SiFive, Inc.
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external network or cloud auxiliaries. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
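The cited 75% memory reduction follows directly from the bit widths involved: 4-bit weights occupy a quarter of the space of 16-bit floating-point weights. A minimal sketch of that arithmetic (ignoring the small per-block scale metadata that K-quant formats such as Q4_K add in practice):

```python
# Fractional memory saved by quantizing weights from orig_bits
# down to quant_bits, ignoring quantization metadata overhead.
def footprint_reduction(orig_bits: int, quant_bits: int) -> float:
    return 1 - quant_bits / orig_bits

print(f"{footprint_reduction(16, 4):.0%}")  # 16-bit -> 4-bit weights
```

By the same arithmetic, a Q5_K-style 5-bit format would save roughly 69% relative to 16-bit weights, trading a little of the saving back for precision.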

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

AndeShape Platforms

The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.

Andes Technology
Embedded Memories, Microcontroller, Processor Core Dependent, Processor Core Independent, Standard cell
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Time-Triggered Protocol

The Time-Triggered Protocol (TTP) is an innovative real-time communications protocol used primarily in space and aviation networks. TTP ensures synchronized communication across various nodes in a network, providing deterministic message delivery, which is crucial in systems where timing and reliability are critical. By supporting highly dependable system architectures, it aids in achieving high safety levels required in critical aerospace applications.

TTTech Computertechnik AG
AMBA AHB / APB/ AXI, CAN, CAN XL, CAN-FD, Ethernet, FlexRay, MIPI, Processor Core Dependent, Safe Ethernet, Temperature Sensor
View Details

RISC-V CPU IP N Class

The RISC-V CPU IP N Class from Nuclei is engineered to cover a broad range of applications in the 32-bit architecture spectrum. This IP is designed with an emphasis on microcontroller functionalities, making it ideal for AIoT solutions. Tailored for flexibility, it supports numerous customization options, including integrated security features and functional safety assurance, which underscore its adaptability in diverse IoT deployments. Notably, the N Class configuration supports a wide array of tools and resources within the RISC-V ecosystem to streamline the development process and enhance performance optimization.

At its core, the N Class IP is built to comply with the RISC-V open standard, ensuring compatibility and ease of integration with existing systems. Its robust configurability allows developers to select specific features pertinent to their application requirements, thereby optimizing resource allocation and efficiency. Featuring a three-stage pipeline architecture, the N Class delivers single and dual issue capabilities, which are crucial for achieving a balance between processing power and energy efficiency.

Furthermore, the N Class IP stands out for its exceptional scalability. It accommodates user-defined instruction extensions, as well as the RISC-V B/K/P/V extensions, which together facilitate the execution of sophisticated tasks and enhance the overall versatility of the processor. This configurability and support for comprehensive information security solutions make it a vital component in securing IoT applications.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

RAIV General Purpose GPU

The RAIV General Purpose GPU (GPGPU) epitomizes versatility and cutting-edge technology in the realm of data processing and graphics acceleration. It serves as a crucial technology enabler for various prominent sectors that are central to the fourth industrial revolution, such as autonomous driving, IoT, virtual reality/augmented reality (VR/AR), and sophisticated data centers. By leveraging the RAIV GPGPU, industries are able to process vast amounts of data more efficiently, which is paramount for their growth and competitive edge.

Characterized by its advanced architectural design, the RAIV GPU excels in managing substantial computational loads, which is essential for AI-driven processes and complex data analytics. Its adaptability makes it suitable for a wide array of applications, from enhancing automotive AI systems to empowering VR environments with seamless real-time interaction. Through optimized data handling and acceleration, the RAIV GPGPU assists in realizing smoother and more responsive application workflows.

The strategic design of the RAIV GPGPU focuses on enabling integrative solutions that enhance performance without compromising on power efficiency. Its functionality is built to meet the high demands of today's tech ecosystems, fostering advancements in computational efficiency and intelligent processing capabilities. As such, the RAIV stands out not only as a tool for improved graphical experiences but also as a significant component in driving innovation within tech-centric industries worldwide. Its pioneering architecture thus supports a multitude of applications, ensuring it remains a versatile and indispensable asset in diverse technological landscapes.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Vision Processor, Wireless Processor
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites.

The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities.

Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

Zhenyue 510 SSD Controller

The Zhenyue 510 SSD controller represents a breakthrough in enterprise-grade storage technology. This sophisticated controller is crafted to enhance SSD performance in demanding computing environments, offering speed and durability to enterprise data centers. It features state-of-the-art architectures tailored to meet the intense demands of continuous data inflows typical within server farms and cloud storage infrastructure. Combining advanced processing capabilities with custom algorithmic optimizations, the Zhenyue 510 delivers exceptional read and write cycles, ensuring high throughput and stable data handling. Its robust design not only manages high-capacity data storage efficiently but also ensures reliable data integrity through sophisticated error correction techniques and data management protocols. This SSD controller is designed with adaptability in mind, compatible with a wide range of NAND flash technologies, thus offering significant flexibility for various storage applications. Ideal for data-intensive tasks that require consistent performance, the Zhenyue 510 advances T-Head's position within the SSD industry by setting new standards for speed, efficiency, and dependability.

T-Head Semiconductor
eMMC, Flash Controller, NAND Flash, NVM Express, ONFI Controller, Processor Core Dependent, RLDRAM Controller, SAS, SATA, SDRAM Controller, SRAM Controller
View Details

GNSS VHDL Library

The GNSS VHDL Library developed by GNSS Sensor Ltd is a comprehensive collection of modules designed to facilitate the integration of satellite navigation systems into various platforms. This library is highly configurable, offering components like a GNSS engine, fast search engines for GPS, Glonass, and Galileo systems, a Viterbi decoder, and several internal self-test modules. With its design focused on maximum CPU platform independence and flexibility, this library is a powerful tool for developers seeking to incorporate advanced navigation capabilities into their products.

This VHDL library allows for the creation of System-on-Chip (SoC) configurations by utilizing pre-built FPGA images that integrate the GNSS library. These images are compatible with both 32-bit SPARC-V8 and 64-bit RISC-V architectures, supporting a wide range of external bus interfaces via a simplified core bus (SCB) that incorporates bridge modules for AMBA and SPI interfaces. This architectural flexibility significantly reduces development costs and complexity.

The GNSS VHDL Library ensures seamless compatibility with a variety of frequencies and satellite systems, providing a robust framework for satellite navigation in modern electronic devices. It includes RF front-end modules for GLONASS-L1 and GPS/Galileo/SBAS, which enhance the verification of GNSS configurations. This modularity and adaptability make it an ideal choice for innovative applications in navigation and positioning systems.

GNSS Sensor Ltd
All Foundries
All Process Nodes
AMBA AHB / APB/ AXI, Amplifier, Bluetooth, CAN, GPS, Input/Output Controller, Interrupt Controller, MIL-STD-1553, MIPI, Multi-Protocol PHY, Processor Core Dependent, UWB, Wireless USB
View Details

Veyron V1 CPU

The Veyron V1 is a high-performance RISC-V CPU aimed at data centers and similar applications that require robust computing power. It integrates with various chiplet and IP cores, making it a versatile choice for companies looking to create customized solutions. The Veyron V1 is designed to offer competitive performance against x86 and ARM counterparts, providing a seamless transition between different node process technologies.

This CPU benefits from Ventana's innovation in RISC-V technology, where efforts are placed on providing an extensible architecture that facilitates domain-specific acceleration. With capabilities stretching from hyperscale computing to edge applications, the Veyron V1 supports extensive instruction sets for high-throughput operations. It also boasts leading-edge chiplet interfaces, opening up numerous opportunities for rapid productization and cost-effective deployment.

Ventana's emphasis on open standards ensures that the Veyron V1 remains an adaptable choice for businesses aiming at bespoke solutions. Its compatibility with system IP and its provision in multiple platform formats—including chiplets—enable businesses to leverage the latest technological advancements in RISC-V. Additionally, the ecosystem surrounding the Veyron series ensures support for both modern software frameworks and cross-platform integration.

Ventana Micro Systems
TSMC
10nm, 16nm
AI Processor, Coprocessor, CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications.

Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments.

Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.

Codasip
AI Processor, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

RegSpec - Register Specification Tool

RegSpec is a cutting-edge tool that streamlines the generation of control and status register code, catering to the needs of IP designers by overcoming the limitations of traditional CSR generators. It supports complex synchronization and hardware interactions, allowing designers to automate intricate processes like pulse generation and serialization. Furthermore, it enhances verification by producing UVM-compatible code.

This tool's flexibility shines as it can import and export in industry-standard formats such as SystemRDL and IP-XACT, interacting seamlessly with other CSR tools. RegSpec not only generates Verilog RTL and SystemC header files but also provides comprehensive documentation across multiple formats including HTML, PDF, and Word. By transforming complex designs into streamlined processes, RegSpec plays a vital role in elevating design efficiency and precision.

For system design, it creates standard C/C++ headers that facilitate firmware access, accompanied by SystemC models for advanced system modeling. Such comprehensive functionality ensures that RegSpec is invaluable for organizations seeking to optimize register specification, documentation, and CSR generation in a streamlined manner.
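The kind of C-header generation described above can be sketched in a few lines. This is a minimal illustration of the concept only: the register map and the output layout here are hypothetical, not RegSpec's actual schema or format.

```python
# Minimal sketch of what a CSR generator automates: emitting a C firmware
# header from a register description. The register map below is hypothetical,
# not a RegSpec input format.

registers = [
    # (name, byte offset, access)
    ("CTRL",   0x00, "RW"),
    ("STATUS", 0x04, "RO"),
    ("IRQ_EN", 0x08, "RW"),
]

def emit_c_header(block: str, regs: list) -> str:
    """Render #define offset macros for one register block."""
    lines = [f"#ifndef {block}_REGS_H", f"#define {block}_REGS_H", ""]
    for name, offset, access in regs:
        lines.append(f"#define {block}_{name}_OFFSET 0x{offset:02X}  /* {access} */")
    lines += ["", f"#endif /* {block}_REGS_H */"]
    return "\n".join(lines)

print(emit_c_header("UART", registers))
```

A real generator such as RegSpec layers much more on top of this idea: field-level bit slices, synchronizer and pulse logic in the RTL, SystemC models, and UVM register-model output from the same single source description.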

Dyumnin Semiconductors
13 Categories
View Details

ISPido

ISPido represents a fully configurable RTL Image Signal Processing Pipeline, adhering to the AMBA AXI4 standards and tailored through the AXI4-LITE protocol for seamless integration with systems such as RISC-V. This advanced pipeline supports a variety of image processing functions like defective pixel correction, color filter interpolation using the Malvar-Cutler algorithm, and auto-white balance, among others.

Designed to handle resolutions up to 7680x7680, ISPido provides compatibility for both 4K and 8K video systems, with support for 8, 10, or 12-bit depth inputs. Each module within this pipeline can be fine-tuned to fit specific requirements, making it a versatile choice for adapting to various imaging needs. The architecture's compatibility with flexible standards ensures robust performance and adaptability in diverse applications, from consumer electronics to professional-grade imaging solutions.

Through its compact design, ISPido optimizes area and energy efficiency, providing high-quality image processing while keeping hardware demands low. This makes it suitable for battery-operated devices where power efficiency is crucial, without sacrificing the processing power needed for high-resolution outputs.

DPControl
21 Categories
View Details

RISCV SoC - Quad Core Server Class

Dyumnin's RISCV SoC is a versatile platform centered around a 64-bit quad-core server-class RISCV CPU, offering extensive subsystems, including AI/ML, automotive, multimedia, memory, cryptographic, and communication systems. This test chip can be reviewed in an FPGA format, ensuring adaptability and extensive testing possibilities. The AI/ML subsystem is particularly noteworthy due to its custom CPU configuration paired with a tensor flow unit, accelerating AI operations significantly. This adaptability lends itself to innovations in artificial intelligence, setting it apart in the competitive landscape of processors. Additionally, the automotive subsystem caters robustly to the needs of the automotive sector with CAN, CAN-FD, and SafeSPI IPs, all designed to enhance systems connectivity within vehicles. Moreover, the multimedia subsystem boasts a complete range of IPs to support HDMI, Display Port, MIPI, and more, facilitating rich audio and visual experiences across devices.

Dyumnin Semiconductors
26 Categories
View Details

SiFive Performance

The SiFive Performance series is designed to deliver the highest levels of computing power while maintaining energy efficiency. These cores are optimized for a variety of demanding applications, offering a balance of high throughput, scalability, and customization. Tailored for industries that require maximum performance, the series boasts both scalar and vector processing capabilities, equipped with advanced features like out-of-order execution and optional vector compute engines for enhanced versatility.

The design of the SiFive Performance series allows for flexible deployment across diverse applications, from high-performance computing environments to embedded systems. The major emphasis on customization enables users to optimize their solutions for specific needs, ensuring that performance metrics are closely aligned with operational demands.

These cores provide support for the latest RISC-V profiles, offering improved computation power, energy efficiency, and integration flexibility. The result is a highly capable core IP solution that is ready to power next-generation technology in sectors like data centers, AI, and beyond.

SiFive, Inc.
CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

Tyr AI Processor Family

The Tyr AI Processor series by VSORA is revolutionizing Edge AI by bringing real-time intelligence and decision-making power directly to edge devices. This family of processors delivers the compute power equivalent to data centers but in a compact, energy-efficient form factor ideal for edge environments. Tyr processors are specifically designed to process data on the device itself, reducing latency, preserving bandwidth, and maintaining data privacy without the need for cloud reliance. This localized processing translates to split-second analytics and decision capabilities critical for technologies like autonomous vehicles and industrial automation. With Tyr, industries can achieve superior performance while minimizing operational costs and energy consumption, fostering greener AI deployments. The processors’ design accommodates the demanding requirements of modern edge applications, ensuring they can support the evolving needs of future edge intelligence systems.

VSORA
AI Processor, CAN XL, DSP Core, Interleaver/Deinterleaver, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

ISPido on VIP Board

ISPido on VIP Board is a customized runtime solution tailored for Lattice Semiconductors' Video Interface Platform (VIP) board. This setup enables real-time image processing and provides flexibility for both automated configuration and manual control through a menu interface. Users can adjust settings via histogram readings, select gamma tables, and apply convolutional filters to achieve optimal image quality.

Equipped with key components like the CrossLink VIP input bridge board and ECP5 VIP Processor with ECP5-85 FPGA, this solution supports dual image sensors to produce a 1920x1080p HDMI output. The platform enables dynamic runtime calibration, providing users with interface options for active parameter adjustments, ensuring that image settings are fine-tuned for various applications.

This system is particularly advantageous for developers and engineers looking to integrate sophisticated image processing capabilities into their devices. Its runtime flexibility and comprehensive set of features make it a valuable tool for prototyping and deploying scalable imaging solutions.

DPControl
18 Categories
View Details

Origin E1

The Origin E1 is engineered for AI applications demanding minimal power and space, commonly deployed in home appliances, smartphones, and security devices. Tailored for efficiency, the E1 provides a compact AI processing unit optimized to handle always-on tasks with a low power footprint. Its architecture targets performance below 1 TOPS, making it well-suited for applications where conserving power and memory is paramount.

By utilizing Expedera's packet-based schema, the E1 achieves parallel processing across layers, enhancing speed and reducing energy and space requirements, crucial for maintaining the performance of everyday smart devices.

In terms of utility, the E1 is exemplified in always-listening technologies, allowing for seamless user experiences by keeping the power necessary for continuous AI analysis to a minimum, ensuring privacy as all data remains processed within the subsystem.

Expedera
13 Categories
View Details

Portable RISC-V Cores

Bluespec's Portable RISC-V Cores offer a versatile and adaptable solution for developers seeking cross-platform compatibility with support for FPGAs from Achronix, Xilinx, Lattice, and Microsemi. These cores come with support for operating systems like Linux and FreeRTOS, providing developers with a seamless and open-source toolset for application development. By leveraging Bluespec’s extensive compatibility and open-source frameworks, developers can benefit from efficient, versatile RISC-V application deployment.

Bluespec
AMBA AHB / APB/ AXI, CPU, Peripheral Controller, Processor Core Dependent, Safe Ethernet
View Details

iCan PicoPop® System on Module

The iCan PicoPop® System on Module offers a compact solution for high-performance computing in constrained environments, particularly in the realm of aerospace technology. This system on module is designed to deliver robust computing power while maintaining minimal space usage, offering an excellent ratio of performance to size. The PicoPop® excels in integrating a variety of functions onto a single module, including processing, memory, and interface capabilities, which collectively handle the demanding requirements of aerospace applications. Its efficient power consumption and powerful processing capability make it ideally suited to a range of in-flight applications and systems. This solution is tailored to support the development of sophisticated aviation systems, ensuring scalability and flexibility in deployment. With its advanced features and compact form, the iCan PicoPop® System on Module stands out as a potent component for modern aerospace challenges.

Oxytronic
Building Blocks, CPU, DSP Core, Fibre Channel, Interrupt Controller, LCD Controller, Processor Core Dependent, Processor Core Independent, Standard cell, Wireless Processor
View Details

pPLL02F Family

The pPLL02F Family is designed as a versatile suite of all-digital PLLs ideal for a range of clocking applications with frequencies up to 2GHz. This family stands out for its low jitter and compact area, making it a superb fit for moderate-speed microprocessor blocks and general-purpose digital systems. Its support for fractional multiplication facilitates its application across various industries where precise timing and efficient clocking are paramount.

Tailored to support numerous PLL systems, the pPLL02F IPs are optimized for seamless integration into diverse technological environments. They are adept at functioning as clock sources for microprocessors and general digital systems, where reliability and resource conservation are vital.

The pPLL02F Family is compatible with multiple foundries, including technologies like GlobalFoundries 22FDX, Samsung 8LPP, and TSMC N6/N7. These PLLs are readily adaptable, allowing engineers to leverage their capabilities across different semiconductor frameworks, ensuring a consistent and performance-driven experience.
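Fractional multiplication, mentioned above, lets a PLL synthesize output frequencies that are non-integer multiples of the reference clock. As a generic sketch (the reference frequency, divider values, and fractional word width below are illustrative, not pPLL02F register definitions), a fractional-N PLL produces f_out = f_ref × (N + F / 2^k):

```python
# Generic fractional-N frequency synthesis, the scheme behind "fractional
# multiplication" in clocking PLLs. All numbers are illustrative examples,
# not pPLL02F specifications.

def pll_output_hz(f_ref_hz: float, n_int: int, frac: int, frac_bits: int) -> float:
    """Output frequency: f_ref * (N + F / 2**frac_bits)."""
    return f_ref_hz * (n_int + frac / (1 << frac_bits))

# Example: a 24 MHz reference multiplied by 62.5 using a 16-bit
# fractional word (F = 32768 -> 0.5), giving 1.5 GHz.
f_out = pll_output_hz(24e6, 62, 32768, 16)
print(f"{f_out / 1e9:.3f} GHz")
```

The fractional word F gives frequency resolution of f_ref / 2^k (here about 366 Hz), which is why fractional-N synthesis suits systems that need fine-grained clock frequencies from a single crystal.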

Perceptia Devices Australia
GLOBALFOUNDRIES, Samsung, TSMC
14nm, 16nm, 32nm, 45nm
AMBA AHB / APB/ AXI, Clock Generator, Clock Synthesizer, PLL, Processor Core Dependent
View Details

SoC Platform

The SoC Platform by SEMIFIVE is designed to streamline the system-on-chip (SoC) development process, boasting rapid creation capabilities with minimal effort. Developed using silicon-proven IPs, the platform is attuned to specific domain applications and incorporates optimized design methodologies. This results in reduced costs, minimized risks, and faster design cycles. Fundamental features include a domain-specific architecture, pre-verified IP components, and hardware/software bring-up tools ready for activation, ensuring seamless integration and high performance.

Distinct attributes of the SoC Platform involve leveraging a pre-configured and thoroughly validated IP pool. This preparation fosters swift adaptation to varying requirements and presents customers with rapid time-to-market opportunities. Additionally, users can benefit from a reduction in engineering risk, supported by silicon-proven elements integrated into the platform's design. Whether it's achieving lower development costs or maximizing component reusability, the platform ensures a comprehensive and tailored engagement model for diverse project needs.

Capabilities such as dynamic configuration choices and integration of non-platform IPs further enhance flexibility, accommodating specialized customer requirements. Target applications range from AI inference systems and AIoT environments to high-performance computing (HPC) uses. By managing every aspect of the design and manufacturing lifecycle, the platform positions SEMIFIVE as a one-stop partner for achieving innovative semiconductor breakthroughs.

SEMIFIVE
15 Categories
View Details

Neural Network Accelerator

Designed to cater to the needs of edge computing, the Neural Network Accelerator by Gyrus AI is a powerhouse of performance and efficiency. With a focus on graph processing capabilities, this product excels in implementing neural networks by providing native graph processing. The accelerator attains impressive speeds, achieving 30 TOPS/W, while offering efficient computational power with significantly reduced clock cycles, ranging between 10 to 30 times less compared to traditional models. The design ensures that power consumption is kept at a minimum, being 10-20 times lower due to its low memory usage configuration.

Beyond its power efficiency, this accelerator is designed to maximize space with a smaller die area, ensuring an 8-10 times reduction in size while maintaining high utilization rates of over 80% for various model structures. Such design optimizations make it an ideal choice for applications requiring a compact, high-performance solution capable of delivering fast computations without compromising on energy efficiency. The Neural Network Accelerator is a testament to Gyrus AI's commitment to enabling smarter edge computing solutions.

Additionally, Gyrus AI has paired this technology with software tools that facilitate the execution of neural networks on the IP, simplifying integration and use in various applications. This seamless integration is part of their broader strategy to augment human intelligence, providing solutions that enhance and expand the capabilities of AI-driven technologies across industries.

Gyrus AI
AI Processor, Coprocessor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor
View Details

RISC-V CPU IP UX Class

Nuclei's UX Class RISC-V CPU IP represents a high-caliber offering crafted for Linux-based systems in data centers and networks. This IP harnesses a 64-bit architecture, incorporating a Memory Management Unit to enhance system operations and facilitate the processing of heavy data loads. It is designed to meet the rigorous standards of data-intensive environments while maintaining a focus on efficiency and seamless integration within established systems.

Built with a focus on adaptability, the UX Class IP can be configured to meet a variety of operational specifications, making it a versatile choice for data center applications that require robust and reliable processing power. It supports a dual-issue pipeline structure, enabling more efficient management of concurrent processing tasks and enhancing throughput across complex computing scenarios.

Furthermore, the UX Class IP integrates a range of functionalities intended to bolster system security and reliability. Trusted execution environments and advanced physical security measures ensure that sensitive data remains protected, which is a critical requirement in network and data center environments. By supporting a comprehensive toolchain, including RTOS and Linux environments, the UX Class IP ensures developers have access to essential resources that streamline development processes and enhance application standards.

Nuclei System Technology
Building Blocks, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Cores
View Details

Spectral CustomIP

Designed for specialized memory applications, Spectral's CustomIP offers a diverse range of memory architectures, including Binary and Ternary CAMs, multi-ported memories, and more. These solutions emphasize high density and low dynamic power consumption, offering architectures tailored to networking, graphics, and consumer devices. With capabilities that include advanced compiler features and comprehensive integration views, CustomIP is ideal for differentiated ICs that demand unique memory solutions. Users benefit from a source code availability that facilitates modifications, enabling further technological customization.

Spectral Design & Test Inc.
All Foundries
All Process Nodes
Embedded Memories, I/O Library, Processor Core Dependent, SDRAM Controller, Standard cell
View Details

Codasip L-Series DSP Core

The Codasip L-Series DSP Core offers specialized features tailored for digital signal processing applications. It is designed to efficiently handle high data throughput and complex algorithms, making it ideal for applications in telecommunications, multimedia processing, and advanced consumer electronics. With its high configurability, the L-Series can be customized to optimize processing power, ensuring that specific application needs are met with precision.

One of the key advantages of this core is its ability to be finely tuned to deliver optimal performance for signal processing tasks. This includes configurable instruction sets that align precisely with the unique requirements of DSP applications. The core's design ensures it can deliver top-tier performance while maintaining energy efficiency, which is critical for devices that operate in power-sensitive environments.

The L-Series DSP Core is built on Codasip's proven processor design methodologies, integrating seamlessly into existing systems while providing a platform for developers to expand and innovate. By offering tools for easy customization within defined parameters, Codasip ensures that users can achieve the best possible outcomes for their DSP needs efficiently and swiftly.

Codasip
AI Processor, Audio Processor, CPU, DSP Core, Multiprocessor / DSP, Processor Core Dependent
View Details

Satellite Navigation SoC Integration

The Satellite Navigation SoC Integration solution offers a seamless way to embed satellite navigation capabilities within a System on Chip (SoC). It integrates GPS, GLONASS, SBAS, and Galileo channels, with an independent fast search engine for each navigation system, enabling a robust and comprehensive navigation subsystem. Its silicon-verified, VHDL library-based design eases integration and ensures compatibility with a variety of platforms. Notably, this IP was among the first to be integrated with an open hardware architecture such as RISC-V, bolstering its adaptability and performance. The navigation IP features platform-independent signal processing and supports update rates of up to 1000 Hz. This performance is complemented by a user-friendly API, making it straightforward for developers to use in a wide range of applications; the IP also supports numerous communication protocols and works seamlessly with software services such as OpenStreetMap. The solution is optimal for developers looking to add precise, reliable satellite navigation to an SoC, and is particularly beneficial in modern applications that require high accuracy and reliability beyond traditional GPS functions.
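To give a sense of what consuming a high-rate (up to 1000 Hz) navigation API looks like from application code, here is a minimal polling sketch. The `Fix` record and `get_fix` callable are invented for illustration; they are not the vendor's actual interface.

```python
# Hypothetical sketch of polling a high-rate navigation fix source.
# Fix and get_fix are illustrative names, not the vendor's API.

import time
from dataclasses import dataclass

@dataclass
class Fix:
    lat: float          # degrees
    lon: float          # degrees
    timestamp: float    # seconds

def poll_fixes(get_fix, count: int, rate_hz: float = 1000.0):
    """Collect `count` fixes from a source at approximately rate_hz."""
    period = 1.0 / rate_hz
    fixes = []
    for _ in range(count):
        fixes.append(get_fix())
        time.sleep(period)
    return fixes

fixes = poll_fixes(lambda: Fix(50.08, 14.43, time.monotonic()), count=3)
assert len(fixes) == 3
```

At 1000 Hz the budget per fix is only 1 ms, which is why a hardware search engine per constellation matters: acquisition and tracking happen off the CPU, and software only reads out completed solutions.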

GNSS Sensor Ltd
All Foundries
All Process Nodes
14 Categories
View Details

SystemBIST

SystemBIST offers a versatile plug-and-play solution for FPGA configuration and JTAG testing with its unique patented architecture. The product is ideal for creating high-quality, self-testable, field-reconfigurable equipment, ensuring reliable integration and testing with any IEEE 1532- or 1149.1-compliant device. SystemBIST uses existing system flash memory to store configuration and test data that can be applied at power-up, enabling cost-effective setup without multiple configuration PROMs. It also improves PCB test efficacy by embedding deterministic Built-In Self-Test (BIST) capabilities, replaying manufacturing JTAG/IEEE 1149.1 test patterns to run efficient system tests. The platform simplifies the adoption of robust security measures, including 128-bit security identifiers, helping prevent unauthorized cloning and tampering of FPGAs, and thereby supports the intricate demands of modern electronics and embedded systems. It not only reduces the cost of in-system configuration but also advances embedded test methodology by enabling debugging without removing and replacing programmable components. The result is a scalable, reusable test infrastructure applicable across product generations and varying applications, extending the operational life and value of the technology involved.
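The JTAG test patterns SystemBIST replays are ultimately sequences of TMS/TDI bits that walk the IEEE 1149.1 TAP controller state machine. The sketch below models that standard 16-state machine; the state names and transitions follow the 1149.1 specification, but the model itself is an illustrative sketch, not part of the product.

```python
# Behavioral model of the IEEE 1149.1 (JTAG) TAP controller state machine.
# Each state maps to (next state if TMS=0, next state if TMS=1), sampled
# on the rising edge of TCK. Illustrative, per the standard's state diagram.

TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def step(state: str, tms: int) -> str:
    """Advance the TAP controller by one TCK cycle."""
    return TAP[state][tms]

# A defining property of the standard: from ANY state, five TCK cycles with
# TMS held high always land in Test-Logic-Reset.
for start in TAP:
    s = start
    for _ in range(5):
        s = step(s, 1)
    assert s == "Test-Logic-Reset"
```

That five-ones reset property is what lets an embedded test controller synchronize a scan chain of unknown state before applying stored patterns.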

Intellitech Corp.
Coprocessor, Multi-Protocol PHY, Processor Core Dependent, Receiver/Transmitter
View Details

AON1100

The AON1100 is a leading AI chip built for efficient voice and sensor processing, delivering its performance at under 260 µW of power. It maintains 90% accuracy even at signal-to-noise ratios below 0 dB, i.e., where the noise power exceeds the signal power. Ideal for devices that require continuous sensory input without significant power drain, it is designed to function reliably in demanding acoustic environments, optimizing both performance and power.
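For readers less familiar with the decibel scale, a sub-zero-dB SNR simply means noise power exceeds signal power, which is what makes the accuracy claim notable. A quick sketch of the standard SNR formula:

```python
# Standard SNR-in-decibels formula: SNR_dB = 10 * log10(P_signal / P_noise).
# Below 0 dB, noise power exceeds signal power.

import math

def snr_db(signal_power: float, noise_power: float) -> float:
    return 10 * math.log10(signal_power / noise_power)

assert snr_db(10, 1) == 10.0    # signal 10x the noise
assert snr_db(1, 2) < 0         # noise dominates: sub-zero dB
```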

AONDevices, Inc.
11 Categories
View Details