
Platform Level IP: Comprehensive Semiconductor Solutions

Platform Level IP is a critical category within the semiconductor IP ecosystem, offering a wide array of solutions fundamental to the design and efficiency of semiconductor devices. The category includes IP blocks and cores tailored to enhance system-level performance in consumer electronics, automotive systems, and networking applications. Suitable for tasks ranging from embedded control to advanced data processing, Platform Level IP encompasses the versatile components needed to build sophisticated multicore systems and other complex designs.

Subcategories within Platform Level IP cover a broad spectrum of integration needs:

1. **Multiprocessor/DSP (Digital Signal Processing)**: This includes specialized semiconductor IPs for handling tasks that require multiple processor cores working in tandem. These IPs are essential for applications needing high parallelism and performance, such as media processing, telecommunications, and high-performance computing.

2. **Processor Core Dependent**: These semiconductor IPs are designed to be tightly coupled with specific processor cores, ensuring optimal compatibility and performance. They include enhancements that provide seamless integration with one or more predetermined processor architectures, often used in specific applications like embedded systems or custom computing solutions.

3. **Processor Core Independent**: Unlike core-dependent IPs, these are flexible solutions that can integrate with a wide range of processor cores. This adaptability makes them ideal for designers looking to future-proof their technological investments or who are working with diverse processing environments.

Overall, Platform Level IP offers a robust foundation for developing flexible, efficient, and scalable semiconductor devices, catering to a variety of industries and technological requirements. Whether enhancing existing architectures or pioneering new designs, semiconductor IPs in this category play a pivotal role in the innovation and evolution of electronic devices.


Akida Neural Processor IP

Akida Neural Processor IP by BrainChip serves as a pivotal technology asset for enhancing edge AI capabilities. This IP core is specifically designed to process neural network tasks with a focus on extreme efficiency and power management, making it an ideal choice for battery-powered and small-footprint devices. By utilizing neuromorphic principles, the Akida Neural Processor ensures that only the most relevant computations are prioritized, which translates to substantial energy savings while maintaining high processing speeds. This IP's compatibility with diverse data types and its ability to form multi-layer neural networks make it versatile for a wide range of industries including automotive, consumer electronics, and healthcare. Furthermore, its capability for on-device learning, without network dependency, contributes to improved device autonomy and security, making the Akida Neural Processor an integral component for next-gen intelligent systems. Companies adopting this IP can expect enhanced AI functionality with reduced development overheads, enabling quicker time-to-market for innovative AI solutions.

BrainChip
AI Processor, Coprocessor, CPU, Digital Video Broadcast, Network on Chip, Platform Security, Processor Core Independent, Vision Processor
View Details

Akida 2nd Generation

The Akida 2nd Generation continues BrainChip's legacy of low-power, high-efficiency AI processing at the edge. This iteration of the Akida platform introduces expanded support for various data precisions, including 8-, 4-, and 1-bit weights and activations, which enhance computational flexibility and efficiency. Its architecture is significantly optimized for both spatial and temporal data processing, serving applications that demand high precision and rapid response times such as robotics, advanced driver-assistance systems (ADAS), and consumer electronics. The Akida 2nd Generation's event-based processing model greatly reduces unnecessary operations, focusing on real-time event detection and response, which is vital for applications requiring immediate feedback. Furthermore, its sophisticated on-chip learning capabilities allow adaptation to new tasks with minimal data, fostering more robust AI models that can be personalized to specific use cases without extensive retraining. As industries continue to migrate towards AI-powered solutions, the Akida 2nd Generation provides a compelling proposition with its improved performance metrics and lower power consumption profile.

BrainChip
11 Categories
View Details

KL730 AI SoC

The KL730 is a third-generation AI chip built around an advanced reconfigurable NPU architecture that delivers up to 8 TOPS of computing power. This design enhances computational efficiency across a range of workloads, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also offers enhanced video processing, supporting 4K output at 60 FPS. Backed by Kneron's decade of expertise in ISP technology, the KL730 stands out for its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.
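A quick sanity check on why minimizing DDR bandwidth matters at 4K 60 FPS: even before compression, a single camera stream moves substantial data every second. The sketch below is a generic back-of-envelope calculation assuming NV12 (YUV 4:2:0, 1.5 bytes per pixel) frames — illustrative figures, not Kneron data.

```python
# Back-of-envelope: raw data rate of an uncompressed 4K 60 FPS stream,
# assuming NV12 (YUV 4:2:0, 1.5 bytes per pixel). Illustrative only.
def stream_rate_mb_s(width, height, fps, bytes_per_pixel=1.5):
    """Uncompressed stream data rate in MB/s (1 MB = 1e6 bytes)."""
    return width * height * bytes_per_pixel * fps / 1e6

rate = stream_rate_mb_s(3840, 2160, 60)
print(f"4K60 NV12 raw stream: ~{rate:.0f} MB/s")  # ~746 MB/s
```

Roughly 0.75 GB/s per stream before any processing traffic, which is why an NPU that keeps intermediate data on-chip noticeably relieves the DDR interface.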

Kneron
TSMC
12nm
16 Categories
View Details

Metis AIPU PCIe AI Accelerator Card

Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card by Axelera AI offers powerful AI processing capabilities in a PCIe card format. This card is equipped with the Metis AI Processing Unit, capable of delivering up to 214 TOPS, making it ideal for intensive AI tasks and vision applications that require substantial computational power. With support for the Voyager SDK, this card ensures seamless integration and rapid deployment of AI models, helping developers leverage existing infrastructures efficiently. It's tailored for applications that demand robust AI processing like high-resolution video analysis and real-time object detection, handling complex networks with ease. Highlighted for its performance in ResNet-50 processing, which it can execute at a rate of up to 3,200 frames per second, the PCIe AI Accelerator Card perfectly meets the needs of cutting-edge AI applications. The software stack enhances the developer experience, simplifying the scaling of AI workloads while maintaining cost-effectiveness and energy efficiency for enterprise-grade solutions.
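For context on the quoted ResNet-50 figure, one can estimate the sustained compute it implies. The operation count below (~7.7 GOPs per 224×224 ResNet-50 inference, counting each MAC as two ops) is a common published approximation, not an Axelera specification, so treat the result as a rough sketch.

```python
# Rough effective-throughput estimate behind a frames-per-second claim.
# Assumption (not vendor data): ~7.7 GOPs per 224x224 ResNet-50
# inference, with each multiply-accumulate counted as two operations.
def effective_tops(fps, ops_per_frame):
    """Sustained TOPS implied by a given inference rate."""
    return fps * ops_per_frame / 1e12

eff = effective_tops(3200, 7.7e9)   # ~24.6 TOPS sustained
util = eff / 214                    # fraction of the 214 TOPS peak
print(f"~{eff:.1f} TOPS effective, ~{util:.0%} of peak")
```

The gap between sustained and peak numbers is normal — batch size, precision, and memory traffic all affect utilization — but the arithmetic shows how to compare vendors' fps and TOPS claims on the same footing.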

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Ethernet, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

MetaTF

MetaTF is BrainChip's toolkit aimed at optimizing and deploying machine learning models onto their proprietary Akida neuromorphic platform. This sophisticated toolset allows developers to convert existing models into sparse neural networks suited for Akida's efficient processing capabilities. MetaTF supports a seamless workflow from model conversion to deployment, simplifying the transition for developers aiming to leverage Akida's low-power, high-performance processing. The toolkit ensures that machine learning applications are optimized for edge deployment without compromising on speed or accuracy. This tool fosters an environment where AI models can be customized to meet specific application demands, delivering personalized and highly innovative AI solutions. MetaTF's role is crucial in enabling developers to efficiently integrate complex neural networks into real-world devices, aiding in applications like smart city infrastructure, IoT devices, and industrial automation. By using MetaTF, companies can dramatically enhance the adaptability and responsiveness of their AI applications while maintaining stringent power efficiency standards.

BrainChip
AI Processor, Coprocessor, Processor Core Independent, Vision Processor
View Details

CXL 3.1 Switch

Panmnesia's CXL 3.1 Switch is an integral component designed to facilitate high-speed, low-latency data transfers across multiple connected devices. It is architected to manage resource allocation seamlessly in AI and high-performance computing environments, supporting broad bandwidth, robust data throughput, and efficient power consumption, creating a cohesive foundation for scalable AI infrastructures. Its integration with advanced protocols ensures high system compatibility.

Panmnesia
AMBA AHB / APB/ AXI, CXL, D2D, Ethernet, Fibre Channel, Gen-Z, Multiprocessor / DSP, PCI, Processor Core Dependent, Processor Core Independent, RapidIO, SAS, SATA, V-by-One
View Details

Universal Chiplet Interconnect Express (UCIe)

EXTOLL's Universal Chiplet Interconnect Express (UCIe) is a cutting-edge solution designed to meet the evolving needs of chip-to-chip communication. UCIe enables seamless data exchange between chiplets, fostering a new era of modular and scalable processor designs. This technology is especially vital for applications requiring high bandwidth and low latency in data transfer between different chip components. Built to support heterogeneous integration, UCIe offers superior scalability and is compatible with a variety of process nodes, enabling easy adaptation to different technological requirements. This ensures that system architects can achieve optimal performance without compromising on design flexibility or efficiency. Furthermore, UCIe's design philosophy is centered around maintaining ultra-low power consumption, aligning with modern demands for energy-efficient technology. Through EXTOLL’s UCIe, developers have the capability to build versatile and multi-functional platforms that are more robust than ever. This interconnect technology not only facilitates communications between chips but enhances the overall architecture, paving the way for future innovations in chiplet systems.

EXTOLL GmbH
GLOBALFOUNDRIES, Samsung, TSMC
28nm
AMBA AHB / APB/ AXI, D2D, Gen-Z, Multiprocessor / DSP, Network on Chip, Processor Core Independent, USB, V-by-One, VESA
View Details

Akida IP

Akida IP represents BrainChip's groundbreaking approach to neuromorphic AI processing. Inspired by the efficiencies of cognitive processing found in the human brain, Akida IP delivers real-time AI processing capabilities directly at the edge. Unlike traditional data-intensive architectures, it operates with significantly reduced power consumption. Akida IP's design supports multiple data formats and integrates seamlessly with other hardware platforms, making it flexible for a wide range of AI applications. Uniquely, it employs sparsity, focusing computation only on pertinent data, thereby minimizing unnecessary processing and conserving power. The ability to operate independently of cloud-driven data processes not only conserves energy but enhances data privacy and security by ensuring that sensitive data remains on the device. Additionally, Akida IP’s temporal event-based neural networks excel in tracking event patterns over time, providing invaluable benefits in sectors like autonomous vehicles where rapid decision-making is critical. Akida IP's remarkable integration capacity and its scalability from small, embedded systems to larger computing infrastructures make it a versatile choice for developers aiming to incorporate smart AI capabilities into various devices.
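The power argument behind sparsity can be made concrete with simple arithmetic: an event-based design that skips zero activations performs proportionally fewer multiply-accumulates. The figures below are purely illustrative, not BrainChip benchmarks.

```python
# Illustrative only: how activation sparsity cuts multiply-accumulate
# work in an event-based design that skips zero-valued activations.
def effective_macs(dense_macs, activation_sparsity):
    """MACs actually performed when zero activations are skipped."""
    return dense_macs * (1.0 - activation_sparsity)

dense = 1_000_000  # hypothetical dense MAC count for one layer
for sparsity in (0.5, 0.8, 0.9):
    print(f"{sparsity:.0%} sparse -> {effective_macs(dense, sparsity):,.0f} MACs")
```

Since dynamic energy scales roughly with operations performed, a layer whose activations are 80–90% zero does a small fraction of the dense work — the mechanism behind the energy claims for sparsity-exploiting hardware.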

BrainChip
AI Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor
View Details

Veyron V2 CPU

The Ventana Veyron V2 CPU represents a substantial upgrade in processing power, setting a new standard in AI and data center performance with its RISC-V architecture. Created for applications that demand intensive computing resources, the Veyron V2 excels in providing high throughput and superior scalability. It is aimed at cloud-native operations and intensive data processing tasks requiring robust, reliable compute power. This CPU is finely tuned for modern, virtualized environments, delivering server-class performance tailored to manage cloud-native workloads efficiently. The Veyron V2 supports a range of integration options, making it readily adaptable for custom silicon platforms and high-performance system infrastructures. Its design incorporates an IOMMU compliant with RISC-V standards, enabling seamless interoperability with third-party IPs and modules. Ventana's innovation is evident in the Veyron V2's capacity for heterogeneous computing configurations, allowing diverse workloads to be managed effectively. Its architecture features advanced cluster and cache infrastructures, ensuring optimal performance across large-scale deployment scenarios. With a commitment to open standards and cutting-edge technologies, the Veyron V2 is a critical asset for organizations pursuing the next level in computing performance and efficiency.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Yitian 710 Processor

The Yitian 710 processor from T-Head represents a significant advancement in server chip technology, featuring an ARM-based architecture optimized for cloud applications. With its impressive multi-core design and high-speed memory access, this processor is engineered to handle intensive data processing tasks with efficiency and precision. It incorporates advanced fabrication techniques, offering high throughput and low latency to support next-generation cloud computing environments. Central to its architecture are 128 high-performance CPU cores based on the Armv9 architecture, which facilitate superior computational capabilities. These cores are paired with a substantial cache and high-speed DDR5 memory interfaces, optimizing the processor's ability to manage massive workloads effectively. This makes it an ideal choice for data centers looking to enhance processing speed and efficiency. In addition to its hardware prowess, the Yitian 710 is designed to deliver excellent energy efficiency. It boasts a sophisticated power management system that minimizes energy consumption without sacrificing performance, aligning with green computing trends. This combination of power, efficiency, and environmentally friendly design positions the Yitian 710 as a pivotal choice for enterprises propelling into the future of computing.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
View Details

Chimera GPNPU

The Chimera GPNPU from Quadric is engineered to meet the diverse needs of modern AI applications, bridging the gap between traditional processing and advanced AI model requirements. It's a fully licensable processor, designed to deliver high AI inference performance while eliminating the complexity of traditional multi-core systems. The GPNPU boasts an exceptional ability to execute various AI models, including classical backbones, state-of-the-art transformers, and large language models, all within a single execution pipeline.

One of the core strengths of the Chimera GPNPU is its unified architecture that integrates matrix, vector, and scalar processing capabilities. This singular design approach allows developers to manage complex tasks such as AI inference and data-parallel processing without resorting to multiple tools or artificial partitioning between processors. Users can expect heightened productivity thanks to its modeless operation, which is fully programmable and efficiently executes C++ code alongside AI graph code.

In terms of versatility and application potential, the Chimera GPNPU is adaptable across different market segments. It's available in various configurations to suit specific performance needs, from single-core designs to multi-core clusters capable of delivering up to 864 TOPS. This scalability, combined with future-proof programmability, ensures that the Chimera GPNPU not only addresses current AI challenges but also accommodates the ever-evolving landscape of cognitive computing requirements.

Quadric
15 Categories
View Details

xcore.ai

xcore.ai is a versatile and powerful processing platform designed for AIoT applications, delivering a balance of high performance and low power consumption. Crafted to bring AI processing capabilities to the edge, it integrates embedded AI, DSP, and advanced I/O functionalities, enabling quick and effective solutions for a variety of use cases. What sets xcore.ai apart is its cycle-accurate programmability and low-latency control, which improve the responsiveness and precision of the applications in which it is deployed. Tailored for smart environments, xcore.ai ensures robust and flexible computing power, suitable for consumer, industrial, and automotive markets. xcore.ai supports a wide range of functionalities, including voice and audio processing, making it ideal for developing smart interfaces such as voice-controlled devices. It also provides a framework for implementing complex algorithms and third-party applications, positioning it as a scalable solution for the growing demands of the connected world.

XMOS Semiconductor
21 Categories
View Details

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module from Axelera AI is a cutting-edge solution designed for enhancing AI performance directly within edge devices. Engineered to fit the M.2 form factor, this module packs powerful AI processing capabilities into a compact and efficient design, suitable for space-constrained applications. It leverages the Metis AI Processing Unit to deliver high-speed inference directly at the edge, minimizing latency and maximizing data throughput. The module is optimized for a range of computer vision tasks, making it ideal for applications like multi-channel video analytics, quality inspection, and real-time people monitoring. With its advanced architecture, the AIPU module supports a wide array of neural networks and can handle up to 24 concurrent video streams, making it incredibly versatile for industries looking to implement AI-driven solutions across various sectors. Compatible with AI frameworks such as TensorFlow, PyTorch, and ONNX, the Metis AIPU integrates seamlessly with existing systems to streamline AI model deployment and optimization. This not only boosts productivity but also significantly reduces time-to-market for edge AI solutions. Axelera's comprehensive software support ensures that users can achieve maximum performance from their AI models while maintaining operational efficiency.

Axelera AI
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Building Blocks, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, WMV
View Details

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports an MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an Arm Cortex-A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. This SoC is designed to handle both high and low light conditions optimally and is well suited for use in diverse edge AI devices, from security systems to expansive city and automotive networks.
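To illustrate what Int4 support means in practice, here is a minimal sketch of symmetric 4-bit weight quantization (integer range −8..7) — the general class of scheme an Int4-capable NPU executes. The scheme and values are illustrative, not Kneron's implementation.

```python
# Sketch of symmetric Int4 weight quantization (range -8..7).
# Generic illustration of the technique, not Kneron's scheme.
def quantize_int4(weights):
    """Map floats to 4-bit signed ints plus a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0  # largest |w| maps to 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from Int4 codes."""
    return [v * scale for v in q]

w = [0.30, -0.07, 0.14, -0.21]
q, s = quantize_int4(w)
print(q)                                        # Int4 codes
print([round(v, 3) for v in dequantize(q, s)])  # reconstruction
```

At 4 bits per weight, model storage (and hence DRAM traffic) drops to an eighth of FP32, at the cost of the rounding error visible in the reconstruction.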

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

SAKURA-II AI Accelerator

The SAKURA-II AI Accelerator represents a cutting-edge advancement in the field of generative AI, offering remarkable efficiency in a compact form factor. Engineered for rapid real-time inferencing, it excels in applications requiring low latency and robust performance in small, power-efficient silicon. This accelerator adeptly manages multi-billion parameter models, including Llama 2 and Stable Diffusion, under typical power requirements of 8W, catering to diverse applications from Vision to Language and Audio. Its core advantage lies in exceeding the AI compute utilization of other solutions, ensuring outstanding energy efficiency. The SAKURA-II further supports up to 32GB of DRAM, leveraging enhanced bandwidth for superior performance. Sparse computing techniques minimize memory footprint, while real-time data streaming and support for arbitrary activation functions elevate its functionality, enabling sophisticated applications in edge environments. This versatile AI accelerator not only enhances energy efficiency but also delivers robust memory management, supporting advanced precision for near-FP32 accuracy. Coupled with advanced power management, it suits a wide array of edge AI implementations, affirming its place as a leader in generative AI technologies at the edge.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
View Details

Talamo SDK

The Talamo Software Development Kit (SDK) is a comprehensive toolkit designed to facilitate the development and deployment of advanced neuromorphic AI applications. Leveraging the familiar PyTorch environment, Talamo simplifies AI model creation and deployment, allowing developers to efficiently build spiking neural network models or adapt existing frameworks. The SDK integrates essential tools for compiling, training, and simulating AI models, providing users a complete environment to tailor their AI solutions without requiring extensive expertise in neuromorphic computing. One of Talamo's standout features is its seamless integration with the Spiking Neural Processor (SNP), offering an easy path from model creation to application deployment. The SDK's architecture simulator supports rapid validation and iteration, giving developers a valuable resource for refining their models. By enabling streamlined processes for building and optimizing applications, Talamo reduces development time and enhances the flexibility of AI deployment in edge scenarios. Talamo is designed to empower developers to utilize the full potential of brain-inspired AI, allowing the creation of end-to-end application pipelines. It supports building complex functions and neural networks through a plug-and-play model approach, minimizing the barriers to entry for deploying neuromorphic solutions. As an all-encompassing platform, Talamo paves the way for the efficient realization of sophisticated AI-driven applications, from inception to final implementation.
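As a rough picture of what a spiking-model toolchain ultimately targets, the snippet below steps a single leaky integrate-and-fire (LIF) neuron — the basic dynamic underlying spiking neural networks. It is a textbook sketch, not the Talamo API or Innatera's neuron model.

```python
# Generic leaky integrate-and-fire (LIF) neuron step: the membrane
# potential leaks, accumulates input current, and fires on threshold.
# Textbook dynamics, not the Talamo SDK.
def lif_step(v, current, decay=0.9, threshold=1.0):
    """Return (new_membrane_potential, spiked?)."""
    v = v * decay + current
    if v >= threshold:
        return 0.0, True   # fire and reset
    return v, False

v, spikes = 0.0, []
for i in [0.4, 0.4, 0.4, 0.0, 0.4]:
    v, fired = lif_step(v, i)
    spikes.append(fired)
print(spikes)  # [False, False, True, False, False]
```

Note the event-driven character: output is produced only when accumulated evidence crosses threshold, which is what lets spiking hardware stay idle (and low-power) between events.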

Innatera Nanosystems
AI Processor, Multiprocessor / DSP, Processor Core Independent, Vision Processor
View Details

Jotunn8 AI Accelerator

The Jotunn8 AI Accelerator represents a pioneering approach in AI inference chip technology, designed to cater to the demanding needs of contemporary data centers. Its architecture is optimized for high-speed deployment of AI models, combining rapid data processing capabilities with cost-effectiveness and energy efficiency. By integrating features such as ultra-low latency and substantial throughput capacity, it supports real-time applications like chatbots and fraud detection that require immediate data processing and agile responses. The chip's impressive performance per watt metric ensures a lower operational cost, making it a viable option for scalable AI operations that demand both efficiency and sustainability. By reducing power consumption, Jotunn8 not only minimizes expenditure but also contributes to a reduced carbon footprint, aligning with the global move towards greener technology solutions. These attributes make Jotunn8 highly suitable for applications where energy considerations and environmental impact are paramount. Additionally, Jotunn8 offers flexibility in memory performance, allowing for the integration of complexity in AI models without compromising on speed or efficiency. The design emphasizes robustness in handling large-scale AI services, catering to the new challenges posed by expanding data needs and varied application environments. Jotunn8 is not simply about enhancing inference speed; it proposes a new baseline for scalable AI operations, making it a foundational element for future-proof AI infrastructure.

VSORA
13 Categories
View Details

Time-Triggered Ethernet

Time-Triggered Ethernet (TTEthernet) is a cutting-edge data communication solution tailored for aviation and space sectors requiring dual fault-tolerance and redundancy. Designed for highly safety-critical environments, TTEthernet embodies an evolutionary step in Ethernet communication by integrating deterministic behavior with conventional Ethernet benefits. This blend of technologies facilitates the transfer of data with precision timing, ensuring that all communications occur as scheduled—a vital feature for mission-critical operations. TTEthernet is particularly advantageous in applications requiring high levels of data integrity and latency control. Its deployment across triple-redundant network architectures ensures that even in case of component failures, the network continues to function seamlessly. Such redundancy is necessary in scenarios like human space missions, where data loss or delay is not an option. TTTech's TTEthernet offerings, which also include ASIC designs, meet the European Cooperation for Space Standardization (ECSS) standards, reinforcing their reliability and suitability for the most demanding applications. Supporting both end systems and more intricate system-on-chip designs, this technology synchronizes all data flow to maintain continuity and consistency throughout the network infrastructure.
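The core idea of time-triggered communication — every frame transmits only in precomputed slots of a global schedule — can be sketched in a few lines. This is a conceptual illustration only, not the TTEthernet wire protocol or its clock-synchronization machinery.

```python
# Conceptual sketch of time-triggered scheduling: a frame with a fixed
# period and offset transmits only in its precomputed slots, so latency
# and contention are known at design time. Not the TTEthernet protocol.
def transmits_at(t_us, period_us, offset_us):
    """True if the schedule fires this frame at time t_us."""
    return (t_us - offset_us) % period_us == 0

# A frame scheduled every 1000 us, offset 250 us into the cycle:
slots = [t for t in range(0, 4000, 250) if transmits_at(t, 1000, 250)]
print(slots)  # [250, 1250, 2250, 3250]
```

Because every node shares the same synchronized timebase and schedule, a receiver can also treat a frame arriving outside its slot as faulty — the basis of the determinism and fault-containment claims above.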

TTTech Computertechnik AG
Cell / Packet, Ethernet, FlexRay, LIN, MIL-STD-1553, MIPI, Processor Core Independent, Safe Ethernet
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems.

RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy. This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

EXOSTIV

EXOSTIV is a versatile tool providing extensive capture capabilities for monitoring FPGA internal signals. It's designed to visualize operation in real-time, thus offering immense savings by mitigating FPGA bugs during production and lowering engineering costs. The tool adapts to different prototyping boards and supports a variety of FPGA configurations. A hallmark of EXOSTIV's functionality is its ability to perform at-speed analysis in complex FPGA designs. It features robust probes like the EP16000, which connects to FPGA chip transceivers, supporting significant data rates per transceiver. This setup ensures that engineers can conduct real-world testing and accurate data capture, overcoming the hindrances often encountered with simulation-only methods. The tool boasts a user-friendly interface centered around its Core Inserter and Probe Client software, allowing for efficient IP generation and integration into the target design. By providing comprehensive connectivity options via QSFP28 and supporting multiple platforms, EXOSTIV remains an essential asset for engineers aiming to enhance their FPGA design and validation processes.

Exostiv Labs
AMBA AHB / APB/ AXI, Clock Generator, Processor Core Dependent
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

Designed for minimal power consumption, the Tianqiao-70 is a 64-bit RISC-V CPU core that balances performance with energy savings. Targeting primarily the commercial space, it supports applications that demand lower power usage without compromising performance, standing out in mobile and desktop processing, AI learning, and other demanding workloads that require consistent yet power-efficient computing. Architected for maximum throughput at minimum power draw, it is well suited to energy-critical systems. The Tianqiao-70 showcases StarFive's focus on efficiency, enabling mobile, desktop, and AI platforms to leverage low power requirements effectively, which makes it a compelling choice for developers integrating eco-friendly solutions into their products.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

NuLink Die-to-Die PHY for Standard Packaging

The NuLink Die-to-Die PHY for Standard Packaging represents Eliyan's cornerstone technology, engineered to harness the power of standard packaging for die-to-die interconnects. This technology circumvents the limitations of advanced packaging by providing superior performance and power efficiencies traditionally associated only with high-end solutions. Designed to support multiple standards, such as UCIe and BoW, the NuLink D2D PHY is an ideal solution for applications requiring high bandwidth and low latency without the cost and complexity of silicon interposers or silicon bridges. In practical terms, the NuLink D2D PHY enables chiplets to achieve unprecedented bandwidth and power efficiency, allowing for increased flexibility in chiplet configurations. It supports a diverse range of substrates, providing advantages in thermal management, production cycle, and cost-effectiveness. The technology's ability to split a Network on Chip (NoC) across multiple chiplets, while maintaining performance integrity, makes it invaluable in ASIC designs. Eliyan's NuLink D2D PHY is particularly beneficial for systems requiring physical separation between high-performance ASICs and heat-sensitive components. By delivering interposer-like bandwidth and power in standard organic or laminate packages, this product ensures optimal system performance across varied applications, including those in AI, data processing, and high-speed computing.

Eliyan
Samsung
4nm, 7nm
AMBA AHB / APB/ AXI, CXL, D2D, MIPI, Network on Chip, Processor Core Dependent
View Details

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell
View Details

AX45MP

The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

NMP-750

The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

aiWare

The aiWare hardware neural processing unit (NPU) stands out as a state-of-the-art solution for automotive AI applications, bringing unmatched efficiency and performance. Designed specifically for inference tasks associated with automated driving systems, aiWare supports a wide array of AI workloads including CNNs, LSTMs, and RNNs, ensuring optimal operation across numerous applications.

aiWare is engineered to achieve industry-leading efficiency rates, boasting up to 98% efficiency on automotive neural networks. It operates across various performance requirements, from cost-sensitive L2 regulatory applications to advanced multi-sensor L3+ systems. The hardware platform is production-proven, already implemented in several products like Nextchip's APACHE series, and enjoys strong industry partnerships.

A key feature of aiWare is its scalability, capable of delivering up to 1024 TOPS with its multi-core architecture while maintaining high efficiency in diverse AI tasks. The design allows for straightforward integration, facilitating early-stage performance evaluations and certifications with its deterministic operations and minimal host CPU intervention.

A dedicated SDK, aiWare Studio, furthers the potential of the NPU by providing a suite of tools focused on neural network optimization, supporting developers in tuning their AI models with fine precision. Optimized for automotive-grade applications, aiWare's technology ensures seamless integration into systems requiring AEC-Q100 Grade 2 compliance, significantly enhancing the capabilities of automated driving applications from L2 through L4.

aiMotive
AI Processor, Building Blocks, CPU, Cryptography Cores, FlexRay, Platform Security, Processor Core Dependent, Processor Core Independent, Security Protocol Accelerators, Vision Processor
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator by T-Head is designed to meet the needs of intensive machine learning workloads. Boasting superior performance, this AI accelerator leverages cutting-edge algorithms to enhance data processing capabilities, offering rapid speeds for AI tasks. It is particularly suited for deep learning applications that require high throughput and complex computation. Fitted with a highly efficient architecture, the Hanguang 800 speeds up machine learning model training and inference, enabling quicker deployments of AI solutions across industries. Its advanced design ensures compatibility with a wide range of machine learning frameworks, allowing for flexibility in AI application development and deployment. Energy efficiency is a key attribute of the Hanguang 800, incorporating modern power management features that reduce consumption without impacting performance. This makes it not only a high-performance option but also an environmentally friendly choice for businesses seeking to minimize their carbon footprint while optimizing AI processes.

T-Head Semiconductor
AI Processor, CPU, Processor Core Dependent, Security Processor, Vision Processor
View Details

SiFive Intelligence X280

The SiFive Intelligence X280 processor targets applications in machine learning and artificial intelligence, offering a high-performance, scalable architecture for emerging data workloads. As part of the Intelligence family, the X280 prioritizes a software-first methodology in processor design, addressing future ML and AI deployment needs, especially at the edge. This makes it particularly useful for scenarios requiring high computational power close to the data source. Central to its capabilities are scalable vector and matrix compute engines that can adapt to evolving workloads, thus future-proofing investments in AI infrastructure. With high-bandwidth bus interfaces and support for custom engine control, the X280 ensures seamless integration with varied system architectures, enhancing operational efficiency and throughput. By focusing on versatility and scalability, the X280 allows developers to deploy high-performance solutions without the typical constraints of more traditional platforms. It supports wide-ranging AI applications, from edge computing in IoT to advanced machine learning tasks, underpinning its role in modern and future-ready computing solutions.

SiFive, Inc.
AI Processor, CPU, Cryptography Cores, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Cores, Security Processor, Security Subsystems, Vision Processor
View Details

Ultra-Low-Power 64-Bit RISC-V Core

This core is designed for ultra-low-power applications, offering a remarkable balance of power efficiency and performance. Drawing just 10mW at 1GHz, it showcases Micro Magic's advanced design techniques for high-speed processing at low supply voltages, and the same core can scale to 5GHz where maximum performance is required. It is ideal for energy-sensitive applications where performance cannot be compromised, providing a formidable foundation for high-performance, low-power computing. The 64-bit architecture ensures robust processing capability, making the core suitable for a wide range of applications, from IoT devices to complex computing workloads.
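The quoted operating point implies a striking energy-per-cycle figure: at 10mW and 1GHz, the core spends about 10 picojoules per clock cycle. A quick check of that arithmetic (the derived value is our computation from the quoted specs, not a vendor datasheet number):

```python
def energy_per_cycle_pj(power_mw: float, freq_ghz: float) -> float:
    # energy per cycle = power / frequency; 1 mW at 1 GHz is exactly 1 pJ
    return (power_mw * 1e-3) / (freq_ghz * 1e9) * 1e12

pj = energy_per_cycle_pj(10, 1)   # the quoted 10 mW @ 1 GHz operating point
```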

Micro Magic, Inc.
TSMC
14nm
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip focuses on quantized AI operations, sharply reducing memory requirements while maintaining precision and speed. The accelerator executes large language models in real time using quantization techniques such as Q4_K and Q5_K, improving inference efficiency in memory-constrained environments. By delivering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers bring advanced AI capabilities to smaller, less powerful devices without sacrificing operational quality, an advantage for latency-sensitive applications such as real-time translation, autonomous navigation, and responsive customer interactions.

Unlike conventional AI solutions, the GenAI v1-Q operates independently, with no external network or cloud dependencies. Its design combines computational performance with scalability, adapting across hardware platforms including FPGAs and ASICs; this flexibility allows performance parameters such as model size, inference speed, and power consumption to be tuned to user requirements. Because it can run multiple transformer-based models and keep confidential data on-premises, GenAI v1-Q is suited to sensitive domains such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to AI solutions that are both environmentally sustainable and economically viable.
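The quoted 75% memory reduction is roughly what moving from 16-bit to 4-bit weights yields. To illustrate the general idea behind block quantization schemes such as Q4_K, here is a generic symmetric 4-bit sketch (not RaiderChip's or llama.cpp's actual block format):

```python
def quantize_block_4bit(block):
    """Symmetric 4-bit quantization of one block of floats.

    One scale is stored per block plus a signed 4-bit code per value, so
    a 32-value fp16 block (64 bytes) shrinks to ~18 bytes (16 bytes of
    codes + a 2-byte scale), close to the quoted 75% reduction.
    """
    scale = max(abs(x) for x in block) / 7 or 1.0
    codes = [max(-8, min(7, round(x / scale))) for x in block]
    return scale, codes

def dequantize_block_4bit(scale, codes):
    return [c * scale for c in codes]

block = [0.5, -1.0, 0.25, 0.875]
scale, codes = quantize_block_4bit(block)
recovered = dequantize_block_4bit(scale, codes)
```

Each value is recovered to within half a quantization step of the original, which is why well-chosen block sizes and scales preserve model accuracy despite the 4x compression.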

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

H.264 FPGA Encoder and CODEC Micro Footprint Cores

The H.264 FPGA Encoder and CODEC Micro Footprint Cores from A2e Technologies is a highly customizable IP core designed specifically for FPGAs. This core is notable for its small size and high speed, capable of supporting 1080p60 H.264 Baseline video with a single core. Featuring exceptionally low latency, as little as 1ms at 1080p30, it offers a customizable solution for various video resolutions and pixel depths. These capabilities make it a competitive choice for applications requiring high-performance video compression with minimal footprint. Designed to be ITAR compliant and licensable, the H.264 core can be tailored to meet specific requirements, offering flexibility in video applications. This product is especially suitable for industries where space and performance are critical, such as defense and industrial controls. The core can work efficiently across a range of resolutions and color depths, providing the potential for integration into a wide array of devices and systems. The company's expertise ensures that this H.264 core is not only versatile but also comes with the option of a low-cost evaluation license, allowing potential users to explore its capabilities before committing fully. With A2e's strong support and integration services, customers have assurance that even complex design requirements can be met with experienced guidance.
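To put the 1ms latency figure in perspective: a 1080p30 frame takes about 33ms to scan, so 1ms corresponds to roughly 32 video lines, meaning the encoder emits compressed data while only a small strip of the frame has been buffered. A quick sanity check of that arithmetic:

```python
def latency_in_lines(latency_ms: float, lines_per_frame: int, fps: float) -> float:
    # express encoder latency as the number of scan lines buffered
    frame_ms = 1000.0 / fps
    return lines_per_frame * latency_ms / frame_ms

lines = latency_in_lines(1.0, 1080, 30)   # about 32 lines of a 1080p30 frame
```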

A2e Technologies
AI Processor, AMBA AHB / APB/ AXI, Arbiter, Audio Controller, H.264, H.265, HDMI, Multiprocessor / DSP, Other, TICO, USB, Wireless Processor
View Details

eSi-3200

The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
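The 64-bit multiply-accumulate mentioned above is the key to fixed-point filtering: a Q15 x Q15 product is Q30, and summing many such products needs headroom well beyond 32 bits. A behavioural sketch of that pattern, with plain Python standing in for the core's MAC instructions:

```python
def fir_q15(samples, coeffs):
    """One FIR output in Q15 fixed point using a wide accumulator.

    Each Q15 x Q15 product is Q30; accumulating in a 64-bit-class
    register avoids overflow, then the sum is rounded back to Q15
    and saturated to the int16 range.
    """
    acc = 0                              # wide accumulator
    for s, c in zip(samples, coeffs):
        acc += s * c                     # Q30 partial product
    out = (acc + (1 << 14)) >> 15        # round-to-nearest, back to Q15
    return max(-32768, min(32767, out))  # saturate to int16

y = fir_q15([16384], [16384])            # 0.5 * 0.5 in Q15 -> 0.25 (8192)
```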

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores
View Details

NMP-350

The NMP-350 is an endpoint accelerator designed to minimize power usage and cost, targeting markets such as automotive, AIoT/sensors, and smart appliances. Its applications span driver authentication, predictive maintenance, and health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V or Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces, making it a versatile component for a range of industrial applications. As a low-power solution, the NMP-350 suits applications that need processing power without inflated energy consumption, which is crucial for mobile and battery-operated devices where every watt conserved extends operational life. Its adaptability across applications, coupled with its cost-efficiency, makes it a practical choice for developers incorporating AI capabilities into next-generation designs without compromising economy or efficiency.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

Spiking Neural Processor T1 - Ultra-lowpower Microcontroller for Sensing

The Spiking Neural Processor T1 is a neuromorphic microcontroller engineered for always-on sensor applications. It utilizes a spiking neural network engine alongside a RISC-V processor core, creating an ultra-efficient single-chip solution for real-time data processing. With its optimized power consumption, it enables next-generation artificial intelligence and signal processing in small, battery-operated devices. The T1 delivers advanced applications capabilities within a minimal power envelope, making it suitable for use in devices where power and latency are critical factors. The T1 includes a compact, multi-core RISC-V CPU paired with substantial on-chip SRAM, enabling fast and responsive processing of sensor data. By employing the remarkable abilities of spiking neural networks for pattern recognition, it ensures superior power performance on signal-processing tasks. The versatile processor can execute both SNNs and conventional processing tasks, supported by various standard interfaces, thus offering maximum flexibility to developers looking to implement AI features across different devices. Developers can quickly prototype and deploy solutions using the T1's development kit, which includes software for easy integration into existing systems and tools for accurate performance profiling. The development kit supports a variety of sensor interfaces, streamlining the creation of sophisticated sensor applications without the need for extensive power or size trade-offs.
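Spiking neural networks process data as sparse event streams rather than dense tensors, which is where the power savings come from: a neuron only emits work when its membrane potential crosses a threshold. A minimal leaky integrate-and-fire neuron illustrates the model (an idealized textbook sketch, not Innatera's actual neuron dynamics):

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: integrate input, decay, spike on threshold."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # leaky integration of input current
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

train = lif_neuron([0.3] * 8)       # constant drive produces periodic spikes
```

With no input the neuron stays silent, so power scales with event activity rather than with a fixed frame rate, which is the property that suits always-on sensing.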

Innatera Nanosystems
AI Processor, Coprocessor, CPU, DSP Core, Input/Output Controller, IoT Processor, Microcontroller, Multiprocessor / DSP, Standard cell, Vision Processor, Wireless Processor
View Details

SiFive Essential

The SiFive Essential family of processors is renowned for its flexibility and wide applicability across embedded systems. These CPU cores are designed to meet specific market needs with pre-defined, silicon-proven configurations or through use of SiFive Core Designer for custom processor builds. Serving in a range of 32-bit to 64-bit options, the Essential processors can scale from microcontrollers to robust dual-issue CPUs. Widely adopted in the embedded market, the Essential series cores stand out for their scalable performance, adapting to diverse application requirements while maintaining power and area efficiency. They cater to billions of units worldwide, indicating their trusted performance and integration across various industries. The SiFive Essential processors offer an optimal balance of power, area, and cost, making them suitable for a wide array of devices, from IoT and consumer electronics to industrial applications. They provide a solid foundation for products that require reliable performance at a competitive price.

SiFive, Inc.
CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

Time-Triggered Protocol

The Time-Triggered Protocol (TTP) stands out as a robust framework for ensuring synchronous communication in embedded control systems. Developed to meet stringent aerospace industry criteria, TTP offers a high degree of reliability with its fault-tolerant configuration, integral to maintaining synchrony across various systems. This technology excels in environments where timing precision and data integrity are critical, facilitating accurate information exchange across diverse subsystems. TTTech’s TTP implementation adheres to the SAE AS6003 standard, making it a trusted component among industry leaders. As part of its wide-ranging applications, this protocol enhances system communication within commercial avionic solutions, providing dependable real-time data handling that ensures system stability. Beyond aviation, TTP's applications can also extend into the energy sector, demonstrating its versatility and robustness. Characterized by its deterministic nature, TTP provides a framework where every operation is scheduled, leading to predictable data flow without unscheduled interruptions. Its suitability for field-programmable gate arrays (FPGAs) allows for easy adaptation into existing infrastructures, making it a versatile tool for companies aiming to upgrade their communication systems without a complete overhaul. For engineers and developers, TTP provides a dependable foundation that streamlines the integration process while safeguarding communication integrity.
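Time-triggered communication replaces bus arbitration with a static, globally known schedule: each node owns fixed transmit slots in a repeating round, so access to the medium is decided by the clock alone. A toy model of such a slot table (slot widths and node names are illustrative, not taken from SAE AS6003):

```python
def build_round(nodes, slot_us):
    # one fixed transmit slot per node; the round repeats indefinitely
    return [(i * slot_us, (i + 1) * slot_us, n) for i, n in enumerate(nodes)]

def transmitter_at(schedule, round_us, t_us):
    """Which node may transmit at global time t_us -- purely clock-driven."""
    t = t_us % round_us
    for start, end, node in schedule:
        if start <= t < end:
            return node

sched = build_round(["A", "B", "C"], slot_us=100)   # a 300 us round
```

Because every node computes the same table from the same synchronized clock, there are no collisions and no unscheduled interruptions, which is what makes the data flow deterministic.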

TTTech Computertechnik AG
AMBA AHB / APB/ AXI, CAN, CAN XL, CAN-FD, Ethernet, FlexRay, LIN, MIPI, Processor Core Dependent, Safe Ethernet, Temperature Sensor
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

SCR9 Processor Core

Syntacore's SCR9 processor core is a state-of-the-art, high-performance design targeted at applications requiring extensive data processing across multiple domains. It features a robust 12-stage dual-issue out-of-order pipeline and is Linux-capable. Additionally, the core supports up to 16 cores, offering superior processing power and versatility. This processor includes advanced features such as a VPU (Vector Processing Unit) and hypervisor support, allowing it to manage complex computational tasks efficiently. The SCR9 is particularly well-suited for deployments in enterprise, AI, and telecommunication sectors, reinforcing its status as a key component in next-generation computing solutions.

Syntacore
TSMC
40nm
AI Processor, Coprocessor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

ISELED Technology

ISELED represents a breakthrough in automotive lighting, integrating RGB LED control and communication in a single smart LED component. The system simplifies lighting design by accepting digital color values for immediate autonomous color mixing and temperature adjustment, reducing both complexity and cost in vehicles. ISELED uses manufacturer-calibrated RGB LEDs suitable for diverse applications, from ambient to functional lighting systems within vehicles. Its bidirectional communication protocol manages up to 4,079 addressable LEDs, offering easy installation and precise control over individual light characteristics, ideal for dynamic, synchronized lighting across the automotive interior. The technology also enhances network resilience, with DC/DC conversion from a standard 12V battery, communication that stays consistent despite power variations, and compatibility with software-free Ethernet bridge systems for streamlined connectivity. This focus on reducing production and operational costs while broadening lighting functionality positions ISELED as a modern solution for smart automotive lighting architectures.

INOVA Semiconductors GmbH
Audio Interfaces, LIN, Multiprocessor / DSP, Other, Power Management, Receiver/Transmitter, Safe Ethernet, Sensor, Temperature Sensor
View Details

General Purpose Accelerator (Aptos)

The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.

Ascenium
TSMC
10nm, 12nm
CPU, Processor Core Dependent, Processor Core Independent, Processor Cores, Standard cell
View Details

NMP-550

The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interface support ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent
View Details

NPU

The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.

OPENEDGES Technology, Inc.
AI Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent
View Details

RISC-V CPU IP N Class

The RISC-V CPU IP N Class is part of a comprehensive lineup offered by Nuclei, optimized for microcontroller applications. This 32-bit architecture is ideal for AIoT solutions, allowing seamless integration into innovative low-power and high-efficiency projects. As a highly configurable IP, it supports security and functional-safety extensions, catering to applications that demand reliability and adaptability. With a focus on configurability, the N Class can be tailored for specific system requirements by selecting only the necessary features, ensuring optimized performance and resource utilization. Designed with robust and readable Verilog coding, it facilitates effective debugging and performance, power, and area (PPA) optimization. The IP also supports a Trusted Execution Environment (TEE) for enhanced security, catering to a variety of IoT and embedded applications. This class offers efficient scalability, supporting several RISC-V extensions like B, K, P, and V, while also allowing for user-defined instruction expansion. Committed to delivering a highly adaptable processor solution, the RISC-V CPU IP N Class is essential for developers aiming to implement secure and flexible embedded systems.
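The extension letters named above (B, K, P, V) correspond to bits of the RISC-V `misa` CSR, where bit 0 = A through bit 25 = Z per the RISC-V privileged specification. A small sketch decoding a hypothetical `misa` value:

```python
# Decode RISC-V extension letters from a misa-style bitmask.
# Bit n of the low 26 bits corresponds to extension letter chr(ord('A') + n).
def decode_extensions(misa: int) -> str:
    return "".join(chr(ord("A") + n) for n in range(26) if misa >> n & 1)

# Hypothetical configuration with I, M, A, C plus the B, K, P, V
# extensions mentioned in the listing (illustrative value only).
misa = sum(1 << (ord(c) - ord("A")) for c in "IMACBKPV")
print(decode_extensions(misa))  # ABCIKMPV
```

On real silicon the value would be read from the `misa` CSR at runtime; the point here is only how the per-extension bits line up with the letters a configurable core like the N Class exposes.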

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

Digital Radio (GDR)

The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
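The configuration space described above (one or two independent transceivers, full or half duplex, single-channel or MIMO) can be sketched as a small validated structure. Field names are illustrative, not part of any GIRD Systems API:

```python
# Illustrative model of the GDR configuration options described above;
# names and constraints are hypothetical, not a GIRD Systems interface.
from dataclasses import dataclass

@dataclass(frozen=True)
class RadioConfig:
    transceivers: int   # 1 or 2 independent transceivers
    duplex: str         # "full" or "half"
    mimo: bool          # MIMO assumed to require both transceivers

    def __post_init__(self):
        if self.transceivers not in (1, 2):
            raise ValueError("GDR supports one or two transceivers")
        if self.duplex not in ("full", "half"):
            raise ValueError("duplex must be 'full' or 'half'")
        if self.mimo and self.transceivers < 2:
            raise ValueError("MIMO needs two transceivers")

cfg = RadioConfig(transceivers=2, duplex="full", mimo=True)
print(cfg)
```

Encoding the legal combinations as invariants is one way a host application could guard against requesting a mode the hardware cannot enter during rapid reconfiguration.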

GIRD Systems, Inc.
3GPP-5G, 3GPP-LTE, 802.11, Coder/Decoder, CPRI, DSP Core, Ethernet, Multiprocessor / DSP, Processor Core Independent
View Details

ORC3990 – DMSS LEO Satellite Endpoint System On Chip (SoC)

The ORC3990 is a groundbreaking LEO Satellite Endpoint SoC engineered for use in the Totum DMSS Network, offering exceptional sensor-to-satellite connectivity. This SoC operates within the ISM band and features advanced RF transceiver technology, power amplifiers, ARM CPUs, and embedded memory. It boasts a superior link budget that facilitates indoor signal coverage. Designed with advanced power management capabilities, the ORC3990 supports over a decade of battery life, significantly reducing maintenance requirements. Its industrial temperature range of -40 to +85 degrees Celsius ensures stable performance in various environmental conditions. The ORC3990's compact design can be mounted in any orientation, further easing integration. The SoC's innovative architecture eliminates the need for additional GNSS chips, achieving precise location fixes within 20 meters. This capability, combined with its global LEO satellite coverage, makes the ORC3990 a highly attractive solution for asset tracking and other IoT applications where traditional terrestrial networks fall short.
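The decade-plus battery-life claim implies a very tight average current budget. The battery capacity below is an illustrative assumption, not an ORC3990 specification:

```python
# Average current budget implied by a 10-year battery life.
# Battery capacity is a hypothetical assumption, not vendor data.
BATTERY_MAH = 3000.0             # assumed primary-cell capacity (mAh)
LIFETIME_HOURS = 10 * 365 * 24   # ten years, ignoring leap days

avg_current_ua = BATTERY_MAH / LIFETIME_HOURS * 1000  # microamps
print(f"average draw must stay under ~{avg_current_ua:.0f} uA")
```

A budget in the tens of microamps is why endpoints like this rely on aggressive duty cycling, spending nearly all of their life in deep sleep between satellite passes.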

Orca Systems Inc.
Samsung
500nm
3GPP-5G, Bluetooth, Processor Core Independent, RF Modules, USB, W-CDMA, Wireless Processor
View Details