

Processor Semiconductor IPs

The 'Processor' category in the Silicon Hub Semiconductor IP catalog is a cornerstone of modern electronic device design. Processor semiconductor IPs serve as the brain of electronic devices, driving operations, processing data, and performing complex computations essential for a multitude of applications. These IPs include a wide variety of specific types such as CPUs, DSP cores, and microcontrollers, each designed with unique capabilities and applications in mind.

In this category, you'll find building blocks, which are fundamental components for constructing more sophisticated processors, and coprocessors that augment the capabilities of a main processor, enabling efficient handling of specialized tasks. The versatility of processor semiconductor IPs is evident in subcategories like AI processors, audio processors, and vision processors, each tailored to meet the demands of today’s smart technologies. These processors are central to developing innovative products that leverage artificial intelligence, enhance audio experiences, and enable complex image processing capabilities, respectively.

Moreover, there are security processors that empower devices with robust security features to protect sensitive data and communications, as well as IoT processors and wireless processors that drive connectivity and integration of devices within the Internet of Things ecosystem. These processors ensure reliable and efficient data processing in increasingly connected and smart environments.

Overall, the processor semiconductor IP category is pivotal for enabling the creation of advanced electronic devices across a wide range of industries, from consumer electronics to automotive systems, providing the essential processing capabilities needed to meet the ever-evolving technological demands of today's world. Whether you're looking for individual processor cores or fully integrated processing solutions, this category offers a comprehensive selection to support any design or application requirement.


KL730 AI SoC

The KL730 is a third-generation AI chip that integrates an advanced reconfigurable NPU architecture, delivering up to 8 TOPS of computing power. This cutting-edge technology enhances computational efficiency across a range of applications, including CNN and transformer networks, while minimizing DDR bandwidth requirements. The KL730 also boasts enhanced video processing capabilities, supporting 4K 60FPS outputs. Backed by more than a decade of ISP expertise, the KL730 stands out for its noise reduction, wide dynamic range, fisheye correction, and low-light imaging performance. It caters to markets like intelligent security, autonomous vehicles, video conferencing, and industrial camera systems, among others.

Kneron
TSMC
12nm
16 Categories

Akida 2nd Generation

The Akida 2nd Generation represents a leap forward in AI processing, improving on its predecessor with greater flexibility and efficiency. This advanced neural processor core is tailored for modern applications demanding real-time response and ultra-low power consumption, making it ideal for compact and battery-operated devices.

Akida 2nd Generation supports various programming configurations, including 8-, 4-, and 1-bit weights and activations, giving developers the versatility to trade performance against power consumption to meet specific application needs. Its architecture is fully digital and silicon-proven, ensuring reliable deployment across diverse hardware setups.

With features such as programmable activation functions and support for sophisticated neural network models, Akida 2nd Generation enables a broad spectrum of AI tasks. From object detection in cameras to sophisticated audio sensing, this iteration of the Akida processor is built to handle the most demanding edge applications while sustaining BrainChip's hallmark efficiency in processing power per watt.

BrainChip
11 Categories
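The multi-precision support described above trades accuracy for storage and power: an n-bit weight can take only 2^n distinct values, so shrinking the bit width shrinks memory traffic while growing quantization error. A minimal sketch of a generic uniform quantizer (illustrative only, not BrainChip's actual scheme):

```python
import numpy as np

def quantize_uniform(w, bits):
    """Snap values in [-1, 1] onto a uniform grid of 2**bits levels.
    Generic illustration; not BrainChip's quantization algorithm."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)
    return np.round((w + 1.0) / step) * step - 1.0

w = np.array([-0.73, -0.20, 0.05, 0.61])
for bits in (8, 4, 1):
    err = np.max(np.abs(w - quantize_uniform(w, bits)))
    print(f"{bits}-bit: max reconstruction error {err:.4f}")
```

Eight-bit weights keep worst-case error near the grid spacing, while 1-bit weights collapse every value to ±1 — which is why leaving the precision choice to the developer, per application, is useful.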

Metis AIPU PCIe AI Accelerator Card

Addressing the need for high-performance AI processing, the Metis AIPU PCIe AI Accelerator Card from Axelera AI offers an outstanding blend of speed, efficiency, and power. Designed to boost AI workloads significantly, this PCIe card leverages the Metis AI Processing Unit (AIPU) to deliver unparalleled AI inference capabilities for enterprise and industrial applications. The card excels in handling complex AI models and large-scale data processing tasks, significantly enhancing the efficiency of computational tasks within various edge settings.

The Metis AIPU embedded within the PCIe card delivers high TOPS (tera operations per second), allowing it to execute multiple AI tasks concurrently with remarkable speed and precision. This makes it exceptionally suitable for applications such as video analytics, autonomous driving simulations, and real-time data processing in industrial environments. The card's robust architecture reduces the load on general-purpose processors by offloading AI tasks, resulting in optimized system performance and lower energy consumption.

With easy integration supported by the state-of-the-art Voyager SDK, the Metis AIPU PCIe AI Accelerator Card ensures seamless deployment of AI models across various platforms. The SDK facilitates efficient model optimization and tuning, supporting a wide range of neural network models and enhancing overall system capabilities. Enterprises leveraging this card can see significant improvements in their AI processing efficiency, leading to faster, smarter, and more efficient operations across different sectors.

Axelera AI
13 Categories

1G to 224G SerDes

The 1G to 224G SerDes technology by Alphawave Semi is a robust connectivity solution designed for high-speed data transmission. It integrates seamlessly into various applications including Ethernet, PCI Express, and die-to-die connections, enabling fast and reliable data transfer. This technology supports a broad spectrum of signaling schemes such as PAM2, PAM4, PAM6, and PAM8, ensuring compatibility with over 30 different industry protocols and standards.

As the demand for high-performance data centers and networking solutions increases, the 1G to 224G SerDes proves indispensable, delivering the speed and bandwidth required by modern systems. Alphawave Semi's SerDes supports data rates from as low as 1Gbps to a staggering 224Gbps, making it highly versatile for a multitude of configurations. Its application extends beyond traditional data centers, also covering areas like AI and 5G communication networks where latency and data throughput are critical. This flexibility is further enhanced by its low power consumption, which is essential for efficient data processing in today's power-conscious technological environment.

Incorporating the 1G to 224G SerDes into your chip designs delivers reduced latency and increased data throughput, which is vital for applications that demand real-time data processing. By ensuring high data integrity and reducing signal degradation, this SerDes solution helps maintain steadfast connectivity, even under heavy data loads, promising a future-ready component in the evolving tech landscape.

Alphawave Semi
TSMC
3nm, 4nm, 7nm, 10nm, 12nm
AMBA AHB / APB/ AXI, D2D, DSP Core, Ethernet, Interlaken, MIPI, Multi-Protocol PHY, PCI, USB, Wireless Processor
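The PAM schemes listed above determine how many bits each transmitted symbol carries (log2 of the level count), which in turn sets the symbol rate the PHY must sustain for a given data rate. A quick back-of-the-envelope calculation (generic signaling arithmetic, not Alphawave-specific; real PAM6/PAM8 links typically use coded mappings rather than the raw log2 figure):

```python
import math

def baud_rate_gbaud(data_rate_gbps, pam_levels):
    """Symbol (baud) rate of a PAM-N link: each symbol carries log2(N) bits."""
    bits_per_symbol = math.log2(pam_levels)
    return data_rate_gbps / bits_per_symbol

# 224 Gbps over PAM4 (2 bits/symbol) needs a 112 GBaud line rate;
# the same payload over PAM2/NRZ would need 224 GBaud.
print(baud_rate_gbaud(224, 4))  # → 112.0
print(baud_rate_gbaud(224, 2))  # → 224.0
```

Halving the symbol rate is precisely why PAM4 dominates at 100G-per-lane and beyond: the channel bandwidth requirement drops at the cost of reduced voltage margin per level.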

ADAS and Autonomous Driving

KPIT Technologies leads in the development of Advanced Driver Assistance Systems (ADAS) and autonomous driving solutions, building systems that enhance vehicle safety, comfort, and performance. These innovations extend across various aspects of vehicle automation, leveraging AI-driven data analytics and sensor fusion technologies to enable intelligent driving functions. KPIT's ADAS offerings are designed to assist drivers in complex traffic situations, reduce collision risks, and enhance the overall driving experience through adaptive, high-precision control systems.

Central to KPIT's efforts in this space is the integration of state-of-the-art technologies, including machine learning algorithms and real-time data processing capabilities. These complement their extensive industry knowledge to deliver robust, scalable, and interoperable solutions that adhere to the latest automotive safety standards. Emphasizing modular design, KPIT ensures that automakers can easily integrate these technologies into existing and new vehicle platforms.

KPIT's expertise extends to collaborating with automakers on developing sophisticated autonomous systems that promise to redefine the future of personal and commercial mobility. By partnering with leading automotive companies, KPIT continues to pioneer advancements in vehicular autonomy, ensuring greater safety and efficiency on roads worldwide.

KPIT Technologies
AI Processor, CAN, CAN-FD, Safe Ethernet

Akida IP

The Akida IP is an advanced processor core designed to mimic the efficient processing characteristics of the human brain. Inspired by neuromorphic engineering principles, it delivers real-time AI performance while maintaining a low power profile. The architecture of the Akida IP is sophisticated, allowing seamless integration into existing systems without the need for continuous external computation.

Equipped with capabilities for processing vision, audio, and sensor data, the Akida IP stands out by being able to handle complex AI tasks directly on the device. This is done by utilizing a flexible mesh of nodes that efficiently distribute cognitive computing tasks, enabling a scalable approach to machine learning applications. Each node supports hundreds of MAC operations and can be configured to adapt to various computational requirements, making it a versatile choice for AI-centric endeavors.

Moreover, the Akida IP is particularly beneficial for edge applications where low latency, high efficiency, and security are paramount. With capabilities for event-based processing and on-chip learning, it enhances response times and reduces data transfer needs, thereby bolstering device autonomy. This solidifies its position as a leading solution for embedding AI into devices across multiple industries.

BrainChip
AI Processor, Audio Processor, Coprocessor, CPU, Cryptography Cores, GPU, Input/Output Controller, IoT Processor, Platform Security, Processor Core Independent, Vision Processor

Yitian 710 Processor

The Yitian 710 Processor is a landmark server chip released by T-Head Semiconductor, representing a breakthrough in high-performance computing. The chip is built on the advanced Armv9 architecture, accommodating a range of demanding applications. Engineered by T-Head's dedicated research team, the Yitian 710 integrates high efficiency and high bandwidth into a unique 2.5D package housing two dies and a staggering 60 billion transistors.

The Yitian 710 encompasses 128 Armv9 high-performance cores, each equipped with 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, further amplified by a collective on-chip system cache of 128MB. These configurations enable optimal data processing and retrieval speeds, making it suitable for data-intensive tasks. Furthermore, the memory subsystem stands out with its 8-channel DDR5 support, reaching peak bandwidths of 281GB/s.

In terms of connectivity, the Yitian 710's I/O system includes 96 PCIe 5.0 lanes with a bidirectional theoretical total bandwidth of 768GB/s, streamlining the high-speed data transfer critical for server operations. Its architecture is not only poised to meet the current demands of data centers and cloud services but is also adaptable for future advancements in AI inference and multimedia processing tasks.

T-Head Semiconductor
AI Processor, AMBA AHB / APB/ AXI, Audio Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Independent, Processor Cores, Vision Processor
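The 281GB/s memory figure quoted above is consistent with simple peak-bandwidth arithmetic: channels × transfers per second × bytes per transfer. A sketch assuming DDR5-4400 on 64-bit channels (the speed grade is inferred from the quoted total, not a published T-Head specification):

```python
def peak_bw_gbs(channels, mega_transfers_s, bus_bits=64):
    """Peak DRAM bandwidth in GB/s (decimal): each transfer moves
    bus_bits/8 bytes, and all channels run in parallel."""
    return channels * mega_transfers_s * 1e6 * (bus_bits // 8) / 1e9

# 8 channels of DDR5-4400 (assumed) on 64-bit buses:
print(peak_bw_gbs(8, 4400))  # → 281.6, matching the ~281 GB/s figure above
```

The same formula explains why DDR5's per-channel step up matters at the socket level: every extra 400 MT/s adds 8 × 3.2 = 25.6 GB/s across eight channels.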

AI Camera Module

The AI Camera Module from Altek is a versatile, high-performance component designed to meet the increasing demand for smart vision solutions. The module pairs a refined imaging lens design with tightly integrated hardware and software capabilities to create a seamless operational experience. Its design is reinforced by Altek's deep collaboration with leading global brands, ensuring a top-tier product capable of handling diverse market requirements.

Equipped to cater to AI and IoT use cases, the module delivers outstanding capabilities that align with expectations for high-resolution imaging, making it suitable for edge computing applications. The AI Camera Module addresses end-user diversity by offering customization of device functionality, supporting advanced processing requirements such as 2K and 4K video quality.

This module showcases Altek's strength in providing comprehensive, all-in-one camera solutions that leverage sophisticated imaging and rapid processing to handle challenging conditions and demands. Its technical blueprint supports complex AI algorithms, enhancing not just image quality but also the device's interactive capacity through facial recognition and image tracking.

Altek Corporation
Samsung
22nm
2D / 3D, AI Processor, AMBA AHB / APB/ AXI, Audio Interfaces, GPU, Image Conversion, IoT Processor, JPEG, Receiver/Transmitter, SATA, Vision Processor

Speedcore Embedded FPGA IP

Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers tailor the exact dimensions of logic, DSP, and memory needed for their applications, making it an ideal choice for areas like AI, ML, 5G wireless, and more. Speedcore eFPGA can significantly reduce system costs, power requirements, and board space while maintaining flexibility by embedding only the necessary features into production. This IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs. The Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into various semiconductor projects.

Achronix
TSMC
All Process Nodes
Processor Cores

Veyron V2 CPU

The Veyron V2 CPU is Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available both as IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures.

Emphasizing a modern architectural design, it is fully compliant with the RISC-V RVA23 profile and features high instructions per clock (IPC) alongside a power-efficient microarchitecture. Comprising multiple core clusters, this CPU delivers superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, supporting broad market adoption and versatile deployment options.

Ventana Micro Systems
AI Processor, Audio Processor, CPU, DSP Core, Processor Core Dependent, Processor Core Independent, Processor Cores

aiWare

aiWare is a high-performance NPU designed to meet the rigorous demands of automotive AI inference, providing a scalable solution for ADAS and AD applications. This hardware IP core is engineered to handle a wide array of AI workloads, including the most advanced neural network structures like CNNs, LSTMs, and RNNs. By integrating cutting-edge efficiency and scalability, aiWare delivers industry-leading neural processing power tailored to automotive-grade specifications.

The NPU's architecture emphasizes hardware determinism and offers ISO 26262 ASIL-B certification, ensuring that aiWare meets stringent automotive safety standards. Its efficient design supports up to 256 effective TOPS per core and can scale to thousands of TOPS through multicore integration, while minimizing power consumption. aiWare's system-level optimizations reduce reliance on external memory by leveraging local memory for data management, boosting performance efficiency across varied input data sizes and complexities.

aiWare's development toolkit, aiWare Studio, is distinguished by its innovative ability to optimize neural network execution without manual intervention by software engineers. This empowers AI engineers to focus on refining NNs for production, significantly accelerating iteration cycles. Coupled with aiMotive's aiDrive software suite, aiWare provides an integrated environment for creating highly efficient automotive AI applications, ensuring seamless integration and rapid deployment across multiple vehicle platforms.

aiMotive
12 Categories

eSi-3250

Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, DSP Core, Microcontroller, Multiprocessor / DSP, Processor Cores

A25

The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.

Andes Technology
CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores, Standard cell

Chimera GPNPU

Chimera GPNPU is engineered to revolutionize AI/ML computational capabilities on single-core architectures. It efficiently handles matrix, vector, and scalar code, unifying AI inference and traditional C++ processing under one roof. By alleviating the need to partition AI workloads between different processors, it streamlines software development and drastically speeds up AI model adaptation and integration.

Ideal for SoC designs, the Chimera GPNPU champions an architecture that is both versatile and powerful, handling complex parallel workloads with a single unified binary. This configuration not only boosts software developer productivity but also ensures an enduring flexibility capable of accommodating novel AI model architectures on the horizon. The architectural fabric of the Chimera GPNPU seamlessly blends the high matrix performance of NPUs with the C++ programmability found in traditional processors. The core is delivered in synthesizable RTL form, with scalability options ranging from single-core to multi-cluster designs to meet various performance benchmarks.

As a testament to its adaptability, the Chimera GPNPU can run any AI/ML graph from numerous high-demand application areas such as automotive, mobile, and home digital appliances. Developers seeking optimized inference performance will find the Chimera GPNPU a pivotal tool in maintaining cutting-edge product offerings. With its focus on simplifying hardware design, optimizing power consumption, and enhancing programmer ease, this processor ensures a sustainable and efficient path for future AI/ML developments.

Quadric
TSMC
1000nm
17 Categories

Jotunn8 AI Accelerator

The Jotunn8 AI Accelerator is engineered for lightning-fast AI inference at unprecedented scale. It is designed to meet the demands of modern data centers by providing exceptional throughput, low latency, and optimized energy efficiency. The Jotunn8 outperforms traditional setups by allowing large-scale deployment of trained models, ensuring robust performance while reducing operational costs. Its capabilities make it ideal for real-time applications such as chatbots, fraud detection, and advanced search algorithms.

What sets the Jotunn8 apart is its adaptability to various AI algorithms, including reasoning and generative models, alongside agentic AI frameworks. This seamless integration achieves near-theoretical performance, allowing the chip to excel in applications that require logical rigor and creative processing. With a focus on minimizing carbon footprint, the Jotunn8 is meticulously designed to enhance both performance per watt and overall sustainability.

The Jotunn8 supports massive memory handling with HBM capability, promoting incredibly high data throughput that aligns with the needs of demanding AI processes. Its architecture is purpose-built for speed, efficiency, and the ability to scale with technological advances, providing a solid foundation for AI infrastructure looking to keep pace with evolving computational demands.

VSORA
TSMC
20nm
AI Processor, DSP Core, Processor Core Dependent

SAKURA-II AI Accelerator

SAKURA-II AI Accelerator represents EdgeCortix's latest advancement in edge AI processing, offering unparalleled energy efficiency and extensive capabilities for generative AI tasks. This accelerator is designed to manage demanding AI models, including Llama 2, Stable Diffusion, DETR, and ViT, within a slim power envelope of about 8W. With capabilities extending to multi-billion parameter models, SAKURA-II meets a wide range of edge applications in vision, language, and audio.

The SAKURA-II's architecture maximizes AI compute efficiency, delivering more than twice the utilization of competitive solutions. It boasts remarkable DRAM bandwidth, essential for large language and vision models, while maintaining low power consumption. The hardware supports real-time Batch=1 processing, demonstrating its edge in performance even in constrained environments and making it a solution of choice for diverse industrial AI applications.

With 60 TOPS (INT8) and 30 TFLOPS (BF16) in performance metrics, this accelerator is built to exceed expectations in demanding conditions. It features robust memory configurations supporting up to 32GB of DRAM, ideal for processing intricate AI workloads. By leveraging sparse computing techniques, SAKURA-II optimizes its memory and bandwidth usage effectively, ensuring reliable performance across all deployed applications.

EdgeCortix Inc.
AI Processor, CPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Vision Processor
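A useful sanity check on the multi-billion-parameter claim above is the weight footprint: parameters × (bits ÷ 8) bytes, ignoring activations and KV caches. A rough sketch against the 32GB DRAM ceiling (the per-model sizes are illustrative assumptions, not EdgeCortix benchmarks):

```python
def model_mem_gb(params_billion, bits):
    """Approximate weight-only memory footprint in GB (decimal):
    parameter count times bytes per parameter."""
    return params_billion * bits / 8

# Llama 2 7B, one of the models named above, at common precisions:
print(model_mem_gb(7, 16))  # BF16  → 14.0 GB
print(model_mem_gb(7, 8))   # INT8  →  7.0 GB
print(model_mem_gb(7, 4))   # 4-bit →  3.5 GB
```

All three fit comfortably under 32GB for weights alone, which is why quantization plus a large DRAM configuration is what makes multi-billion-parameter models feasible on an 8W-class edge device.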

Metis AIPU M.2 Accelerator Module

The Metis AIPU M.2 Accelerator Module by Axelera AI is a compact and powerful solution designed for AI inference at the edge. This module delivers remarkable performance, comparable to that of a PCIe card, all while fitting into the streamlined M.2 form factor. Ideal for demanding AI applications that require substantial computational power, the module enhances processing efficiency while minimizing power usage. With its robust infrastructure, it is geared toward applications that demand high throughput and low latency, making it a perfect fit for intelligent vision applications and real-time analytics.

The AIPU, or Artificial Intelligence Processing Unit, at the core of this module provides industry-leading performance by offloading AI workloads from traditional CPU or GPU setups, allowing for dedicated AI computation that is faster and more energy-efficient. This not only boosts the capabilities of the host systems but also drastically reduces overall energy consumption. The module supports a wide range of AI applications, from facial recognition and security systems to advanced industrial automation processes.

By utilizing Axelera AI's innovative software solutions, such as the Voyager SDK, the Metis AIPU M.2 Accelerator Module enables seamless integration and full utilization of AI models and applications. The SDK offers compatibility with various industry tools and frameworks, ensuring a smooth deployment process and quick time-to-market for advanced AI systems. This product represents Axelera AI's commitment to revolutionizing edge computing with streamlined, effective AI acceleration solutions.

Axelera AI
14 Categories

xcore.ai

xcore.ai is XMOS Semiconductor's innovative programmable chip designed for advanced AI, DSP, and I/O applications. It enables developers to create highly efficient systems without the complexity typical of multi-chip solutions, offering capabilities that integrate AI inference, DSP tasks, and I/O control seamlessly. The chip architecture boasts parallel processing and ultra-low latency, making it ideal for demanding tasks in robotics, automotive systems, and smart consumer devices. It provides the toolset to deploy complex algorithms efficiently while maintaining robust real-time performance.

With xcore.ai, system designers can leverage a flexible platform that supports rapid prototyping and development of intelligent applications. Its performance allows for seamless execution of tasks such as voice recognition and processing, industrial automation, and sensor data integration. The adaptable nature of xcore.ai makes it a versatile solution for managing various inputs and outputs simultaneously, while maintaining high levels of precision and reliability.

In automotive and industrial applications, xcore.ai supports real-time control and monitoring tasks, contributing to smarter, safer systems. For consumer electronics, it enhances user experience by enabling responsive voice interfaces and high-definition audio processing. The chip's architecture reduces the need for external components, thus simplifying design and reducing overall costs, paving the way for innovative solutions where technology meets efficiency and scalability.

XMOS Semiconductor
24 Categories

Talamo SDK

The Talamo SDK from Innatera serves as a comprehensive software development toolkit designed to maximize the capabilities of its Spiking Neural Processor (SNP) lineup. Tailored for developers and engineers, Talamo offers in-depth access to configure and deploy neuromorphic processing solutions effectively. The SDK supports the development of applications that utilize Spiking Neural Networks (SNNs) for diverse sensory processing tasks.

Talamo provides a user-friendly interface that simplifies the integration of neural processing capabilities into a wide range of devices and systems. By leveraging the toolkit, developers can customize applications for specific use cases such as real-time audio analysis, touch-free interactions, and biometric data processing. The SDK comes with pre-built models and a model zoo, which help in rapidly prototyping and deploying sensor-driven solutions.

This SDK stands out by offering enhanced tools for developing low-latency, energy-efficient applications. By harnessing the temporal processing strength of SNNs, Talamo allows for the robust development of applications that can operate under strict power and performance constraints, enabling the creation of intelligent systems that autonomously process data in real time.

Innatera Nanosystems
AI Processor, Content Protection Software, CPU, Cryptography Cores, Multiprocessor / DSP, Processor Core Independent, Vision Processor

RV12 RISC-V Processor

The RV12 RISC-V Processor is a highly configurable, single-core CPU that implements the RV32I and RV64I base instruction sets. It's engineered for the embedded market, offering a robust microarchitecture built on the RISC-V instruction set. The processor allows simultaneous instruction and data memory accesses, lending itself to a broad range of applications while maintaining high operational efficiency. This flexibility makes it an ideal choice for diverse execution requirements, supporting efficient data processing through an optimized CPU framework.

Known for its adaptability, the RV12 can be configured to suit various application demands. It provides the processing power necessary for embedded systems and has a reputation for stability and reliability. The processor is well suited to designs that must sustain performance without sacrificing configurability, meeting the rigorous needs of modern embedded computing.

The processor's support of the open RISC-V architecture ensures it can integrate into existing systems seamlessly. It lends itself well to both industrial and academic applications, offering a resource-efficient platform that developers and researchers can easily access and utilize.

Roa Logic BV
AI Processor, CPU, Cryptography Software Library, IoT Processor, Microcontroller, Processor Cores

Speedster7t FPGAs

The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured on the TSMC 7nm FinFET process, these FPGAs are equipped with a pioneering 2D network-on-chip architecture and an array of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-speed GDDR6 memory, 400G Ethernet, and PCI Express Gen5 ports. The 2D network-on-chip connects these interfaces to upwards of 80 access points in the FPGA fabric, enabling ASIC-like performance while retaining complete programmability. Users are encouraged to start with the VectorPath accelerator card, which houses a Speedster7t FPGA. The family offers robust tools for applications such as 5G infrastructure, computational storage, and test and measurement.

Achronix
TSMC
7nm
Processor Cores
View Details

GenAI v1

RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. The technology integrates readily with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating the need for external components such as host CPUs or internet connections. Supporting complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves exceptional efficiency in AI token processing, coupled with energy savings and reduced latency.

What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competing solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in memory efficiency. It maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary bottlenecks in generative AI workflows. Its frugal memory usage also reduces dependency on costly memory types such as HBM, opening the door to more affordable alternatives without diminishing processing capability.

With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that lets users balance performance against hardware cost. Compatibility with a wide range of transformer-based models, including proprietary modifications, positions GenAI v1 well in sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip supports both vanilla and quantized AI models, delivering the computation speeds needed for real-time applications without compromising accuracy. This capability underpins the company's strategic vision of enabling versatile and sustainable AI solutions across industries; by prioritizing ease of integration and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
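The tokens-per-unit-of-bandwidth metric mentioned above can be made concrete with back-of-the-envelope arithmetic: during autoregressive decoding, a memory-bound LLM must stream essentially all of its weights through memory once per generated token, so the memory interface caps the token rate. The sketch below uses an illustrative 3B-parameter model and a hypothetical 8 GB/s LPDDR4 interface; neither figure is a RaiderChip specification.

```python
def max_tokens_per_second(model_params: float, bits_per_weight: int,
                          mem_bandwidth_gbps: float) -> float:
    """Upper bound on decode rate for a memory-bandwidth-bound LLM:
    each token requires reading all weights from memory once."""
    model_bytes = model_params * bits_per_weight / 8
    return mem_bandwidth_gbps * 1e9 / model_bytes

# Illustrative: a 3B-parameter model (e.g. a Llama-3.2-class model)
# over an assumed 8 GB/s LPDDR4 interface.
fp16_rate = max_tokens_per_second(3e9, 16, 8.0)
q4_rate = max_tokens_per_second(3e9, 4, 8.0)
print(f"FP16 bound: {fp16_rate:.2f} tok/s, Q4 bound: {q4_rate:.2f} tok/s")
# Quartering bits-per-weight quadruples the bandwidth-bound token rate.
```

This is why 4-bit quantization on commodity LPDDR4 is attractive at the edge: lowering bits per weight raises the bandwidth-bound ceiling proportionally, without requiring HBM-class memory.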

RaiderChip
GLOBALFOUNDRIES, TSMC
28nm, 65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

GenAI v1-Q

The GenAI v1-Q from RaiderChip brings a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This accelerator executes large language models in real time using advanced quantization techniques such as Q4_K and Q5_K, enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q lets developers integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions.

The GenAI v1-Q diverges from conventional AI solutions by functioning independently, free from external networks or cloud services. Its design pairs superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms, including FPGA and ASIC implementations. This flexibility is crucial for tailoring parameters such as model scale, inference speed, and power consumption to exacting user specifications.

RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to run multiple transformer-based models and keep confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
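The quoted 75% footprint reduction matches simple quantization arithmetic for moving weights from 16-bit floats to 4-bit values. A minimal sketch, assuming an illustrative 3B-parameter model and ignoring the small per-block scale/offset overhead that K-quant formats such as Q4_K add in practice:

```python
def model_footprint_bytes(params: float, bits_per_weight: float) -> float:
    """Weight storage for a model, ignoring activations and KV cache."""
    return params * bits_per_weight / 8

params = 3e9  # illustrative 3B-parameter model, not a vendor figure
fp16 = model_footprint_bytes(params, 16)
q4 = model_footprint_bytes(params, 4)
reduction = 1 - q4 / fp16
print(f"FP16: {fp16/1e9:.1f} GB, Q4: {q4/1e9:.2f} GB, saved {reduction:.0%}")
# -> FP16: 6.0 GB, Q4: 1.50 GB, saved 75%
```

The 75% saving is independent of parameter count: it follows directly from the 16-bit to 4-bit ratio.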

RaiderChip
TSMC
65nm
AI Processor, AMBA AHB / APB/ AXI, Audio Controller, Coprocessor, CPU, Ethernet, Microcontroller, Multiprocessor / DSP, PowerPC, Processor Core Dependent, Processor Cores
View Details

Ceva-SensPro2 - Vision AI DSP

The **Ceva-SensPro DSP family** unites scalar and vector processing units under an 8-way VLIW architecture, and incorporates advanced control features such as a branch target buffer and a loop buffer to speed execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit-integer or 32 16-bit-integer MACs at 0.2 TOPS for compact applications such as vision processing in wearables and mobile devices, to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle powertrain control and battery management; these two are supported by libraries for Eigen linear algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector units in all family members can add domain-specific instructions for areas such as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics, and integer family members can add optional floating-point capability. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as Halide and OpenMP, and is supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
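The TOPS figures quoted for the family follow from standard MAC arithmetic: each MAC contributes two operations (a multiply and an add) per cycle, so peak throughput is MACs × 2 × clock. A sketch, assuming a ~1 GHz clock purely for illustration (actual frequency depends on process node and implementation):

```python
def peak_tops(num_macs: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS: each MAC does a multiply and an add per cycle."""
    return num_macs * ops_per_mac * clock_ghz / 1000.0

# 1024 MACs at an assumed 1 GHz matches the ~2 TOPS quoted for the Ceva-SP1000.
print(peak_tops(1024, 1.0))  # prints 2.048
```

Under the same model, the 0.2 TOPS quoted for the Ceva-SP100 (128 MACs) would imply a clock near 0.8 GHz; the exact operating points are implementation-dependent.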

Ceva, Inc.
DSP Core, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

KL520 AI SoC

The KL520 marks Kneron's foray into the edge AI landscape, offering an impressive combination of size, power efficiency, and performance. Armed with dual ARM Cortex M4 processors, this chip can operate independently or as a co-processor to enable AI functionalities such as smart locks and security monitoring. The KL520 is adept at 3D sensor integration, making it an excellent choice for applications in smart home ecosystems. Its compact design allows devices powered by it to operate on minimal power, such as running on AA batteries for extended periods, showcasing its exceptional power management capabilities.

Kneron
TSMC
65nm
AI Processor, Camera Interface, Clock Generator, CPU, GPU, IoT Processor, MPEG 4, Processor Core Independent, Receiver/Transmitter, Vision Processor
View Details

Hanguang 800 AI Accelerator

The Hanguang 800 AI Accelerator by T-Head Semiconductor is a powerful AI acceleration chip designed to enhance machine learning tasks. It excels in providing the computational power necessary for intensive AI workloads, effectively reducing processing times for large-scale data frameworks. This makes it an ideal choice for organizations aiming to infuse AI capabilities into their operations with maximum efficiency. Built with an emphasis on speed and performance, the Hanguang 800 is optimized for applications requiring vast amounts of data crunching. It supports a diverse array of AI models and workloads, ensuring flexibility and robust performance across varying use cases. This accelerates the deployment of AI applications in sectors such as autonomous driving, natural language processing, and real-time data analysis. The Hanguang 800's architecture is complemented by proprietary algorithms that enhance processing throughput, competing against traditional processors by providing significant gains in efficiency. This accelerator is indicative of T-Head's commitment to advancing AI technologies and highlights their capability to cater to specialized industry needs through innovative semiconductor developments.

T-Head Semiconductor
AI Processor, CPU, IoT Processor, Processor Core Dependent, Security Processor, Vision Processor
View Details

KL530 AI SoC

The KL530 represents a significant advancement in AI chip technology, with a new NPU architecture optimized for both INT4 precision and transformer networks. The SoC is engineered for high processing efficiency and low power consumption, making it suitable for AIoT applications and other innovative scenarios. It features an ARM Cortex M4 CPU designed for low-power operation and delivers computational power of up to 1 TOPS. The chip's ISP enhances image quality, while its codec ensures efficient multimedia compression. Notably, cold start time is under 500 ms with an average power draw of less than 500 mW, establishing the KL530 as a leader in energy efficiency.
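The quoted figures imply a power-efficiency point that can be checked directly: 1 TOPS of compute at an average draw under 500 mW corresponds to better than 2 TOPS/W. A trivial sketch of that arithmetic:

```python
def tops_per_watt(tops: float, power_mw: float) -> float:
    """Compute efficiency from peak throughput and average power draw."""
    return tops / (power_mw / 1000.0)

# KL530 figures from the description: 1 TOPS at <500 mW average draw,
# so 2.0 TOPS/W is a lower bound on efficiency under those numbers.
print(tops_per_watt(1.0, 500.0))  # prints 2.0
```

TOPS/W is the usual basis for comparing edge AI silicon, since raw TOPS alone says nothing about suitability for battery-powered designs.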

Kneron
TSMC
28nm SLP
AI Processor, Camera Interface, Clock Generator, CPU, CSC, GPU, IoT Processor, Peripheral Controller, Vision Processor
View Details

KL630 AI SoC

The KL630 is a pioneering AI chipset featuring Kneron's latest NPU architecture, the first to support Int4 precision and transformer networks. This cutting-edge design ensures exceptional compute efficiency with minimal energy consumption, making it ideal for a wide array of applications. With an ARM Cortex A5 CPU at its core, the KL630 excels in computation while maintaining low energy expenditure. The SoC handles both high- and low-light conditions optimally and is well suited for diverse edge AI devices, from security systems to expansive city and automotive networks.

Kneron
TSMC
12nm LP/LP+
ADPCM, AI Processor, Camera Interface, CPU, GPU, Input/Output Controller, Processor Core Independent, USB, VGA, Vision Processor
View Details

Maverick-2 Intelligent Compute Accelerator

The Maverick-2 Intelligent Compute Accelerator represents the pinnacle of Next Silicon's innovative approach to computational resources. This state-of-the-art accelerator leverages the Intelligent Compute Architecture for software-defined adaptability, enabling it to autonomously tailor its real-time operation across various HPC and AI workloads. By optimizing performance using insights gained through real-time telemetry, Maverick-2 ensures superior computational efficiency and reduced power consumption, making it an ideal choice for demanding computational environments.

Maverick-2 brings transformative performance enhancements to large-scale scientific research and data-heavy industries by dispensing with the need for codebase modifications or specialized software stacks. It supports a wide range of familiar development tools and frameworks, such as C/C++, FORTRAN, and Kokkos, simplifying integration for developers and significantly reducing time-to-discovery.

Engineered with advanced features like high-bandwidth memory (HBM3E) and built on TSMC's 5nm process technology, this accelerator provides not only unmatched adaptability but also an energy-efficient, eco-friendly computing solution. Whether embedded in single-die PCIe cards or dual-die OCP Accelerator Modules, Maverick-2 is positioned as a future-proof solution capable of evolving with technological advancements in AI and HPC.

Next Silicon Ltd.
TSMC
5nm
11 Categories
View Details

RISC-V Core-hub Generators

The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.

InCore Semiconductors
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

Ultra-Low-Power 64-Bit RISC-V Core

Micro Magic's Ultra-Low-Power 64-Bit RISC-V Core is a highly efficient design that operates with remarkably low power consumption, requiring only 10mW at 1GHz. This core exemplifies Micro Magic’s commitment to power efficiency, as it integrates advanced techniques to maintain high performance even at lower voltages. The core is engineered for applications where energy conservation is crucial, making it ideal for modern, power-sensitive devices. The architectural design of this RISC-V core utilizes innovative technology to ensure high-speed processing capabilities while minimizing power draw. This balance is achieved through precise engineering and the use of state-of-the-art design methodologies that reduce operational overhead without compromising performance. As a result, this core is particularly suited for applications in portable electronics, IoT devices, and other areas where low-power operation is a necessity. Micro Magic's experience in developing high-speed, low-power solutions is evident in this core's design, ensuring that it delivers reliable performance under various operational conditions. The Ultra-Low-Power 64-Bit RISC-V Core represents a significant advancement in processor efficiency, providing a robust solution for designers looking to enhance their products' capabilities while maintaining a low power footprint.
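The headline figure (10 mW at 1 GHz) can be restated as energy per cycle, the metric usually compared across low-power cores; conveniently, milliwatts divided by gigahertz yields picojoules directly. A minimal sketch of that conversion:

```python
def energy_per_cycle_pj(power_mw: float, freq_ghz: float) -> float:
    """Average energy per clock cycle, E = P / f.
    mW divided by GHz yields picojoules directly (1e-3 J/s over 1e9 1/s)."""
    return power_mw / freq_ghz

# From the quoted figures: 10 mW at 1 GHz -> 10 pJ per cycle.
print(energy_per_cycle_pj(10.0, 1.0))  # prints 10.0
```

Energy per cycle is frequency-normalized, so it is a fairer basis than raw milliwatts when comparing cores that run at different clock rates.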

Micro Magic, Inc.
TSMC
28nm
AI Processor, CPU, IoT Processor, Multiprocessor / DSP, Processor Core Independent, Processor Cores
View Details

AX45MP

The AX45MP is engineered as a high-performance processor supporting multicore architectures and advanced data processing, making it particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it employs a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computation, ensuring data coherence and efficient power usage.

Andes Technology
2D / 3D, ADPCM, CPU, IoT Processor, Processor Core Independent, Processor Cores, Vision Processor
View Details

AndesCore Processors

AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.

Andes Technology
CPU, FlexRay, Processor Core Dependent, Processor Core Independent, Processor Cores, Security Processor
View Details

WiseEye2 AI Solution

The WiseEye2 AI Solution by Himax is a highly efficient processor tailored for AI applications, combining an ultra-low power CMOS image sensor with the HX6538 microcontroller. Designed for battery-powered devices requiring continuous operation, it significantly lowers power consumption while boosting performance. This processor leverages the Arm Cortex M55 CPU and Ethos U55 NPU to enhance inference speed and energy efficiency substantially, allowing execution of complex models with precision. Perfectly suited for applications like user-presence detection in laptops, WiseEye2 heightens security through sophisticated facial recognition and adaptive privacy settings. It automatically wakes or locks the device based on user proximity, thereby conserving power and safeguarding sensitive information. WiseEye2’s power management and neural processing units ensure constant device readiness, augmenting AI capabilities ranging from occupancy detection to smart security. This reflects Himax's dedication to providing versatile AI solutions that anticipate and respond to user needs seamlessly. Equipped with sensor fusion and sophisticated security engines, WiseEye2 maintains privacy and operational efficiency, marking a significant step forward in the integration of AI into consumer electronics, especially where energy conservation is pivotal.

Himax Technologies, Inc.
AI Processor, Embedded Security Modules, Processor Core Independent, Vision Processor
View Details

Azurite Core-hub

The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.

InCore Semiconductors
AI Processor, CPU, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

aiSim 5

aiSim 5 is a cutting-edge simulation tool crafted for the automotive sector, with a strong focus on validating ADAS and autonomous driving solutions. It distinguishes itself with AI-powered digital twin creation and a highly optimized sensor simulation environment that guarantees reproducibility and determinism. aiSim's adaptable architecture allows seamless integration with existing industry toolchains, significantly reducing the need for costly real-world testing.

A key feature of aiSim is its ability to simulate challenging weather conditions, such as snowstorms, heavy fog, and rain, with physics-based sensor simulation that reflects changing conditions in real time, improving testing coverage across diverse environments. Its ISO 26262 ASIL-D certification attests to automotive-grade quality and reliability, setting a new standard for testing high-fidelity sensor data across varied operational design domains.

aiSim's flexibility is further highlighted by comprehensive SDKs and APIs that facilitate smooth integration with systems under test, and users can draw on an extensive 3D asset library to build detailed, realistic testing environments. AI-based rendering technologies underpin aiSim's sensor simulation, achieving both high efficiency and accuracy, thereby enabling rapid and effective validation of advanced driver assistance and autonomous driving systems.

aiMotive
26 Categories
View Details

ReRAM Memory

CrossBar's ReRAM Memory is designed to redefine data storage with its high-density, energy-efficient characteristics. The memory can achieve terabyte-scale storage on-chip, significantly surpassing traditional flash in both speed and power consumption. Offered in a 3D cross-point architecture, it delivers high performance with minimal layout overhead. Engineered for next-generation applications, the technology boasts 20ns reads and 12µs writes, with no separate erase step and its attendant latency. These speeds far outpace traditional NAND flash, making the memory suitable for the real-time processing demanded by cutting-edge applications such as AI and IoT. Security is a key feature: tamper-resistant provisions for cryptographic key storage give robust protection against data breaches. The solution delivers energy savings of up to 5x compared with eFlash, and up to 40x compared with BLE, positioning it as an ideal choice for mobile and low-power applications.

CrossBar Inc.
CPU, Embedded Memories, Embedded Security Modules, Flash Controller, I/O Library, Mobile SDR Controller, NAND Flash, SDRAM Controller, Security Processor, SRAM Controller, Standard cell
View Details

Vehicle Engineering & Design Solutions

KPIT Technologies provides comprehensive vehicle engineering and design solutions that blend aesthetic appeal with functional efficiency. These solutions are designed to assist automakers throughout the vehicle development process, from initial concept to finished product, ensuring designs are both innovative and practical. Focusing on high precision and quality, KPIT’s engineering solutions encompass a wide range of services such as design validation, prototyping, and simulation-driven design optimizations. They utilize advanced simulation tools to refine vehicle dynamics, structural integrity, and ergonomic designs, ensuring a balance between cost-effectiveness and cutting-edge innovation. KPIT’s design approach emphasizes sustainable and intelligent engineering, enabling automakers to create vehicles that not only look good but also perform exceptionally well under various conditions. By offering tailored design and engineering solutions, KPIT enhances the ability of automotive companies to bring state-of-the-art vehicles to market swiftly and efficiently.

KPIT Technologies
Coprocessor, CPU
View Details

RISC-V CPU IP N Class

The N Class RISC-V CPU IP from Nuclei is tailored for applications where space efficiency and power conservation are paramount. It features a 32-bit architecture and is highly suited for microcontroller applications within the AIoT realm. The N Class processors are crafted to provide robust processing capabilities while maintaining a minimal footprint, making them ideal candidates for devices that require efficient power management and secure operations. By adhering to the open RISC-V standard, Nuclei ensures that these processors can be seamlessly integrated into various solutions, offering customizable options to fit specific system requirements.

Nuclei System Technology
Building Blocks, CPU, IoT Processor, Microcontroller, Processor Core Dependent, Processor Cores
View Details

3D Imaging Chip

Altek's 3D Imaging Chip is a breakthrough in the field of vision technology. Designed with an emphasis on depth perception, it enhances the accuracy of 3D scene capturing, making it ideal for applications requiring precise distance gauging such as autonomous vehicles and drones. The chip integrates seamlessly within complex systems, boasting superior recognition accuracy that ensures reliable and robust performance. Building upon years of expertise in 3D imaging, this chip supports multiple 3D modes, offering flexible solutions for devices from surveillance robots to delivery mechanisms. It facilitates medium-to-long-range detection needs thanks to its refined depth sensing capabilities. Altek's approach ensures a comprehensive package from modular design to chip production, creating a cohesive system that marries both hardware and software effectively. Deployed within various market segments, it delivers adaptable image solutions with dynamic design agility. Its imaging prowess is further enhanced by state-of-the-art algorithms that refine image quality and facilitate facial detection and recognition, thereby expanding its utility across diverse domains.

Altek Corporation
TSMC
16nm FFC/FF+
A/D Converter, Analog Front Ends, Coprocessor, Graphics & Video Modules, Image Conversion, JPEG, Oversampling Modulator, Photonics, PLL, Sensor, Vision Processor
View Details

SCR3 Microcontroller Core

Designed for efficient processing, the SCR3 microcontroller core offers a versatile solution for embedded environments. It comes equipped with a 5-stage in-order pipeline and supports both 32-bit and 64-bit symmetric multiprocessing (SMP) configurations, facilitating advanced applications. The core integrates privilege modes and includes memory protection units (MPUs), along with L1 and L2 caches, ensuring data integrity and performance. Its efficient architecture is optimized for energy-conscious applications, making it ideal for industrial, automotive, and IoT applications.

Syntacore
Building Blocks, CPU, DSP Core, Microcontroller, Processor Cores
View Details

Tianqiao-70 Low-Power Commercial Grade 64-bit RISC-V CPU

StarFive's Tianqiao-70 is engineered to deliver superior performance in a power-efficient package. This 64-bit RISC-V CPU core is designed for commercial-grade applications, where consistent and reliable performance is mandatory, yet energy consumption must be minimized. The core's architecture integrates low power design principles without compromising its ability to execute complex instructions efficiently. It is particularly suited for mobile applications, desktop clients, and intelligent gadgets requiring sustained battery life. The Tianqiao-70's design focuses on extending the operational life of devices by ensuring minimal power draw during both active and idle states. It supports an array of advanced features that cater to the latest computational demands. As an ideal solution for devices that combine portability with intensive processing demands, the Tianqiao-70 offers an optimal balance of performance and energy conservation. Its capability to adapt to various operating environments makes it a versatile option for developers looking to maximize efficiency and functionality.

StarFive Technology
AI Processor, CPU, Microcontroller, Multiprocessor / DSP, Processor Cores, Vision Processor
View Details

KL720 AI SoC

The KL720 AI SoC is designed for an optimal performance-to-power ratio, achieving 0.9 TOPS per watt, which makes it one of the most efficient chips available for edge AI applications. The SoC is crafted to meet high processing demands and suits high-end devices including smart TVs, AI glasses, and advanced cameras. With an ARM Cortex M4 CPU, it enables superior 4K imaging, full-HD video processing, and advanced 3D sensing. The KL720 also supports natural language processing (NLP), making it ideal for emerging AI interfaces such as AI assistants and gaming gesture controls.

Kneron
TSMC
16nm FFC/FF+
2D / 3D, AI Processor, Audio Interfaces, AV1, Camera Interface, CPU, GPU, Image Conversion, TICO, Vision Processor
View Details

Ncore Cache Coherent Interconnect

The Ncore Cache Coherent Interconnect is designed to tackle the complexities inherent in multicore SoC environments. By maintaining coherence across heterogeneous cores, it enables efficient data sharing and optimizes cache use. This in turn enhances the throughput of the system, ensuring reliable performance with reduced latency. The architecture supports a wide range of cores, making it a versatile option for many applications in high-performance computing. With Ncore, designers can address the challenges of maintaining data consistency across different processor cores without incurring significant power or performance penalties. The interconnect's capability to handle multicore scenarios means it is perfectly suited for advanced computing solutions where data integrity and speed are paramount. Additionally, its configuration options allow customization to meet specific project needs, maintaining flexibility in design applications. Its efficiency in multi-threading environments, coupled with robust data handling, marks it as a crucial component in designing state-of-the-art SoCs. By supporting high data throughput, Ncore keeps pace with the demands of modern processing needs, ensuring seamless integration and operation across a variety of sectors.

Arteris
15 Categories
View Details

RAIV General Purpose GPU

RAIV represents Siliconarts' general-purpose GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV's flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that push computational boundaries forward.

Siliconarts, Inc.
AI Processor, Building Blocks, CPU, GPU, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent, Processor Cores, Vision Processor, Wireless Processor
View Details

Codasip RISC-V BK Core Series

The Codasip RISC-V BK Core Series offers versatile, low-power, and high-performance solutions tailored for various embedded applications. These cores ensure efficiency and reliability by incorporating RISC-V compliance and are verified through advanced methodologies. Known for their adaptability, these cores can cater to applications needing robust performance while maintaining stringent power and area requirements.

Codasip
AI Processor, Building Blocks, CPU, DSP Core, IoT Processor, Microcontroller, Processor Core Dependent, Processor Core Independent, Processor Cores
View Details

RapidGPT - AI-Driven EDA Tool

RapidGPT by PrimisAI is an AI-based tool that transforms Electronic Design Automation (EDA). Using generative AI, RapidGPT facilitates a transition from traditional design methods to a more dynamic and intuitive process. The tool interprets natural language inputs, enabling hardware designers to communicate design intent effortlessly, and provides a code assistant that simplifies the conversion of ideas into complete Verilog code. By integrating third-party semiconductor IP, it extends beyond basic design needs to offer a comprehensive framework for accelerating development. RapidGPT guides users through the entire design lifecycle, from initial concept to bitstream and GDSII, redefining productivity in hardware design. Trusted by numerous companies, it enhances productivity and reduces time-to-market, making it a preferred choice for engineers who want to combine efficiency with innovation. Easy to integrate into existing workflows, RapidGPT sets new standards in EDA.

PrimisAI
AMBA AHB / APB/ AXI, CPU, Ethernet, HDLC, Processor Core Independent
View Details

eSi-1650

The eSi-1650 is a compact, low-power 16-bit CPU core with an integrated instruction cache, making it an ideal choice for mature process nodes that rely on OTP or Flash program memory. By omitting large on-chip RAMs, the core optimizes power and area efficiency, while the instruction cache allows the CPU to run at its maximum operating frequency rather than being limited by OTP/Flash access speed.

eSi-RISC
All Foundries
16nm, 90nm, 250nm, 350nm
Building Blocks, CPU, Microcontroller, Processor Cores
View Details

EW6181 GPS and GNSS Silicon

The EW6181 GPS and GNSS solution from EtherWhere is tailored for applications requiring high integration, with licenses available in RTL, gate-level netlist, or GDS formats. The IP can be ported across various technology nodes, provided an RF frontend is available. Designed to be one of the smallest and most power-efficient cores of its kind, it significantly extends battery life in devices such as tags and modules, making it ideal for challenging environments. Its strengths lie in its digital processing capabilities, using advanced DSP algorithms for precise, reliable location tracking. With a digital footprint of approximately 0.05mm² on a 5nm node, the EW6181 is remarkably compact, minimizing component count and streamlining the Bill of Materials (BoM). Its stable firmware delivers accurate and reliable position fixes. Combining compact design with extreme power efficiency, the EW6181 offers substantial advantages in battery-operated environments, along with ongoing support and upgrades for high-reliability tracking in applications demanding precise navigation.

EtherWhere Corporation
TSMC
7nm
19 Categories
View Details

NMP-750

The NMP-750 is AiM Future's edge computing accelerator designed for high-performance tasks. With up to 16 TOPS of computational throughput, up to 16 MB of local memory, and RISC-V or Arm Cortex-R/A 32-bit CPUs, it supports the diverse data processing requirements of automotive, AMR, UAV, and AR/VR applications. The versatility of the NMP-750 is evident in its ability to manage complex workloads such as multi-camera stream processing and spectral efficiency management. It is also well suited to energy management and building automation, showing strong potential in smart city and industrial setups. Its robust architecture ensures seamless integration into systems that handle large data volumes and high-speed data transmission, making it ideal for telecommunications and security applications where infrastructure resilience is paramount.

AiM Future
AI Processor, CPU, IoT Processor, Microcontroller, Multiprocessor / DSP, Processor Core Dependent, Processor Core Independent
View Details