Processor
The 'Processor' category in the Silicon Hub Semiconductor IP catalog is a cornerstone of modern electronic device design. Processor semiconductor IPs serve as the brain of electronic devices, driving operations, processing data, and performing complex computations essential for a multitude of applications. These IPs include a wide variety of specific types such as CPUs, DSP cores, and microcontrollers, each designed with unique capabilities and applications in mind.
In this category, you'll find building blocks, which are fundamental components for constructing more sophisticated processors, and coprocessors that augment the capabilities of a main processor, enabling efficient handling of specialized tasks. The versatility of processor semiconductor IPs is evident in subcategories like AI processors, audio processors, and vision processors, each tailored to meet the demands of today’s smart technologies. These processors are central to developing innovative products that leverage artificial intelligence, enhance audio experiences, and enable complex image processing capabilities, respectively.
Moreover, there are security processors that empower devices with robust security features to protect sensitive data and communications, as well as IoT processors and wireless processors that drive connectivity and integration of devices within the Internet of Things ecosystem. These processors ensure reliable and efficient data processing in increasingly connected and smart environments.
Overall, the processor semiconductor IP category is pivotal for enabling the creation of advanced electronic devices across a wide range of industries, from consumer electronics to automotive systems, providing the essential processing capabilities needed to meet the ever-evolving technological demands of today's world. Whether you're looking for individual processor cores or fully integrated processing solutions, this category offers a comprehensive selection to support any design or application requirement.
Akida Neural Processor IP is a groundbreaking component offering a self-contained AI processing solution capable of executing AI/ML workloads locally, without reliance on external systems. The IP's configurability allows it to be tailored to various applications, emphasizing space-efficient and power-conscious designs. Supporting both convolutional and fully-connected layers, along with multiple quantization formats, it addresses the data-movement challenge inherent in AI, significantly curtailing power usage while maintaining high throughput. Akida is designed for deployment scalability, supporting as few as two nodes and scaling up to extensive networks where complex models can thrive.
The second generation of BrainChip's Akida platform expands upon its predecessor with enhanced features for even greater performance, efficiency, and accuracy in AI applications. This platform leverages 8-bit quantization and broader neural network support, including temporal event-based neural networks and vision transformers. These advancements allow for significant reductions in model size and computational requirements, making the Akida 2nd Generation a formidable component for edge AI solutions. The platform effectively supports complex neural models necessary for a wide range of applications, from advanced vision tasks to real-time data processing, all while minimizing cloud interaction to protect data privacy.
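To illustrate why 8-bit (or 4-bit) quantization shrinks models so dramatically, here is a minimal symmetric linear quantizer in Python. This is a generic sketch of the technique, not BrainChip's actual quantization scheme; the function names are ours.

```python
def quantize_symmetric(weights, bits=8):
    """Symmetric linear quantization: map floats onto signed integers."""
    qmax = 2 ** (bits - 1) - 1                  # 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_symmetric(weights, bits=8)   # q == [50, -127, 2, 100]
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# int8 storage is 4x smaller than float32, and the rounding error
# is bounded by half a quantization step:
assert max_err <= scale / 2 + 1e-12
```

Dropping from 8-bit to 4-bit halves storage again at the cost of a coarser step size, which is the size/accuracy trade-off the platform's quantization support navigates.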
MetaTF is BrainChip's machine learning framework for developing systems on the Akida neural processor. Designed to aid in creating, training, and testing neural networks, MetaTF integrates seamlessly with TensorFlow models. Its key feature is the ability to convert CNN models to Spiking Neural Networks (SNNs), facilitating the low-latency, low-power operation suited to edge environments. Using Python scripting and tools, MetaTF simplifies model conversion and optimization, delivering automatic CNN-to-SNN conversion without requiring developers to learn a new framework. MetaTF also includes various development tools, encompassing runtime simulation and robust testing environments.
Speedcore embedded FPGA (eFPGA) IP represents a notable advancement in integrating programmable logic into ASICs and SoCs. Unlike standalone FPGAs, eFPGA IP lets designers tailor the exact dimensions of logic, DSP, and memory needed for their applications, making it an ideal choice for areas like AI, ML, 5G wireless, and more. Speedcore eFPGA can significantly reduce system costs, power requirements, and board space while maintaining flexibility by embedding only the necessary features into production. This IP is programmable using the same Achronix Tool Suite employed for standalone FPGAs. The Speedcore design process is supported by comprehensive resources and guidance, ensuring efficient integration into various semiconductor projects.
The KL730 AI SoC is a state-of-the-art chip incorporating Kneron's third-generation reconfigurable NPU architecture, delivering unmatched computational power with capabilities reaching up to 8 TOPS. This chip's architecture is optimized for the latest CNN network models and performs exceptionally well in transformer-based applications, reducing DDR bandwidth requirements substantially. Furthermore, it supports advanced video processing functions, capable of handling 4K 60FPS outputs with superior image handling features like noise reduction and wide dynamic range support. Applications can range from intelligent security systems to autonomous vehicles and commercial robotics.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
Axelera AI has crafted a PCIe AI acceleration card, powered by their high-efficiency quad-core Metis AIPU, to tackle complex AI vision tasks. This card provides an extraordinary 214 TOPS, enabling it to process the most demanding AI workloads. Enhanced by the Voyager SDK's streamlined integration capabilities, this card promises quick deployment while maintaining superior accuracy and power efficiency. It is tailored for applications that require high throughput and minimal power consumption, making it ideal for edge computing.
The Speedster7t FPGA family is crafted for high-bandwidth tasks, tackling the usual restrictions seen in conventional FPGAs. Manufactured using the TSMC 7nm FinFET process, these FPGAs are equipped with a pioneering 2D network-on-chip architecture and a series of machine learning processors for optimal high-bandwidth performance and AI/ML workloads. They integrate interfaces for high-paced GDDR6 memory, 400G Ethernet, and PCI Express Gen5 ports. This 2D network-on-chip connects various interfaces to upward of 80 access points in the FPGA fabric, enabling ASIC-like performance, yet retaining complete programmability. The product encourages users to start with the VectorPath accelerator card which houses the Speedster7t FPGA. This family offers robust tools for applications such as 5G infrastructure, computational storage, and test and measurement.
The Tianqiao-70 is engineered for ultra-low power consumption while maintaining robust computational capabilities. This commercial-grade 64-bit RISC-V CPU core presents an ideal choice for scenarios demanding minimal power usage without conceding performance. It is primarily designed for emerging mobile applications and devices, providing both economic and environmental benefits. Its architecture prioritizes low energy profiles, making it perfect for a wide range of applications, including mobile computing, desktop devices, and intelligent IoT systems. The Tianqiao-70 fits well into environments where conserving battery life is a priority, ensuring that devices remain operational for extended periods without needing frequent charging. The core maintains a focus on energy efficiency, yet it supports comprehensive computing functions to address the needs of modern, power-sensitive applications. This makes it a flexible component in constructing a diverse array of SoC solutions and meeting specific market demands for sustainability and performance.
The Metis M.2 AI accelerator module from Axelera AI is a cutting-edge solution for embedded AI applications. Designed for high-performance AI inference, this card boasts a single quad-core Metis AIPU that delivers industry-leading performance. With dedicated 1 GB DRAM memory, it operates efficiently within compact form factors like the NGFF M.2 socket. This capability unlocks tremendous potential for a range of AI-driven vision applications, offering seamless integration and heightened processing power.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Powered by the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
Focused on the advancement of autonomous mobility, KPIT's ADAS and Autonomous Driving solutions aim to address the multifaceted challenges that come with higher levels of vehicle autonomy. Safety remains the top priority, necessitating comprehensive testing and robust security protocols to ensure consumer trust. Current development practices often miss crucial corner cases by concentrating largely on standard conditions. KPIT tackles these issues through a holistic, multi-layered approach. Their solutions integrate state-of-the-art AI-driven decision-making systems that extend beyond basic perception, enhancing system reliability and intelligence. They've established robust simulation environments to ensure feature development covers all conceivable driving scenarios, contributing to the broader adoption of Level 3 and up autonomous systems. The company also offers extensive validation frameworks combining various testing methodologies to continually refine and prove their systems. This ensures each autonomous feature is thoroughly vetted before deployment, firmly positioning KPIT as a trusted partner for automakers aiming to bring safe, reliable, and highly autonomous vehicles to market.
The AI Camera Module from Altek Corporation is a testament to their prowess in integrating complex imaging technologies. With substantial expertise in lens design and an adeptness for soft-hard integration capabilities, Altek partners with top global brands to supply a variety of AI-driven cameras. These cameras meet diverse customer demands in AI+IoT differentiation, edge computing, and high-resolution image requisites of 2K to 4K quality. This module's ability to seamlessly engage with the latest AI algorithms makes it ideal for smart environments requiring real-time data analysis and decision-making capabilities.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator designed to perform local inference at the edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The GenAI v1 NPU streamlines the execution of large language models by running them directly on the hardware, eliminating dependence on external processors or an internet connection. With its ability to run complex models such as Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competing solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. The solution maximizes the number of tokens generated per unit of memory bandwidth, addressing one of the primary limitations in generative AI workflows. Furthermore, the adept memory usage of GenAI v1 reduces dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capability. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance against hardware cost. Its compatibility with a wide range of transformer-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, such as finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring the high computation speeds necessary for real-time applications without compromising accuracy.
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
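The tokens-per-unit-of-bandwidth framing has a simple roofline behind it: in the memory-bound decode phase of LLM inference, every generated token must stream the full weight set from memory, so bandwidth divided by model size bounds throughput. A minimal sketch with hypothetical numbers (the 3B parameter count and 8.5 GB/s bandwidth are illustrative assumptions, not RaiderChip specifications):

```python
def max_tokens_per_second(bandwidth_gbs, params_billions, bits_per_weight):
    """Memory-bound ceiling for LLM decoding: each token generated must
    read every weight once, so rate <= bandwidth / model size in bytes."""
    model_bytes = params_billions * 1e9 * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / model_bytes

# Hypothetical: a 3B-parameter model on ~8.5 GB/s of LPDDR4 bandwidth
fp16 = max_tokens_per_second(8.5, 3.0, 16)
int4 = max_tokens_per_second(8.5, 3.0, 4)
assert int4 == 4 * fp16   # 4-bit weights quadruple the throughput ceiling
```

This is why 4-bit quantization and bandwidth-efficient design matter more than raw compute for edge LLM inference: shrinking the bytes moved per token raises the ceiling directly.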
Designed for entry-level server-class applications, the SCR9 is a 64-bit RISC-V processor core that comes equipped with cutting-edge features, such as an out-of-order superscalar pipeline, making it apt for processing-intensive environments. It supports both single and double-precision floating-point operations adhering to IEEE standards, which ensure precise computation results. This processor core is tailored for high-performance computing needs, with a focus on AI and ML, as well as conventional data processing tasks. It integrates an advanced interrupt system featuring APLIC configurations, enabling responsive operations even under heavy workloads. SCR9 supports up to 16 cores in a multi-cluster arrangement, each utilizing coherent multi-level caches to maintain rapid data processing and management. The comprehensive development package for SCR9 includes ready-to-deploy toolchains and simulators that expedite software development, particularly within Linux environments. The core is well-suited for deployment in entry-level server markets and data-intensive applications, with robust support for virtualization and heterogeneous architectures.
The Mixed-Signal CODEC by Archband integrates advanced audio and voice processing capabilities, designed to deliver high-fidelity sound in a compact form. This technology supports applications across various audio devices, ensuring quality performance even at low power consumption levels. With its ability to handle both mono and stereo channels, it is perfectly suited for modern audio systems.
The RV12 is a versatile, single-issue RISC-V compliant processor core, designed for the embedded market. With compliance to both RV32I and RV64I specifications, this core is part of Roa Logic's 32/64-bit CPU offerings. Featuring a Harvard architecture, it efficiently handles simultaneous instruction and data memory operations. The architecture is enhanced with an optimizing folded 4-stage pipeline, maximizing the overlap of execution with memory access to reduce latency and boost throughput. Flexibility is a cornerstone of the RV12 processor, offering numerous configuration options to tailor performance and efficiency. Users can select optional components such as branch prediction units, instruction and data caches, and a debug unit. This configurability allows designers to balance trade-offs between speed, power consumption, and area, optimizing the core for specific applications. The processor core supports a variety of standard software tools and comes with a full suite of development resources, including support for the Eclipse Integrated Development Environment (IDE) and GNU toolchain. The RV12 design emphasizes a small silicon footprint and power-efficient operation, making it ideal for a wide range of embedded applications.
Ventana's Veyron V2 CPU represents the pinnacle of high-performance AI and data center-class RISC-V processors. Engineered to deliver world-class performance, it supports extensive data center workloads, offering superior computational power and efficiency. The V2 model is particularly focused on accelerating AI and ML tasks, ensuring compute-intensive applications run seamlessly. Its design makes it an ideal choice for hyperscale, cloud, and edge computing solutions where performance is non-negotiable. This CPU is instrumental for companies aiming to scale with the latest in server-class technology.
BrainChip's Akida IP is an innovative neuromorphic processor that emulates the human brain's functionality to analyze essential sensor inputs at the point of acquisition. By keeping AI/ML processing on-chip, Akida IP minimizes cloud dependency, reducing latency and enhancing data privacy. The scalable architecture supports up to 256 nodes interconnected over a mesh network, each node equipped with configurable Neural Processing Engines (NPEs). This event-based processor leverages data sparsity to decrease operational requirements significantly, which in turn improves performance and energy efficiency. With robust customization and the ability to perform on-chip learning, Akida IP adeptly supports a wide range of edge AI applications while maintaining a small silicon footprint.
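The efficiency gain from event-based processing comes from skipping work for inactive inputs entirely. A minimal sketch of the idea (ours, not BrainChip's implementation):

```python
def sparse_dot(activations, weights):
    """Accumulate only where the activation is a nonzero 'event';
    returns the dot product and the number of MACs actually performed."""
    total, ops = 0.0, 0
    for a, w in zip(activations, weights):
        if a != 0:              # zero activations trigger no computation
            total += a * w
            ops += 1
    return total, ops

# 75% sparse input: only 2 of 8 multiply-accumulates are performed
acts = [0, 0, 3, 0, 0, 1, 0, 0]
wts  = [2, 5, 1, 7, 4, 6, 8, 9]
print(sparse_dot(acts, wts))   # (9.0, 2)
```

In an event-based hardware fabric the skipped operations consume essentially no energy, which is how activation sparsity translates directly into the power savings described above.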
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
As the SoC that placed Kneron on the map, the KL520 AI SoC continues to enable sophisticated edge AI processing. It integrates dual Arm Cortex-M4 CPUs and serves well as an AI co-processor for products like smart home systems and consumer electronics. It supports an array of 3D sensor technologies, including structured light and time-of-flight cameras, which broadens its application in devices striving for autonomous functionality. Particularly noteworthy is its ability to maximize power savings, making it feasible to run some devices on low-voltage battery setups for extended operational periods. This combination of size and power efficiency has seen the chip integrated into numerous consumer product lines.
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances, with applications spanning driver authentication, predictive maintenance, and health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V or Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces, making it a versatile component for various industrial applications. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing without inflated energy consumption, and it is crucial for mobile and battery-operated devices, where every watt conserved extends the product's operational life. Aligned with modern demands for eco-friendly and cost-effective technology, its adaptability across applications makes it a practical choice for developers incorporating AI capabilities into their designs without compromising on economy or efficiency.
The ORC3990 SoC is a state-of-the-art solution designed for satellite IoT applications within Totum's DMSS™ network. This low-power sensor-to-satellite system integrates an RF transceiver, ARM CPUs, memories, and PA to offer seamless IoT connectivity via LEO satellite networks. It boasts an optimized link budget for effective indoor signal coverage, eliminating the need for additional GNSS components. This compact SoC supports industrial temperature ranges and is engineered for a 10+ year battery life using advanced power management.
The Yitian 710 Processor is an advanced Arm-based server chip developed by T-Head, designed to meet the extensive demands of modern data centers and enterprise applications. This processor boasts 128 high-performance Armv9 CPU cores, each coupled with robust caches, ensuring superior processing speeds and efficiency. With a 2.5D packaging technology, the Yitian 710 integrates multiple dies into a single unit, facilitating enhanced computational capability and energy efficiency. One of the key features of the Yitian 710 is its memory subsystem, which supports up to 8 channels of DDR5 memory, achieving a peak bandwidth of 281 GB/s. This configuration guarantees rapid data access and processing, crucial for high-throughput computing environments. Additionally, the processor is equipped with 96 PCIe 5.0 lanes, offering a dual-direction bandwidth of 768 GB/s, enabling seamless connectivity with peripheral devices and boosting system performance overall. The Yitian 710 Processor is meticulously crafted for applications in cloud services, big data analytics, and AI inference, providing organizations with a robust platform for their computing needs. By combining high core count, extensive memory support, and advanced I/O capabilities, the Yitian 710 stands as a cornerstone for deploying powerful, scalable, and energy-efficient data processing solutions.
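The headline bandwidth figures can be sanity-checked with simple arithmetic. Note the assumptions: the DDR5 speed grade (4400 MT/s) is our choice to match the stated 281 GB/s, and the PCIe figure uses the raw 32 GT/s line rate per lane without subtracting encoding or protocol overhead.

```python
# DDR5: 8 channels x 4400 MT/s (assumed speed grade) x 8 bytes per transfer
channels, transfers_per_s, bytes_per_transfer = 8, 4400e6, 8
ddr5_gbs = channels * transfers_per_s * bytes_per_transfer / 1e9

# PCIe 5.0 raw line rate: 32 GT/s -> 4 GB/s per lane per direction;
# 96 lanes, counting both directions as the text does
pcie_gbs = 96 * (32e9 / 8) * 2 / 1e9

print(ddr5_gbs, pcie_gbs)   # 281.6 768.0
```

The arithmetic reproduces both quoted numbers, confirming they are peak theoretical figures rather than sustained measurements.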
The eSi-3250 32-bit RISC processor core excels in applications needing efficient caching structures and high-performance computation, thanks to its support for both instruction and data caches. This core targets applications where slower memory technologies or higher core/bus clock ratios exist, by leveraging configurable caches which reduce power consumption and boost performance. This advanced processor design integrates a wide range of arithmetic capabilities, supporting IEEE-754 floating-point functions and 32-bit SIMD operations to facilitate complex data processing. It uses an optional memory management unit (MMU) for virtual memory implementation and memory protection, enhancing its functional safety in various operating environments.
This ultra-compact and high-speed H.264 core is engineered for FPGA platforms, boasting industry-leading size and performance. Capable of providing 1080p60 H.264 Baseline support, it accommodates various customization needs, including different pixel depths and resolutions. The core is particularly noted for its minimal latency of less than 1ms at 1080p30, a significant advantage over competitors. Its flexibility allows integration with a range of FPGA systems, ensuring efficient compression without compromising on speed or size. In one versatile package, users have access to a comprehensive set of encoding features including variable and fixed bit-rate options. The core facilitates simultaneous processing of multiple video streams, adapting to various compression ratios and frame types (I and P frames). Its support for advanced video input formats and compliance with ITAR guidelines make it a robust choice for both military and civilian applications. Moreover, the availability of low-cost evaluation licenses invites experimentation and custom adaptation, promoting broad application and ease of integration in diverse projects. These cores are especially optimized for low power consumption, drawing minimal resources in contrast to other market offerings due to their efficient FPGA design architecture. They include a suite of enhanced features such as an AXI wrapper for simple system integration and significantly reduced Block RAM requirements. Embedded systems benefit from its synchronous design and wide support for auxiliary functions like simultaneous stream encoding, making it a versatile addition to complex signal processing environments.
The Chimera GPNPU by Quadric redefines AI computing on devices by combining processor flexibility with NPU efficiency. Tailored for on-device AI, it tackles significant machine learning inference challenges faced by SoC developers. This licensable processor scales massively offering performance from 1 to 864 TOPs. One of its standout features is the ability to execute matrix, vector, and scalar code in a single pipeline, essentially merging the functionalities of NPUs, DSPs, and CPUs into a single core. Developers can easily incorporate new ML networks such as vision transformers and large language models without the typical overhead of partitioning tasks across multiple processors. The Chimera GPNPU is entirely code-driven, empowering developers to optimize their models throughout a device's lifecycle. Its architecture allows for future-proof flexibility, handling newer AI workloads as they emerge without necessitating hardware changes. In terms of memory efficiency, the Chimera architecture is notable for its compiler-driven DMA management and support for multiple levels of data storage. Its rich instruction set optimizes both 8-bit integer operations and complex DSP tasks, providing full support for C++ coded projects. Furthermore, the Chimera GPNPU integrates AXI Interfaces for efficient memory handling and configurable L2 memory to minimize off-chip access, crucial for maintaining low power dissipation.
The KL630 AI SoC represents Kneron's sophisticated approach to AI processing, boasting an architecture that accommodates Int4 precision and transformers, making it incredibly adept in delivering performance efficiency alongside energy conservation. This chip shines in contexts demanding high computational intensity such as city surveillance and autonomous operation. It sports an ARM Cortex A5 CPU and a specialized NPU with 1 eTOPS computational power at Int4 precision. Suitable for running diverse AI applications, the KL630 is optimized for seamless operation in edge AI devices, providing comprehensive support for industry-standard AI frameworks and displaying superior image processing capabilities.
Polar ID offers an advanced solution for secure facial recognition in smartphones. This system harnesses the revolutionary capabilities of meta-optics to capture a unique polarization signature from human faces, adding a distinct layer of security against sophisticated spoofing methods like 3D masks. With its compact design, Polar ID replaces the need for bulky optical modules and costly time-of-flight sensors, making it a cost-effective alternative for facial authentication. The Polar ID system operates efficiently under diverse lighting conditions, ensuring reliable performance both in bright sunlight and in total darkness. This adaptability is complemented by the system's high-resolution capability, surpassing that of traditional facial recognition technologies and allowing it to function reliably even when users are wearing glasses or partial face coverings such as masks. By incorporating this high level of precision and security, Polar ID provides an unprecedented user experience in biometric solutions. As an integrated solution, Polar ID leverages state-of-the-art polarization imaging combined with near-infrared technology operating at 940nm, providing robust and secure face unlock functionality for an increasing range of mobile devices. This innovation delivers enhanced digital security and convenience, significantly reducing complexity and integration costs for manufacturers, while setting a new standard for biometric authentication in smartphones and beyond.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The eSi-3200 is a versatile 32-bit RISC processor core that combines low power usage with high performance, ideal for embedded control applications using on-chip memory. Its structure supports a wide range of computational tasks with a modified-Harvard architecture that allows simultaneous instruction and data fetching. This design facilitates deterministic performance, making it perfect for real-time control. The eSi-3200 processor supports extensive arithmetic operations, offering optional IEEE-754 floating-point units for both single-precision and SIMD instructions which optimize parallel data processing. Its compatibility with AMBA AXI or AHB interconnects ensures easy integration into various systems.
The eSi-1600 is a highly efficient 16-bit RISC processor core designed for applications that require low power and cost-effective solutions. Despite its 16-bit architecture, it offers performance akin to pricier 32-bit processors, making it an ideal choice for controlling functions in mature mixed-signal processes. The eSi-1600 is renowned for its power efficiency, running applications in fewer clock cycles compared to traditional 8-bit CPUs. Its instruction set includes 92 basic instructions and the capability for 74 user-defined ones, enhancing its adaptability. With support for a wide range of peripherals through AMBA AHB and APB buses, this core is versatile for various integration needs.
The xcore.ai platform from XMOS is engineered to revolutionize the scope of intelligent IoT by offering a powerful yet cost-efficient solution that combines high-performance AI processing with flexible I/O and DSP capabilities. At its heart, xcore.ai boasts a multi-threaded architecture with 16 logical cores divided across two processor tiles, each equipped with substantial SRAM and a vector processing unit. This setup ensures seamless execution of integer and floating-point operations while facilitating high-speed communication between multiple xcore.ai systems, allowing for scalable deployments in varied applications. One of the standout features of xcore.ai is its software-defined I/O, enabling deterministic processing and precise timing accuracy, which is crucial for time-sensitive applications. It integrates embedded PHYs for various interfaces such as MIPI, USB, and LPDDR, enhancing its adaptability in meeting custom application needs. The device's clock frequency can be adjusted to optimize power consumption, affirming its cost-effectiveness for IoT solutions demanding high efficiency. The platform's DSP and AI performances are equally impressive. The 32-bit floating-point pipeline can deliver up to 1600 MFLOPS with additional block floating point capabilities, accommodating complex arithmetic computations and FFT operations essential for audio and vision processing. Its AI performance reaches peaks of 51.2 GMACC/s for 8-bit operations, maintaining substantial throughput even under intensive AI workloads, making xcore.ai an ideal candidate for AI-enhanced IoT device creation.
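One decomposition consistent with the quoted 51.2 GMACC/s peak is shown below. The clock frequency and per-tile MAC width are our assumptions chosen to reproduce the headline figure, not XMOS specifications; the actual microarchitectural breakdown may differ.

```python
clock_hz = 800e6         # assumed core clock
tiles = 2                # two processor tiles, per the description above
macs_per_cycle = 32      # assumed 8-bit MACs per tile per cycle (vector unit)
peak_gmacc = clock_hz * tiles * macs_per_cycle / 1e9
print(peak_gmacc)   # 51.2
```

Whatever the true split between clock rate and vector width, the product of the three factors must equal the quoted peak, so this gives a feel for the scale of per-cycle parallelism involved.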
The EW6181 is a cutting-edge multi-GNSS silicon solution offering the lowest power consumption and high sensitivity for exemplary accuracy across a myriad of navigation applications. This GNSS chip is adept at processing signals from numerous satellite systems including GPS L1, Glonass, BeiDou, Galileo, and several augmentation systems like SBAS. The integrated chip comprises an RF frontend, a digital baseband processor, and an ARM microcontroller dedicated to operating the firmware, allowing for flexible integration across devices needing efficient power usage. Designed with a built-in DC-DC converter and LDOs, the EW6181 silicon streamlines its bill of materials, making it perfect for battery-powered devices, providing extended operational life without compromising on performance. By incorporating patent-protected algorithms, the EW6181 achieves a remarkably compact footprint while delivering superior performance characteristics. Especially suited for dynamic applications such as action cameras and wearables, its antenna diversity capabilities ensure exceptional connectivity and positioning fidelity. Moreover, by enabling cloud functionality, the EW6181 pushes boundaries in power efficiency and accuracy, catering to connected environments where greater precision is paramount.
The NaviSoC by ChipCraft is a highly integrated GNSS system-on-chip (SoC) designed to bring navigation technologies to a single die. Combining a GNSS receiver with an application processor, the NaviSoC delivers unmatched precision in a dependable, scalable, and cost-effective package. Designed for minimal energy consumption, it caters to cutting-edge applications in location-based services (LBS), the Internet of Things (IoT), and autonomous systems like UAVs and drones. This innovative product facilitates a wide range of customizations, adaptable to varied market needs. Whether the application involves precise lane-level navigation or asset tracking and management, the NaviSoC meets and exceeds market expectations by offering enhanced security and reliability, essential for synchronization and smart agricultural processes. Its compact design, which maintains high efficiency and flexibility, ensures that clients can tailor their systems to exact specifications without compromise. NaviSoC stands as a testament to ChipCraft's pioneering approach to GNSS technologies.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. Delivering up to 6 TOPS with 6 MB of local memory, this accelerator pairs with either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its three AXI4 interfaces ensure robust interconnection and data flow. This performance makes the NMP-550 exceptionally well suited to devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
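A peak rating like 6 TOPS translates into a frame-rate budget once a model's per-inference cost is known. The sketch below is purely illustrative: the model cost and sustained-utilization figures are hypothetical values, not NMP-550 benchmarks.

```python
# Rough frame-rate budget for an accelerator quoted at 6 TOPS peak.
# model_gops_per_frame and UTILIZATION are HYPOTHETICAL illustration
# values, not measured NMP-550 figures.

PEAK_TOPS = 6.0
UTILIZATION = 0.5             # assumed sustained fraction of peak
model_gops_per_frame = 5.0    # e.g. a mid-sized detection network

sustained_ops = PEAK_TOPS * 1e12 * UTILIZATION
fps = sustained_ops / (model_gops_per_frame * 1e9)
print(f"~{fps:.0f} frames/s")  # ~600
```

The same arithmetic works in reverse: given a target frame rate and utilization estimate, it bounds the largest model the accelerator can serve in real time.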
The SCR7 is a 64-bit RISC-V application core crafted to meet the high-performance demands of applications requiring powerful data processing. Featuring a sophisticated dual-issue pipeline with out-of-order execution, it enhances computational efficiency across varied tasks. The core is equipped with a robust floating-point unit and supports extensive RISC-V ISA extensions for advanced computing capabilities. SCR7's memory subsystem includes L1 through L3 caches, with up to 16 MB of L3 cache available, ensuring data availability and integrity in demanding environments. Its multicore architecture supports up to eight cores, facilitating intensive computational tasks across industries such as AI and machine learning. Ideal for high-performance computing and big data applications, the SCR7 leverages its advanced interrupt system and intelligent memory management for seamless operation. Comprehensive development resources, from simulators to SDKs, support its integration in Linux-based systems, accelerating project development timelines.
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The Y180 is a streamlined microprocessor design, incorporating approximately 8K gates and serving primarily as a CPU clone of the Zilog Z180. It caters to applications requiring efficient, compact processing power without extensive resource demands. Its design is particularly apt for systems that benefit from Z80 architecture compatibility, ensuring effortless integration and functionality within a variety of technological landscapes.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
The eSi-3264 is a cutting-edge 32/64-bit processor core that incorporates SIMD DSP extensions, making it suitable for applications requiring both efficient data parallelism and a minimal silicon footprint. Designed for high-accuracy DSP tasks, its multifunctional capabilities target audio processing, sensor hubs, and complex arithmetic operations. The eSi-3264 supports sizeable instruction and data caches, which significantly enhance system performance when accessing slower external memory. Dual- and quad-MAC operations with 64-bit accumulation accelerate DSP execution, while 8-, 16-, and 32-bit SIMD instructions handle real-time data with minimal CPU load.
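To make the quad-MAC-with-64-bit-accumulation idea concrete, here is a behavioral sketch of the arithmetic: four signed 16-bit lane products summed into a 64-bit accumulator. This models the math only; the actual eSi-3264 instruction mnemonics and register packing are not shown, and the lane layout is an illustrative assumption.

```python
# Behavioral model of a quad 16x16 MAC with 64-bit accumulation,
# in the style of the eSi-3264's SIMD DSP extensions. Illustrative
# sketch only -- not the core's actual instruction interface.

def quad_mac16(acc: int, a: list, b: list) -> int:
    """Multiply four signed 16-bit lanes pairwise, add to a 64-bit acc."""
    assert len(a) == len(b) == 4
    for x, y in zip(a, b):
        assert -2**15 <= x < 2**15 and -2**15 <= y < 2**15
        acc += x * y
    # wrap to signed 64-bit, as a hardware accumulator would
    acc &= (1 << 64) - 1
    return acc - (1 << 64) if acc >= (1 << 63) else acc

acc = quad_mac16(0, [1, 2, 3, 4], [5, 6, 7, 8])
print(acc)  # 70
```

A wide (64-bit) accumulator is what lets long dot products, such as FIR filter taps, run without intermediate rounding or overflow: the worst-case single product is under 2³⁰, so billions of terms fit before wrap-around.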
The Avispado core is a 64-bit in-order RISC-V processor that provides an excellent balance of performance and power efficiency. With a focus on energy-conscious designs, Avispado facilitates the development of machine learning applications and is well suited to environments with limited silicon resources. It leverages Semidynamics' innovative Gazzillion Misses™ technology to address challenges with sparse tensor weights, enhancing energy efficiency and operational performance for AI tasks. Structured to support multiprocessor configurations, Avispado is integral in systems requiring cache coherence and high memory throughput. It is particularly suitable for recommendation systems thanks to its ability to manage numerous outstanding memory requests through its advanced memory interface architecture. Integration with Semidynamics' Vector Unit enriches its offering, enabling dense computations and optimal performance on vector tasks. Linux-ready operation and support for the RISC-V Vector Specification 1.0 ensure that Avispado integrates seamlessly into existing frameworks, fostering innovative applications in fields like data centers and beyond.
The Veyron V1 CPU is designed to meet the demanding needs of data center workloads. Optimized for robust performance and efficiency, it handles a variety of tasks with precision. Utilizing RISC-V open architecture, the Veyron V1 is easily integrated into custom high-performance solutions. It aims to support the next-generation data center architectures, promising seamless scalability for various applications. The CPU is crafted to compete effectively against ARM and x86 data center CPUs, providing the same class-leading performance with added flexibility for bespoke integrations.
The DisplayPort Transmitter is a highly advanced solution designed to seamlessly transmit high-definition audio and video data between devices. It adheres to the latest VESA standards, ensuring it can handle DisplayPort 1.4 and 2.1 specifications with ease. The transmitter is engineered to support a plethora of audio interfaces including I2S, SPDIF, and DMA, making it highly adaptable to a wide range of consumer and professional audio-visual equipment. With features focused on AV sync and timing recovery, it ensures smooth and uninterrupted data flow even in the most demanding applications. This transmitter is particularly beneficial for those wishing to integrate top-of-the-line audio and video synchronization within their projects, offering customizable sound settings that can accommodate unique user requirements. It's robust enough to be used across industry sectors, from high-end consumer electronics like gaming consoles and home theater systems to professional equipment used in broadcast and video wall displays. Moreover, the DisplayPort Transmitter's architecture facilitates seamless integration into existing FPGA and ASIC systems without a hitch in performance. Comprehensive compliance testing ensures that it is compatible with a wide base of devices and technologies, making it a dependable choice for developers looking to provide comprehensive DisplayPort solutions. Whether it's enhancing consumer electronics or powering complex industry-specific systems, the DisplayPort Transmitter is built to deliver exemplary performance.
The 3D Imaging Chip developed by Altek Corporation exemplifies innovation in depth sensing technology. Delving into this field for many years, Altek provides a cutting-edge module equipped for varied needs, from surveillance devices to transport robotics. This technology enhances the accuracy of recognition capabilities, paving the way for holistic hardware and software solutions from modules to chips. Altek's 3D imaging solutions are optimal for scenarios where precise distance measurement and object identification are requisite, demonstrating robustness across medium to long-range applications. As these systems mature, they continually improve the precision of spatial recognition, positioning Altek at the forefront of depth sensing innovation.
The eSi-Comms IP suite provides a highly adaptable OFDM-based MODEM and DFE portfolio, crucial for facilitating communications-oriented ASIC designs. This IP offers adept handling of many air interface standards in use today, making it ideal for 4G, 5G, Wi-Fi, and other wireless applications. The suite includes advanced DSP algorithms for ensuring robust links under various conditions, using a core design that is highly configurable to the specific needs of high-performance communication systems. Notably, it supports synchronization, equalization, and channel decoding, boasting features like BPSK to 1024-QAM demodulation and multi-antenna processing.
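The quoted modulation range, BPSK through 1024-QAM, corresponds to 1 to 10 coded bits per symbol, since bits per symbol is simply log₂ of the constellation size. A quick table:

```python
# Bits per symbol for the modulation orders spanned by an OFDM MODEM
# supporting BPSK through 1024-QAM: bits/symbol = log2(M).
from math import log2

constellations = {"BPSK": 2, "QPSK": 4, "16-QAM": 16,
                  "64-QAM": 64, "256-QAM": 256, "1024-QAM": 1024}
for name, m in constellations.items():
    print(f"{name:>8}: {int(log2(m))} bits/symbol")
```

This is why higher-order QAM multiplies link throughput at a given symbol rate, at the cost of tighter SNR requirements, which is where the suite's equalization and channel-decoding features come in.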
The Jotunn8 is engineered to redefine performance standards for AI datacenter inference, supporting prominent large language models. Fully programmable and algorithm-agnostic, it supports any algorithm and any host processor, and can execute generative AI models such as GPT-4 or Llama3 with unparalleled efficiency. The system excels in delivering cost-effective solutions, offering high throughput up to 3.2 petaflops (dense) without relying on CUDA, thus simplifying scalability and deployment. Optimized for cloud and on-premise configurations, Jotunn8 ensures maximum utility by integrating 16 cores and a high-level programming interface. Its innovative architecture addresses conventional processing bottlenecks, allowing constant data availability at each processing unit. With the potential to operate large and complex models at reduced query costs, this accelerator maintains performance while consuming less power, making it a preferred choice for advanced AI tasks. The Jotunn8's hardware extends beyond AI-specific applications to general-purpose processing, showcasing its agility. By automatically selecting the most suitable processing paths layer by layer, it optimizes both latency and power consumption. This provides users with a flexible platform that supports the deployment of vast AI models under efficient resource utilization strategies. The configuration includes a peak power consumption of 180 W and an impressive 192 GB of on-chip memory, accommodating sophisticated AI workloads with ease. It aligns closely with theoretical limits for implementation efficiency, accentuating VSORA's commitment to high-performance computational capabilities.
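The two headline numbers above, 3.2 PFLOPS dense peak and 180 W peak power, imply a peak compute efficiency of roughly 17.8 TFLOPS/W. The division below is just arithmetic on the vendor's quoted figures, not an independent measurement:

```python
# Implied peak compute efficiency from the figures quoted above
# (3.2 PFLOPS dense, 180 W peak power). Vendor headline numbers only.

PEAK_FLOPS = 3.2e15   # dense peak
PEAK_WATTS = 180.0

tflops_per_watt = PEAK_FLOPS / PEAK_WATTS / 1e12
print(f"{tflops_per_watt:.1f} TFLOPS/W peak")  # 17.8
```

As with any peak-over-peak ratio, sustained efficiency on real workloads will depend on utilization and on which layers the accelerator can keep resident in its 192 GB of on-chip memory.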
The Dynamic Neural Accelerator (DNA) II offers a groundbreaking approach to enhancing edge AI performance. This neural network architecture core stands out due to its runtime reconfigurable architecture that allows for efficient interconnections between compute components. DNA II supports both convolutional and transformer network applications, accommodating an extensive array of edge AI functions. By leveraging scalable performance, it makes itself a valuable asset in the development of systems-on-chip (SoC) solutions. DNA II is spearheaded by EdgeCortix's patented data path architecture, focusing on technical optimization to maximize available computing resources. This architecture uniquely allows DNA II to maintain low power consumption while flexibly adapting to various task demands across diverse AI models. Its higher utilization rates and faster processing set it apart from traditional IP core solutions, addressing industry demands for more efficient and effective AI processing. In concert with the MERA software stack, DNA II optimally sequences computation tasks and resource distribution, further refining efficiency and effectiveness in processing neural networks. This integration of hardware and software not only aids in reducing on-chip memory bandwidth usage but also enhances the parallel processing ability of the system, catering to the intricate needs of modern AI computing environments.
DolphinWare IPs is a versatile portfolio of intellectual property solutions that enable efficient SoC design. This collection includes various control logic components such as FIFO, arbiter, and arithmetic components like math operators and converters. In addition, the logic components span counters, registers, and multiplexers, providing essential functionalities for diverse industrial applications. The IPs in this lineup are meticulously designed to ensure data integrity, supported by robust verification IPs for AXI4, APB, SD4.0, and more. This comprehensive suite meets the stringent demands of modern electronic designs, facilitating seamless integration into existing design paradigms. Beyond their broad functionality, DolphinWare’s offerings are fundamental to applications requiring specific control logic and data integrity solutions, making them indispensable for enterprises looking to modernize or expand their product offerings while ensuring compliance with industry standards.
Topaz FPGAs are designed for high-volume production applications where cost efficiency, compact form factor, and energy efficiency are paramount. These FPGAs integrate a set of commonly used features and protocols, such as MIPI, Ethernet, and PCIe Gen3, making them ideal for use in machine vision, robotics, and consumer electronics. With logic densities ranging from 52,160 to 326,080 logic elements, Topaz FPGAs provide versatile support for complex applications while keeping power consumption low.

The advanced Quantum™ compute fabric in Topaz allows for effective packing of logic in XLR cells, which enhances the scope for innovation and design flexibility. These FPGAs excel in applications requiring substantial computational resources without a hefty power draw, ensuring broad adaptability across various use cases. Topaz's integration capabilities allow for straightforward system expansion, enabling seamless scaling of operations from R&D phases to full production.

The Topaz FPGA family is engineered to cater to extended product life cycles, which is crucial for industries like automotive and industrial automation where long-term system stability is essential. With multiple package options, including small QFP packages for reduced BoM costs, Topaz FPGAs provide an economically attractive option while ensuring support for high-speed data applications. Efinix's commitment to maintaining a stable product supply until at least 2045 assures partners of sustained innovation and reliability.