In the realm of semiconductor IP, the Processor Core Dependent category encompasses a variety of intellectual properties specifically designed to enhance and support processor cores. These IPs are tailored to work in harmony with processor cores to optimize their performance, adding value by reducing time-to-market and improving efficiency in modern integrated circuits. This category is crucial for the customization and adaptation of processors to meet specific application needs, addressing both performance optimization and system complexity management.
Processor Core Dependent IPs are integral components, typically found in applications that require robust data processing capabilities such as smartphones, tablets, and high-performance computing systems. They can also be implemented in embedded systems for automotive, industrial, and IoT applications, where precision and reliability are paramount. By providing foundational building blocks that are pre-verified and configurable, these semiconductor IPs significantly simplify the integration process within larger digital systems, enabling a seamless enhancement of processor capabilities.
Products in this category may include cache controllers, memory management units, security hardware, and specialized processing units, all designed to complement and extend the functionality of processor cores. These solutions enable system architects to leverage existing processor designs while incorporating cutting-edge features and optimizations tailored to specific application demands. Such customizations can significantly boost the performance, energy efficiency, and functionality of end-user devices, translating into better user experiences and competitive advantages.
In essence, Processor Core Dependent semiconductor IPs represent a strategic approach to processor design, providing a toolkit for customization and optimization. By focusing on interdependencies within processing units, these IPs allow for the creation of specialized solutions that cater to the needs of various industries, ensuring the delivery of high-performance, reliable, and efficient computing solutions. As the demand for sophisticated digital systems continues to grow, the importance of these IPs in maintaining competitive edge cannot be overstated.
Designed for high-performance applications, the Metis AIPU PCIe AI Accelerator Card by Axelera AI offers powerful AI processing capabilities in a PCIe card format. This card is equipped with the Metis AI Processing Unit, capable of delivering up to 214 TOPS, making it ideal for intensive AI tasks and vision applications that require substantial computational power. With support for the Voyager SDK, this card ensures seamless integration and rapid deployment of AI models, helping developers leverage existing infrastructures efficiently. It's tailored for applications that demand robust AI processing like high-resolution video analysis and real-time object detection, handling complex networks with ease. Highlighted for its performance in ResNet-50 processing, which it can execute at a rate of up to 3,200 frames per second, the PCIe AI Accelerator Card perfectly meets the needs of cutting-edge AI applications. The software stack enhances the developer experience, simplifying the scaling of AI workloads while maintaining cost-effectiveness and energy efficiency for enterprise-grade solutions.
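The headline figures can be related with some back-of-envelope arithmetic. The sketch below is illustrative only: the per-inference operation count for ResNet-50 (roughly 4.1 GMACs, counted as ~8.2 giga-ops) is a commonly cited public estimate, not a vendor figure, and real utilization depends on precision, batch size, and how operations are counted.

```python
# Back-of-envelope: relate the quoted ResNet-50 frame rate to peak TOPS.
# Assumptions (not vendor data): ~4.1 GMACs per 224x224 ResNet-50
# inference, counted as ~8.2 giga-ops (1 MAC = 2 ops).
OPS_PER_INFERENCE = 8.2e9
PEAK_TOPS = 214
FPS = 3200

sustained_tops = FPS * OPS_PER_INFERENCE / 1e12   # effective compute rate
utilization = sustained_tops / PEAK_TOPS          # fraction of peak used

print(f"sustained ~ {sustained_tops:.1f} TOPS, "
      f"utilization ~ {utilization:.1%} of peak")
```

The gap between sustained and peak throughput is normal for real networks; the point of the sketch is only that the two published numbers are mutually consistent.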
Panmnesia's CXL 3.1 Switch is an integral component designed to facilitate high-speed, low-latency data transfers across multiple connected devices. It is architected to manage resource allocation seamlessly in AI and high-performance computing environments, supporting broad bandwidth, robust data throughput, and efficient power consumption, creating a cohesive foundation for scalable AI infrastructures. Its integration with advanced protocols ensures high system compatibility.
The Ventana Veyron V2 CPU represents a substantial upgrade in processing power, setting a new standard in AI and data center performance with its RISC-V architecture. Created for applications that demand intensive computing resources, the Veyron V2 excels in providing high throughput and superior scalability. It is aimed at cloud-native operations and intensive data processing tasks requiring robust, reliable compute power. This CPU is finely tuned for modern, virtualized environments, delivering a server-class performance tailored to manage cloud-native workloads efficiently. The Veyron V2 supports a range of integration options, making it dependably adaptable for custom silicon platforms and high-performance system infrastructures. Its design incorporates an IOMMU compliant with RISC-V standards, enabling seamless interoperability with third-party IPs and modules. Ventana's innovation is evident in the Veyron V2's capacity for heterogeneous computing configurations, allowing diverse workloads to be managed effectively. Its architecture features advanced cluster and cache infrastructures, ensuring optimal performance across large-scale deployment scenarios. With a commitment to open standards and cutting-edge technologies, the Veyron V2 is a critical asset for organizations pursuing the next level in computing performance and efficiency.
The Chimera GPNPU from Quadric is engineered to meet the diverse needs of modern AI applications, bridging the gap between traditional processing and advanced AI model requirements. It's a fully licensable processor, designed to deliver high AI inference performance while eliminating the complexity of traditional multi-core systems. The GPNPU boasts an exceptional ability to execute various AI models, including classical backbones, state-of-the-art transformers, and large language models, all within a single execution pipeline.

One of the core strengths of the Chimera GPNPU is its unified architecture that integrates matrix, vector, and scalar processing capabilities. This singular design approach allows developers to manage complex tasks such as AI inference and data-parallel processing without resorting to multiple tools or artificial partitioning between processors. Users can expect heightened productivity thanks to its modeless operation, which is fully programmable and efficiently executes C++ code alongside AI graph code.

In terms of versatility and application potential, the Chimera GPNPU is adaptable across different market segments. It's available in various configurations to suit specific performance needs, from single-core designs to multi-core clusters capable of delivering up to 864 TOPS. This scalability, combined with future-proof programmability, ensures that the Chimera GPNPU not only addresses current AI challenges but also accommodates the ever-evolving landscape of cognitive computing requirements.
xcore.ai is a versatile and powerful processing platform designed for AIoT applications, delivering a balance of high performance and low power consumption. Crafted to bring AI processing capabilities to the edge, it integrates embedded AI, DSP, and advanced I/O functionalities, enabling quick and effective solutions for a variety of use cases. What sets xcore.ai apart is its cycle-accurate programmability and low-latency control, which improve the responsiveness and precision of the applications in which it is deployed. Tailored for smart environments, xcore.ai ensures robust and flexible computing power, suitable for consumer, industrial, and automotive markets. xcore.ai supports a wide range of functionalities, including voice and audio processing, making it ideal for developing smart interfaces such as voice-controlled devices. It also provides a framework for implementing complex algorithms and third-party applications, positioning it as a scalable solution for the growing demands of the connected world.
The Metis AIPU M.2 Accelerator Module from Axelera AI is a cutting-edge solution designed for enhancing AI performance directly within edge devices. Engineered to fit the M.2 form factor, this module packs powerful AI processing capabilities into a compact and efficient design, suitable for space-constrained applications. It leverages the Metis AI Processing Unit to deliver high-speed inference directly at the edge, minimizing latency and maximizing data throughput. The module is optimized for a range of computer vision tasks, making it ideal for applications like multi-channel video analytics, quality inspection, and real-time people monitoring. With its advanced architecture, the AIPU module supports a wide array of neural networks and can handle up to 24 concurrent video streams, making it incredibly versatile for industries looking to implement AI-driven solutions across various sectors. Compatible with AI frameworks such as TensorFlow, PyTorch, and ONNX, the Metis AIPU integrates seamlessly with existing systems to streamline AI model deployment and optimization. This not only boosts productivity but also significantly reduces time-to-market for edge AI solutions. Axelera's comprehensive software support ensures that users can achieve maximum performance from their AI models while maintaining operational efficiency.
The SAKURA-II AI Accelerator represents a cutting-edge advancement in the field of generative AI, offering remarkable efficiency in a compact form factor. Engineered for rapid real-time inferencing, it excels in applications requiring low latency and robust performance in small, power-efficient silicon. This accelerator adeptly manages multi-billion parameter models, including Llama 2 and Stable Diffusion, under typical power requirements of 8W, catering to diverse applications from Vision to Language and Audio. Its core advantage lies in exceeding the AI compute utilization of other solutions, ensuring outstanding energy efficiency. The SAKURA-II further supports up to 32GB of DRAM, leveraging enhanced bandwidth for superior performance. Sparse computing techniques minimize memory footprint, while real-time data streaming and support for arbitrary activation functions elevate its functionality, enabling sophisticated applications in edge environments. This versatile AI accelerator not only enhances energy efficiency but also delivers robust memory management, supporting advanced precision for near-FP32 accuracy. Coupled with advanced power management, it suits a wide array of edge AI implementations, affirming its place as a leader in generative AI technologies at the edge.
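Whether a multi-billion-parameter model fits in the quoted 32 GB of DRAM depends mostly on weight precision. The sketch below uses the publicly known parameter count of Llama 2 7B (~6.7 billion) as an assumption; it counts weight storage only and ignores activations, KV-cache, and runtime overhead.

```python
# Rough check: weight-storage footprint of a ~7B-parameter model at
# different precisions vs. a 32 GB DRAM budget (weights only; activations,
# KV-cache, and runtime overhead are ignored in this sketch).
PARAMS = 6.7e9          # ~Llama 2 7B (public figure, not a vendor number)
DRAM_GB = 32

for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    verdict = "fits" if gb <= DRAM_GB else "does not fit"
    print(f"{name}: {gb:.2f} GB -> {verdict} in {DRAM_GB} GB")
```

Even at FP16 the weights of a 7B-class model fit comfortably in 32 GB, which is why the DRAM capacity matters mainly for larger models and longer contexts.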
The Jotunn8 AI Accelerator represents a pioneering approach in AI inference chip technology, designed to cater to the demanding needs of contemporary data centers. Its architecture is optimized for high-speed deployment of AI models, combining rapid data processing capabilities with cost-effectiveness and energy efficiency. By integrating features such as ultra-low latency and substantial throughput capacity, it supports real-time applications like chatbots and fraud detection that require immediate data processing and agile responses. The chip's impressive performance per watt metric ensures a lower operational cost, making it a viable option for scalable AI operations that demand both efficiency and sustainability. By reducing power consumption, Jotunn8 not only minimizes expenditure but also contributes to a reduced carbon footprint, aligning with the global move towards greener technology solutions. These attributes make Jotunn8 highly suitable for applications where energy considerations and environmental impact are paramount. Additionally, Jotunn8 offers flexible memory performance, allowing more complex AI models to be deployed without compromising speed or efficiency. The design emphasizes robustness in handling large-scale AI services, catering to the new challenges posed by expanding data needs and varied application environments. Jotunn8 is not simply about enhancing inference speed; it proposes a new baseline for scalable AI operations, making it a foundational element for future-proof AI infrastructure.
RaiderChip's GenAI v1 is a pioneering hardware-based generative AI accelerator, designed to perform local inference at the Edge. This technology integrates optimally with on-premises servers and embedded devices, offering substantial benefits in privacy, performance, and energy efficiency over traditional hybrid AI solutions. The design of the GenAI v1 NPU streamlines the process of executing large language models by embedding them directly onto the hardware, eliminating the need for external components like CPUs or internet connections. With its ability to support complex models such as the Llama 3.2 with 4-bit quantization on LPDDR4 memory, the GenAI v1 achieves unprecedented efficiency in AI token processing, coupled with energy savings and reduced latency. What sets GenAI v1 apart is its scalability and cost-effectiveness, significantly outperforming competitive solutions such as Intel Gaudi 2, Nvidia's cloud GPUs, and Google's cloud TPUs in terms of memory efficiency. This solution maximizes the number of tokens generated per unit of memory bandwidth, thus addressing one of the primary limitations in generative AI workflow. Furthermore, the adept memory usage of GenAI v1 reduces the dependency on costly memory types like HBM, opening the door to more affordable alternatives without diminishing processing capabilities. With a target-agnostic approach, RaiderChip ensures the GenAI v1 can be adapted to various FPGAs and ASICs, offering configuration flexibility that allows users to balance performance with hardware costs. Its compatibility with a wide range of transformers-based models, including proprietary modifications, ensures GenAI v1's robust placement across sectors requiring high-speed processing, like finance, medical diagnostics, and autonomous systems. RaiderChip's innovation with GenAI v1 focuses on supporting both vanilla and quantized AI models, ensuring high computation speeds necessary for real-time applications without compromising accuracy. 
This capability underpins their strategic vision of enabling versatile and sustainable AI solutions across industries. By prioritizing integration ease and operational independence, RaiderChip provides a tangible edge in applying generative AI effectively and widely.
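The emphasis on tokens generated per unit of memory bandwidth reflects a general property of LLM decoding: producing one token requires streaming roughly the entire weight set from memory, so memory bandwidth, not raw compute, usually caps the token rate. The numbers below are illustrative assumptions, not RaiderChip measurements.

```python
# Memory-bandwidth ceiling on LLM decode throughput: each generated token
# must stream roughly the whole weight set from memory, so
#   tokens/s  <=  bandwidth / model_bytes   (illustrative upper bound).
# Figures below are assumptions, not vendor measurements.
def max_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
    """Bandwidth-bound upper limit on single-stream decode rate."""
    return bandwidth_gbs / model_gb

model_gb = 3.4      # e.g. a ~7B model at 4-bit quantization
lpddr4_gbs = 25.6   # a typical LPDDR4(X) channel configuration

print(f"upper bound ~ {max_tokens_per_sec(model_gb, lpddr4_gbs):.1f} tokens/s")
```

This is why shrinking the model through quantization raises the achievable token rate on the same memory system, which is the trade the GenAI v1 design targets.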
EXOSTIV is a versatile tool providing extensive capture capabilities for monitoring FPGA internal signals. It is designed to visualize FPGA operation in real time, catching bugs before they reach production and thereby lowering engineering costs. The tool adapts to different prototyping boards and supports a variety of FPGA configurations. A hallmark of EXOSTIV's functionality is its ability to perform at-speed analysis in complex FPGA designs. It features robust probes like the EP16000, which connects to FPGA chip transceivers, supporting significant data rates per transceiver. This setup ensures that engineers can conduct real-world testing and accurate data capture, overcoming the hindrances often encountered with simulation-only methods. The tool boasts a user-friendly interface centered around its Core Inserter and Probe Client software, allowing for efficient IP generation and integration into the target design. By providing comprehensive connectivity options via QSFP28 and supporting multiple platforms, EXOSTIV remains an essential asset for engineers aiming to enhance their FPGA design and validation processes.
The NuLink Die-to-Die PHY for Standard Packaging represents Eliyan's cornerstone technology, engineered to harness the power of standard packaging for die-to-die interconnects. This technology circumvents the limitations of advanced packaging by providing superior performance and power efficiencies traditionally associated only with high-end solutions. Designed to support multiple standards, such as UCIe and BoW, the NuLink D2D PHY is an ideal solution for applications requiring high bandwidth and low latency without the cost and complexity of silicon interposers or silicon bridges. In practical terms, the NuLink D2D PHY enables chiplets to achieve unprecedented bandwidth and power efficiency, allowing for increased flexibility in chiplet configurations. It supports a diverse range of substrates, providing advantages in thermal management, production cycle, and cost-effectiveness. The technology's ability to split a Network on Chip (NoC) across multiple chiplets, while maintaining performance integrity, makes it invaluable in ASIC designs. Eliyan's NuLink D2D PHY is particularly beneficial for systems requiring physical separation between high-performance ASICs and heat-sensitive components. By delivering interposer-like bandwidth and power in standard organic or laminate packages, this product ensures optimal system performance across varied applications, including those in AI, data processing, and high-speed computing.
The A25 processor model is a versatile CPU suitable for a variety of embedded applications. With its 5-stage pipeline and 32/64-bit architecture, it delivers high performance even with a low gate count, which translates to efficiency in power-sensitive environments. The A25 is equipped with Andes Custom Extensions that enable tailored instruction sets for specific application accelerations. Supporting robust high-frequency operations, this model shines in its ability to manage data prefetching and cache coherence in multicore setups, making it adept at handling complex processing tasks within constrained spaces.
The NMP-750 is a high-performance accelerator designed for edge computing, particularly suited for automotive, AR/VR, and telecommunications sectors. It boasts an impressive capacity of up to 16 TOPS and 16 MB local memory, powered by a RISC-V or Arm Cortex-R/A 32-bit CPU. The three AXI4 interfaces ensure seamless data transfer and processing. This advanced accelerator supports multifaceted applications such as mobility control, building automation, and multi-camera processing. It's designed to cope with the rigorous demands of modern digital and autonomous systems, offering substantial processing power and efficiency for intensive computational tasks. The NMP-750's ability to integrate into smart systems and manage spectral efficiency makes it crucial for communications and smart infrastructure management. It helps streamline operations, maintain effective energy management, and facilitate sophisticated AI-driven automation, ensuring that even the most complex data flows are handled efficiently.
The aiWare hardware neural processing unit (NPU) stands out as a state-of-the-art solution for automotive AI applications, bringing unmatched efficiency and performance. Designed specifically for inference tasks associated with automated driving systems, aiWare supports a wide array of AI workloads including CNNs, LSTMs, and RNNs, ensuring optimal operation across numerous applications.

aiWare is engineered to achieve industry-leading efficiency rates, boasting up to 98% efficiency on automotive neural networks. It operates across various performance requirements, from cost-sensitive L2 regulatory applications to advanced multi-sensor L3+ systems. The hardware platform is production-proven, already implemented in several products like Nextchip's APACHE series, and enjoys strong industry partnerships.

A key feature of aiWare is its scalability, capable of delivering up to 1024 TOPS with its multi-core architecture, and maintaining high efficiency in diverse AI tasks. The design allows for straightforward integration, facilitating early-stage performance evaluations and certifications with its deterministic operations and minimal host CPU intervention.

A dedicated SDK, aiWare Studio, furthers the potential of the NPU by providing a suite of tools focused on neural network optimization, supporting developers in tuning their AI models with fine precision. Optimized for automotive-grade applications, aiWare's technology ensures seamless integration into systems requiring AEC-Q100 Grade 2 compliance, significantly enhancing the capabilities of automated driving applications from L2 through L4.
The Hanguang 800 AI Accelerator by T-Head is designed to meet the needs of intensive machine learning workloads. Boasting superior performance, this AI accelerator leverages cutting-edge algorithms to enhance data processing capabilities, offering rapid speeds for AI tasks. It is particularly suited for deep learning applications that require high throughput and complex computation. Fitted with a highly efficient architecture, the Hanguang 800 speeds up machine learning model training and inference, enabling quicker deployments of AI solutions across industries. Its advanced design ensures compatibility with a wide range of machine learning frameworks, allowing for flexibility in AI application development and deployment. Energy efficiency is a key attribute of the Hanguang 800, incorporating modern power management features that reduce consumption without impacting performance. This makes it not only a high-performance option but also an environmentally friendly choice for businesses seeking to minimize their carbon footprint while optimizing AI processes.
The SiFive Intelligence X280 processor targets applications in machine learning and artificial intelligence, offering a high-performance, scalable architecture for emerging data workloads. As part of the Intelligence family, the X280 prioritizes a software-first methodology in processor design, addressing future ML and AI deployment needs, especially at the edge. This makes it particularly useful for scenarios requiring high computational power close to the data source. Central to its capabilities are scalable vector and matrix compute engines that can adapt to evolving workloads, thus future-proofing investments in AI infrastructure. With high-bandwidth bus interfaces and support for custom engine control, the X280 ensures seamless integration with varied system architectures, enhancing operational efficiency and throughput. By focusing on versatility and scalability, the X280 allows developers to deploy high-performance solutions without the typical constraints of more traditional platforms. It supports wide-ranging AI applications, from edge computing in IoT to advanced machine learning tasks, underpinning its role in modern and future-ready computing solutions.
The GenAI v1-Q from RaiderChip brings forth a specialized focus on quantized AI operations, reducing memory requirements significantly while maintaining impressive precision and speed. This innovative accelerator is engineered to execute large language models in real-time, utilizing advanced quantization techniques such as Q4_K and Q5_K, thereby enhancing AI inference efficiency especially in memory-constrained environments. By offering a 276% boost in processing speed alongside a 75% reduction in memory footprint, GenAI v1-Q empowers developers to integrate advanced AI capabilities into smaller, less powerful devices without sacrificing operational quality. This makes it particularly advantageous for applications demanding swift response times and low latency, including real-time translation, autonomous navigation, and responsive customer interactions. The GenAI v1-Q diverges from conventional AI solutions by functioning independently, with no dependence on external networks or cloud services. Its design harmonizes superior computational performance with scalability, allowing seamless adaptation across varied hardware platforms including FPGAs and ASIC implementations. This flexibility is crucial for tailoring performance parameters like model scale, inference speed, and power consumption to meet exacting user specifications effectively. RaiderChip's GenAI v1-Q addresses crucial AI industry needs with its ability to manage multiple transformer-based models and confidential data securely on-premises. This opens doors for its application in sensitive areas such as defense, healthcare, and financial services, where confidentiality and rapid processing are paramount. With GenAI v1-Q, RaiderChip underscores its commitment to advancing AI solutions that are both environmentally sustainable and economically viable.
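Q4_K and Q5_K are block-quantization formats from the llama.cpp family. As an illustration of the general idea only (not the exact Q4_K layout, which additionally stores per-block minima and packs codes into bits), a minimal per-block scaled 4-bit quantizer looks like this:

```python
# Minimal per-block 4-bit quantization sketch (illustrative; NOT the exact
# Q4_K layout, which also stores per-block minima and bit-packs the codes).
def quantize_block(weights):
    """Map a block of floats to 4-bit codes in [0, 15] plus scale/offset."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0          # avoid /0 for a constant block
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize_block(codes, scale, lo):
    return [c * scale + lo for c in codes]

block = [0.12, -0.43, 0.88, 0.05, -0.91, 0.33]
codes, scale, lo = quantize_block(block)
restored = dequantize_block(codes, scale, lo)
max_err = max(abs(a - b) for a, b in zip(block, restored))
assert max_err <= scale / 2                # rounding error <= half a step
```

Each weight now costs 4 bits plus a small amortized per-block overhead, which is where the large memory-footprint reductions quoted above come from.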
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, these cores uniformly support the RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The NMP-350 is a cutting-edge endpoint accelerator designed to optimize power usage and reduce costs. It is ideal for markets like automotive, AIoT/sensors, and smart appliances. Its applications span from driver authentication and predictive maintenance to health monitoring. With a capacity of up to 1 TOPS and 1 MB of local memory, it incorporates a RISC-V/Arm Cortex-M 32-bit CPU and supports three AXI4 interfaces. This makes the NMP-350 a versatile component for various industrial applications, ensuring efficient performance and integration. Developed as a low-power solution, the NMP-350 is pivotal for applications requiring efficient processing power without inflating energy consumption. It is crucial for mobile and battery-operated devices, where every watt conserved adds to the operational longevity of the product. This product aligns with modern demands for eco-friendly and cost-effective technologies, supporting enhanced performance in compact electronic devices. Its adaptability across different applications, coupled with its cost-efficiency, makes it an indispensable tool for developers working on next-gen AI solutions. The NMP-350 is instrumental for developers looking to seamlessly incorporate AI capabilities into their designs without compromising on economy or efficiency.
The SiFive Essential family of processors is renowned for its flexibility and wide applicability across embedded systems. These CPU cores are designed to meet specific market needs with pre-defined, silicon-proven configurations or through use of SiFive Core Designer for custom processor builds. Serving in a range of 32-bit to 64-bit options, the Essential processors can scale from microcontrollers to robust dual-issue CPUs. Widely adopted in the embedded market, the Essential series cores stand out for their scalable performance, adapting to diverse application requirements while maintaining power and area efficiency. They cater to billions of units worldwide, indicating their trusted performance and integration across various industries. The SiFive Essential processors offer an optimal balance of power, area, and cost, making them suitable for a wide array of devices, from IoT and consumer electronics to industrial applications. They provide a solid foundation for products that require reliable performance at a competitive price.
The Time-Triggered Protocol (TTP) stands out as a robust framework for ensuring synchronous communication in embedded control systems. Developed to meet stringent aerospace industry criteria, TTP offers a high degree of reliability with its fault-tolerant configuration, integral to maintaining synchrony across various systems. This technology excels in environments where timing precision and data integrity are critical, facilitating accurate information exchange across diverse subsystems. TTTech’s TTP implementation adheres to the SAE AS6003 standard, making it a trusted component among industry leaders. As part of its wide-ranging applications, this protocol enhances system communication within commercial avionic solutions, providing dependable real-time data handling that ensures system stability. Beyond aviation, TTP's applications can also extend into the energy sector, demonstrating its versatility and robustness. Characterized by its deterministic nature, TTP provides a framework where every operation is scheduled, leading to predictable data flow without unscheduled interruptions. Its suitability for field-programmable gate arrays (FPGAs) allows for easy adaptation into existing infrastructures, making it a versatile tool for companies aiming to upgrade their communication systems without a complete overhaul. For engineers and developers, TTP provides a dependable foundation that streamlines the integration process while safeguarding communication integrity.
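TTP's determinism comes from a static, table-driven TDMA schedule: every node transmits only in a slot assigned at design time, so there is no runtime arbitration and no contention jitter. The sketch below is a conceptual illustration of that scheduling idea, not an implementation of the SAE AS6003 protocol; the slot length and node names are made up.

```python
# Illustrative time-triggered (TDMA) dispatch: every transmission slot is
# fixed at design time, so the same instant always maps to the same sender
# on every node. Conceptual sketch only -- not the SAE AS6003 protocol.
SLOT_MS = 10
SCHEDULE = ["node_A", "node_B", "node_C", "node_A"]   # one TDMA round

def slot_owner(t_ms: int) -> str:
    """Which node may transmit at absolute time t_ms (fully deterministic)."""
    slot = (t_ms // SLOT_MS) % len(SCHEDULE)
    return SCHEDULE[slot]

# Every node computes the identical answer from the global time base:
assert slot_owner(0) == "node_A"
assert slot_owner(15) == "node_B"
assert slot_owner(35) == "node_A"   # the round wraps after 40 ms
```

Because the schedule is a pure function of time, fault containment and worst-case latency can be analyzed offline, which is what makes the protocol attractive for certifiable avionics systems.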
RAIV represents Siliconarts' General Purpose-GPU (GPGPU) offering, engineered to accelerate data processing across diverse industries. This versatile GPU IP is essential in sectors engaged in high-performance computing tasks, such as autonomous driving, IoT, and sophisticated data centers. With RAIV, Siliconarts taps into the potential of the fourth industrial revolution, enabling rapid computation and seamless data management. The RAIV architecture is poised to deliver unmatched efficiency in high-demand scenarios, supporting massive parallel processing and intricate calculations. It provides an adaptable framework that caters to the needs of modern computing, ensuring balanced workloads and optimized performance. Whether used for VR/AR applications or supporting the back-end infrastructure of data-intensive operations, RAIV is designed to meet and exceed industry expectations. RAIV’s flexible design can be tailored to enhance a broad spectrum of applications, promising accelerated innovation in sectors dependent on AI and machine learning. This GPGPU IP not only underscores Siliconarts' commitment to technological advancement but also highlights its capability to craft solutions that drive forward computational boundaries.
Syntacore's SCR9 processor core is a state-of-the-art, high-performance design targeted at applications requiring extensive data processing across multiple domains. It features a robust 12-stage dual-issue out-of-order pipeline and is Linux-capable. Additionally, the core supports up to 16 cores, offering superior processing power and versatility. This processor includes advanced features such as a VPU (Vector Processing Unit) and hypervisor support, allowing it to manage complex computational tasks efficiently. The SCR9 is particularly well-suited for deployments in enterprise, AI, and telecommunication sectors, reinforcing its status as a key component in next-generation computing solutions.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The NMP-550 is tailored for enhanced performance efficiency, serving sectors like automotive, mobile, AR/VR, drones, and robotics. It supports applications such as driver monitoring, image/video analytics, and security surveillance. With a capacity of up to 6 TOPS and 6 MB of local memory, this accelerator leverages either a RISC-V or Arm Cortex-M/A 32-bit CPU. Its support for three AXI4 interfaces ensures robust interconnections and data flow. This performance boost makes the NMP-550 exceptionally suited for devices requiring high-frequency AI computations. Typical use cases include industrial surveillance and smart robotics, where precise and fast data analysis is critical. The NMP-550 offers a blend of high computational power and energy efficiency, facilitating complex AI tasks like video super-resolution and fleet management. Its architecture supports modern digital ecosystems, paving the way for new digital experiences through reliable and efficient data processing capabilities. By addressing the needs of modern AI workloads, the NMP-550 stands as a significant upgrade for those needing robust processing power in compact form factors.
The RISC-V CPU IP N Class is part of a comprehensive lineup offered by Nuclei, optimized for microcontroller applications. This 32-bit architecture is ideal for AIoT solutions, allowing seamless integration into innovative low-power and high-efficiency projects. As a highly configurable IP, it supports extensions in security and physical safety measures, catering to applications that demand reliability and adaptability. With a focus on configurability, the N Class can be tailored for specific system requirements by selecting only the necessary features, ensuring optimized performance and resource utilization. Designed with robust and readable Verilog coding, it facilitates effective debugging and performance, power, and area (PPA) optimization. The IP also supports a Trusted Execution Environment (TEE) for enhanced security, catering to a variety of IoT and embedded applications. This class offers efficient scalability, supporting several RISC-V extensions like B, K, P, and V, while also allowing for user-defined instruction expansion. Committed to delivering a highly adaptable processor solution, the RISC-V CPU IP N Class is essential for developers aiming to implement secure and flexible embedded systems.
The Zhenyue 510 SSD Controller is a flagship product in T-Head's lineup of storage solutions, designed to deliver exceptional performance for enterprise applications. It integrates seamlessly with solid-state drives (SSDs), enhancing read and write speeds while maintaining data integrity and reliability. This controller serves as a fundamental component for building robust, enterprise-grade storage systems. Engineered to support high-speed data transfers, the Zhenyue 510 employs cutting-edge technology to minimize latency and maximize throughput. This capability ensures swift access to data, optimizing performance for demanding applications such as cloud storage and big data processing. Its architecture is specifically tailored to leverage the advantages of PCIe 5.0 technology, allowing for robust data channeling and minimized bottleneck effects. Beyond speed and efficiency, the Zhenyue 510 integrates advanced error correction methods to maintain data accuracy over high-volume operations. This reliability is crucial for enterprises that require stable and consistent storage solutions. With such features, the Zhenyue 510 is poised to cater to the needs of modern data infrastructure, offering scalability and advanced functionality.
TT-Ascalon™ is a versatile RISC-V CPU core developed by Tenstorrent, emphasizing the utility of open standards to meet a diverse array of computing needs. Built to be highly configurable, TT-Ascalon™ allows for the inclusion of 2 to 8 cores per cluster, complemented by a customizable L2 cache. This architecture caters to clients seeking a tailored processing solution without the limitations tied to proprietary systems. With support for CHI.E and AXI5-LITE interfaces, TT-Ascalon™ ensures robust connectivity while maintaining system integrity and performance density. Its security capabilities build on standard RISC-V primitives, providing a reliable and trusted environment for operations involving sensitive data. Tenstorrent's engineering prowess, evident in TT-Ascalon™, has been shaped by experienced personnel from renowned tech giants. This IP is meant to align with various performance targets, suited for complex computational tasks that demand flexibility and efficiency in design.
GNSS Sensor Ltd offers the GNSS VHDL Library, a powerful suite designed to support the integration of GNSS capabilities into FPGA and ASIC products. The library encompasses a range of components, including configurable GNSS engines, Viterbi decoders, RF front-end control modules, and a self-test module, providing a comprehensive toolkit for developers. This library is engineered to be highly flexible and adaptable, supporting a wide range of satellite systems such as GPS, GLONASS, and Galileo, across various configurations. Its architecture aims to ensure independence from specific CPU platforms, allowing for easy adoption across different systems. The GNSS VHDL Library is instrumental in developing cost-effective and simplified system-on-chip solutions, with capabilities to support extensive configurations and frequency bandwidths. It facilitates rapid prototyping and efficient verification processes, crucial for deploying reliable GNSS-enabled devices.
Tyr AI Processor Family is engineered to bring unprecedented processing capabilities to Edge AI applications, where real-time, localized data processing is crucial. Unlike traditional cloud-based AI solutions, Edge AI facilitated by Tyr operates directly at the site of data generation, thereby minimizing latency and reducing the need for extensive data transfers to central data centers. This processor family stands out in its ability to empower devices to deliver instant insights, which is critical in time-sensitive operations like autonomous driving or industrial automation. The innovative design of the Tyr family ensures enhanced privacy and compliance, as data processing stays on the device, mitigating the risks associated with data exposure. By doing so, it supports stringent requirements for privacy while also reducing bandwidth utilization. This makes it particularly advantageous in settings like healthcare or environments with limited connectivity, where maintaining data integrity and efficiency is crucial. Designed for flexibility and sustainability, the Tyr AI processors are adept at balancing computing power with energy consumption, thus enabling the integration of multi-modal inputs and outputs efficiently. Their performance nears data center levels, yet they are built to consume significantly less energy, making them a cost-effective solution for implementing AI capabilities across various edge computing environments.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
Bluespec's Portable RISC-V Cores are designed to bring flexibility and extended functionality to FPGA platforms such as Achronix, Xilinx, Lattice, and Microsemi. They offer support for operating systems like Linux and FreeRTOS, making them versatile for various applications. These cores are accompanied by standard open-source development tools, which facilitate seamless integration and development processes. By utilizing these tools, developers can modify and enhance the cores to suit their specific needs, ensuring a custom fit for their projects. The portable cores are an excellent choice for developers looking to deploy RISC-V architecture across different FPGA platforms without being tied down to proprietary solutions. With Bluespec's focus on open-source, users can experience freedom in innovation and development without sacrificing performance or compatibility.
The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications. Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments. Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.
The Veyron V1 CPU from Ventana Micro Systems is an industry-leading processor designed to deliver unparalleled performance for data-intensive applications. This RISC-V based CPU is crafted to meet the needs of modern data centers and enterprises, offering a sophisticated balance of power efficiency and computational capabilities. The Veyron V1 is engineered to handle complex workloads with its advanced architecture that competes favorably against current industry standards. Incorporating the latest innovations in chiplet technology, the Veyron V1 boasts exceptional scalability, allowing it to seamlessly integrate into diverse computing environments. Whether employed in a high-performance cloud server or an enterprise data center, this CPU is optimized to provide a consistent, robust performance across various applications. Its architecture supports scalable, modular designs, making it suitable for custom SoC implementations, thereby enabling faster time-to-market for new products. The Veyron V1’s compatibility with RISC-V open standards ensures versatility and adaptability, providing enterprises the freedom to innovate without the constraints of proprietary technologies. It includes support for essential system IP and interfaces, facilitating easy integration across different technology platforms. With a focus on extensible instruction sets, the Veyron V1 allows customized performance optimizations tailored to specific user needs, making it an essential tool in the arsenal of modern computing solutions.
Dyumnin's RISCV SoC is a versatile platform centered around a 64-bit quad-core server-class RISC-V CPU, offering extensive subsystems, including AI/ML, automotive, multimedia, memory, cryptographic, and communication systems. The test chip is available for evaluation on FPGA, ensuring adaptability and extensive testing possibilities. The AI/ML subsystem is particularly noteworthy, pairing a custom CPU configuration with a tensor flow unit that significantly accelerates AI operations. This adaptability lends itself to innovations in artificial intelligence, setting it apart in the competitive landscape of processors. Additionally, the automotive subsystem caters robustly to the needs of the automotive sector with CAN, CAN-FD, and SafeSPI IPs, all designed to enhance connectivity within vehicles. Moreover, the multimedia subsystem offers a complete range of IPs supporting HDMI, DisplayPort, MIPI, and more, facilitating rich audio and visual experiences across devices.
RegSpec is a cutting-edge tool that streamlines the generation of control and status register code, catering to the needs of IP designers by overcoming the limitations of traditional CSR generators. It supports complex synchronization and hardware interactions, allowing designers to automate intricate processes like pulse generation and serialization. Furthermore, it enhances verification by producing UVM-compatible code. This tool's flexibility shines as it can import and export industry-standard formats such as SystemRDL and IP-XACT, interacting seamlessly with other CSR tools. RegSpec not only generates Verilog RTL and SystemC header files but also provides comprehensive documentation across multiple formats including HTML, PDF, and Word. By transforming complex designs into streamlined processes, RegSpec plays a vital role in elevating design efficiency and precision. For system design, it creates standard C/C++ headers that facilitate firmware access, accompanied by SystemC models for advanced system modeling. Such comprehensive functionality ensures that RegSpec is invaluable for organizations seeking to optimize register specification, documentation, and CSR generation in a streamlined manner.
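The kind of bookkeeping a CSR generator automates can be illustrated with a small field-packing sketch. The register layout below is invented for the example; RegSpec itself consumes SystemRDL or IP-XACT descriptions and emits RTL, UVM code, headers, and documentation from them.

```python
# Hypothetical control register layout: (field name, LSB position, width).
# Real layouts would come from a SystemRDL/IP-XACT source, not hand-written.
CTRL_FIELDS = [
    ("enable",   0, 1),
    ("mode",     1, 2),
    ("prescale", 4, 8),
]

def pack(fields, values):
    """Assemble named field values into a single register word."""
    word = 0
    for name, lsb, width in fields:
        v = values.get(name, 0)
        assert v < (1 << width), f"{name} overflows {width} bits"
        word |= v << lsb
    return word

def unpack(fields, word):
    """Split a register word back into named fields."""
    return {name: (word >> lsb) & ((1 << width) - 1)
            for name, lsb, width in fields}

w = pack(CTRL_FIELDS, {"enable": 1, "mode": 2, "prescale": 25})
assert w == (1 << 0) | (2 << 1) | (25 << 4)
assert unpack(CTRL_FIELDS, w)["prescale"] == 25
```

A generator's value is that the same single source of truth drives the RTL, the firmware headers, and the verification model, so the three can never drift apart.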
aiData serves as a comprehensive automated data pipeline tailored specifically for the development of ADAS and autonomous driving technologies. This solution optimizes various stages of MLOps, from data capturing to curation, significantly reducing the traditional manual workload required for assembling high-quality datasets. By leveraging cutting-edge technologies for data collection and annotation, aiData enhances the reliability and speed of deploying AD models, fostering a more efficient flow of data between developers and data scientists.

One of the standout features of aiData is its versioning system that ensures transparency and traceability throughout the data lifecycle. This system aids in curating datasets tailored for specific use cases via metadata enrichment and SQL querying, supporting seamless data management whether on-premise or in the cloud. Additionally, the aiData Recorder is engineered to produce high-quality datasets by enabling precise sensor calibration and synchronization, crucial for advanced driving applications.

Moreover, the Auto Annotator component of aiData automates the traditionally labor-intensive process of data annotation, utilizing AI algorithms to produce annotations that meet high accuracy standards. This capability, combined with the aiData Metrics tool, allows for comprehensive validation of datasets, ensuring that they correctly reflect real-world conditions. Collectively, aiData empowers automotive developers to refine neural network algorithms and enhance detection software, accelerating the journey from MLOps to production.
The iCan PicoPop® is a miniaturized system on module (SOM) based on the Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC). This advanced module is designed to handle sophisticated signal processing tasks, making it particularly suited for aeronautic embedded systems that require high-performance video processing capabilities. The module leverages the powerful architecture of the Zynq MPSoC, providing a robust platform for developing cutting-edge avionics and defense solutions. With its compact form factor, the iCan PicoPop® SOM offers unparalleled flexibility and performance, allowing it to seamlessly integrate into various system architectures. The high level of integration offered by the Zynq UltraScale+ MPSoC aids in simplifying the design process while reducing system latency and power consumption, providing a highly efficient solution for demanding applications. Additionally, the iCan PicoPop® supports advanced functionalities through its integration of programmable logic, multi-core processing, and high-speed connectivity options, making it ideal for developing next-generation applications in video processing and other complex avionics functions. Its modular design also allows for easy customization, enabling developers to tailor the system to meet specific performance and functionality needs, ensuring optimal adaptability for intricate aerospace environments. Overall, the iCan PicoPop® demonstrates a remarkable blend of high-performance computing capabilities and adaptable configurations, making it a valuable asset in the development of high-tech avionics solutions designed to withstand rigorous operational demands in aviation and defense.
The Maverick-2 Intelligent Compute Accelerator (ICA) is a groundbreaking innovation by Next Silicon Ltd. This architecture introduces a novel software-defined approach that adapts in real time to optimize computational tasks, breaking the traditional constraints of CPUs and GPUs. By dynamically learning and accelerating critical code segments, Maverick-2 delivers enhanced efficiency and performance for high-performance computing (HPC), artificial intelligence (AI), and vector databases. Maverick-2 supports a wide range of common programming languages, including C/C++, Fortran, OpenMP, and Kokkos, facilitating an effortless porting process. This robust toolchain reduces time-intensive application porting, cutting development time significantly while maximizing scientific output and insights. Developers can enjoy seamless integration into their existing workflows without needing new proprietary software stacks. A standout feature of this intelligent architecture is its ability to adjust hardware configurations on the fly, optimizing power efficiency and overall performance. With an emphasis on sustainable innovation, the Maverick-2 offers a performance-per-watt advantage that exceeds traditional GPU and high-end CPU solutions by over fourfold, making it a cost-effective and environmentally friendly choice for modern data centers and research facilities.
The pPLL02F Family is a versatile lineup of all-digital fractional-N PLLs designed for a wide range of clocking tasks at frequencies reaching up to 2GHz. With a robust architecture offering low jitter performance of less than 18 picoseconds RMS, these PLLs are compact (occupying less than 0.01 square millimeters) and energy-efficient, consuming under 3.5 milliwatts. Designed to support multi-PLL systems, the pPLL02F Family easily integrates into complex systems as a reliable clock source for digital systems and microprocessors. This family is built upon Perceptia's second-generation digital PLL technology, ensuring consistent performance across multiple processes while maintaining a minimal footprint compared to traditional analog PLLs. One of its standout features is its ability to operate flexibly in either integer-N or fractional-N modes, providing designers with the latitude to choose the optimal input and output frequencies for their particular applications. It also includes integrated power supply regulation for seamless sharing amongst multiple PLL instances. Available across a range of process technologies from leading foundries such as GlobalFoundries and TSMC, the pPLL02F Family is tailored to meet the varied requirements of SoC designs. It comes with comprehensive support, including integration and customization services, ensuring that it can be easily adapted and scaled to meet future technological needs.
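The integer-N and fractional-N modes mentioned above follow the textbook relationship f_out = f_ref * (N + frac / 2^k). The small numeric sketch below illustrates that relationship only; the parameter names and the 24-bit fractional word are assumptions for the example, not Perceptia's actual programming model.

```python
# Generic fractional-N PLL frequency relationship (illustrative; consult the
# vendor datasheet for the real register interface and fractional resolution).

def pll_output_hz(f_ref_hz, n_int, frac=0, frac_bits=24):
    """Integer-N when frac == 0, fractional-N otherwise."""
    return f_ref_hz * (n_int + frac / (1 << frac_bits))

# Integer-N: a 25 MHz reference multiplied by 64 gives 1.6 GHz.
assert pll_output_hz(25e6, 64) == 1.6e9

# Fractional-N lets the output land between integer multiples of the
# reference: frac = 2**23 with 24 fractional bits means N = 64.5.
f = pll_output_hz(25e6, 64, frac=1 << 23)
assert f == 25e6 * 64.5
```

The practical consequence is that a single reference crystal can serve many output frequencies, which is why fractional-N operation matters for multi-PLL SoC clocking.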
The SiFive Performance family of processors is designed to offer top-tier performance and throughput across a range of sizes and power profiles. These cores provide highly efficient RISC-V scalar and vector computing capabilities, tailored for an optimal balance that delivers industry-leading results. With options for high-performance 64-bit out-of-order scalar engines and optional vector compute engines, the Performance series ensures customers get the maximum capabilities in computational power. Incorporating a robust architecture, these processors support extensive hardware capabilities, including full support for the RVA23 profile and an option for vector processing adjustments that maximizes computing efficiency. The SiFive Performance series has cores that cater to various needs, whether for general-purpose computing or applications requiring extensive parallel processing capabilities. SiFive's architecture allows for scalability and customization, bridging the gap between high-demand computational tasks and power efficiency. It is meticulously designed to meet the rigorous demands of modern and future computing applications, ensuring that both enterprise and consumer electronics can leverage the power of RISC-V computing. This makes it an ideal choice for developers seeking to push the boundaries of processing capabilities.
Tensix Neo is an AI-focused semiconductor solution from Tenstorrent that capitalizes on the robustness of RISC-V architecture. This IP is crafted to enhance the efficiency of both AI training and inference processes, making it a vital tool for entities needing scalable AI solutions without hefty power demands. With Tensix Neo, developers can rest assured of the silicon-proven reliability that backs its architecture, facilitating a smooth integration into existing AI platforms. The IP embraces the flexibility and customization needed for advanced AI workloads, optimizing resources and yielding results with high performance per watt. As the demand for adaptable AI solutions grows, Tensix Neo offers a future-proof platform that can accommodate rapid advancements and complex deployments in machine learning applications. By providing developers with tested and verified infrastructure, Tensix Neo stands as a benchmark in AI IP development.
Designed to accelerate the development of AI-driven solutions, the AI Inference Platform by SEMIFIVE offers a powerful infrastructure for deploying artificial intelligence applications quickly and efficiently. This platform encompasses an AI-focused architecture with silicon-proven IPs tailored specifically for machine learning tasks, providing a robust foundation for developers to build upon. The platform is equipped with high-performance processors optimized for AI workloads, including sophisticated neural processing units (NPUs) and memory interfaces that support large datasets and reduce latency in processing. It integrates seamlessly with existing tools and environments, minimizing the need for additional investments in infrastructure. Through strategic partnerships and an extensive library of pre-verified components, this platform reduces the complexity and time associated with AI application development. SEMIFIVE’s approach ensures end-users can focus on innovation rather than the underlying technology challenges, delivering faster time-to-market and enhanced performance for AI applications.
ISPido represents a fully configurable RTL Image Signal Processing Pipeline, adhering to the AMBA AXI4 standards and configured via the AXI4-Lite protocol for seamless integration with systems such as RISC-V. This advanced pipeline supports a variety of image processing functions like defective pixel correction, color filter interpolation using the Malvar-Cutler algorithm, and auto-white balance, among others. Designed to handle resolutions up to 7680x7680, ISPido provides compatibility for both 4K and 8K video systems, with support for 8, 10, or 12-bit depth inputs. Each module within this pipeline can be fine-tuned to fit specific requirements, making it a versatile choice for adapting to various imaging needs. The architecture's compatibility with flexible standards ensures robust performance and adaptability in diverse applications, from consumer electronics to professional-grade imaging solutions. Through its compact design, ISPido optimizes area and energy efficiency, providing high-quality image processing while keeping hardware demands low. This makes it suitable for battery-operated devices where power efficiency is crucial, without sacrificing the processing power needed for high-resolution outputs.
The AON1100 offers a sophisticated AI solution for voice and sensor applications, marked by a remarkable power usage of less than 260μW during processing yet maintaining high levels of accuracy in environments with sub-0dB SNR. It is a leading option for always-on devices, providing effective solutions for contexts requiring constant machine listening ability.

This AI chip excels in processing real-world acoustic and sensor data efficiently, delivering up to 90% accuracy by employing advanced signal processing techniques. The AON1100's low power requirements make it an excellent choice for battery-operated devices, ensuring sustainable functionality through efficient power consumption over extended operational periods.

The scalability of the AON1100 allows it to be adapted for various applications, including smart homes and automotive settings. Its integration within broader AI platform strategies enhances intelligent data collection and contextual understanding capabilities, delivering transformative impacts on device interactivity and user experience.
The Codasip L-Series DSP Core offers specialized features tailored for digital signal processing applications. It is designed to efficiently handle high data throughput and complex algorithms, making it ideal for applications in telecommunications, multimedia processing, and advanced consumer electronics. With its high configurability, the L-Series can be customized to optimize processing power, ensuring that specific application needs are met with precision. One of the key advantages of this core is its ability to be finely tuned to deliver optimal performance for signal processing tasks. This includes configurable instruction sets that align precisely with the unique requirements of DSP applications. The core’s design ensures it can deliver top-tier performance while maintaining energy efficiency, which is critical for devices that operate in power-sensitive environments. The L-Series DSP Core is built on Codasip's proven processor design methodologies, integrating seamlessly into existing systems while providing a platform for developers to expand and innovate. By offering tools for easy customization within defined parameters, Codasip ensures that users can achieve the best possible outcomes for their DSP needs efficiently and swiftly.
ISPido on VIP Board is a customized runtime solution tailored for Lattice Semiconductors’ Video Interface Platform (VIP) board. This setup enables real-time image processing and provides flexibility for both automated configuration and manual control through a menu interface. Users can adjust settings via histogram readings, select gamma tables, and apply convolutional filters to achieve optimal image quality. Equipped with key components like the CrossLink VIP input bridge board and ECP5 VIP Processor with ECP5-85 FPGA, this solution supports dual image sensors to produce a 1920x1080p HDMI output. The platform enables dynamic runtime calibration, providing users with interface options for active parameter adjustments, ensuring that image settings are fine-tuned for various applications. This system is particularly advantageous for developers and engineers looking to integrate sophisticated image processing capabilities into their devices. Its runtime flexibility and comprehensive set of features make it a valuable tool for prototyping and deploying scalable imaging solutions.
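The gamma-table selection mentioned above can be illustrated with a generic 8-bit gamma lookup table; the shape and values below are the standard power-law construction, not the VIP board's actual tables.

```python
# Generic 8-bit gamma LUT construction (illustrative only). For gamma > 1
# the curve lifts mid-tones, which is the usual display-encoding direction.

def gamma_lut(gamma, bits=8):
    """Build a 2**bits entry lookup table for the power-law curve."""
    size = 1 << bits
    return [round(((i / (size - 1)) ** (1.0 / gamma)) * (size - 1))
            for i in range(size)]

lut = gamma_lut(2.2)          # the common display gamma
assert len(lut) == 256
assert lut[0] == 0 and lut[255] == 255   # black and white are preserved
assert lut[64] > 64                      # mid-tones are brightened
```

Precomputing the curve as a LUT is exactly why runtime "gamma table selection" is cheap: switching tables is a memory swap, with no per-pixel arithmetic beyond the lookup.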
The NoISA Processor is an innovative microprocessor designed by Hotwright Inc. to overcome the limitations of traditional instruction set architectures. Unlike standard processors, which rely on a fixed ALU, register file, and hardware controller, the NoISA Processor utilizes the Hotstate machine, an advanced microcoded algorithmic state machine. This technology allows for runtime reprogramming and flexibility, making it highly suitable for various applications where space, power efficiency, and adaptability are paramount. With the NoISA Processor, users can achieve significant performance improvements without the limitations imposed by fixed instruction sets. It's particularly advantageous in IoT and edge computing scenarios, offering enhanced efficiency compared to conventional softcore CPUs while maintaining lower energy consumption. Moreover, this processor is ideal for rapidly creating small, programmable state machines and systolic arrays. Its unique architecture permits behavior modification through microcode, rather than altering the FPGA, thus offering unprecedented flexibility and power in adapting to specific technological needs.
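The microcoded-state-machine idea can be sketched in miniature: behavior lives in a table, so rewriting the table reprograms the machine without touching the surrounding hardware. The table format below is invented for illustration and is not Hotwright's Hotstate encoding.

```python
# Toy microcoded state machine (illustrative only). Each microcode entry
# maps (current state, input) -> (next state, output). Swapping MICROCODE
# for a different table changes the machine's behavior at runtime.

MICROCODE = {
    "idle": {0: ("idle", "wait"),  1: ("run", "start")},
    "run":  {0: ("run",  "work"),  1: ("idle", "stop")},
}

def step(state, inp, microcode=MICROCODE):
    """One clock tick: look up the next state and output in the table."""
    return microcode[state][inp]

state, trace = "idle", []
for inp in [0, 1, 0, 1]:
    state, out = step(state, inp)
    trace.append(out)

assert trace == ["wait", "start", "work", "stop"]
assert state == "idle"
```

The contrast with a fixed-ISA softcore is that nothing here decodes instructions; the "program" is the transition table itself, which is the property the description above attributes to microcode-driven reconfiguration.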