In the ever-evolving landscape of semiconductor technologies, processor core independent IPs play a crucial role in designing flexible and scalable digital systems. Because these IPs deliver their functionality independently of any specific processor core, they are invaluable in applications where flexibility and reusability are paramount.
Processor core independent semiconductor IPs are tailored to function across different processor architectures, avoiding the constraints tied to any one specific core. This characteristic is particularly beneficial in embedded systems, where designers aim to balance cost, performance, and power efficiency while ensuring seamless integration. These IPs provide solutions that accommodate diverse processing requirements, from small-scale embedded controllers to large-scale data centers, making them essential components in the toolkit of semiconductor design engineers.
Products in this category often include memory controllers, I/O interfaces, and various digital signal processing blocks, each designed to operate autonomously from the central processor's architecture. This independence allows manufacturers to deploy these IPs across a broad array of devices, from consumer electronics to automotive systems, without extensive redesigns for different processor families. Moreover, this flexibility significantly accelerates time-to-market for many devices, offering a competitive edge in fast-paced industry environments.
Furthermore, the adoption of processor core independent IPs supports the development of customized application-specific integrated circuits (ASICs) and systems-on-chip (SoCs) that require unique configurations, without the overhead of processor-specific dependencies. By embracing these advanced semiconductor IPs, businesses can ensure that their devices are future-proof, scalable, and capable of integrating new functionalities as technologies advance, without being hindered by processor-specific limitations. This adaptability makes processor core independent IPs a vital component of modern semiconductor design and innovation.
The Metis AIPU PCIe AI Accelerator Card by Axelera AI is designed for developers seeking top-tier performance in vision applications. Powered by a single Metis AIPU, this PCIe card delivers up to 214 TOPS, handling demanding AI tasks with ease. It is well-suited for high-performance AI inference, featuring two configurations: 4GB and 16GB memory options. The card benefits from the Voyager SDK, which enhances the developer experience by simplifying the deployment of applications and extending the card's capabilities. This accelerator PCIe card is engineered to run multiple AI models and support numerous parallel neural networks, enabling significant processing power for advanced AI applications. The Metis PCIe card performs at an industry-leading level, achieving up to 3,200 frames per second for ResNet-50 tasks and offering exceptional scalability. This makes it an excellent choice for applications demanding high throughput and low latency, particularly in computer vision fields.
Panmnesia's CXL 3.1 Switch is an integral component designed to facilitate high-speed, low-latency data transfers across multiple connected devices. It is architected to manage resource allocation seamlessly in AI and high-performance computing environments, supporting broad bandwidth, robust data throughput, and efficient power consumption, creating a cohesive foundation for scalable AI infrastructures. Its integration with advanced protocols ensures high system compatibility.
Universal Chiplet Interconnect Express (UCIe) is a cutting-edge technology designed to enhance chiplet-based system integration. This innovative interconnect solution supports seamless data exchange across heterogeneous chiplets, promoting a highly efficient and scalable architecture. UCIe is expected to improve system efficiency by enabling a smoother and more integrated communication framework. By employing this technology, developers can leverage its power efficiency and its adaptability to mainstream technology nodes, making it possible to construct complex systems with reduced energy consumption while ensuring performance integrity. UCIe plays a pivotal role in accelerating the transition to the chiplet paradigm, ensuring systems are not only up to current standards but also adaptable for future advancements. Its robust framework facilitates improved interconnect strategies, crucial for next-generation semiconductor products.
The Yitian 710 Processor is T-Head's flagship ARM-based server chip that represents the pinnacle of their technological expertise. Designed with a pioneering architecture, it is crafted for high efficiency and superior performance metrics. This processor is built using a 2.5D packaging method, integrating two dies and boasting a substantial 60 billion transistors. The core of the Yitian 710 consists of 128 high-performance Armv9 CPU cores, each accompanied by advanced memory configurations that streamline instruction and data caching processes. Each core integrates 64KB of L1 instruction cache, 64KB of L1 data cache, and 1MB of L2 cache, supplemented by a robust 128MB system-level cache on the chip. To support expansive data operations, the processor is equipped with an 8-channel DDR5 memory system, enabling peak memory bandwidth of up to 281GB/s. Its I/O subsystem is formidable, featuring 96 PCIe 5.0 lanes capable of achieving dual-direction bandwidth up to 768GB/s. With its multi-layered design, the Yitian 710 Processor is positioned as a leading solution for cloud services, data analytics, and AI operations.
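The quoted bandwidth figures can be sanity-checked with simple arithmetic. A minimal sketch, assuming DDR5-4400 DIMMs (the transfer rate is not stated in the text) and the commonly quoted raw 4 GB/s per PCIe 5.0 lane per direction:

```python
# Peak memory bandwidth = channels * transfer rate * bytes per transfer.
# DDR5-4400 is an assumption; the text quotes only the ~281 GB/s result.
channels = 8
transfers_per_sec = 4400e6      # 4400 MT/s
bytes_per_transfer = 8          # 64-bit data path per channel
mem_bw = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"DDR5 peak bandwidth: {mem_bw:.1f} GB/s")  # ~281.6 GB/s

# PCIe 5.0: 32 GT/s per lane, ~4 GB/s raw per lane per direction
# (ignoring 128b/130b encoding overhead).
lanes = 96
gb_per_lane_per_dir = 4
pcie_bidir = lanes * gb_per_lane_per_dir * 2
print(f"PCIe 5.0 dual-direction bandwidth: {pcie_bidir} GB/s")  # 768 GB/s
```

Both results land on the figures quoted for the chip, which suggests the 768GB/s number is the raw aggregate of both directions across all 96 lanes.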
xcore.ai stands as a cutting-edge processor that brings sophisticated intelligence, connectivity, and computation capabilities to a broad range of smart products. Designed to deliver optimal performance for applications in consumer electronics, industrial control, and automotive markets, it efficiently handles complex processing tasks with low power consumption and rapid execution speeds. This processor facilitates seamless integration of AI capabilities, enhancing voice processing, audio interfacing, and real-time analytics functions. It supports various interfacing options to accommodate different peripheral and sensor connections, thus providing flexibility in design and deployment across multiple platforms. Moreover, the xcore.ai ensures robust performance in environments requiring precise control and high data throughput. Its compatibility with a wide array of software tools and libraries enables developers to swiftly create and iterate applications, reducing the time-to-market and optimizing the design workflows.
Veyron V2 represents the next generation of Ventana's high-performance RISC-V CPU. It significantly enhances compute capabilities over its predecessor, designed specifically for data center, automotive, and edge deployment scenarios. This CPU maintains compatibility with the RVA23 RISC-V profile, making it a powerful alternative to the latest ARM and x86 counterparts within similar domains. Focusing on seamless integration, the Veyron V2 offers clean, portable RTL implementations with a standardized interface, optimizing its use for custom SoCs with high core counts. With a robust 512-bit vector unit, it efficiently supports workloads requiring both INT8 and BF16 precision, making it highly suitable for AI and ML applications. The Veyron V2 is adept at handling cloud-native and virtualized workloads due to its full architectural virtualization support. The architectural advancements offer significant performance-per-watt improvements, and advanced cache and virtualization features ensure a secure and reliable computing environment. The Veyron V2 is available as both a standalone IP and a complete hardware platform, facilitating diverse integration pathways for customers aiming to harness Ventana's innovative RISC-V solutions.
Chimera GPNPU provides a groundbreaking architecture, melding the efficiency of neural processing units with the flexibility and programmability of processors. It supports a full range of AI and machine learning workloads autonomously, eliminating the need for supplementary CPUs or GPUs. The processor is future-ready, equipped to handle new and emerging AI models with ease, thanks to its C++ programmability. What makes Chimera stand out is its ability to manage a diverse array of workloads within a singular processor framework that combines matrix, vector, and scalar operations. This harmonization ensures maximum performance for applications across various market sectors, such as automotive, mobile devices, and network edge systems. These capabilities are designed to streamline the AI development process and facilitate high-performance inference tasks, crucial for modern gadget ecosystems. The architecture is fully synthesizable, allowing it to be implemented in any process technology, from current to advanced nodes, adjusting to desired performance targets. The adoption of a hybrid von Neumann and 2D SIMD matrix design supports a broad suite of DSP operations, providing a comprehensive toolkit for complex graph and AI-related processing.
The Jotunn 8 is heralded as the world's most efficient AI inference chip, designed to maximize AI model deployment with lightning-fast speeds and scalability. This powerhouse is crafted to efficiently operate within modern data centers, balancing critical factors such as high throughput, low latency, and optimization of power use, all while maintaining a sustainable infrastructure. With the Jotunn 8, AI investments reach their full potential through high-performance inference solutions that significantly reduce operational costs while committing to environmental sustainability. Its ultra-low latency feature is crucial for real-time applications such as chatbots and fraud detection systems. Not only does it deliver high throughput needed for demanding services like recommendation engines, but it also proves cost-efficient, aiming to lower the cost per inference crucial for businesses operating at a large scale. Additionally, the Jotunn 8 boasts performance per watt efficiency, a major factor considering that power is a significant operational expense and a driver of the carbon footprint. By implementing the Jotunn 8, businesses can ensure their AI models deliver maximum impact while staying competitive in the growing real-time AI services market. This chip lays down a new foundation for scalable AI, enabling organizations to optimize their infrastructures without compromising on performance.
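The cost-per-inference claim above reduces to a simple relation between power, electricity price, and throughput. A sketch with entirely hypothetical numbers (the text quotes no figures for the Jotunn 8):

```python
# All figures below are hypothetical illustrations, not Jotunn 8 specs.
power_kw = 0.5            # accelerator power draw, kW
price_per_kwh = 0.12      # electricity price, USD/kWh
throughput_ips = 50_000   # sustained inferences per second

# Energy cost per inference = (power * price) / throughput.
inferences_per_hour = throughput_ips * 3600
cost_per_hour = power_kw * price_per_kwh
cost_per_million = cost_per_hour / inferences_per_hour * 1e6
print(f"Energy cost per million inferences: ${cost_per_million:.4f}")
```

The relation makes the performance-per-watt argument concrete: halving power at constant throughput halves the energy cost per inference.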
Time-Triggered Ethernet (TTEthernet) is an advanced form of Ethernet designed for applications that require high levels of determinism and redundancy, particularly evident in aerospace and space projects. TTEthernet offers an integrated solution for complex systems that mandate reliable time-sensitive operations, such as those required in human spaceflight where triple redundancy is crucial for mission-critical environments. This technology supports dual fault-tolerance by using triple-redundant networks, ensuring that the system continues to function if failures occur. It is exceptionally suited for systems with rigorous safety-critical requirements and has been employed in ventures like NASA's Orion spacecraft thanks to its robust standard compliance and support for fault-tolerant synchronization protocols. Adhering to the ECSS engineering standards, TTEthernet facilitates seamless integration and enables bandwidth efficiencies that are significant for both onboard and ground-based operations. TTTech's TTEthernet solutions have been further complemented by their proprietary scheduling tools and chip IP offerings, which continue to set industry benchmarks in network precision and dependability.
The AX45MP is engineered as a high-performance processor that supports multicore architecture and advanced data processing capabilities, particularly suitable for applications requiring extensive computational efficiency. Part of the AndesCore processor line, it capitalizes on a multicore symmetric multiprocessing framework, integrating up to eight cores with robust L2 cache management. The AX45MP incorporates advanced features such as vector processing capabilities and support for MemBoost technology to maximize data throughput. It caters to high-demand applications including machine learning, digital signal processing, and complex algorithmic computations, ensuring data coherence and efficient power usage.
The SAKURA-II AI Accelerator from EdgeCortix is a sophisticated solution designed to propel generative AI to new frontiers with impressive energy efficiency. This advanced accelerator provides unparalleled performance with high flexibility for a wide variety of applications, leveraging EdgeCortix's dedicated Dynamic Neural Accelerator architecture. SAKURA-II is optimized for real-time, low-latency AI inference on the edge, tackling demanding generative AI tasks efficiently in constrained environments. The accelerator boasts up to 60 TOPS (tera operations per second) of INT8 performance, allowing it to effectively run large models such as Llama 2 and Stable Diffusion. It supports applications across vision, language, audio, and beyond, by utilizing robust DRAM capabilities and enhanced data throughput. This allows it to outperform other solutions while maintaining a low power consumption profile, typically around 8 watts. Designed for integration into small silicon spaces, SAKURA-II caters to the needs of highly efficient AI models, providing dynamic capabilities to meet the stringent requirements of next-gen applications. Thus, the SAKURA-II AI Accelerator stands out as a top choice for developers seeking seamless deployment of cutting-edge AI applications at the edge, underscoring EdgeCortix's leadership in energy-efficient AI processing.
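The efficiency claim reduces to a simple ratio of the two figures quoted above (60 TOPS INT8 at roughly 8 watts):

```python
peak_tops = 60       # INT8 peak performance from the text
typical_watts = 8    # typical power consumption from the text

# Efficiency = peak throughput / power draw.
tops_per_watt = peak_tops / typical_watts
print(f"Efficiency: {tops_per_watt:.1f} TOPS/W")  # 7.5 TOPS/W
```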
aiWare is engineered as a high-performance neural processing unit tailored for automotive AI applications, delivering exceptional power efficiency and computational capability across a broad spectrum of neural network tasks. Its design centers around achieving the utmost efficiency in AI inference, providing flexibility and scalability for various levels of autonomous driving, from basic L2 assistance systems to complex L4 self-driving operations. The aiWare architecture exemplifies leading-edge NPU efficiencies, reaching up to 98% across diverse neural network workloads like CNNs and RNNs, making it a premier choice for AI tasks in the automotive sector. It boasts an industry-leading 1024 TOPS capability, making it suitable for the multi-sensor and multi-camera setups required by advanced autonomous vehicle systems. The NPU's hardware determinism aids in meeting ISO 26262 ASIL B requirements, ensuring it satisfies the rigorous safety specifications essential in automotive applications. Incorporating an easy-to-integrate RTL design and a comprehensive SDK, aiWare simplifies system integration and accelerates development timelines for automotive manufacturers. Its highly optimized dataflow and minimal external memory traffic significantly enhance system power economy, providing crucial benefits in reducing operational costs for deployed automotive AI solutions. In short, aiWare gives OEMs the capabilities needed to handle modern automotive workloads while keeping system constraints minimal.
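The quoted NPU efficiency converts peak capability into sustained throughput. A sketch using the two figures above (1024 TOPS peak, up to 98% efficiency on CNN/RNN workloads):

```python
peak_tops = 1024     # peak capability from the text
utilization = 0.98   # upper-bound efficiency quoted for CNN/RNN workloads

# Sustained throughput = peak * achieved utilization.
effective_tops = peak_tops * utilization
print(f"Effective throughput: {effective_tops:.0f} TOPS")  # ~1004 TOPS
```

The same arithmetic explains why a high-utilization NPU can outperform a nominally larger accelerator that sustains only a fraction of its peak.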
The RISC-V Core-hub Generators from InCore are tailored for developers who need advanced control over their core architectures. This innovative tool enables users to configure core-hubs precisely at the instruction set and microarchitecture levels, allowing for optimized design and functionality. The platform supports diverse industry applications by facilitating the seamless creation of scalable and customizable RISC-V cores. With the RISC-V Core-hub Generators, InCore empowers users to craft their own processor solutions from the ground up. This flexibility is pivotal for businesses looking to capitalize on the burgeoning RISC-V ecosystem, providing a pathway to innovation with reduced risk and cost. Incorporating feedback from leading industry partners, these generators are designed to lower verification costs while accelerating time-to-market for new designs. Users benefit from InCore's robust support infrastructure and a commitment to simplifying complex chip design processes. This product is particularly beneficial for organizations aiming to integrate RISC-V technology efficiently into their existing systems, ensuring compatibility and enhancing functionality through intelligent automation and state-of-the-art tools.
The ORC3990 is a groundbreaking LEO Satellite Endpoint SoC engineered for use in the Totum DMSS Network, offering exceptional sensor-to-satellite connectivity. This SoC operates within the ISM band and features advanced RF transceiver technology, power amplifiers, ARM CPUs, and embedded memory. It boasts a superior link budget that facilitates indoor signal coverage. Designed with advanced power management capabilities, the ORC3990 supports over a decade of battery life, significantly reducing maintenance requirements. Its industrial temperature range of -40 to +85 degrees Celsius ensures stable performance in various environmental conditions. The compact design of the ORC3990 allows mounting in any orientation, further enhancing its ease of use. The SoC's innovative architecture eliminates the need for additional GNSS chips, achieving precise location fixes within 20 meters. This capability, combined with its global LEO satellite coverage, makes the ORC3990 a highly attractive solution for asset tracking and other IoT applications where traditional terrestrial networks fall short.
AndesCore Processors offer a robust lineup of high-performance CPUs tailored for diverse market segments. Employing the AndeStar V5 instruction set architecture, all of these cores build on RISC-V technology. The processor family is classified into different series, including the Compact, 25-Series, 27-Series, 40-Series, and 60-Series, each featuring unique architectural advances. For instance, the Compact Series specializes in delivering compact, power-efficient processing, while the 60-Series is optimized for high-performance out-of-order execution. Additionally, AndesCore processors extend customization through Andes Custom Extension, which allows users to define specific instructions to accelerate application-specific tasks, offering a significant edge in design flexibility and processing efficiency.
Dillon Engineering's 2D FFT core delivers robust performance for transforming two-dimensional data sets into the frequency domain with high precision and efficiency. By leveraging both internal and external memory between dual FFT engines, this core optimizes the data processing pipeline, ensuring fast and reliable results even as data complexity increases. Ideal for applications that handle image processing and data matrix transformations, the 2D FFT core navigates data bandwidth constraints with ease, maintaining throughput even for larger data sets. This core's design maximizes data accuracy and minimizes processing delays, crucial for applications requiring precise image recognition and analysis. Thanks to the adaptable nature provided by Dillon's ParaCore Architect, this IP core is easily customized for various FPGA and ASIC environments. Its flexibility and robust processing capabilities make the 2D FFT core a key component for cutting-edge applications in fields where data translation and processing are critical.
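The dual-engine structure described above mirrors how a 2D FFT decomposes: one pass of 1D FFTs over the rows, then a second pass over the columns of the intermediate result. A NumPy sketch of that row-column decomposition (the core's internal scheduling is not described in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))  # sample 2D data set

# Row-column decomposition: FFT each row, then FFT each column.
rows = np.fft.fft(image, axis=1)    # first engine: row pass
result = np.fft.fft(rows, axis=0)   # second engine: column pass

# The two passes are mathematically identical to a direct 2D FFT.
assert np.allclose(result, np.fft.fft2(image))
```

The memory between the two engines holds the row-pass output while the column pass reads it in transposed order, which is why buffering strategy dominates throughput for large data sets.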
Poised to deliver exceptional performance in advanced applications, the SCR9 processor core epitomizes modern processing standards with its 12-stage dual-issue out-of-order pipeline and hypervisor support. Its inclusion of a vector processing unit (VPU) positions it as essential for high-performance computing tasks that require extensive parallel data processing. Suitable for high-demand environments such as enterprise data systems, AI workloads, and computationally intensive mobile applications, the SCR9 core is tailored to address high-throughput demands while maintaining reliability and accuracy. With support for symmetric multiprocessing (SMP) of up to 16 cores, this core stands as a configurable powerhouse, enabling developers to maximize processing efficiency and throughput. The SCR9's capabilities are bolstered by Syntacore’s dedication to supporting developers with comprehensive tools and documentation, ensuring efficient design and implementation. Through its blend of sophisticated features and support infrastructure, the SCR9 processor core paves the way for advancing technological innovation across numerous fields, establishing itself as a robust solution in the rapidly evolving landscape of high-performance computing.
The Ultra-Low-Power 64-Bit RISC-V Core by Micro Magic is engineered to deliver exceptional energy efficiency while maintaining high performance. This core is specifically designed to operate at 1GHz while consuming a mere 10mW of power, making it ideal for today's power-conscious applications. Utilizing advanced design techniques, this processor achieves its high performance at lower voltages, ensuring reduced power consumption without sacrificing speed. Constructed with a focus on optimizing processing capabilities, this RISC-V core is built to cater to demanding environments where energy efficiency is critical. Whether used as a standalone processor or integrated into larger systems, its low power requirements and robust performance make it highly versatile. This core also supports scalable processing with its architecture, accommodating a broad spectrum of applications from IoT devices to performance-intensive computing tasks, aligning with industry standards for modern electronic products.
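The headline figures imply a per-cycle energy budget, a common way to compare low-power cores. Using the two numbers quoted above (1GHz at 10mW):

```python
power_watts = 10e-3    # 10 mW, from the text
frequency_hz = 1e9     # 1 GHz, from the text

# Energy per clock cycle = power / frequency.
joules_per_cycle = power_watts / frequency_hz
print(f"Energy per cycle: {joules_per_cycle * 1e12:.0f} pJ")  # 10 pJ/cycle
```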
The Azurite Core-hub by InCore Semiconductors is a sophisticated solution designed to offer scalable RISC-V SoCs with high-speed secure interconnect capabilities. This processor is tailored for performance-demanding applications, ensuring that systems maintain robust security while executing tasks at high speeds. Azurite leverages advanced interconnect technologies to enhance the communication between components within a SoC, making it ideal for industries that require rapid data transfer and high processing capabilities. The core is engineered to be scalable, supporting a wide range of applications from edge AI to functional safety systems, adapting seamlessly to various industry needs. Engineered with a focus on security, the Azurite Core-hub incorporates features that protect data integrity and system operation in a dynamic technological landscape. This makes it a reliable choice for companies seeking to integrate advanced RISC-V architectures into their security-focused applications, offering not just innovation but also peace of mind with its secure design.
The SiFive Essential family stands out as a versatile solution, delivering a wide range of pre-defined embedded CPU cores suitable for a variety of industrial applications. Whether you're designing for minimal area and power consumption or maximum feature capabilities, Essential offers configurations that adapt to diverse industrial needs. From compact microcontrollers to rich OS-compatible CPUs, Essential supports 32-bit and 64-bit pipelines, ensuring an optimal balance between performance and efficiency. This flexibility is enhanced by advanced tracing and debugging features, robust SoC security through WorldGuard support, and a broad array of interface options for seamless SoC integration. These comprehensive support mechanisms assure developers of maximum adaptability and accelerated integration within their designs, whether in IoT devices or control plane applications. SiFive Essential’s power efficiency and adaptability make it particularly suited for deploying customizable solutions in embedded applications. Whether the requirement is for intense computational capacity or low-power, battery-efficient tasks, Essential cores help accelerate time-to-market while offering robust performance in compact form factors, emphasizing scalable and secure solutions for a variety of applications.
Pushing the envelope of application processing, the SCR7 application core integrates a 12-stage dual-issue out-of-order pipeline for high-performance computing tasks. It is equipped with advanced cache coherency and a robust memory subsystem ideal for modern applications demanding exceptional compute power and scalability. This application core serves large-scale computing environments, addressing needs within sectors such as data centers, enterprise solutions, and AI-enhanced applications. Supporting symmetric multiprocessing (SMP) with configurations up to eight cores, the SCR7 ensures smooth and simultaneous execution of complex tasks, significantly improving throughput and system efficiency. Syntacore complements this architecture with a rich toolkit that facilitates development across diverse platforms, enhancing its adaptability to specific commercial needs. The SCR7 embodies the future of application processing with its ability to seamlessly integrate into existing infrastructures while delivering superior results rooted in efficient architectural design and robust support systems.
The AndeShape Platforms are designed to streamline system development by providing a diverse suite of IP solutions for SoC architecture. These platforms encompass a variety of product categories, including the AE210P for microcontroller applications, AE300 and AE350 AXI fabric packages for scalable SoCs, and AE250 AHB platform IP. These solutions facilitate efficient system integration with Andes processors. Furthermore, AndeShape offers a sophisticated range of development platforms and debugging tools, such as ADP-XC7K160/410, which reinforce the system design and verification processes, providing a comprehensive environment for the innovative realization of IoT and other embedded applications.
Dynamic Neural Accelerator II (DNA-II) by EdgeCortix enhances the processing capabilities of AI hardware through its state-of-the-art, reconfigurable architecture. This versatile IP core is tailored for edge applications, enabling seamless execution of complex AI tasks in both convolutional and transformer network contexts. With runtime configurability, DNA-II offers unparalleled efficiency, allowing optimized interconnects between compute units to maximize parallel processing. The DNA-II architecture leverages proprietary technologies to reconfigure data paths dynamically, thereby reducing on-chip memory bandwidth and achieving higher utilization than standard approaches. Designed to be interfaced with various host processors, DNA-II is adaptable for multiple system-on-chip (SoC) implementations demanding high parallelism and low latency. It is a pivotal part of the SAKURA-II ecosystem, contributing significantly to its generative AI capabilities. A key advantage of DNA-II is its support for scaling up performance starting with 1K MACs, which facilitates customization across different application scales and requirements. Supported by the MERA software stack, DNA-II optimizes computation and resource allocation efficiently, making it ideal for any developer looking to enhance edge AI solutions with powerful, innovative IP.
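Peak throughput for a MAC array follows directly from its size and clock: each MAC performs two operations (a multiply and an add) per cycle. A sketch assuming a hypothetical 1 GHz clock (the text gives only the 1K-MAC starting configuration, not a frequency):

```python
macs = 1024          # smallest DNA-II configuration, per the text
clock_hz = 1e9       # hypothetical 1 GHz clock, not quoted in the text
ops_per_mac = 2      # multiply + accumulate per cycle

# Peak throughput = MACs * 2 ops * clock frequency.
peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(f"Peak throughput: {peak_tops:.2f} TOPS")  # ~2.05 TOPS
```

Scaling the MAC count up from this baseline multiplies peak throughput proportionally, which is the sense in which the configuration "scales up performance starting with 1K MACs."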
The GSHARK is a sophisticated GPU IP tailored for embedded system devices such as digital cameras. It demonstrates exceptional graphics rendering capabilities, giving embedded systems display quality akin to that of PCs and smartphones. The architecture couples high performance with outstanding power efficiency and a low CPU load, making it an excellent choice for enhancing embedded system graphics. This IP has gained widespread acceptance, reflected in its impressive shipment history surpassing a hundred million units. Its reliability is bolstered by a consistent track record in commercial silicon and robust hardware accelerator IP cores. The GSHARK architecture supports a multitude of applications, enabling rich graphics functionalities like human-machine interfaces and improved user experiences on embedded platforms. By fostering smooth and smart graphics rendering, GSHARK significantly elevates the capabilities of devices integrating this solution.
The General Purpose Accelerator (Aptos) from Ascenium stands out as a redefining force in the realm of CPU technology. It seeks to overcome the limitations of traditional CPUs by providing a solution that tackles both performance inefficiencies and high energy demands. Leveraging compiler-driven architecture, this accelerator introduces a novel approach by simplifying CPU operations, making it exceptionally suited for handling generic code. Notably, it offers compatibility with the LLVM compiler, ensuring a wide range of applications can be adapted seamlessly without rewrites. The Aptos excels in performance by embracing a highly parallel yet simplified CPU framework that significantly boosts efficiency, reportedly achieving up to four times the performance of cutting-edge CPUs. Such advancements cater not only to performance-oriented tasks but also substantially mitigate energy consumption, providing a dual benefit of cost efficiency and reduced environmental impact. This makes Aptos a valuable asset for data centers seeking to optimize their energy footprint while enhancing computational capabilities. Additionally, the Aptos architecture supports efficient code execution by resolving tasks predominantly at compile-time, allowing the processor to handle workloads more effectively. This allows standard high-level language software to run with improved efficiency across diverse computing environments, aligning with an overarching goal of greener computing. By maximizing operational efficiency and reducing carbon emissions, Aptos propels Ascenium into a leading position in the sustainable and high-performance computing sector.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The Veyron V1 is a high-performance RISC-V CPU aimed at data centers and similar applications that require robust computing power. It integrates with various chiplet and IP cores, making it a versatile choice for companies looking to create customized solutions. The Veyron V1 is designed to offer competitive performance against x86 and ARM counterparts, providing a seamless transition between different node process technologies. This CPU benefits from Ventana's innovation in RISC-V technology, where efforts are placed on providing an extensible architecture that facilitates domain-specific acceleration. With capabilities stretching from hyperscale computing to edge applications, the Veyron V1 supports extensive instruction sets for high-throughput operations. It also boasts leading-edge chiplet interfaces, opening up numerous opportunities for rapid productization and cost-effective deployment. Ventana's emphasis on open standards ensures that the Veyron V1 remains an adaptable choice for businesses aiming at bespoke solutions. Its compatibility with system IP and its provision in multiple platform formats—including chiplets—enable businesses to leverage the latest technological advancements in RISC-V. Additionally, the ecosystem surrounding the Veyron series ensures support for both modern software frameworks and cross-platform integration.
The CTAccel Image Processor on Intel Agilex FPGA is designed to handle high-performance image processing by capitalizing on the robust capabilities of Intel's Agilex FPGAs. These FPGAs, leveraging the 10 nm SuperFin process technology, are ideal for applications demanding high performance, power efficiency, and compact sizes. Featuring advanced DSP blocks and high-speed transceivers, this IP thrives in accelerating image processing tasks that are typically computationally intensive when executed on CPUs. One of the main advantages is its ability to significantly enhance image processing throughput, achieving up to 20 times the speed while maintaining reduced latency. This performance prowess is coupled with low power consumption, leading to decreased operational and maintenance costs due to fewer required server instances. Additionally, the solution is fully compatible with mainstream image processing software, facilitating seamless integration and leveraging existing software investments. The adaptability of the FPGA allows for remote reconfiguration, ensuring that the IP can be tailored to specific image processing scenarios without necessitating a server reboot. This ease of maintenance, combined with a substantial boost in compute density, underscores the IP's suitability for high-demand image processing environments, such as those encountered in data centers and cloud computing platforms.
The Neural Processing Unit (NPU) offered by OPENEDGES is engineered to accelerate machine learning tasks and AI computations. Designed for integration into advanced processing platforms, this NPU enhances the ability of devices to perform complex neural network computations quickly and efficiently, significantly advancing AI capabilities. This NPU is built to handle both deep learning and inferencing workloads, utilizing highly efficient data management processes. It optimizes the execution of neural network models with acceleration capabilities that reduce power consumption and latency, making it an excellent choice for real-time AI applications. The architecture is flexible and scalable, allowing it to be tailored for specific application needs or hardware constraints. With support for various AI frameworks and models, the OPENEDGES NPU ensures compatibility and smooth integration with existing AI solutions. This allows companies to leverage cutting-edge AI performance without the need for drastic changes to legacy systems, making it a forward-compatible and cost-effective solution for modern AI applications.
The Ncore Cache Coherent Interconnect from Arteris provides a quintessential solution for handling multi-core SoC design complications, facilitating heterogeneous coherency and efficient caching. It is distinguished by its high throughput, ensuring reliable and high-performance system-on-chips (SoCs). Ncore's configurable fabric offers designers the ability to establish a multi-die, multi-protocol coherent interconnect into which emerging technologies like RISC-V can seamlessly integrate. This IP's adaptability and scalable design unlock broader performance trajectories, whether for small embedded systems or extensive multi-billion transistor architectures. Ncore's strength lies in its ability to offer ISO 26262 ASIL D readiness, enabling designers to adhere to stringent automotive safety standards. Furthermore, its coupling with Magillem™ automation enhances the potential for rapid IP integration, simplifying multi-die designs and compressing development timelines. In addressing modern computational demands, Ncore is reinforced by robust quality of service parameters, secure power management, and seamless integration capabilities, making it an indispensable asset in constructing scalable system architectures. By streamlining memory operations and optimizing data flow, it provides bandwidth that supports both high-end automotive and complex consumer electronics, fostering innovation and market excellence.
The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications. Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments. Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.
FlexWay Interconnect is tailored for developers aiming to integrate scalable, low-power network-on-chip (NoC) solutions into IoT edge devices and microcontroller units (MCUs). It is celebrated for its adaptability in small to medium-scale designs, facilitating efficient interconnect setup with uncomplicated, cost-effective elements. Equipped to handle expansive bandwidth demands with limited power use, FlexWay capitalizes on Arteris’ advanced algorithms and graphical interfaces for optimal chip architecture design. By supporting multi-clock, voltage, and power domains with integrated clock gating, the IP maintains thorough power management across different configurations. It is engineered to easily adapt to various protocols, promising easy integration with existing systems without sacrificing performance. FlexWay’s intelligent design offers considerable flexibility, making it a prime choice for industries grappling with significant on-chip communication demands. By simplifying the design process and ensuring energy-efficient data management, this IP is integral for bringing cutting-edge IoT applications to fruition swiftly and cost-effectively.
Dyumnin's RISCV SoC is a versatile platform centered around a 64-bit quad-core server-class RISCV CPU, offering extensive subsystems, including AI/ML, automotive, multimedia, memory, cryptographic, and communication systems. This test chip is available in an FPGA format for evaluation, ensuring adaptability and extensive testing possibilities. The AI/ML subsystem is particularly noteworthy due to its custom CPU configuration paired with a tensor flow unit, accelerating AI operations significantly. This adaptability lends itself to innovations in artificial intelligence, setting it apart in the competitive landscape of processors. Additionally, the automotive subsystem caters robustly to the needs of the automotive sector with CAN, CAN-FD, and SafeSPI IPs, all designed to enhance system connectivity within vehicles. Moreover, the multimedia subsystem boasts a complete range of IPs to support HDMI, Display Port, MIPI, and more, facilitating rich audio and visual experiences across devices.
The SiFive Performance family is at the forefront of providing maximum throughput and performance across a spectrum of computing requirements, from datacenter workloads to consumer applications. These 64-bit, out-of-order cores incorporate advanced vector processing capabilities up to 256-bit, supporting a diversity of workloads including AI. The architecture spans from three to six-wide out-of-order cores, optimized for either dedicated vector engines or a balanced energy-efficient setup, making it a versatile choice for high-performance needs. Engineered for modern AI workloads, the Performance series offers a robust compute density and performance efficiency that is ideal for both mobile and stationary infrastructure. Customers can take advantage of flexible configuration options to balance power and area constraints, thanks to SiFive's state-of-the-art RISC-V solutions. The family’s cores, such as the P400, P600, and P800 Series, offer scalability from low-power tasks to demanding datacenter applications. The series is particularly adept at handling AI workloads, making it suitable for applications that demand high-speed data processing and analysis, such as internet of things (IoT) devices, network infrastructure, and high-volume consumer electronics. Customers benefit from the ability to combine various performance cores into a unified, high-performance CPU optimized for minimal power consumption, making it possible to design systems that balance performance and efficiency.
Network on Chip (NOC-X) provides an advanced framework that orchestrates efficient communication across intricate semiconductor systems. It forms the backbone of complex data transfer within a chip or between chiplets, ensuring that the system's various components interact efficiently. The design of NOC-X prioritizes both power efficiency and high throughput, making it capable of meeting the demands of large-scale chip architectures. By embedding this technology, systems can enhance their computational ability while maintaining a balance in energy consumption, a critical factor in modern design. Its implementation facilitates improved system scalability and reliability. This makes NOC-X an essential feature in the development of cutting-edge semiconductor solutions, capable of sustaining advancements in processing capabilities and integrating seamlessly with other interconnect technologies.
The Tyr family of processors brings the cutting-edge power of Edge AI to the forefront, emphasizing real-time data processing directly at its point of origin. This capability facilitates instant insights with reduced latency and enhanced privacy, as it limits the reliance on cloud-based processing. Ideal for settings such as autonomous vehicles and smart factories, Tyr is engineered to operate faster and more securely, with data-center-class performance in a compact, ultra-efficient design. The processors within the Tyr family are purpose-built to support local processing, which saves bandwidth and protects sensitive data, making them suitable for real-world applications like autonomous driving and factory automation. Edge AI is further distinguished by its ability to provide immediate analysis and decision-making capabilities. Whether it's enabling autonomous vehicles to understand their environment for safe navigation or facilitating real-time industrial automation, the Tyr processors excel in delivering low-latency, high-compute performance essential for mission-critical operations. The local data processing capabilities inherent in the Tyr line not only cut down on costs associated with bandwidth but also contribute towards compliance with stringent privacy standards. In addition to performance and privacy benefits, the Tyr family emphasizes sustainability. By minimizing cloud dependency, these processors significantly reduce operational costs and the carbon footprint, aligning with the growing demand for greener AI solutions. This combination of performance, security, and sustainability makes Tyr processors a cornerstone in advancing industrial and consumer applications using Edge AI.
The iCan PicoPop® System on Module offers a compact solution for high-performance computing in constrained environments, particularly in the realm of aerospace technology. This system on module is designed to deliver robust computing power while maintaining minimal space usage, offering an excellent ratio of performance to size. The PicoPop® excels in integrating a variety of functions onto a single module, including processing, memory, and interface capabilities, which collectively handle the demanding requirements of aerospace applications. Its efficient power consumption and powerful processing capability make it ideally suited to a range of in-flight applications and systems. This solution is tailored to support the development of sophisticated aviation systems, ensuring scalability and flexibility in deployment. With its advanced features and compact form, the iCan PicoPop® System on Module stands out as a potent component for modern aerospace challenges.
The BlueLynx Chiplet Interconnect is a sophisticated die-to-die interconnect solution that offers industry-leading performance and flexibility for both advanced and conventional packaging applications. As an adaptable subsystem, BlueLynx supports the integration of Universal Chiplet Interconnect Express (UCIe) as well as Bunch of Wires (BoW) standards, facilitating high bandwidth capabilities essential for contemporary chip designs.

BlueLynx IP emphasizes seamless connectivity to on-die buses and networks-on-chip (NoCs) using standards such as AMBA, AXI, and ACE among others, thereby accelerating the design process from system-on-chip (SoC) architectures to chiplet-based designs. This innovative approach not only allows for faster deployment but also mitigates development risks through a predictable and silicon-friendly design process with comprehensive support for rapid first-pass silicon success.

With BlueLynx, designers can take advantage of a highly optimized performance per watt, offering customizable configurations tailored to specific application needs across various markets like AI, high-performance computing, and mobile technologies. The IP is crafted to deliver outstanding bandwidth density and energy efficiency, bridging the requirements of advanced node technologies with compatibility across several foundries, ensuring extensive applicability and cost-effectiveness for diverse semiconductor solutions.
ISPido on VIP Board is a customized runtime solution tailored for Lattice Semiconductors’ Video Interface Platform (VIP) board. This setup enables real-time image processing and provides flexibility for both automated configuration and manual control through a menu interface. Users can adjust settings via histogram readings, select gamma tables, and apply convolutional filters to achieve optimal image quality. Equipped with key components like the CrossLink VIP input bridge board and ECP5 VIP Processor with ECP5-85 FPGA, this solution supports dual image sensors to produce a 1920x1080p HDMI output. The platform enables dynamic runtime calibration, providing users with interface options for active parameter adjustments, ensuring that image settings are fine-tuned for various applications. This system is particularly advantageous for developers and engineers looking to integrate sophisticated image processing capabilities into their devices. Its runtime flexibility and comprehensive set of features make it a valuable tool for prototyping and deploying scalable imaging solutions.
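The runtime-selectable convolutional filtering described above can be modeled in software. The sketch below applies a 3x3 kernel to a grayscale image in NumPy; the kernel values and this implementation are illustrative only, not ISPido's actual filter pipeline or configuration interface.

```python
import numpy as np

def convolve3x3(img, kernel):
    """Apply a 3x3 filter to a 2-D grayscale image with edge padding.

    Illustrative model of menu-selectable convolutional filtering;
    not the ISPido implementation.
    """
    k = np.asarray(kernel, dtype=float)
    padded = np.pad(img, 1, mode="edge")           # replicate border pixels
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):                            # accumulate shifted, weighted copies
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Example: a common sharpening kernel (weights sum to 1).
SHARPEN = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
```

Because the sharpening kernel's weights sum to one, flat image regions pass through unchanged while edges are amplified.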
Tensix Neo represents a transformative leap in enhancing AI computational efficiency, specifically designed to empower developers working on sophisticated AI networks and applications. Built around a Network-on-Chip (NoC) framework, Tensix Neo optimizes performance-per-watt, a critical factor for AI processing. It supports multiple precision formats to adapt to diverse AI workloads efficiently, allowing seamless integration with existing models and enabling scalability. Careful design ensures that Tensix Neo delivers consistent high performance across varied AI tasks, from image recognition algorithms to advanced analytics, making it an essential component in the AI development toolkit. Its capability to connect with an expanding library of AI models allows developers to leverage its full potential across multiple cutting-edge applications. This synthesis of performance and efficiency makes Tensix Neo a vital player in fields requiring high adaptability and rapid processing, such as autonomous vehicles, smart devices, and dynamic data centers. Moreover, the compatibility of Tensix Neo with Tenstorrent's other solutions underscores its importance as a flexible and powerful processing core. Designed with the contemporary developer in mind, Tensix Neo integrates seamlessly with open-source resources and tools, ensuring that developers have the support and flexibility needed to meet the challenges of tomorrow's AI solutions.
The Camera ISP Core is designed to optimize image signal processing by integrating sophisticated algorithms that produce sharp, high-resolution images while requiring minimal logic. Compatible with RGB Bayer and monochrome image sensors, this core handles inputs from 8 to 14 bits and supports resolutions from 256x256 up to 8192x8192 pixels. Its multi-pixel processing capabilities per clock cycle allow it to achieve performance metrics like 4Kp60 and 4Kp120 on FPGA devices. It uses AXI4-Lite and AXI4-Stream interfaces to streamline defect correction, lens shading correction, and high-quality demosaicing processes. Advanced noise reduction features, both 2D and 3D, are incorporated to handle different lighting conditions effectively. The core also includes sophisticated color and gamma corrections, with HDR processing for combining multiple exposure images to improve dynamic range. Capabilities such as auto focus and saturation, contrast, and brightness control are further enhanced by automatic white balance and exposure adjustments based on RGB histograms and window analyses. Beyond its core features, the Camera ISP Core is available with several configurations including the HDR, Pro, and AI variations, supporting different performance requirements and FPGA platforms. The versatility of the core makes it suitable for a range of applications where high-quality real-time image processing is essential.
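One of the statistics-driven stages mentioned above, automatic white balance, can be illustrated with the classic gray-world method: scale each channel so its mean matches the global mean. This NumPy sketch is a software model for illustration only; the core's actual AWB algorithm and its AXI4-Lite/AXI4-Stream register interface are not reproduced here.

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance on an HxWx3 float image in [0, 1].

    Illustrative model of histogram/statistics-based AWB; not the
    Camera ISP Core's hardware algorithm.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel mean (R, G, B)
    gains = means.mean() / means              # gain that maps each mean to the gray mean
    return np.clip(rgb * gains, 0.0, 1.0)     # apply gains, keep values in range
```

After correction, a scene with a color cast has roughly equal channel means, which is the gray-world assumption the hardware statistics windows also exploit.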
The CurrentRF CC-100 Power Optimizer is central to the company's innovative energy harvesting technology, utilized in devices like the PowerStic and Exodus. This optimizer is engineered to be a fundamental component in intercepting digital noise currents and recycling them back into the system, effectively reducing operational power. It supports the enhancement of system battery life by up to 40%, serving as a critical device in power-conscious design strategies for integrated circuits and electric vehicles. The CC-100 ensures power savings when systems remain active, making it a vital tool for extending battery life in IC and systems design.
The NoC Bus Interconnect by OPENEDGES is a sophisticated solution for modern semiconductor designs, providing efficient on-chip communication. This network-on-chip (NoC) architecture facilitates communication between different IP blocks within a chip, significantly enhancing data flow and reducing bottlenecks compared to traditional bus systems. This interconnect solution is designed to provide high bandwidth and low latency, supporting various data transmission protocols. It's built to be highly scalable, accommodating growing demands in complex system-on-chip (SoC) designs. The flexibility in configuration allows it to support varied application needs, making it a versatile choice for high-performance computing, data centers, and AI applications. Besides its performance advantages, the NoC Bus Interconnect offers features that ensure optimal power management, which is crucial for maintaining efficiency in energy-sensitive applications. By intelligently managing data paths and utilizing advanced buffering techniques, it effectively minimizes power usage while maximizing throughput.
The CTAccel Image Processor for Xilinx's Alveo U200 is an FPGA-based accelerator aimed at enhancing image processing workloads in server environments. Utilizing the powerful capabilities of the Alveo U200 FPGA, this processor dramatically boosts throughput and reduces processing latency for data centers. The accelerator can increase image processing speed to 4 to 6 times that of traditional CPUs and reduce latency by a similar factor, ensuring that compute density in a server setting is significantly boosted. This performance uplift enables data centers to lower maintenance and operational costs due to reduced hardware requirements. Furthermore, this IP maintains full compatibility with popular image processing software like OpenCV and ImageMagick, ensuring smooth adaptation for existing workflows. The advanced FPGA partial reconfiguration technology allows for dynamic updates and adjustments, increasing the IP's practicality for a wide array of image-related applications and improving overall performance without the need for server reboots.
The L5-Direct GNSS Receiver represents a sophisticated leap in positioning technology, offering a robust solution that directly captures L5-band signals, ensuring high precision in urban canyons and resilience to interference and jamming. This groundbreaking technology operates independently of the legacy L1 signals, utilizing innovative Application Specific Array Processor (ASAP) architecture to optimize signal processing for GNSS applications. The receiver's capabilities include support for a multitude of satellite constellations like GPS, Galileo, QZSS, and BeiDou, providing unmatched versatility and accuracy. Engineered for environments prone to signal disruption, the L5-direct receiver employs machine learning algorithms to effectively mitigate multipath errors, leveraging data from all GNSS signals. The result is a performance that ensures reliable location data, crucial for applications ranging from wearables and IoT devices to defense systems. This technology's design incorporates a single RF chain, reducing the overall size and cost while simplifying antenna integration and system complexity. In addition to its technological prowess, the L5-direct receiver offers scalable integration potential, from standalone ASICs to IP cores adaptable across various silicon processes. Through ongoing R&D and strategic partnerships with leading foundries such as TSMC and GlobalFoundries, oneNav ensures that this receiver not only meets current demands but also evolves with future GNSS innovations, maintaining a competitive edge in global positioning solutions.
The UltraLong FFT core from Dillon Engineering offers exceptional performance for applications requiring extensive sequence lengths. This core utilizes external memory in coordination with dual FFT engines to facilitate high throughput. While it typically hinges on memory bandwidth for its speed, the UltraLong FFT effectively processes lengthy data sequences in a streamlined manner. This core is characterized by its medium to high-speed capabilities and is an excellent choice for applications where external memory can be leveraged to support processing requirements. Its architecture allows for flexible design implementation, ensuring seamless integration with existing systems, and is particularly well-suited for advanced signal processing applications in both FPGA and ASIC environments. With Dillon's ParaCore Architect tool, customization and re-targeting of the IP core towards any technology are straightforward, offering maximum adaptability. This FFT solution stands out for its capacity to manage complex data tasks, making it an ideal fit for cutting-edge technologies demanding extensive data length processing efficiency.
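Splitting one very long transform into two passes of shorter FFTs, with the full data set held in external memory between passes, is the standard way an architecture like this handles sequence lengths that exceed on-chip storage. The NumPy sketch below shows the classic four-step (Cooley-Tukey) decomposition of a length n1*n2 FFT; it is an illustrative model of the technique, not Dillon's core or its ParaCore Architect output.

```python
import numpy as np

def long_fft(x, n1, n2):
    """Compute a length n1*n2 FFT as two passes of shorter FFTs.

    Four-step decomposition: column FFTs, twiddle multiply, row FFTs.
    Illustrative sketch only, not the UltraLong FFT core's design.
    """
    assert len(x) == n1 * n2
    a = np.asarray(x, dtype=complex).reshape(n1, n2)   # view as n1 x n2 matrix
    a = np.fft.fft(a, axis=0)                          # pass 1: n2 FFTs of length n1
    rows = np.arange(n1).reshape(n1, 1)                # output index of pass 1
    cols = np.arange(n2).reshape(1, n2)                # input index of pass 2
    a = a * np.exp(-2j * np.pi * rows * cols / (n1 * n2))  # inter-pass twiddle factors
    a = np.fft.fft(a, axis=1)                          # pass 2: n1 FFTs of length n2
    return a.T.reshape(-1)                             # transpose restores natural order
```

In hardware, each pass streams through one FFT engine while external memory holds the intermediate matrix, which is why the quoted throughput typically hinges on memory bandwidth.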
Designed to cater to the needs of edge computing, the Neural Network Accelerator by Gyrus AI is a powerhouse of performance and efficiency. With a focus on graph processing capabilities, this product excels in implementing neural networks by providing native graph processing. The accelerator attains impressive speeds, achieving 30 TOPS/W, while offering efficient computational power with significantly reduced clock cycles, ranging between 10 to 30 times fewer compared to traditional models. Its low-memory-usage configuration keeps power consumption 10-20 times lower than comparable designs. Beyond its power efficiency, this accelerator is designed to maximize space with a smaller die area, delivering an 8-10 times reduction in size while maintaining high utilization rates of over 80% for various model structures. Such design optimizations make it an ideal choice for applications requiring a compact, high-performance solution capable of delivering fast computations without compromising on energy efficiency. The Neural Network Accelerator is a testament to Gyrus AI's commitment to enabling smarter edge computing solutions. Additionally, Gyrus AI has paired this technology with software tools that facilitate the execution of neural networks on the IP, simplifying integration and use in various applications. This seamless integration is part of their broader strategy to augment human intelligence, providing solutions that enhance and expand the capabilities of AI-driven technologies across industries.
Ventana's System IP suite includes an advanced IOMMU, implementing RISC-V's IOMMU v1.0 specification to ensure memory protection and efficient virtualization. This system IP is vital for secure, scalable, and efficient computing, providing the necessary infrastructure for a range of operations from high-performance data center deployments to embedded systems. Ventana's approach integrates RISC-V's Sv39 and Sv48 page-based virtual memory schemes, delivering extensive support for large memory configurations commonly needed in data centers and edge computing environments. Additionally, the Enhanced Physical Memory Protection (ePMP) feature offers fine-grained control over memory access, crucial for secure system configurations and dynamic resource allocation. The system IP is designed to be seamlessly integrated into various operating systems and hardware configurations, with full support for necessary virtualization, security, and performance optimizations. This flexibility and compliance ensure that Ventana's IP can successfully fit into a wide array of technological ecosystems, offering robust support for modern security and processing demands.
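The Sv39 scheme mentioned above translates a 39-bit virtual address through three levels of page tables, indexed by three 9-bit VPN fields above a 12-bit page offset, per the RISC-V privileged specification. The helper below decomposes an address into those fields; it is an illustrative sketch of the spec's field layout, not part of Ventana's IP deliverable.

```python
def sv39_decompose(va):
    """Split a 39-bit Sv39 virtual address into page-table indices.

    Field widths follow the RISC-V privileged spec; the helper itself
    is illustrative only.
    """
    assert va < (1 << 39), "Sv39 virtual addresses are 39 bits wide"
    offset = va & 0xFFF           # bits [11:0]:  offset within a 4 KiB page
    vpn0 = (va >> 12) & 0x1FF     # bits [20:12]: level-0 (leaf) table index
    vpn1 = (va >> 21) & 0x1FF     # bits [29:21]: level-1 table index
    vpn2 = (va >> 30) & 0x1FF     # bits [38:30]: level-2 (root) table index
    return vpn2, vpn1, vpn0, offset
```

Sv48 extends the same pattern with a fourth 9-bit VPN field, which is what allows the larger address spaces data-center deployments need.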
RapidGPT is a pioneering AI-based tool crafted to transform the design processes for ASIC and FPGA engineers. By leveraging advanced AI algorithms, it offers an intelligent code assistant function, providing precise, context-aware suggestions for writing HDL code in languages such as Verilog and VHDL. Users can efficiently translate their design ideas into complete, systematic HDL code just by describing the intended functionality. This significantly reduces the need for manual coding, allowing engineers to focus on their conceptual designs. Beyond its impressive code-generation powers, RapidGPT features an integrated IP Knowledge Base. This facility consolidates numerous documentation resources into a retrieval-augmented generation (RAG) database. This database facilitates smarter design integration by using available knowledge in its responses, assisting engineers in instantiating and connecting IPs effectively. RapidGPT also excels in conversational capabilities, with a chat interface that guides users in writing, modifying, or querying HDL code. Coupled with contextual suggestions, code optimization, and the AutoReview feature, RapidGPT identifies potential design issues and offers solutions, further enhancing the creation of high-quality hardware designs. This tool not only helps streamline processes but ensures adherence to industry best practices.
The Satellite Navigation SoC Integration offering by GNSS Sensor Ltd is a comprehensive solution designed to integrate sophisticated satellite navigation capabilities into System-on-Chip (SoC) architectures. It utilizes GNSS Sensor's proprietary VHDL library, which includes modules like the configurable GNSS engine, Fast Search Engine for satellite systems, and more, optimized for maximum CPU independence and flexibility. This SoC integration supports various satellite navigation systems like GPS, Glonass, and Galileo, with efficient hardware designs that allow it to process signals across multiple frequency bands. The solution emphasizes reduced development costs and streamlining the navigation module integration process. Leveraging FPGA platforms, GNSS Sensor's solution integrates intricate RF front-end components, allowing for a robust and adaptable GNSS receiver development. The system-on-chip solution ensures high performance, with features like firmware stored on ROM blocks, obviating the need for external memory.