In the realm of semiconductor IP, DSP Cores play a pivotal role in enabling efficient digital signal processing capabilities across a wide range of applications. Short for Digital Signal Processor Cores, these semiconductor IPs are engineered to handle complex mathematical calculations swiftly and accurately, making them ideal for integration into devices requiring intensive signal processing tasks.
DSP Core semiconductor IPs are widely implemented in industries like telecommunications, where they are crucial for modulating and encoding signals in mobile phones and other communication devices. They empower these devices to perform multiple operations simultaneously, including compressing audio, optimizing bandwidth usage, and enhancing data packets for better transmission quality. Additionally, in consumer electronics, DSP Cores are fundamental in audio and video equipment, improving the clarity and quality of sound and visuals users experience.
Moreover, DSP Cores are a linchpin in the design of advanced automotive systems and industrial equipment. In automotive applications, they assist in radar and lidar systems, crucial for autonomous driving features by processing the data needed for real-time environmental assessment. In industrial settings, DSP Cores amplify the performance of control systems by providing precise feedback loops and enhancing overall process automation and efficiency.
Silicon Hub's category for DSP Core semiconductor IPs includes a comprehensive collection of advanced designs tailored to various processing needs. These IPs are designed to integrate seamlessly into a multitude of hardware architectures, offering designers and engineers the flexibility and performance necessary to push the boundaries of technology in their respective fields. Whether for enhancing consumer experiences or driving innovation in industrial and automotive sectors, our DSP Core IPs bring unparalleled processing power to the forefront of digital innovations.
The "1G to 224G SerDes" solution from Alphawave Semi offers an extensive range of multi-standard connectivity IPs, designed to deliver optimal high-speed data transfer. These full-featured building blocks can be integrated into various chip designs, providing scalability and reliability across numerous protocols and standards. Supporting data rates from 1 Gbps to 224 Gbps, this SerDes solution accommodates diverse signaling schemes, including PAM2, PAM4, PAM6, and PAM8. Alphawave Semi's SerDes IP is engineered to meet the demands of modern communication systems, ensuring connectivity across a wide spectrum of applications. These include data centers, telecom networks, and advanced networking systems where high data transfer speeds are a necessity. This solution is crafted with energy efficiency in mind, helping reduce power consumption while maintaining a robust data connection. The SerDes solutions come equipped with advanced features like low latency and noise resilience, which are crucial for maintaining signal integrity over various transmission distances. This facilitates seamless integration for enterprises looking to boost their processing capabilities while minimizing downtime and operational inefficiencies. These capabilities make Alphawave Semi's SerDes IP a vital component in the evolving landscape of high-speed connectivity applications.
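The relationship between the PAM signaling order and the symbol rate required for a given line rate is simple to sketch. The following Python snippet illustrates the arithmetic only; it ignores FEC and coding overhead, which a real link budget must include:

```python
import math

def pam_bits_per_symbol(levels: int) -> int:
    """Bits carried by one symbol of an N-level PAM scheme."""
    return int(math.log2(levels))

def baud_rate_gbd(line_rate_gbps: float, levels: int) -> float:
    """Symbol (baud) rate needed to reach a given line rate."""
    return line_rate_gbps / pam_bits_per_symbol(levels)

# A 224 Gbps lane using PAM4 (2 bits/symbol) signals at 112 GBd,
# halving the symbol rate relative to PAM2 (NRZ) at the same line rate.
print(baud_rate_gbd(224, 4))   # 112.0
print(baud_rate_gbd(224, 2))   # 224.0
```

This is why higher-order PAM is attractive at 224 Gbps: doubling the levels halves the required symbol rate, at the cost of reduced noise margin between levels.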
Quadric's Chimera GPNPU is an adaptable processor core designed to respond efficiently to the demand for AI-driven computations across multiple application domains. Offering up to 864 TOPS, this licensable core seamlessly integrates into system-on-chip designs needing robust inference performance. By maintaining compatibility with all forms of AI models, including cutting-edge large language models and vision transformers, it ensures long-term viability and adaptability to emerging AI methodologies. Unlike conventional architectures, the Chimera GPNPU excels by permitting complete workload management within a singular execution environment, which is vital in avoiding the cumbersome and resource-intensive partitioning of tasks seen in heterogeneous processor setups. By facilitating a unified execution of matrix, vector, and control code, the Chimera platform elevates software development ease and substantially improves code maintainability and debugging processes. In addition to high adaptability, the Chimera GPNPU capitalizes on Quadric's proprietary compiler infrastructure, which allows developers to transition rapidly from model conception to execution. It transforms AI workflows by optimizing memory utilization and minimizing power expenditure through smart data storage strategies. As AI models grow increasingly complex, the Chimera GPNPU stands out for its foresight and capability to unify AI and DSP tasks under one adaptable and programmable platform.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The Veyron V2 CPU represents Ventana's second-generation RISC-V high-performance processor, designed for cloud, data center, edge, and automotive applications. This processor offers outstanding compute capabilities with its server-class architecture, optimized for handling complex, virtualized, and cloud-native workloads efficiently. The Veyron V2 is available as both IP for custom SoCs and as a complete silicon platform, ensuring flexibility for integration into various technological infrastructures. Emphasizing a modern architectural design, it includes full compliance with RISC-V RVA23 specifications, showcasing features such as high instructions-per-clock (IPC) throughput and a power-efficient architecture. Comprising multiple core clusters, this CPU is capable of delivering superior AI and machine learning performance, significantly boosting throughput and energy efficiency. The Veyron V2's advanced fabric interconnects and extensive cache architecture provide the necessary infrastructure for high-performance applications, ensuring broad market adoption and versatile deployment options.
xcore.ai by XMOS is a groundbreaking solution designed to bring intelligent functionality to the forefront of semiconductor applications. It enables powerful real-time execution of AI, DSP, and control functionalities, all on a single, programmable chip. The flexibility of its architecture allows developers to integrate various computational tasks efficiently, making it a fitting choice for projects ranging from smart audio devices to automated industrial systems. With xcore.ai, XMOS provides the technology foundation necessary for swift deployment and scalable application across different sectors, delivering high performance in demanding environments.
The Jotunn8 is described as the ultimate AI inference chip, engineered to tackle the modern demands of data centers with high efficiency. This chip is optimized for speed, enabling rapid deployment of trained models with significant cost reductions and scalability. The Jotunn8 is tailored for ultra-low latency, making it suitable for real-time applications such as fraud detection and search functions. It also supports very high throughput and is designed to lower the cost per inference, a crucial factor for businesses operating at scale. The chip emphasizes power efficiency, illustrating VSORA's commitment to reducing operational expenses and environmental impact. It facilitates a new foundation for AI at scale, integrating seamlessly with various AI models like reasoning and generative AI, providing both flexibility and performance. Overall, Jotunn8 is revolutionary in its approach, delivering cutting-edge performance for demanding AI tasks while maintaining a commitment to sustainability.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
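As a rule of thumb, peak throughput figures like those above follow directly from the MAC count: each MAC counts as two operations (a multiply and an add) per cycle. The sketch below assumes a hypothetical 1 GHz clock for illustration; actual clock rates depend on process node and implementation:

```python
def peak_tops(num_macs: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
    """Peak throughput in TOPS: MACs/cycle x cycles/s x ops/MAC."""
    return num_macs * clock_ghz * ops_per_mac / 1000.0

# At an assumed 1 GHz clock, 1024 8-bit MACs give ~2 TOPS,
# in line with the figure quoted for the Ceva-SP1000.
print(peak_tops(1024, 1.0))   # 2.048
```

Note these are peak numbers; sustained throughput depends on memory bandwidth and how well a workload keeps the MAC array occupied.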
Syntacore's SCR3 microcontroller core is a versatile option for developers looking to harness the power of a 5-stage in-order pipeline. Designed to support both 32-bit and 64-bit symmetric multiprocessing (SMP) configurations, this core is perfectly aligned with the needs of embedded applications requiring moderate power and resource efficiency coupled with enhanced processing capabilities. The architecture is fine-tuned to handle a variety of workloads, ensuring a balance between performance and power usage, making it suitable for sectors such as industrial automation, automotive sensors, and IoT devices. The inclusion of privilege modes, memory protection units (MPUs), and cache systems further enhances its capabilities, particularly in environments where system security and reliability are paramount. Developers will find the SCR3 core to be highly adaptable, fitting seamlessly into designs that need scalability and modularity. Syntacore's comprehensive toolkit, combined with detailed documentation, ensures that system integration is both quick and reliable, providing a robust foundation for varied applications.
The Codasip RISC-V BK Core Series is renowned for integrating flexibility and performance scalability within a RISC-V framework. These cores are designed to cater to various application demands, from general-purpose computing to specialized tasks requiring high processing capability. The BK series supports customization that optimizes performance, power, and area based on different application scenarios. One notable feature of the BK Core Series is its ability to be tailored using Codasip Studio, which enables architects to modify microarchitectures and instruction sets efficiently. This customization is supported by a robust set of pre-verified options, ensuring quality and reliability across applications. The BK cores also boast energy efficiency, making them suitable for both power-sensitive and performance-oriented applications. Another advantage of the BK Core Series is its compatibility with a broad range of industry-standard tools and interfaces, which simplifies integration into existing systems and accelerates time to market. The series also emphasizes secure and safe design, aligning with industry standards for functional safety and security, thereby allowing integration into safety-critical environments.
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The Spiking Neural Processor T1 is an advanced microcontroller engineered for highly efficient always-on sensing tasks. Integrating a low-power spiking neural network engine with a RISC-V processor core, the T1 provides a compact solution for rapid sensor data processing. Its design supports next-generation AI applications and signal processing while maintaining a minimal power footprint. The processor excels in scenarios requiring both high power efficiency and fast response. By employing a tightly-looped spiking neural network algorithm, the T1 can execute complex pattern recognition and signal processing tasks directly on-device. This autonomy enables battery-powered devices to operate intelligently and independently of cloud-based services, ideal for portable or remote applications. A notable feature includes its low-power operation, making it suitable for use in portable devices like wearables and IoT-enabled gadgets. Embedded with a RISC-V CPU and 384KB of SRAM, the T1 can interface with a variety of sensors through diverse connectivity options, enhancing its versatility in different environments.
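The event-driven processing at the heart of a spiking neural network can be illustrated with a textbook leaky integrate-and-fire neuron. This is a conceptual Python sketch, not the T1's actual algorithm; the `leak` and `threshold` parameters are illustrative values:

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: accumulate input, leak the
    membrane potential each step, and spike + reset on threshold."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + x           # integrate with leak
        if v >= threshold:
            spikes.append(1)       # emit a spike
            v = 0.0                # reset membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.6, 0.6, 0.0, 0.9, 0.3]))   # [0, 1, 0, 0, 1]
```

Because the neuron only produces output events when its potential crosses threshold, downstream computation happens only on spikes, which is the source of the power efficiency claimed for always-on sensing.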
The Tyr AI Processor Family is dedicated to enhancing Edge AI capabilities by delivering data-center-class performance in a highly efficient and compact form. This processor family operates at the edge, where real-time decision-making is crucial, directly processing data on devices to reduce latency and enhance speed. By eliminating the dependency on cloud-based solutions, Tyr offers heightened privacy and cuts bandwidth costs. This makes it ideal for applications in sectors like autonomous vehicles and industrial automation, facilitating immediate local processing and decision-making. The Tyr processors support a range of uses, from AI models that run directly on devices to federated learning, allowing continuous adaptation without constant connectivity. Their design aims to balance compute, memory, and power, enabling multi-modal inputs and outputs, which holds significant advantages for sectors dealing with large datasets, ensuring secure and efficient processing at the source. Tyr's Edge AI capabilities transform industries by providing fast, secure, and intelligent decision-making where it's most needed, strengthening operations and reducing carbon footprints through sustainable practices.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
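The multiply-accumulate operations highlighted above are the kernel of FIR filtering. The following is a minimal Q15 fixed-point FIR in Python, illustrating the technique generically rather than the eSi-3200's instruction set; the wide Python integer stands in for the core's 64-bit accumulator:

```python
def fir_q15(samples, coeffs):
    """Direct-form FIR filter using Q15 fixed-point multiply-accumulate."""
    out = []
    for n in range(len(samples)):
        acc = 0  # wide accumulator, as in a 64-bit MAC unit
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += samples[n - k] * c      # multiply-accumulate
        out.append(acc >> 15)                  # rescale Q30 product -> Q15
    return out

# Two-tap averager: each coefficient is 0.5 in Q15 (16384/32768).
print(fir_q15([32767, 32767], [16384, 16384]))   # [16383, 32767]
```

Accumulating in a register wider than the 16-bit data avoids overflow across many taps, with a single rescaling shift at the end; this is precisely what dedicated MAC hardware provides in one cycle per tap.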
The ISPido on VIP Board solution is designed for Lattice Semiconductor's VIP (Video Interface Platform) board, offering real-time, high-quality image processing. It supports automatic configuration selection at boot, ensuring a balanced output, or alternatively provides a menu interface for manual adjustments. Key features include input from two Sony IMX 214 sensors and HDMI output at 1920 x 1080p using the YCbCr 4:2:2 color space. This system supports run-time calibration via a serial port, allowing users to customize gamma tables, convolution filters, and other settings to match specific application needs. The innovative setup facilitates streamlined image processing for efficient deployment across applications requiring high-definition video processing.
The Codasip L-Series DSP Core stands out for its ability to handle computationally intensive algorithms with high efficiency, targeting applications that require significant digital signal processing capabilities. The L-Series is tailored for precision tasks such as audio processing and complex mathematical computations where performance and accuracy are imperative. This series benefits from a versatile architecture that can be customized to enhance specific signal processing needs, powered by the Codasip Studio. Modifications can be made at both the architectural and ISA levels to ensure the processor aligns perfectly with the workload's demands, enhancing performance while maintaining a compact footprint. Furthermore, the L-Series DSP cores are equipped to deliver powerful processing potential while ensuring power efficiency, essential for battery-operated devices or environments with power constraints. This series is optimal for developers seeking to implement DSP solutions in various domains, leveraging RISC-V's open standard benefits coupled with Codasip's customization tools.
The iCan PicoPop® is a highly compact System on Module (SOM) based on the Zynq UltraScale+ MPSoC from Xilinx, suited for high-performance embedded applications in aerospace. Known for its advanced signal processing capabilities, it is particularly effective in video processing contexts, offering efficient data handling and throughput. Its compact size and performance make it ideal for integration into sophisticated systems where space and performance are critical.
The Veyron V1 is a high-performance RISC-V CPU designed to meet the rigorous demands of modern data centers and compute-intensive applications. This processor is tailored for cloud environments requiring extensive compute capabilities, offering substantial power efficiency while optimizing processing workloads. It provides comprehensive architectural support for virtualization and efficient task management with its robust feature set. Incorporating advanced RISC-V standards, the Veyron V1 ensures compatibility and scalability across a wide range of industries, from enterprise servers to high-performance embedded systems. Its architecture is engineered to offer seamless integration, providing an excellent foundation for robust, scalable computing designs. Equipped with state-of-the-art processing cores and enhanced vector acceleration, the Veyron V1 delivers unmatched throughput and performance management, making it suitable for use in diverse computing environments.
The Universal DSP Library is designed to simplify digital signal processing tasks by offering a comprehensive suite of algorithms and functions tailored for various DSP applications. Engineered for optimal performance, it integrates easily into FPGA-based designs, making it a versatile tool for a wide range of signal processing needs. The library includes support for key processing techniques and can significantly reduce the time required to implement and test DSP functionalities, helping developers achieve high efficiency and performance while optimizing overall system resources. Moreover, it is compatible with a wide range of FPGAs, providing a flexible, scalable solution for designs that must handle demanding signal processing requirements.
ISPido is a comprehensive image signal processing (ISP) pipeline that is fully configurable via the AXI4-LITE protocol. It features a complete ISP pipeline incorporating modules for defective pixel correction, color filter array interpolation using the Malvar-Cutler algorithm, and a series of image enhancements. These include convolution filters, auto-white balance, color correction matrix, gamma correction, and color space conversion between RGB and YCbCr formats. ISPido supports resolutions up to 7680x7680, ensuring compatibility with ultra-high-definition applications, up to 8K resolution systems. It is engineered to comply with the AMBA AXI4 standards, offering versatility and easy integration into various systems, whether for FPGA, ASIC, or other hardware configurations.
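Two of the later pipeline stages named above, gamma correction and RGB-to-YCbCr color space conversion, can be sketched per pixel as follows. This is a minimal Python model using full-range BT.601 coefficients; the actual ISPido modules are hardware blocks, with gamma typically realized as a programmable lookup table:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion (inputs 0..255)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

def gamma_correct(v, gamma=2.2):
    """Per-pixel gamma encode; hardware precomputes this as a 256-entry table."""
    return round(255 * (v / 255) ** (1 / gamma))

# White maps to maximum luma with neutral chroma.
print(rgb_to_ycbcr(255, 255, 255))   # (255, 128, 128)
```

In a fixed-function pipeline both stages reduce to constant-coefficient multiplies and table lookups, which is why they sit comfortably in an FPGA or ASIC datapath at video rates.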
The Trifecta-GPU delivers exceptional computational power by utilizing the NVIDIA RTX A2000 embedded GPU. Focused on the modular test-and-measurement and electronic warfare markets, this GPU is capable of delivering 8.3 FP32 TFLOPS of compute performance. It is tailored for advanced signal processing and machine learning, making it indispensable for modern, software-defined signal processing applications. This GPU is part of the COTS PXIe/CPCIe modular family, known for its flexibility and ease of use. The NVIDIA GPU integration means users can expect robust performance for AI inference applications, facilitating quick deployment in various scenarios requiring advanced data processing. Incorporating the latest in graphical performance, the Trifecta-GPU supports a broad range of applications, from high-end computing tasks to graphics-intensive processes. It is particularly beneficial for those needing a reliable and powerful GPU for modular T&M and EW projects.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable precision computing tasks, facilitating the transition towards next-generation arithmetic. In the computational landscape, TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
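The benefit of a quire-style exact accumulator shows up in even a tiny dot product: rounding after every addition can lose a term entirely, while accumulating exactly and rounding once at the end preserves it. The sketch below uses Python's exact rational arithmetic to model the idea conceptually; it is not Posit hardware:

```python
from fractions import Fraction

def dot_naive(xs, ys):
    """Float dot product: each addition rounds, so error can accumulate."""
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot_exact(xs, ys):
    """Quire-style dot product: accumulate exactly, round once at the end."""
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)
    return float(acc)

xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(dot_naive(xs, ys))   # 0.0 -- the 1.0 is absorbed and lost
print(dot_exact(xs, ys))   # 1.0 -- deferred rounding preserves it
```

A hardware quire achieves the same effect with a wide fixed-point register sized so that no product can fall outside its range, making long dot products reproducible regardless of summation order.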
The Prodigy Universal Processor by Tachyum is a versatile chip that merges the capabilities of CPUs, GPGPUs, and TPUs into a single architecture. This innovation is designed to cater to the needs of AI, HPC, and hyperscale data centers by delivering improved performance, energy efficiency, and server utilization. The chip functions as a general-purpose processor, facilitating various applications from hyperscale data centers to high-performance computing and private clouds. It boasts a seamless integration model, allowing existing software packages to run flawlessly on its uniquely designed instruction set architecture. By providing up to 18.5x increased performance and enhanced performance per watt, Prodigy stands out in the industry, tackling common issues like high power consumption and limited processor performance that currently hamper data centers. It comprises a coherent multiprocessor architecture that supports a wide range of AI and computing workloads, ultimately transforming data centers into universal computing hubs. The design not only aims to lower the total cost of ownership but also contributes to reducing carbon emissions through decreased energy requirements. Prodigy's architecture supports a diverse range of SKUs tailored to specific markets, making it adaptable to various applications. Its flexibility and superior performance capabilities position it as a significant player in advancing sustainable, energy-efficient computational solutions worldwide. The processor's ability to handle complex AI tasks with minimal energy use underlines Tachyum's commitment to pioneering green technology in the semiconductor industry.
The RFicient chip is designed to revolutionize the Internet of Things with its ultra-low power consumption. It enables devices to operate more sustainably by drastically reducing energy requirements. This is particularly important for devices in remote locations, where battery life is a critical concern. By leveraging energy harvesting and efficient power management, the RFicient chip significantly extends the operational life of IoT devices, making it ideal for widespread applications across industrial sectors.
The NX Class RISC-V CPU IP by Nuclei is characterized by its 64-bit architecture, making it a robust choice for storage, AR/VR, and AI applications. This processing unit is designed to accommodate high data throughput and demanding computational tasks. By leveraging advanced capabilities, such as virtual memory and enhanced processing power, the NX Class facilitates cutting-edge technological applications and is adaptable for integration into a vast array of high-performance systems.
Engineered for a dynamic performance footprint, the SCR4 microcontroller core offers a significant advantage with its 5-stage in-order pipeline and specialized floating-point unit (FPU). This characteristic makes it ideal for applications demanding precise computational accuracy and speed, such as control systems, network devices, and automotive technologies. Leveraging 32/64-bit capability, the SCR4 core supports symmetric multiprocessing (SMP) with the added benefit of privilege modes and a comprehensive memory architecture, which includes both L1 and L2 caches. These features make it particularly attractive for developers seeking a core that enables high data throughput while maintaining a focus on power efficiency and area optimization. Syntacore has positioned the SCR4 as a go-to core for projects requiring both power and precision, supported by a development environment that is both intuitive and comprehensive. Its applicability across various industrial sectors underscores its versatility and the robustness of the RISC-V architecture that underpins it.
The SiFive Performance family is dedicated to offering high-throughput, low-power processor solutions, suitable for a wide array of applications from data centers to consumer devices. This family includes a range of 64-bit, out-of-order cores configured with options for vector computations, making it ideal for tasks that demand significant processing power alongside efficiency. Performance cores provide unmatched energy efficiency while accommodating a breadth of workload requirements. Their architecture supports up to six-wide out-of-order processing with tailored options that include multiple vector engines. These cores are designed for flexibility, enabling various implementations in consumer electronics, network storage solutions, and complex multimedia processing. The SiFive Performance family facilitates a mix of high performance and low power usage, allowing users to balance the computational needs with power consumption effectively. It stands as a testament to SiFive's dedication to enabling flexible tech solutions by offering rigorous processing capabilities in compact, scalable packages.
An ideal component for IoT and energy-harvesting applications, Microdul's Ultra-Low-Power Temperature Sensor is designed to offer precise temperature readings with minimal energy consumption. This sensor is a cornerstone for modern industrial and domestic devices that require efficient thermal monitoring. Characterized by its low power usage, the sensor extends battery life in devices such as smart thermostats, environmental monitors, and portable health devices. It employs cutting-edge technology to maintain accuracy and reliability in various operating conditions, thereby enhancing the overall performance of the connected systems. Incorporating this sensor into IoT frameworks enables the robust collection and analysis of temperature data, essential for automation and smart systems. Whether deployed in homes or industrial settings, its efficient power consumption aligns with the demand for sustainable technology solutions. It reflects Microdul's commitment to advancing energy efficiency and is a testament to their leadership in developing components that meet modern technological needs.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
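The fixed-point SIMD extensions mentioned above apply one operation across several data lanes at once, and DSP arithmetic typically saturates rather than wraps on overflow. The following is a minimal Python model of a saturating Q15 vector add, illustrative of the technique only and not the eSi-3264's actual instruction set:

```python
Q15_MAX, Q15_MIN = 32767, -32768

def sat_add_q15(a, b):
    """Saturating 16-bit add: clamp to the Q15 range instead of wrapping."""
    return max(Q15_MIN, min(Q15_MAX, a + b))

def simd_sat_add(va, vb):
    """Vector form: one 'instruction' applied lane-by-lane, as SIMD
    fixed-point hardware would do in a single cycle."""
    return [sat_add_q15(a, b) for a, b in zip(va, vb)]

print(simd_sat_add([30000, -30000, 100], [10000, -10000, 200]))
# [32767, -32768, 300]
```

Saturation matters in audio and control paths: a wrapped overflow flips sign and produces a loud artifact or an unstable loop, whereas clamping merely limits the signal.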
The Neural Network Accelerator from Gyrus AI is a state-of-the-art processing solution tailored for executing neural networks efficiently, delivering high-performance computing with streamlined power consumption. It operates at 30 TOPS/W, drastically reducing clock cycles by 10-30x compared to traditional processors. This advancement supports various neural network structures, ensuring high operational efficiency while minimizing energy demands.

The architecture of the Neural Network Accelerator is optimized for low memory usage, resulting in significantly lower power needs, which in turn reduces operational costs. Its design focuses on achieving optimal die area usage, ensuring over 80% utilization for different model structures, which supports compact and effective chip designs. This enhances the scalability and flexibility required for varied applications in edge computing.

Accompanied by advanced software tools, this IP supports seamless integration into existing systems, facilitating the straightforward execution of neural networks. The tools offer robust support, helping run complex models with ease, boosting both performance and resource efficiency. This makes it ideal for companies looking to enhance their AI processing capabilities on edge devices. Its cutting-edge technology enables enterprises to maintain competitive advantages in AI-driven markets.
The Cottonpicken DSP Engine by Section5 is a proprietary digital signal processing core designed for robust performance in image processing applications. It offers functionalities like Bayer pattern decoding, YUV conversion, and matrix operations with support for programmable delays. The DSP core is capable of delivering high-speed processing with an operational clock rate of up to 150 MHz, dependent on the platform. This engine is equipped with advanced filter kernels and supports various color formats such as YUV 4:2:2 and RGB, making it ideal for handling complex visual data. Its architecture allows for the full utilization of data clock speeds, providing efficient data processing capabilities for demanding DSP applications. As a closed-source netlist object, the Cottonpicken Engine is part of Section5's development package, ensuring high quality and performance reliability. It offers a comprehensive solution for applications requiring precise and efficient image data decoding, making it a valuable addition to any image processing toolkit.
The xcore-200 series by XMOS is engineered for high-performance tasks requiring precise timing and robust audio processing capabilities. It supports a wide array of use cases with its parallel processing power, enabling extensive audio management and control in multichannel systems. This capability makes it suitable for both professional audio products and consumer-grade technology seeking reliable, low-latency performance. Ideal for developers focused on creating responsive and interactive technologies, xcore-200 provides a scalable platform to facilitate advanced audio experiences with exceptional ease.
Tensix Neo marks a revolutionary step in AI acceleration, designed to meet today's high-demand computational tasks. This IP harnesses Tenstorrent's advanced Tensix cores, optimized to accelerate a diverse array of AI networks and applications. It is crafted to deliver high performance-per-watt, making it a leading choice for power-conscious enterprises and developers. The Tensix Neo's design focuses on facilitating specialized AI tasks, empowering developers to push the boundaries of their AI applications with ease and efficiency. Its adaptability is anchored by a cutting-edge network-on-chip (NoC) that supports extensive model connectivity and scaling possibilities. This ensures that solutions built with Tensix Neo can evolve seamlessly alongside emerging AI models and industry trends. A notable feature of Tensix Neo is its support for a wide range of precision formats, enabling versatile deployment options. This flexibility is crucial for developers aiming to fine-tune their applications for optimal performance, whether in cloud environments or on edge devices. By offering comprehensive support for diverse AI workloads, Tensix Neo excels in demanding sectors such as data centers and media processing. Complemented by an open-source environment, Tensix Neo allows for unrestricted innovation and development. This encourages dynamic growth in the developer community and supports collaborative efforts to continually refine AI solutions. Overall, Tensix Neo represents a fusion of cutting-edge technology and community-driven enhancement, making it a cornerstone for next-generation AI processing solutions.
TimeServo is a sophisticated System Timer IP Core for FPGAs, providing the high-resolution timing essential for line-rate-independent packet timestamping. Its architecture operates without host-processor interaction, leveraging a flexible PI-DPLL disciplined by an external 1 PPS signal to ensure time precision and stability across applications. Besides functioning as a standalone timing solution within an FPGA, TimeServo offers multi-output capabilities with up to 32 independent time domains. Each time output can be individually configured and supports multiple timing formats, including Binary 48.32 and IEEE standards, offering great flexibility for timing-sensitive applications. TimeServo combines software control via an AXI interface with an internal, logic-heavy phase accumulator and Digital Phase-Locked Loop, achieving impressive jitter performance. Consequently, TimeServo serves as an unparalleled solution for network operators and developers requiring precise timing and synchronization in their systems.
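The Binary 48.32 format mentioned above is a fixed-point timestamp: 48 integer bits of seconds above 32 fractional bits. A minimal sketch of how a phase accumulator advances such a timestamp each clock tick, assuming a nominal clock and ignoring the DPLL's dithering corrections:

```python
FRAC_BITS = 32  # Binary 48.32: 48 integer (seconds) bits, 32 fractional bits

def ticks_to_increment(clk_hz):
    """Per-clock increment for a 48.32 time accumulator.

    Each tick advances time by 1/clk_hz seconds, expressed in 2**-32 s
    units. A real DPLL would dither/adjust this value against the 1 PPS
    reference; here we simply truncate.
    """
    return (1 << FRAC_BITS) // clk_hz

def advance(time_48_32, increment, n_ticks):
    """Advance the accumulator by n_ticks, wrapping at the 80-bit boundary."""
    return (time_48_32 + increment * n_ticks) & ((1 << 80) - 1)
```

For a hypothetical 100 MHz clock the increment is 2**32 / 1e8 ≈ 42.95, which does not divide evenly; the fractional remainder is exactly the error the PI-DPLL servo exists to absorb.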
ARC Processor IP from Synopsys is engineered to deliver high performance and superior energy efficiency for embedded applications. It comprises a customizable architecture, allowing developers to tailor it for specific application needs while maintaining low power consumption. Ideal for IoT, automotive, and high-performance computing applications, this processor IP emphasizes scalability and flexibility, enabling the creation of sophisticated system designs tailored to unique industry requirements.
Secure Protocol Engines by Secure-IC focus on enhancing security and network processing efficiency for System-on-Chip (SoC) designs. These high-performance IP blocks are engineered to handle intensive security tasks, offloading critical processes from the main CPU to improve overall system efficiency. Designed for seamless integration, these modules cater to various applications requiring stringent security standards. By leveraging cryptographic acceleration, Secure Protocol Engines facilitate rapid processing of secure communications, allowing SoCs to maintain fast response times even under high-demand conditions. The engines provide robust support for a broad range of security protocols and cryptographic functions, ensuring data integrity and confidentiality across communication channels. This ensures that devices remain secure from unauthorized access and data breaches, particularly in environments prone to cyber threats. Secure Protocol Engines are integral to designing resilient systems that need to process large volumes of secure transactions, such as in financial systems or highly regulated industrial applications. Their architecture allows for scalability and adaptability, making them suitable for both existing systems and new developments in the security technology domain.
The TSP1 Neural Network Accelerator from ABR is an advanced AI chip designed to cater to the demands of real-time processing with reduced power consumption. Harnessing state-of-the-art technologies like the Legendre Memory Unit, this chip excels in time-series data handling, making it ideal for applications that require energy efficiency without compromising on performance. Its architecture supports sophisticated signal recognition and natural language processing, facilitating its use in diverse environments. Particularly suited for battery-powered devices, the TSP1 integrates seamlessly with biosensors and voice interfaces, offering versatile application in areas such as AR/VR, smart homes, and healthcare devices. With self-sufficient processing capabilities, the chip is equipped to manage multiple sensor signals and supports interfaces such as SPI and I2C for enhanced connectivity. Designed with efficiency at its core, the TSP1 boasts features like an integrated DC-DC power supply and a compact package option, ensuring it meets the rigorous demands of edge computing. With low latency and high data efficiency, this chip sets a new standard for AI-driven innovation in technology.
The iniDSP core is a powerful 16-bit digital signal processor designed to handle signal processing tasks efficiently. It is particularly suited to applications demanding intensive arithmetic, such as audio processing, telecommunications, and radar systems. Offering flexible design integration and high-performance capabilities, iniDSP manages complex calculations with impressive efficiency. The processor's architecture supports and simplifies the implementation of complex signal algorithms, enabling seamless integration into a range of electronic systems. With proven applications across various sectors, iniDSP provides a robust solution for engineers aiming to optimize digital signal processing.
TimbreAI T3 addresses audio processing needs by embedding AI in sound-based applications, making it particularly suitable for power-constrained devices like wireless headsets. It is engineered for exceptional power efficiency, requiring less than 300 µW to operate while delivering 3.2 GOPS. This AI inference engine simplifies deployment by requiring no changes to existing trained models, preserving accuracy and efficiency. The TimbreAI T3's architecture handles noise reduction seamlessly, offering core audio neural network support. This capability is complemented by a flexible software stack, reinforcing its strength as a low-power, high-functionality solution for state-of-the-art audio applications.
Engineered for top-tier AI applications, the Origin E8 delivers high-caliber neural processing for industries spanning automotive solutions to complex data center implementations. The E8's design supports single-core performance of up to 128 TOPS, while its adaptive architecture allows easy multi-core scaling beyond a PetaOp. This architecture eliminates the common performance bottlenecks associated with tiling, delivering robust throughput without unnecessary power or area compromises. With an impressive suite of features, the E8 provides remarkable computational capacity, ensuring that even the most intricate AI networks run smoothly. This high-performance capability, combined with relatively low power usage, positions the E8 as a leader in AI processing technologies where high efficiency and reliability are imperative.
The Blazar Bandwidth Accelerator Engine is a cutting-edge component designed to accelerate high-capacity, low-latency applications. This innovative engine focuses on in-memory compute capabilities, enhancing system efficiency by processing data directly within the memory itself, rather than relying solely on external computational processes. The Blazar engine is crafted to deliver exceptional performance, boasting a throughput of up to 640 Gbps and the capability to execute up to 5 billion reads per second. With options for integrating up to 32 RISC cores, the engine offers additional computational power, providing significant versatility and adapting to complex system requirements. This makes it an ideal choice for computational-heavy applications such as SmartNICs and SmartSwitches, where quick data access and manipulation are crucial. Furthermore, the engine's design supports dual-port memory, enabling seamless access and operation across multiple data streams. Applications that benefit from this technology include metering, statistics, and 5G network operations needing responsive data handling and processing. It is a potent tool for enhancing system operations within demanding environments where bandwidth and latency are critical factors.
The Origin E2 NPU cores offer a balanced solution for AI inference by optimizing for both power and area without compromising performance. These cores are expertly crafted to save system power in devices such as smartphones and edge nodes. Their design supports a wide variety of networks, including RNNs and CNNs, catering to the dynamic demands of consumer and industrial applications. With customizable performance ranging from 1 to 20 TOPS, they are adept at handling various AI-driven tasks while reducing latency. The E2 architecture is ingeniously configured to enable parallel processing, affording high resource utilization that minimizes memory demands and system overhead. This results in a flexible NPU architecture that serves as a reliable backbone for deploying efficient AI models across different platforms.
Catalyst-GPU represents a cost-effective and powerful graphics solution for the PXIe/CPCIe platform. Equipped with NVIDIA Quadro T600 and T1000 GPUs, this module excels in providing enhanced graphics and computing acceleration required by modern signal processing and AI applications. One of the standout features of Catalyst-GPU is its ease of programming and high compute capabilities. It meets the requirements of both Modular Test and Measurement (T&M) and Electronic Warfare (EW) sectors, offering significant performance improvements at reduced operational costs. Built as a part of the Catalyst family, this module allows access to advanced graphics capabilities of NVIDIA technology, paving the way for efficient data processing and accelerated computational tasks. The Catalyst-GPU sets itself apart as a robust choice for users needing reliable high-performance graphics within a modular system framework.
The v-MP6000UDX Visual Processing Unit is a powerhouse of the Videantis portfolio, offering extensive capabilities for handling deep learning, computer vision, and video coding across a singular architecture. This unit brings prowess in processing tasks that require real-time performance and energy efficiency, making it pivotal for next-generation intelligent devices. Designed to support multiple computational requirements, the v-MP6000UDX processes deep learning models efficiently, acting as a unified platform that negates the need for disparate hardware accelerators. This processor's architecture is optimized for running complete neural networks swiftly and at low power, facilitating applications that demand rapid computing power with minimal energy constraints. Boasting a sophisticated memory hierarchy and high-bandwidth interfaces, the processor ensures efficient data handling and processing. Its enhanced memory architecture paired with a network-on-chip design fosters an environment where high-performance computations are achieved seamlessly. This makes the v-MP6000UDX suitable for deployment in complex systems such as autonomous vehicles, mobile technology, and industrial automation, where proficient data processing and precision are critical. Incorporating the latest design principles, the v-MP6000UDX unit integrates seamlessly into devices that require extensive video processing capabilities, benefiting from a vast library of codecs and support for emerging standards in video compression. This processing unit is indispensable for businesses aiming to enhance their product offerings with cutting-edge technology.
Specialty Microcontrollers from Advanced Silicon harness the capabilities of the latest RISC-V architectures for advanced processing tasks. These microcontrollers are particularly suited to applications involving image processing, thanks to built-in coprocessing units that enhance their algorithm execution efficiency. They serve as ideal platforms for sophisticated touch screen interfaces, offering a balance of high performance, reliability, and low power consumption. The integrated features allow for the enhancement of complex user interfaces and their interaction with other system components to improve overall system functionality and user experience.
Dillon Engineering's Floating Point Library is designed to offer IEEE 754 compliant floating point arithmetic capabilities for various applications. Available as pre-designed modules, these cores enable efficient execution of complex mathematical operations, providing critical support for scientific computations and digital signal processing where precision is key. The library offers single, double, and custom precision options, catering to diverse computational needs. The inclusion of pipelined arithmetic ensures that operations such as addition, subtraction, multiplication, and division are performed swiftly and with accuracy. This enhances the library's utility across applications that rely heavily on precise numerical computation. The integration ease and adaptability of the Floating Point Library make it an indispensable resource for projects that require high computational accuracy. Its capacity to handle extensive floating-point operations effectively aids in maintaining performance standards across various processor architectures, ensuring that it remains a vital tool for computation-intensive tasks.
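The practical difference between the single- and double-precision options the library offers can be illustrated in plain Python, which uses IEEE 754 doubles natively; single precision is emulated here via `struct`. This is a generic demonstration of IEEE 754 rounding, not Dillon Engineering's code:

```python
import struct

def to_single(x):
    """Round a Python float (IEEE 754 double) to single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# 0.1 is not exactly representable in binary floating point; single
# precision keeps ~7 decimal digits of accuracy, double ~15-16.
print(f"single: {to_single(0.1):.17f}")
print(f"double: {0.1:.17f}")
```

Values that are exact in binary, such as 0.5, survive the round-trip unchanged; values like 0.1 pick up an error around 1.5e-9 in single precision. Choosing the narrower width trades that precision for roughly half the datapath area, which is exactly the trade the library's custom-precision option exposes.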
The Origin E6 provides a formidable edge for AI processing demands in mobile and AR/VR applications, offering performance between 16 and 32 TOPS. Tailored to accommodate the latest AI models, the E6 benefits from Expedera's distinct packet-based architecture. This design simplifies parallel processing, enhancing efficiency while reducing power and resource consumption. As an NPU, it supports an extensive array of video, audio, and text-based networks, delivering consistent performance even under complex workloads. The E6's high utilization rates minimize waste and amplify throughput, and its scalable, adaptable architecture makes it an optimal choice for forward-looking devices that require potent AI processing.
IMG DXS GPU is engineered to meet the needs of automotive and industrial applications where functional safety is paramount. Built on efficient PowerVR architecture, it ensures high-performance graphics rendering with a focus on reduced power consumption. The DXS technology supports comprehensive safety suites, catering to ADAS and digital cockpit applications, thereby addressing stringent automotive safety standards.
Designed specifically for the nuanced requirements of AI on-chip applications, the Calibrator for AI-on-Chips fine-tunes AI models to enhance their performance on specific hardware. This calibration tool adjusts the models for optimal execution, ensuring that chip resources are maximized for efficiency and speed. The Calibrator addresses challenges related to power usage and latency, providing tailored adjustments that fine-tune these parameters according to the hardware's unique characteristics. This approach ensures that AI models can be reliably deployed in environments with limited resources or specific operational constraints. Furthermore, the tool offers automated calibration processes to streamline customization tasks, reducing time-to-market and ensuring that AI models maintain high levels of accuracy and capability even as they undergo optimization for different chip architectures. Skymizer's Calibrator for AI-on-Chips is an essential component for developers and engineers looking to deploy AI solutions that require fine-grained control over model performance and resource management, thus securing the best possible outcomes from AI deployments.
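One common form of the calibration such a tool automates is deriving quantization ranges from sample activations. The sketch below shows simple min-max INT8 calibration, a generic post-training quantization technique; Skymizer's actual algorithm is not public, and real calibrators use richer statistics (histograms, percentiles, KL divergence):

```python
def calibrate_minmax(samples):
    """Derive a symmetric INT8 quantization scale from sample activations.

    Generic min-max calibration: choose the scale so the observed range
    maps onto [-127, 127]. Illustrative only, not Skymizer's algorithm.
    """
    peak = max(abs(min(samples)), abs(max(samples)))
    return peak / 127.0 if peak else 1.0

def quantize(x, scale):
    """Quantize one value to INT8 with the calibrated scale, clamping."""
    q = round(x / scale)
    return max(-127, min(127, q))
```

Feeding representative data through this step fixes the scale per tensor, after which inference runs entirely in integer arithmetic; picking the range poorly is precisely what costs accuracy, which is why calibration is automated rather than hand-tuned.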
Hypr_risc is a radar DSP accelerator enhanced by a RISC-V-based core, delivering high-efficiency computational capabilities for radar applications. Designed to cater to high-speed ADAS processing, it supports multi-radar environments and ensures optimal DSP performance. Its configurability allows it to adapt to various application parameters, balancing power, size, and computational demands across different processor architectures. Ideal for use in sophisticated automotive radar systems, it offers robust processing capabilities for advanced driver assistance.
The MIPS Atlas Series constitutes a comprehensive suite of RISC-V IP cores that have been painstakingly developed to address the demanding and diverse needs of Physical AI applications. This portfolio is tailored to facilitate real-time precision in autonomous systems. Leveraging a closed-loop structure of Sense, Think, Act, and Communicate, it aids in executing intelligent control across varied robotics and AI applications. Focused on robust computing innovations, the series significantly advances capabilities for automotive, industrial, and communication sectors by integrating a high-performance, multi-threaded architecture. This maximizes operations in environments with real-time processing demands, satisfying stringent safety and efficiency benchmarks. Engineered to benefit platforms requiring distinct, event-driven computational functionality, the Atlas Series, with its high-performance instruction sets, supports evolving autonomous requirements. It's integrated with the Atlas Explorer for further enhancement, enabling pre-silicon testing and execution, reinforcing the support for cutting-edge R&D endeavors in AI.
Join the world's most advanced semiconductor IP marketplace!
It's free, and you'll get all the tools you need to discover IP, meet vendors and manage your IP workflow!
No credit card or payment details required.