All IPs > Processor > DSP Core
In the realm of semiconductor IP, DSP Cores play a pivotal role in enabling efficient digital signal processing capabilities across a wide range of applications. Short for Digital Signal Processor Cores, these semiconductor IPs are engineered to handle complex mathematical calculations swiftly and accurately, making them ideal for integration into devices requiring intensive signal processing tasks.
DSP Core semiconductor IPs are widely implemented in industries like telecommunications, where they are crucial for modulating and encoding signals in mobile phones and other communication devices. They empower these devices to perform multiple operations simultaneously, including compressing audio, optimizing bandwidth usage, and enhancing data packets for better transmission quality. Additionally, in consumer electronics, DSP Cores are fundamental in audio and video equipment, improving the clarity and quality of sound and visuals users experience.
Moreover, DSP Cores are a linchpin in the design of advanced automotive systems and industrial equipment. In automotive applications, they assist in radar and lidar systems, crucial for autonomous driving features by processing the data needed for real-time environmental assessment. In industrial settings, DSP Cores amplify the performance of control systems by providing precise feedback loops and enhancing overall process automation and efficiency.
Silicon Hub's category for DSP Core semiconductor IPs includes a comprehensive collection of advanced designs tailored to various processing needs. These IPs are designed to integrate seamlessly into a multitude of hardware architectures, offering designers and engineers the flexibility and performance necessary to push the boundaries of technology in their respective fields. Whether for enhancing consumer experiences or driving innovation in industrial and automotive sectors, our DSP Core IPs bring unparalleled processing power to the forefront of digital innovations.
Alphawave Semi's 1G to 224G SerDes stands as a cornerstone in high-speed connectivity applications. This versatile SerDes solution supports a broad data rate range and multiple signaling schemes, such as PAM2, PAM4, PAM6, and PAM8, which adapt seamlessly to a variety of industry protocols and standards. Designed with the future of connectivity in mind, this intellectual property is critical for systems requiring robust and reliable data transmission across numerous networking environments. Notably, the 1G to 224G SerDes is engineered to deliver unparalleled performance, offering low latency and minimal power consumption. Its application is widespread in data center infrastructures, telecommunications, automotive systems, and beyond, providing the backbone for next-generation data processing and transmission needs. By integrating this SerDes, users can expect to enhance communication speed and efficiency, vital for maintaining competitive advantage in a rapidly evolving market. The ability to adapt to cutting-edge technologies, like AI and 5G, further underscores its versatility. This SerDes IP enables seamless integration of digital processing units with minimal interference, thus fostering robust system interconnections essential for high-performance computing environments.
The Ventana Veyron V2 CPU represents a substantial upgrade in processing power, setting a new standard in AI and data center performance with its RISC-V architecture. Created for applications that demand intensive computing resources, the Veyron V2 excels in providing high throughput and superior scalability. It is aimed at cloud-native operations and intensive data processing tasks requiring robust, reliable compute power. This CPU is finely tuned for modern, virtualized environments, delivering a server-class performance tailored to manage cloud-native workloads efficiently. The Veyron V2 supports a range of integration options, making it dependably adaptable for custom silicon platforms and high-performance system infrastructures. Its design incorporates an IOMMU compliant with RISC-V standards, enabling seamless interoperability with third-party IPs and modules. Ventana's innovation is evident in the Veyron V2's capacity for heterogeneous computing configurations, allowing diverse workloads to be managed effectively. Its architecture features advanced cluster and cache infrastructures, ensuring optimal performance across large-scale deployment scenarios. With a commitment to open standards and cutting-edge technologies, the Veyron V2 is a critical asset for organizations pursuing the next level in computing performance and efficiency.
The Chimera GPNPU from Quadric is engineered to meet the diverse needs of modern AI applications, bridging the gap between traditional processing and advanced AI model requirements. It's a fully licensable processor, designed to deliver high AI inference performance while eliminating the complexity of traditional multi-core systems. The GPNPU boasts an exceptional ability to execute various AI models, including classical backbones, state-of-the-art transformers, and large language models, all within a single execution pipeline.

One of the core strengths of the Chimera GPNPU is its unified architecture that integrates matrix, vector, and scalar processing capabilities. This singular design approach allows developers to manage complex tasks such as AI inference and data-parallel processing without resorting to multiple tools or artificial partitioning between processors. Users can expect heightened productivity thanks to its modeless operation, which is fully programmable and efficiently executes C++ code alongside AI graph code.

In terms of versatility and application potential, the Chimera GPNPU is adaptable across different market segments. It's available in various configurations to suit specific performance needs, from single-core designs to multi-core clusters capable of delivering up to 864 TOPS. This scalability, combined with future-proof programmability, ensures that the Chimera GPNPU not only addresses current AI challenges but also accommodates the ever-evolving landscape of cognitive computing requirements.
xcore.ai is a versatile and powerful processing platform designed for AIoT applications, delivering a balance of high performance and low power consumption. Crafted to bring AI processing capabilities to the edge, it integrates embedded AI, DSP, and advanced I/O functionalities, enabling quick and effective solutions for a variety of use cases. What sets xcore.ai apart is its cycle-accurate programmability and low-latency control, which improve the responsiveness and precision of the applications in which it is deployed. Tailored for smart environments, xcore.ai ensures robust and flexible computing power, suitable for consumer, industrial, and automotive markets. xcore.ai supports a wide range of functionalities, including voice and audio processing, making it ideal for developing smart interfaces such as voice-controlled devices. It also provides a framework for implementing complex algorithms and third-party applications, positioning it as a scalable solution for the growing demands of the connected world.
Leveraging a high-performance RISC architecture, the eSi-3250 32-bit core efficiently integrates instruction and data caches. This makes it compatible with designs utilizing slower on-chip memories such as eFlash. The core not only supports MMU for address translation but also allows for user-defined custom instructions, greatly enhancing its flexibility for specialized and high-performance applications.
The Jotunn8 AI Accelerator represents a pioneering approach in AI inference chip technology, designed to cater to the demanding needs of contemporary data centers. Its architecture is optimized for high-speed deployment of AI models, combining rapid data processing capabilities with cost-effectiveness and energy efficiency. By integrating features such as ultra-low latency and substantial throughput capacity, it supports real-time applications like chatbots and fraud detection that require immediate data processing and agile responses. The chip's impressive performance per watt metric ensures a lower operational cost, making it a viable option for scalable AI operations that demand both efficiency and sustainability. By reducing power consumption, Jotunn8 not only minimizes expenditure but also contributes to a reduced carbon footprint, aligning with the global move towards greener technology solutions. These attributes make Jotunn8 highly suitable for applications where energy considerations and environmental impact are paramount. Additionally, Jotunn8 offers flexibility in memory performance, allowing for the integration of complexity in AI models without compromising on speed or efficiency. The design emphasizes robustness in handling large-scale AI services, catering to the new challenges posed by expanding data needs and varied application environments. Jotunn8 is not simply about enhancing inference speed; it proposes a new baseline for scalable AI operations, making it a foundational element for future-proof AI infrastructure.
The eSi-3200, a 32-bit cacheless core, is tailored for embedded control with its expansive and configurable instruction set. Its capabilities, such as 64-bit multiply-accumulate operations and fixed-point complex multiplications, cater effectively to signal processing tasks like FFTs and FIRs. Additionally, it supports SIMD and single-precision floating point operations, coupled with efficient power management features, enhancing its utility for diverse embedded applications.
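The multiply-accumulate pattern behind tasks like FIR filtering can be sketched as a Q15 fixed-point filter whose products are summed in a wide accumulator before rescaling, mirroring the role of a 64-bit multiply-accumulate unit. This is a purely illustrative Python model, not code for the eSi-3200's instruction set; the filter coefficients and signal are invented examples.

```python
# Illustrative Q15 fixed-point FIR filter with a wide accumulator,
# sketching the multiply-accumulate pattern signal-processing cores
# target. Coefficients and signal values here are arbitrary examples.

def q15(x: float) -> int:
    """Convert a float in [-1, 1) to Q15 fixed point."""
    return int(round(x * (1 << 15)))

def fir_q15(samples, coeffs):
    """Direct-form FIR: each output is a sum of Q30 products kept in a
    wide (64-bit-style) accumulator before scaling back to Q15."""
    out = []
    n = len(coeffs)
    for i in range(len(samples)):
        acc = 0  # wide accumulator: Q30 products summed without overflow
        for j in range(n):
            if i - j >= 0:
                acc += samples[i - j] * coeffs[j]  # Q15 * Q15 -> Q30
        out.append(acc >> 15)  # rescale Q30 -> Q15
    return out

# 3-tap moving-average filter, coefficients 1/3 each
coeffs = [q15(1/3)] * 3
signal = [q15(0.5)] * 6
print(fir_q15(signal, coeffs)[-1])  # settles near q15(0.5) = 16384
```

Keeping the intermediate sum in a wider register than the samples is what lets a cacheless embedded core run long filters without overflow or per-tap rounding loss.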
The Spiking Neural Processor T1 is a neuromorphic microcontroller engineered for always-on sensor applications. It utilizes a spiking neural network engine alongside a RISC-V processor core, creating an ultra-efficient single-chip solution for real-time data processing. With its optimized power consumption, it enables next-generation artificial intelligence and signal processing in small, battery-operated devices. The T1 delivers advanced application capabilities within a minimal power envelope, making it suitable for use in devices where power and latency are critical factors. The T1 includes a compact, multi-core RISC-V CPU paired with substantial on-chip SRAM, enabling fast and responsive processing of sensor data. By employing the pattern-recognition strengths of spiking neural networks, it achieves superior power efficiency on signal-processing tasks. The versatile processor can execute both SNNs and conventional processing tasks, supported by various standard interfaces, thus offering maximum flexibility to developers looking to implement AI features across different devices. Developers can quickly prototype and deploy solutions using the T1's development kit, which includes software for easy integration into existing systems and tools for accurate performance profiling. The development kit supports a variety of sensor interfaces, streamlining the creation of sophisticated sensor applications without the need for extensive power or size trade-offs.
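The basic unit of the spiking networks such a part executes is the leaky integrate-and-fire (LIF) neuron: input accumulates on a leaky membrane potential, and the neuron emits a spike only when a threshold is crossed, which is why spiking hardware can stay quiet (and low-power) on uneventful sensor data. The following sketch uses invented constants, not T1 parameters.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a
# spiking neural network. All constants here are illustrative and are
# not taken from the T1 datasheet.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate input each step, leak the membrane potential,
    and emit a spike (1) whenever the threshold is crossed."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = v * leak + x          # leaky integration
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires,
# then the cycle repeats: sparse spikes encode the input level.
print(lif_run([0.4] * 8))  # [0, 0, 1, 0, 0, 1, 0, 0]
```

The sparsity is the point: computation (and energy) is spent only on the steps that produce spikes, which suits always-on battery-powered sensing.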
The Digital Radio (GDR) from GIRD Systems is an advanced software-defined radio (SDR) platform that offers extensive flexibility and adaptability. It is characterized by its multi-channel capabilities and high-speed signal processing resources, allowing it to meet a diverse range of system requirements. Built on a core single board module, this radio can be configured for both embedded and standalone operations, supporting a wide frequency range. The GDR can operate with either one or two independent transceivers, with options for full or half duplex configurations. It supports single channel setups as well as multiple-input multiple-output (MIMO) configurations, providing significant adaptability in communication scenarios. This flexibility makes it an ideal choice for systems that require rapid reconfiguration or scalability. Known for its robust construction, the GDR is designed to address challenging signal processing needs in congested environments, making it suitable for a variety of applications. Whether used in defense, communications, or electronic warfare, the GDR's ability to seamlessly switch configurations ensures it meets the evolving demands of modern communications technology.
The **Ceva-SensPro DSP family** unites scalar processing units and vector processing units under an 8-way VLIW architecture. The family incorporates advanced control features such as a branch target buffer and a loop buffer to speed up execution and reduce power. There are six family members, each with a different array of MACs, targeted at different application areas and performance points. These range from the Ceva-SP100, providing 128 8-bit integer or 32 16-bit integer MACs at 0.2 TOPS performance for compact applications such as vision processing in wearables and mobile devices; to the Ceva-SP1000, with 1024 8-bit or 256 16-bit MACs reaching 2 TOPS for demanding applications such as automotive, robotics, and surveillance. Two of the family members, the Ceva-SPF2 and Ceva-SPF4, employ 32 or 64 32-bit floating-point MACs, respectively, for applications in electric-vehicle power-train control and battery management. These two members are supported by libraries for Eigen Linear Algebra, MATLAB vector operations, and the TVM graph compiler. Highly configurable, the vector processing units in all family members can add domain-specific instructions for such areas as vision processing, radar, or simultaneous localization and mapping (SLAM) for robotics. Integer family members can also add optional floating-point capabilities. All family members have independent instruction and data memory subsystems and a Ceva-Connect queue manager for AXI-attached accelerators or coprocessors. The Ceva-SensPro2 family is programmable in C/C++ as well as in Halide and OpenMP, and supported by an Eclipse-based development environment, extensive libraries spanning a wide range of applications, and the Ceva-NeuPro Studio AI development environment. [**Learn more about Ceva-SensPro2 solution>**](https://www.ceva-ip.com/product/ceva-senspro2/?utm_source=silicon_hub&utm_medium=ip_listing&utm_campaign=ceva_senspro2_page)
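The quoted TOPS figures can be sanity-checked from the MAC counts, since each MAC counts as two operations (a multiply and an add) per cycle. The roughly 1 GHz clock below is an assumption made for illustration, not a Ceva specification:

```python
# Back-of-envelope check of the listed TOPS figures:
# TOPS ~= MACs x 2 ops x clock. The 1 GHz clock is an assumed
# value for illustration only.

def tops(mac_count: int, clock_hz: float) -> float:
    return mac_count * 2 * clock_hz / 1e12

CLOCK = 1e9  # assumed 1 GHz
for name, macs in [("Ceva-SP100", 128), ("Ceva-SP1000", 1024)]:
    print(f"{name}: ~{tops(macs, CLOCK):.2f} TOPS")
# Ceva-SP100 lands near the quoted 0.2 TOPS,
# Ceva-SP1000 near the quoted 2 TOPS.
```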
The SCR3 is an efficient microcontroller core offered by Syntacore, designed to meet the needs of both industrial and consumer applications. It introduces a 5-stage in-order pipeline and includes privilege modes along with an MPU, as well as L1 and L2 cache support for improved performance. This core balances processing efficiency with power consumption, making it suitable for use cases that require reliable performance within space and energy constraints. The SCR3's versatility extends across various sectors, including IoT, automotive, and networking.
The Tyr AI Processor Family is engineered to bring unprecedented processing capabilities to Edge AI applications, where real-time, localized data processing is crucial. Unlike traditional cloud-based AI solutions, Edge AI facilitated by Tyr operates directly at the site of data generation, thereby minimizing latency and reducing the need for extensive data transfers to central data centers. This processor family stands out in its ability to empower devices to deliver instant insights, which is critical in time-sensitive operations like autonomous driving or industrial automation. The innovative design of the Tyr family ensures enhanced privacy and compliance, as data processing stays on the device, mitigating the risks associated with data exposure. By doing so, it supports stringent requirements for privacy while also reducing bandwidth utilization. This makes it particularly advantageous in settings like healthcare or environments with limited connectivity, where maintaining data integrity and efficiency is crucial. Designed for flexibility and sustainability, the Tyr AI processors are adept at balancing computing power with energy consumption, thus enabling the integration of multi-modal inputs and outputs efficiently. Their performance nears data center levels, yet they are built to consume significantly less energy, making them a cost-effective solution for implementing AI capabilities across various edge computing environments.
The Codasip RISC-V BK Core Series represents a family of processor cores that bring advanced customization to the forefront of embedded designs. These cores are optimized for power and performance, striking a fine balance that suits an array of applications, from sensor controllers in IoT devices to sophisticated automotive systems. Their modular design allows developers to tailor instructions and performance levels directly to their needs, providing a flexible platform that enhances both existing and new applications. Featuring high degrees of configurability, the BK Core Series facilitates designers in achieving superior performance and efficiency. By supporting a broad spectrum of operating requirements, including low-power and high-performance scenarios, these cores stand out in the processor IP marketplace. The series is verified through industry-leading practices, ensuring robust and reliable operation in various application environments. Codasip has made it straightforward to use and adapt the BK Core Series, with an emphasis on simplicity and productivity in customizing processor architecture. This ease of use allows for swift validation and deployment, enabling quicker time to market and reducing costs associated with custom hardware design.
The Veyron V1 CPU from Ventana Micro Systems is an industry-leading processor designed to deliver unparalleled performance for data-intensive applications. This RISC-V based CPU is crafted to meet the needs of modern data centers and enterprises, offering a sophisticated balance of power efficiency and computational capabilities. The Veyron V1 is engineered to handle complex workloads with its advanced architecture that competes favorably against current industry standards. Incorporating the latest innovations in chiplet technology, the Veyron V1 boasts exceptional scalability, allowing it to seamlessly integrate into diverse computing environments. Whether employed in a high-performance cloud server or an enterprise data center, this CPU is optimized to provide a consistent, robust performance across various applications. Its architecture supports scalable, modular designs, making it suitable for custom SoC implementations, thereby enabling faster time-to-market for new products. The Veyron V1’s compatibility with RISC-V open standards ensures versatility and adaptability, providing enterprises the freedom to innovate without the constraints of proprietary technologies. It includes support for essential system IP and interfaces, facilitating easy integration across different technology platforms. With a focus on extensible instruction sets, the Veyron V1 allows customized performance optimizations tailored to specific user needs, making it an essential tool in the arsenal of modern computing solutions.
The eSi-3264 stands out with its support for both 32/64-bit operations, including 64-bit fixed and floating-point SIMD (Single Instruction Multiple Data) DSP extensions. Engineered for applications mandating DSP functionality, it does so with minimal silicon footprint. Its comprehensive instruction set includes specialized commands for various tasks, bolstering its practicality across multiple sectors.
The iCan PicoPop® is a miniaturized system on module (SOM) based on the Xilinx Zynq UltraScale+ Multi-Processor System-on-Chip (MPSoC). This advanced module is designed to handle sophisticated signal processing tasks, making it particularly suited for aeronautic embedded systems that require high-performance video processing capabilities. The module leverages the powerful architecture of the Zynq MPSoC, providing a robust platform for developing cutting-edge avionics and defense solutions. With its compact form factor, the iCan PicoPop® SOM offers unparalleled flexibility and performance, allowing it to seamlessly integrate into various system architectures. The high level of integration offered by the Zynq UltraScale+ MPSoC aids in simplifying the design process while reducing system latency and power consumption, providing a highly efficient solution for demanding applications. Additionally, the iCan PicoPop® supports advanced functionalities through its integration of programmable logic, multi-core processing, and high-speed connectivity options, making it ideal for developing next-generation applications in video processing and other complex avionics functions. Its modular design also allows for easy customization, enabling developers to tailor the system to meet specific performance and functionality needs, ensuring optimal adaptability for intricate aerospace environments. Overall, the iCan PicoPop® demonstrates a remarkable blend of high-performance computing capabilities and adaptable configurations, making it a valuable asset in the development of high-tech avionics solutions designed to withstand rigorous operational demands in aviation and defense.
The SiFive Performance family of processors is designed to offer top-tier performance and throughput across a range of sizes and power profiles. These cores provide highly efficient RISC-V scalar and vector computing capabilities, tailored for an optimal balance that delivers industry-leading results. With options for high-performance 64-bit out-of-order scalar engines and optional vector compute engines, the Performance series ensures customers get the maximum capabilities in computational power. Incorporating a robust architecture, these processors support extensive hardware capabilities, including full support for the RVA23 profile and an option for vector processing adjustments that maximizes computing efficiency. The SiFive Performance series has cores that cater to various needs, whether for general-purpose computing or applications requiring extensive parallel processing capabilities. SiFive's architecture allows for scalability and customization, bridging the gap between high-demand computational tasks and power efficiency. It is meticulously designed to meet the rigorous demands of modern and future computing applications, ensuring that both enterprise and consumer electronics can leverage the power of RISC-V computing. This makes it an ideal choice for developers seeking to push the boundaries of processing capabilities.
Tensix Neo is an AI-focused semiconductor solution from Tenstorrent that capitalizes on the robustness of RISC-V architecture. This IP is crafted to enhance the efficiency of both AI training and inference processes, making it a vital tool for entities needing scalable AI solutions without hefty power demands. With Tensix Neo, developers can rest assured of the silicon-proven reliability that backs its architecture, facilitating a smooth integration into existing AI platforms. The IP embraces the flexibility and customization needed for advanced AI workloads, optimizing resources and yielding results with high performance per watt. As the demand for adaptable AI solutions grows, Tensix Neo offers a future-proof platform that can accommodate rapid advancements and complex deployments in machine learning applications. By providing developers with tested and verified infrastructure, Tensix Neo stands as a benchmark in AI IP development.
ISPido represents a fully configurable RTL Image Signal Processing Pipeline, adhering to the AMBA AXI4 standards and tailored through the AXI4-LITE protocol for seamless integration with systems such as RISC-V. This advanced pipeline supports a variety of image processing functions like defective pixel correction, color filter interpolation using the Malvar-Cutler algorithm, and auto-white balance, among others. Designed to handle resolutions up to 7680x7680, ISPido provides compatibility for both 4K and 8K video systems, with support for 8, 10, or 12-bit depth inputs. Each module within this pipeline can be fine-tuned to fit specific requirements, making it a versatile choice for adapting to various imaging needs. The architecture's compatibility with flexible standards ensures robust performance and adaptability in diverse applications, from consumer electronics to professional-grade imaging solutions. Through its compact design, ISPido optimizes area and energy efficiency, providing high-quality image processing while keeping hardware demands low. This makes it suitable for battery-operated devices where power efficiency is crucial, without sacrificing the processing power needed for high-resolution outputs.
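Of the pipeline stages listed, auto-white balance is the easiest to sketch. Gray-world balancing is one common AWB approach; the listing does not specify which algorithm ISPido's AWB stage uses, so the following is purely illustrative of the operation, not of ISPido's implementation.

```python
# Gray-world auto-white balance, one common AWB technique (illustrative
# only; ISPido's actual AWB algorithm is not specified in the listing).
# Each channel is scaled so its mean matches the overall mean,
# neutralizing a uniform color cast.

def gray_world_awb(pixels):
    """pixels: list of (r, g, b) tuples; returns balanced pixels."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    target = sum(means) / 3
    gains = [target / m for m in means]
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]

# A uniform image with a red cast is pulled back to neutral gray.
print(gray_world_awb([(200, 100, 100)] * 4)[0])  # (133, 133, 133)
```

In a hardware pipeline the same idea becomes per-channel gain registers updated from image statistics rather than a per-frame software pass.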
The AON1100 offers a sophisticated AI solution for voice and sensor applications, marked by a remarkable power usage of less than 260μW during processing yet maintaining high levels of accuracy in environments with sub-0dB SNR. It is a leading option for always-on devices, providing effective solutions for contexts requiring constant machine listening ability.

This AI chip excels in processing real-world acoustic and sensor data efficiently, delivering up to 90% accuracy by employing advanced signal processing techniques. The AON1100's low power requirements make it an excellent choice for battery-operated devices, ensuring sustainable functionality through efficient power consumption over extended operational periods.

The scalability of the AON1100 allows it to be adapted for various applications, including smart homes and automotive settings. Its integration within broader AI platform strategies enhances intelligent data collection and contextual understanding capabilities, delivering transformative impacts on device interactivity and user experience.
The Codasip L-Series DSP Core offers specialized features tailored for digital signal processing applications. It is designed to efficiently handle high data throughput and complex algorithms, making it ideal for applications in telecommunications, multimedia processing, and advanced consumer electronics. With its high configurability, the L-Series can be customized to optimize processing power, ensuring that specific application needs are met with precision. One of the key advantages of this core is its ability to be finely tuned to deliver optimal performance for signal processing tasks. This includes configurable instruction sets that align precisely with the unique requirements of DSP applications. The core’s design ensures it can deliver top-tier performance while maintaining energy efficiency, which is critical for devices that operate in power-sensitive environments. The L-Series DSP Core is built on Codasip's proven processor design methodologies, integrating seamlessly into existing systems while providing a platform for developers to expand and innovate. By offering tools for easy customization within defined parameters, Codasip ensures that users can achieve the best possible outcomes for their DSP needs efficiently and swiftly.
ISPido on VIP Board is a customized runtime solution tailored for Lattice Semiconductors’ Video Interface Platform (VIP) board. This setup enables real-time image processing and provides flexibility for both automated configuration and manual control through a menu interface. Users can adjust settings via histogram readings, select gamma tables, and apply convolutional filters to achieve optimal image quality. Equipped with key components like the CrossLink VIP input bridge board and ECP5 VIP Processor with ECP5-85 FPGA, this solution supports dual image sensors to produce a 1920x1080p HDMI output. The platform enables dynamic runtime calibration, providing users with interface options for active parameter adjustments, ensuring that image settings are fine-tuned for various applications. This system is particularly advantageous for developers and engineers looking to integrate sophisticated image processing capabilities into their devices. Its runtime flexibility and comprehensive set of features make it a valuable tool for prototyping and deploying scalable imaging solutions.
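The gamma-table mechanism mentioned above amounts to building a lookup table once from a gamma curve and then applying it per pixel. The sketch below is illustrative; the 2.2 exponent is a common display gamma used as an example, not a value taken from the ISPido runtime.

```python
# Sketch of a selectable gamma table: a 256-entry LUT is built once
# from a gamma curve, then applied per pixel. The 2.2 exponent is a
# common display gamma, used here only as an example.

def build_gamma_lut(gamma: float, size: int = 256):
    return [round(((i / (size - 1)) ** (1.0 / gamma)) * (size - 1))
            for i in range(size)]

lut = build_gamma_lut(2.2)
row = [0, 64, 128, 255]
print([lut[v] for v in row])  # dark values are lifted; 0 and 255 fixed
```

Swapping gamma tables at runtime, as the platform allows, is then just a pointer change rather than a per-pixel recomputation.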
The Universal DSP Library is a versatile and comprehensive solution designed to simplify digital signal processing tasks in FPGA applications. It provides a robust framework for handling complex signal processing requirements, enabling developers to integrate advanced DSP functionalities efficiently into their systems. This library is crafted to offer flexibility and adaptability, supporting a wide range of applications in various industries. This DSP library stands out for its ability to handle diverse signal processing operations with ease. By offering pre-built functions and modules, it reduces the complexity traditionally associated with DSP implementation in FPGA designs. Developers can leverage this library to accelerate their development cycles, ensuring quicker time-to-market for their products. Incorporating the Universal DSP Library into an FPGA design allows for enhanced performance and efficiency, as it optimizes the processing power of FPGAs to manage demanding signal processing tasks. Its design enables seamless integration with existing systems, providing scalable solutions that can adapt to future needs. Overall, this library is an invaluable asset for any project involving digital signal processing on FPGA platforms.
The NoISA Processor is an innovative microprocessor designed by Hotwright Inc. to overcome the limitations of traditional instruction set architectures. Unlike standard processors, which rely on a fixed ALU, register file, and hardware controller, the NoISA Processor utilizes the Hotstate machine, an advanced microcoded algorithmic state machine. This technology allows for runtime reprogramming and flexibility, making it highly suitable for various applications where space, power efficiency, and adaptability are paramount. With the NoISA Processor, users can achieve significant performance improvements without the limitations imposed by fixed instruction sets. It's particularly advantageous in IoT and edge computing scenarios, offering enhanced efficiency compared to conventional softcore CPUs while maintaining lower energy consumption. Moreover, this processor is ideal for rapidly creating small, programmable state machines and systolic arrays. Its unique architecture permits behavior modification through microcode, rather than altering the FPGA, thus offering unprecedented flexibility and power in adapting to specific technological needs.
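The microcode idea can be made concrete with a toy interpreter: behavior lives in a microcode table that steers a small datapath, so swapping the table reprograms the machine without touching the hardware. The micro-word format below is invented for illustration and is not Hotwright's actual format.

```python
# Toy illustration of a microcoded state machine: each micro-word is
# (op, operand, next_state), and a tiny datapath (one accumulator) is
# steered entirely by the table. The format is invented for this
# example and is not Hotwright's microcode format.

def run_microcode(ucode, steps, acc=0):
    state = 0
    for _ in range(steps):
        op, arg, nxt = ucode[state]
        if op == "add":
            acc += arg
        elif op == "shl":
            acc <<= arg
        state = nxt
    return acc

# "Program" A: repeatedly add 3.
add_loop = {0: ("add", 3, 0)}
print(run_microcode(add_loop, 4))         # 12

# Swapping the table changes the machine's behavior; no instruction
# set is involved.
mul_by_8 = {0: ("shl", 3, 0)}
print(run_microcode(mul_by_8, 1, acc=5))  # 40
```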
The Universal Drive Controller is an advanced solution tailored for motion control applications across a range of motor types, including DC, BLDC, and stepper motors. It offers a comprehensive set of features that allows for independent position and velocity control of multiple motors directly from FPGA platforms. This flexibility makes it ideal for various industrial and commercial applications where precise motor control is paramount. With a focus on enhancing efficiency and performance, this controller simplifies the integration of motor control systems by providing a unified framework. It streamlines the management of complex control loops and ensures that each motor operates under optimal conditions. This results in improved operational stability and precision in movement, which is crucial for applications requiring high levels of accuracy. The design of the Universal Drive Controller is optimized for easy integration and configuration, supporting seamless implementation within existing setups. It promises to cut down development times and reduce complexities associated with traditional motor controller solutions. By utilizing FPGA technology, it offers a scalable and future-proof solution that can accommodate emerging requirements in motor control engineering.
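A velocity control loop of the kind such a controller closes can be sketched as a discrete PI regulator driving a simple motor model. The motor model and gains below are illustrative assumptions, not parameters of the Universal Drive Controller.

```python
# Minimal discrete PI velocity loop of the kind a drive controller
# closes per motor. The first-order motor model (inertia plus viscous
# drag) and the gains are illustrative, not product parameters.

def pi_velocity_loop(setpoint, steps=2000, kp=2.0, ki=1.0, dt=0.01):
    vel, integ = 0.0, 0.0
    for _ in range(steps):
        err = setpoint - vel
        integ += err * dt
        u = kp * err + ki * integ     # PI control effort
        vel += dt * (u - 0.1 * vel)   # first-order motor response
    return vel

# The integral term removes steady-state error, so the loop settles
# at the commanded velocity.
print(round(pi_velocity_loop(100.0), 1))
```

In an FPGA implementation the same loop becomes a small fixed-point datapath replicated per axis, which is what makes independent multi-motor control practical.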
Gyrus AI's Neural Network Accelerator is specifically crafted to enhance edge computing with its groundbreaking graph processing capabilities. This innovative solution achieves unparalleled efficiency with a performance of 30 trillion operations per second per Watt (TOPS/W). Such efficiency significantly enhances the speed of machine learning operations, minimizing the clock cycles required for tasks, which translates to a 10-30x reduction in clock-cycle count. Its low-power configuration reduces energy consumption without compromising computational performance. Designed for seamless integration, the accelerator achieves die-area utilization above 80%, enabling efficient implementation of diverse model architectures. Complementary software tools make it straightforward to run neural networks on the IP. The Neural Network Accelerator is tailored to provide high performance without the trade-offs typically associated with increased power consumption, making it ideal for a variety of edge computing applications. The product serves as a critical enabler for enterprises seeking to implement sophisticated AI solutions at the edge, ensuring that their wide-ranging applications are both efficient and high-functioning. As edge devices increasingly drive innovation across industries, Gyrus AI's solution stands out for its dexterity in supporting complex model structures while conserving power, thereby catering to the modern demands of AI-driven operations.
The SCR4 combines efficiency with advanced features, offering a 5-stage in-order pipeline, an FPU (Floating Point Unit), and an MPU (Memory Protection Unit), alongside comprehensive cache support. This microcontroller core is tailored for applications necessitating precision and enhanced computational performance. Its architecture enables effective power management, making it ideal for embedded systems with strict energy budgets. By supporting a range of markets like industrial automation and sensor networks, the SCR4 core exemplifies Syntacore's adaptability and forward-thinking approach.
Secure Protocol Engines are high-performance IP blocks that focus on enhancing network and security processing capabilities in data centers. Designed to support secure communications, these engines provide fast SSL/TLS handshakes, MACsec and IPsec processing, ensuring secure data transmission across networks. They are particularly useful for offloading intensive tasks from central processing units, thereby improving overall system performance and efficiency. These engines cater to data centers and enterprises that demand high throughput and robust security measures.
TUNGA is an advanced System on Chip (SoC) leveraging the strengths of Posit arithmetic for accelerated High-Performance Computing (HPC) and Artificial Intelligence (AI) tasks. The TUNGA SoC integrates multiple CRISP-cores, employing Posit as a core technology for real-number calculations. This multi-core RISC-V SoC is uniquely equipped with a fixed-point accumulator known as QUIRE, which allows for extremely precise computations across vectors of up to 2 billion entries. The TUNGA SoC includes programmable FPGA gates for enhancing field-critical functions. These gates are instrumental in speeding up data center services, offloading tasks from the CPU, and advancing AI training and inference efficiency using non-standard data types. TUNGA's architecture is tailored for applications demanding high precision, including cryptography and variable-precision computing tasks, facilitating the transition towards next-generation arithmetic. Within the computing landscape, TUNGA stands out by offering customizable features and rapid processing capabilities, making it suitable not only for typical data center functions but also for complex, precision-demanding workloads. By capitalizing on Posit arithmetic, TUNGA aims to deliver more efficient and powerful computational performance, reflecting a strategic advancement in handling complex data-oriented processes.
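The benefit of a QUIRE-style wide fixed-point accumulator can be illustrated in a few lines. The Python sketch below uses `Fraction` merely to stand in for the wide hardware accumulator; it is a conceptual model of deferred rounding, not TUNGA's actual implementation:

```python
from fractions import Fraction

def quire_dot(xs, ys):
    # Emulate the quire idea: accumulate every product exactly,
    # rounding only once when the final sum is converted back.
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        acc += Fraction(x) * Fraction(y)
    return float(acc)

# Naive float accumulation rounds after every addition, so a small term
# can vanish next to a large one; the exact accumulator preserves it.
xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
naive = sum(x * y for x, y in zip(xs, ys))  # 1.0 is lost -> 0.0
exact = quire_dot(xs, ys)                   # -> 1.0
```

The single deferred rounding is what lets long dot products keep full precision regardless of the order or magnitude of the terms.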
Tailored for storage solutions, augmented reality (AR), virtual reality (VR), and artificial intelligence (AI) applications, the RISC-V CPU IP NX Class offers a 64-bit architecture engineered for high demand environments. This class is geared towards applications that require significant data throughput and processing capability. The NX Class IP thrives in systems where intensive computational processes and large data sets are handled. It employs a sophisticated architecture that allows for optimal resource management and efficiency, ensuring fast and reliable performance across diverse industries. With support for advanced RISC-V extensions, the IP delivers flexibility allowing modifications to fit user-specific requirements. Its robust design supports functional safety and enhanced security measures, making it ideal for critical systems in sectors demanding high reliability and performance. Whether for consumer devices or industrial use, the NX Class IP represents a cutting-edge solution for developers focused on creating competitive and future-ready products.
The Satellite Navigation SoC Integration offering by GNSS Sensor Ltd is a comprehensive solution designed to integrate sophisticated satellite navigation capabilities into System-on-Chip (SoC) architectures. It utilizes GNSS Sensor's proprietary VHDL library, which includes modules like the configurable GNSS engine, Fast Search Engine for satellite systems, and more, optimized for maximum CPU independence and flexibility. This SoC integration supports various satellite navigation systems like GPS, GLONASS, and Galileo, with efficient hardware designs that allow it to process signals across multiple frequency bands. The solution emphasizes reduced development costs and streamlines the navigation module integration process. Leveraging FPGA platforms, GNSS Sensor's solution integrates intricate RF front-end components, allowing for robust and adaptable GNSS receiver development. The system-on-chip solution ensures high performance, with features like firmware stored in ROM blocks, obviating the need for external memory.
iniDSP is a 16-bit DSP core optimized for a variety of applications requiring digital signal processing. The core is architected to offer high performance in signal processing tasks across different technology platforms, suitable for integration in both FPGA and ASIC implementations. It stands out due to its ability to handle complex mathematical functions effectively while maintaining low power consumption. This DSP core is structured to allow seamless integration into larger systems, providing flexibility and reliability. It is an excellent choice for developers seeking to enhance their products with sophisticated digital signal processing capabilities without extensive redesign efforts. The iniDSP can be employed in audio processing, control systems, and any application that benefits from efficient digital computation. Engineered to ensure both ease-of-use and robustness, the iniDSP core helps shorten the development cycle by offering pre-tested, reusable components tailored for rapid deployment. Its versatile nature supports a wide range of uses, making it a vital part of Inicore's offerings for innovative system solutions.
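A representative workload for a 16-bit DSP core like this is a fixed-point FIR filter: a multiply-accumulate loop over a delay line. The Python sketch below models Q15 arithmetic with truncation and saturation; the tap values and exact rounding behavior are illustrative assumptions, not a description of iniDSP's instruction set:

```python
def fir_q15(samples, coeffs):
    """Fixed-point FIR filter in Q15: the multiply-accumulate loop a
    16-bit DSP core executes in hardware. Products are kept in a wide
    accumulator, then truncated and saturated back to 16 bits."""
    out = []
    hist = [0] * len(coeffs)
    for s in samples:
        hist = [s] + hist[:-1]                           # shift delay line
        acc = sum(h * c for h, c in zip(hist, coeffs))   # wide MAC
        out.append(max(-32768, min(32767, acc >> 15)))   # back to Q15
    return out

# 2-tap moving average: coefficients 0.5, 0.5 in Q15 (0.5 * 2**15 = 16384)
y = fir_q15([32767, 32767, 0, 0], [16384, 16384])
```

Running a full-scale step through the 2-tap averager shows the ramp-up, plateau, and decay expected of a moving average.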
The Prodigy Universal Processor by Tachyum Inc. is a revolutionary advancement in computing technology, seamlessly integrating the functionalities of CPUs, GPUs, and TPUs into a singular, cohesive architecture. This innovative processor is engineered to deliver unparalleled performance, energy efficiency, and space optimization to meet the increasingly demanding needs of AI, high-performance computing, and hyperscale data centers. The Prodigy processor architecture supports up to 18.5x higher performance and 7.5x better performance per watt compared to its competitors, addressing prevalent challenges like excessive power consumption and server inefficiencies in existing data center frameworks. By offering various SKUs, the Prodigy processor can be tailored to a wide array of market needs, facilitating diverse applications and workloads, from high-end HPC to big AI analytics. A standout feature of the Prodigy is its versatile emulation capabilities, allowing seamless integration and evaluation in existing systems with minimal adjustments. The Prodigy provides essential tools for developers, enabling straightforward adaptation of existing applications, which can run on the Prodigy instruction set architecture without modification. This comprehensive approach not only enhances operational efficiency but also accelerates the transition to advanced computing infrastructures.
The RFicient chip stands out for its ultra-low power consumption and remarkable efficiency, making it particularly suitable for Internet of Things (IoT) applications. This chip is designed to operate in energy-constrained environments, delivering high performance while maintaining minimal energy usage. It is engineered to facilitate long-term, maintenance-free operations in IoT devices, which are often deployed in remote or hard-to-reach locations. With a focus on sustainability, the RFicient chip significantly reduces energy consumption, extending the battery life of IoT devices. Its compact and robust design allows for seamless integration into various IoT systems, from smart homes to industrial IoT networks, providing reliable connectivity and data transmission under diverse environmental conditions. This chip not only supports the efficient gathering and processing of IoT data but also furthers ecological goals by reducing the carbon footprint associated with IoT deployments.
Discover the AON1000™, a highly efficient and accurate AI processing engine for voice and sound recognition. Designed for cost-effective integration into IoT platforms, wearables, and smart homes, this AI system features ultra-low power usage and excels in environments with substantial noise. Central to the AONVoice™ family, it facilitates wake word and voice command detection, speaker identification, and sensor processing.

Employing proprietary neural network designs and tailored inference algorithms, it surpasses typical processors in power efficiency and accuracy. In real-world noisy conditions, it delivers industry-leading hit rate accuracy per microwatt. AON1000 can be configured as a standalone chip or as part of a sensor, enabling the application processor to conserve power during continuous listening.

This hardware can be paired with the AON1000 software algorithm for integration into third-party DSPs, broadening its applicability to less power-constrained environments. With its compact architecture and versatile deployment options, AON1000 paves the way for smarter, always-on applications without draining resources.
The Cottonpicken DSP Engine is an advanced digital signal processing core that features micro-coded capabilities suitable for a variety of image processing functions. Designed for high throughput, it can manage Bayer pattern decoding into multiple formats like YUV 4:2:2, YUV 4:2:0, and RGB. Additionally, the engine includes support for specific matrix operations that are cascadeable, providing flexibility in handling complex signal processing tasks. Capable of running at the full data clock rate, up to 150 MHz, the Cottonpicken DSP Engine is optimal for applications requiring immediate data handling and processing. This core is delivered as part of a development package, supplied as a closed-source netlist object for easy integration into larger systems. Its robust architecture performs detailed filter kernel operations such as 3x3 and 5x5 convolution, essential for high-precision tasks in various imaging solutions. Furthermore, with color-space conversion support for formats such as YCrCb and YCoCg, this DSP engine ensures adaptable performance across platforms requiring precise digital image manipulation. Given its strong performance profile, the Cottonpicken DSP Engine is ideally suited for embedded systems, where processing speed and accuracy are critical.
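The 3x3 filter-kernel operation mentioned above can be sketched as a windowed multiply-accumulate per pixel. This plain-Python version is only a functional model of the arithmetic, not the engine's micro-coded pipeline; the border handling and shift-based scaling are illustrative choices:

```python
def conv3x3(img, kernel, shift=0):
    """3x3 filter-kernel pass over a grayscale image (list of rows):
    a windowed multiply-accumulate per pixel, with an optional right
    shift for scaling. Border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += img[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            out[y][x] = acc >> shift if shift else acc
    return out

# Box blur: all-ones kernel, scaled by 1/8 via shift (approximating 1/9)
blur = conv3x3([[8] * 4 for _ in range(4)], [[1] * 3] * 3, shift=3)
```

A 5x5 kernel generalizes the same loop structure; in hardware the nine (or twenty-five) multiply-accumulates are pipelined at the data clock rate rather than iterated.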
The TSP1 Neural Network Accelerator is a state-of-the-art AI chip designed for versatile applications across various industries, including voice interfaces, biomedical monitoring, and industrial IoT. Engineered for efficiency, the TSP1 handles complex workloads with minimal power usage, making it ideal for battery-powered devices. This AI chip is capable of advanced bio-signal classification and natural voice interface integration, providing self-contained processing for numerous sensor signal applications. A notable feature is its high-efficiency neural network processing element fabric, which empowers signal pattern recognition and other neural network tasks, thereby reducing power, cost, and latency. The TSP1 supports powerful AI inference processes with low latency, enabling real-time applications like full vocabulary speech recognition and keyword spotting with minimal energy consumption. It's equipped with multiple interfaces for seamless integration and offers robust on-chip storage for secure network and firmware management. The chip is available in various packaging options to suit different application requirements.
XDS is a cutting-edge simulation tool from Xpeedic specifically engineered for RF and microwave circuit design. Tailored to meet the rigorous demands of the high-frequency domain, XDS helps engineers and researchers overcome challenges associated with RF circuit complexities, ensuring designs achieve the required performance metrics and standards. With XDS, engineers can conduct precise and rapid simulations to assess various aspects of circuit performance, including return loss, insertion loss, and impedance matching. These capabilities are critical in optimizing RF circuits to meet system-level requirements across telecommunications, radar, and satellite communications applications. The tool is distinguished by its user-friendly interface and adept integration capabilities, contributing to a smooth workflow across various stages of circuit development. By facilitating accurate modeling and simulation, XDS empowers designers to innovate effectively and bring high-performance RF components to market swiftly and efficiently.
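The return-loss and insertion-loss figures named above are standard S-parameter quantities, straightforward to compute once a simulator produces S11 and S21. The function names below are illustrative, not XDS APIs:

```python
import math

def return_loss_db(s11):
    # Return loss: RL = -20*log10(|S11|); larger means a better match.
    return -20 * math.log10(abs(s11))

def insertion_loss_db(s21):
    # Insertion loss: IL = -20*log10(|S21|); smaller means less loss.
    return -20 * math.log10(abs(s21))

def vswr(s11):
    # Voltage standing-wave ratio from the reflection magnitude.
    g = abs(s11)
    return (1 + g) / (1 - g)

rl = return_loss_db(0.1)   # |S11| = 0.1 -> 20 dB return loss
ratio = vswr(0.1)          # -> about 1.22:1
```

These conversions are what turn raw simulated S-parameters into the pass/fail metrics used in impedance-matching work.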
The Trifecta-GPU by RADX Technologies is a sophisticated, COTS-enabled PXIe/CPCIe GPU module. This advanced module integrates an NVIDIA RTX A2000 Embedded GPU, delivering impressive compute power with 8.3 FP32 TFLOPS performance. Designed specifically for modular test and measurement (T&M) and electronic warfare markets, it facilitates complex software-defined signal processing and machine/deep learning inference applications. This GPU module is part of RADX's commitment to providing high-performance, easy-to-program solutions tailored for advanced computational needs. Leveraging the inherent capabilities of NVIDIA technology, the Trifecta-GPU enables seamless integration into existing systems, offering unparalleled support for AI-driven tasks and intensive graphics operations, thereby streamlining workflows and enhancing productivity across various applications.
The AON1020™ is engineered for superior voice and audio recognition alongside sensor-supported applications, forming an integral component of the AONSens™ Neural Network cores. It includes an AI processing engine supplied in Verilog RTL, suitable for synthesis into ASIC products and FPGAs, plus dedicated software.

Designed for purposes such as voice control, context detection, and sensor applications, it supports always-on multi-wake-word detection and accurate voice command recognition. With features to differentiate and accurately detect context via various sensors, this processing engine ensures reliability in changing auditory environments.

AON1020 demonstrates resilience against background noise and variability, delivering speaker-independent functionality. Optimized for detecting both single and multiple commands simultaneously, it addresses diverse needs efficiently, including human-activity detection tasks, leveraging high-accuracy algorithms in dynamic scenarios.
The TimeServo System Timer offers sub-nanosecond resolution and sub-microsecond accuracy, tailored for FPGA applications that demand precise timing functions. Designed to support packet timestamping independent of line rates, this IP core can be utilized wherever high-resolution time bases are required. A standout feature of TimeServo is its PI-DPLL that allows synchronization with an external 1 PPS signal, delivering excellent syntonicity. Without relying on host processors, the TimeServo system's simplicity and effective design are harnessed to provide clean, coherent timing outputs, essential for synchronization tasks within complex FPGA applications. Additionally, when combined with a timestamp-capable MAC, the TimeServo can be expanded into the TimeServoPTP variant, enabling full IEEE-1588v2/PTP compliance. This versatility makes TimeServo a critical component for developers seeking integrated timing solutions across multiple clock domains within FPGA environments.
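The PI-DPLL concept behind this kind of 1 PPS discipline can be illustrated with a toy simulation: each reference edge yields a phase error, and a proportional-plus-integral correction steers the local oscillator into lock. The gains and update order below are illustrative assumptions, not TimeServo's actual loop filter:

```python
def pi_dpll(freq_offset_ppm, kp=0.7, ki=0.3, seconds=50):
    """Toy PI discipline loop. Each simulated second, the local clock
    drifts by its residual frequency error; the loop integrates the
    phase error and applies a P + I frequency correction."""
    phase = 0.0       # accumulated phase error vs. the 1 PPS reference
    integral = 0.0    # integral term (learns the static frequency offset)
    correction = 0.0  # frequency correction currently applied
    for _ in range(seconds):
        phase += freq_offset_ppm - correction   # residual drift this second
        integral += ki * phase
        correction = kp * phase + integral
    return phase

residual = pi_dpll(10.0)  # clock starts 10 ppm fast; error converges to ~0
```

The integral term is what absorbs the constant frequency offset, so the steady-state phase error goes to zero rather than merely shrinking; this is the property that gives a PI-DPLL its syntonization behavior.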
The ARC processors provide power-efficient and customizable computing solutions, designed to meet the specific needs of embedded applications. With flexible architecture, these processors support a wide range of functionalities, from sensor processing to multimedia.
Domain-Specific RISC-V Cores from Bluespec provide targeted acceleration for specific application areas by packaging accelerators as software threads. This approach enables developers to achieve systematic hardware acceleration, improving the performance of applications that demand high computational power. These cores are designed to support scalable concurrency, which means they can efficiently handle multiple operations simultaneously, making them ideal for complex scenarios that require high throughput and low latency. The ease of scalability ensures that developers can rapidly adapt their designs to meet evolving demands without extensive redesign. Bluespec’s domain-specific cores are well-suited for specialized markets where performance and efficiency can make a significant impact. By providing a robust platform for acceleration, Bluespec empowers developers to create competitive and rapidly deployable solutions.
The Blazar Bandwidth Accelerator Engine is a cutting-edge solution that enhances computational efficiency by integrating in-memory compute features within FPGA environments. As modern technological applications demand faster data processing capabilities, the Blazar IC emerges as a pivotal component for high-bandwidth, low-latency applications. With the potential to deliver up to 640 Gbps bandwidth, it optimizes data paths by facilitating up to 5 billion reads per second. One of the most notable attributes of the Blazar Engine is its customizable nature, thanks to the inclusion of optional RISC cores, which offer further computational power to meet application-specific demands. These RISC cores enable the Blazar to perform sophisticated computations directly in memory, thus accelerating data throughput and minimizing latency.
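If the two headline figures above describe the same read path (an assumption on our part, not a stated specification), they jointly imply the access word width:

```python
bandwidth_bps = 640e9   # headline figure: 640 Gbps aggregate bandwidth
reads_per_sec = 5e9     # headline figure: 5 billion reads per second

# Dividing bandwidth by read rate gives the bits moved per access.
bits_per_read = bandwidth_bps / reads_per_sec   # 128 bits
bytes_per_read = bits_per_read / 8              # 16 bytes
```

A 128-bit access width is a plausible fit for the wide, low-latency data paths in-memory compute designs favor, though the vendor's actual bus organization may differ.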
IRIS is Xpeedic's specialized platform dedicated to the simulation of RF and analog IC designs, providing robust support for engineers working with radio frequency components. Recognized for its high precision and speed, IRIS enables users to conduct thorough analyses, ensuring that their designs adhere to functional and performance specifications essential for modern communication systems. This tool targets the unique challenges posed by RF and analog circuitry, offering simulation results that account for non-linear dynamics and frequency-dependent behaviors inherent in these domains. Its high-fidelity models help designers optimize layout and component selection, ultimately improving device efficiency and performance across various applications. The IRIS simulation suite contributes to reduced time-to-market by streamlining the design validation process, allowing for early detection of potential issues. Engineers benefit from its integration capabilities with other Xpeedic offerings, facilitating a holistic design approach that encompasses various stages of product development. With IRIS, companies can stay ahead in the rapidly changing RF landscape, ensuring resilient designs that meet industry standards for quality and performance.
The IMG DXS GPU is designed to meet the safety and performance demands of automotive applications with a focus on advanced driver assistance systems (ADAS). Featuring a multi-core architecture with built-in functional safety mechanisms, it allows for efficient handling of mixed-criticality workloads. Its distributed safety mechanisms enable significant reductions in silicon area and power consumption, making it ideal for safety-critical environments. This GPU excels in providing high-performance visuals for in-car systems like digital instrument clusters and heads-up displays. With ISO 26262 functional safety certification, it meets stringent automotive industry standards, ensuring reliability even in fault scenarios. The IMG DXS GPU supports a wide range of graphical applications, from infotainment to vital safety systems, with hardware-accelerated graphics rendering capabilities. It is engineered for seamless integration into automotive systems, offering robust performance while maintaining energy efficiency.
xcore-200 leverages multicore microcontroller technology to deliver exceptional processing power for embedded systems. Built to handle intensive DSP and I/O tasks, it excels in environments requiring seamless integration of voice, audio, and data processing. This processor is ideally suited for applications in consumer electronics and industrial control, offering a balance of high processing capability and energy efficiency. With its deterministic performance and support for various communication interfaces, xcore-200 ensures reliable operation across diverse environments. By facilitating sophisticated signal processing and real-time control, xcore-200 enables developers to implement innovative solutions that meet demanding technical specifications. Its adaptability makes it a go-to choice for applications requiring robust computational frameworks and high-speed processing capabilities.
Designed for environments requiring extensive processing power, the Origin E8 is suitable for high-demand applications such as data centers and autonomous vehicles. This NPU supports major AI models effectively while minimizing latency and maximizing throughput. Capable of handling up to 128 TOPS, it ensures robust performance with efficient resource management.
Specialty Microcontrollers from Advanced Silicon harness the capabilities of the latest RISC-V architectures for advanced processing tasks. These microcontrollers are particularly suited to applications involving image processing, thanks to built-in coprocessing units that enhance their algorithm execution efficiency. They serve as ideal platforms for sophisticated touch screen interfaces, offering a balance of high performance, reliability, and low power consumption. The integrated features allow for the enhancement of complex user interfaces and their interaction with other system components to improve overall system functionality and user experience.
The AON1010™ is part of the AONVoice™ Neural Network cores, offering fine-tuned voice and audio recognition capabilities specifically tailored for processing microphone data. It consists of an AI processing engine provided in Verilog RTL for integration into ASICs or FPGAs, along with dedicated software.

This low-power processing engine is designed for consistent voice control, context detection, and speaker identification applications. Its always-on functionality supports multi-wake-word detection and voice command recognition, with advanced acoustic event detection for high accuracy. Robust across changing user and environmental conditions, it provides speaker-independent operation.

AON1010 excels at adapting to noisy conditions, successfully recognizing multiple commands simultaneously. Its scalability makes it a suitable choice for various application-specific integrations, ensuring concurrent detection without sacrificing performance in complex auditory environments.