Avispado is an in-order 64-bit RISC-V core optimized for high efficiency and low power, making it a strong choice for edge AI and embedded applications. The core targets environments where energy efficiency is paramount, pairing a simple in-order execution pipeline with a power-conscious design. Avispado implements the RISC-V Vector Specification 1.0, enabling vector acceleration of AI workloads. It supports multi-core configurations for multiprocessing and runs Linux, easing integration into diverse computational environments. Gazzillion Misses™ technology sustains high memory bandwidth by keeping many memory requests in flight at once, improving performance on memory-intensive code. Customization options include varied cache sizes and an optional branch predictor, so the core can be tailored to specific computing requirements. Whether applied to IoT, machine learning, or edge computing, Avispado's reliable, flexible architecture stands out for its ease of integration and scalability.
Atrevido is a versatile 64-bit out-of-order RISC-V processor core designed for demanding AI and HPC applications. It is configurable with 2-, 3-, or 4-wide execution, balancing throughput against area and power. Atrevido integrates vector and tensor units that accelerate AI and machine learning tasks without the latency penalty of an external accelerator. Its Gazzillion Misses™ technology mitigates memory bottlenecks by supporting up to 128 simultaneous memory requests, which is critical for the large datasets typical of AI and high-performance computing. The core runs Linux, supports multiprocessing, and offers AXI and CHI interfaces for high-bandwidth connectivity. With customizable elements including the vector configuration and cache sizes, Atrevido is engineered for precision and flexibility. Whether you are building key-value stores, running AI inference, or conducting complex data analysis, Atrevido's robust architecture is designed to deliver highly efficient, rapid computation.
The Vector Unit by Semidynamics is a customizable RISC-V vector processing component that accelerates AI, signal processing, and scientific computing tasks. Engineered for seamless integration with the company's RISC-V cores such as Atrevido and Avispado, the vector unit provides the computational power needed for large datasets and complex algorithms. It is offered in configurations from V4 to V32 and handles both integer and floating-point arithmetic, enabling advanced mathematical operations and complex data transformations, from Fourier transforms to large numerical datasets. Implementing the RISC-V Vector Extension (RVV 1.0), the vector unit is both flexible and efficient, catering to a broad spectrum of workloads while balancing throughput and energy efficiency in high-performance computing environments.
The Tensor Unit by Semidynamics is a fully coherent, programmable RISC-V tensor processing unit engineered for AI workloads such as neural-network inference and training. It is a vital part of Semidynamics' integrated AI computing platform, providing dedicated acceleration for AI-specific operations. Operating alongside Semidynamics' Atrevido and Avispado RISC-V cores, the Tensor Unit offers a scalable architecture in configurations from T1 to T8, covering a wide range of AI algorithms, from convolutional networks to transformers. Because it is fully integrated with the other compute elements, the Tensor Unit keeps power consumption and latency low and eliminates the need for a separate accelerator. This tight integration supports AI inference and other high-throughput AI tasks, making it well suited to any setting that requires heavy-duty AI processing.
The Cervell™ NPU is a high-performance neural processing unit designed for next-generation AI applications. This scalable NPU core delivers configurable AI compute: between 8 and 64 tera-operations per second (TOPS) with INT8 data, and up to 256 TOPS with INT4. Cervell™ is built on the standard RISC-V architecture and optimized for machine learning, making it an ideal choice for LLMs, edge AI, and AI data centers. The NPU combines CPU, vector, and tensor functionality in a single AI processing architecture for seamless execution of AI workloads; this all-in-one, fully programmable design avoids vendor lock-in. Cervell™ handles diverse applications, from recommendation systems to deep learning models, with significant acceleration and flexibility. With its configurable compute capacity, it serves as a fundamental building block for scalable AI systems that demand robust yet flexible performance.
Gazzillion Misses™ technology is a memory-management solution designed to tolerate the latency of high-bandwidth data movement in AI and HPC applications. Rather than stalling on cache misses, it keeps a large number of memory requests in flight simultaneously, sustaining near-peak bandwidth so that memory-bound AI workloads keep the compute units fed. By managing the flow of data to and from the processor, it reduces bottlenecks and enhances computational efficiency, which is critical in environments processing large datasets such as machine learning models and big-data applications. Used within Semidynamics' high-performance RISC-V cores, Gazzillion Misses™ complements the other core components by maximizing usable memory bandwidth, significantly improving the performance of complex applications. Whether in a single core or integrated into a larger system, this technology is a cornerstone of efficient data handling and high-speed computing.
Discover Semidynamics' Cervell NPU, which tackles AI's memory bottleneck with a scalable RISC-V design, suited to deployments from edge to datacenter.