The Dynamic Neural Accelerator (DNA) II is a neural network IP core designed to improve edge AI performance. Its defining feature is a runtime-reconfigurable architecture that forms efficient interconnections between compute components, letting the core adapt to the task at hand. DNA II supports both convolutional and transformer networks, covering a broad range of edge AI functions, and its performance scales with the resources it is given, making it a strong building block for system-on-chip (SoC) designs.

DNA II is built on EdgeCortix's patented data path architecture, which is designed to maximize use of the available compute resources. This allows the core to keep power consumption low while flexibly adapting to the demands of diverse AI models; its higher utilization rates and faster processing set it apart from traditional IP core solutions and answer industry demand for more efficient AI processing. Working with the MERA software stack, DNA II sequences computation tasks and allocates resources for each network, further improving processing efficiency. This integration of hardware and software reduces on-chip memory bandwidth usage and increases the system's parallel processing capability, meeting the needs of modern AI computing environments.
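To make the compile-and-deploy flow described above more concrete, the sketch below shows how a trained framework model could be handed to a MERA-like software stack that schedules its operators onto the DNA II core. The exact MERA API is not shown on this page, so the stack-facing names (`mera_stack`, `compile_model`, `run`, the `target` and `precision` parameters) are illustrative assumptions, not the actual interface; only the PyTorch model export portion is real, runnable code.

```python
# Hypothetical sketch of a MERA-style compile-and-deploy flow for DNA II.
# NOTE: the mera_stack module and its compile_model/run calls below are
# illustrative assumptions, not the actual MERA API.
import torch
import torchvision.models as models

# 1. Start from a trained framework model (ResNet-50 as a stand-in workload).
model = models.resnet50(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)

# 2. Export a framework-neutral graph (TorchScript here) for the compiler.
traced = torch.jit.trace(model, example_input)

# 3. A MERA-like stack would take this graph, sequence its operators onto the
#    DNA II compute engines, and plan buffer reuse to cut on-chip bandwidth.
#    The calls below are placeholders for that step.
# artifact = mera_stack.compile_model(traced, target="dna-ii", precision="int8")

# 4. At runtime the compiled artifact is loaded onto the accelerator and fed
#    input tensors, returning results to the host application.
# output = mera_stack.run(artifact, example_input.numpy())
```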
The SAKURA-II AI accelerator is designed to address the energy-efficiency and processing demands of edge AI applications, delivering high performance in a compact, low-power silicon footprint. Its key advantage is the ability to handle vision and generative AI applications efficiently, thanks to the integrated Dynamic Neural Accelerator (DNA) core, whose run-time reconfigurability supports multiple neural network models simultaneously and adapts in real time without compromising speed or accuracy.

Built for the demands of modern AI, SAKURA-II runs models with billions of parameters, such as Llama 2 and Stable Diffusion, within a power envelope of 8 W. It provides large DRAM capacity and memory bandwidth for smooth handling of complex workloads, and it is offered in multiple form factors, including modules and cards, for versatile system integration and rapid development, shortening time-to-market for AI solutions. EdgeCortix has engineered SAKURA-II to deliver up to 4x the DRAM bandwidth of other accelerators, which is crucial for low-latency operation and for executing large-scale AI workloads such as language and vision models. Its architecture also achieves higher AI compute utilization than traditional solutions, yielding significant energy-efficiency advantages.
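The bandwidth claim can be made concrete with a back-of-envelope calculation: during autoregressive decoding, a large language model must stream essentially all of its weights from DRAM for every generated token, so the token rate is roughly bounded by bandwidth divided by the weight footprint. The sketch below illustrates that relationship; the specific figures (a 7B-parameter model at 8-bit weights, and both bandwidth values) are illustrative assumptions, not published SAKURA-II specifications.

```python
# Back-of-envelope: memory-bandwidth-bound decode throughput for an LLM.
# Assumption: each decoded token reads all model weights once from DRAM, so
#   tokens/sec ~= DRAM bandwidth / weight footprint.
# All numbers below are illustrative, not vendor specifications.

def decode_tokens_per_sec(bandwidth_gb_s: float,
                          params_billions: float,
                          bytes_per_param: float) -> float:
    """Upper bound on autoregressive decode rate when DRAM-bandwidth-bound."""
    weight_footprint_gb = params_billions * bytes_per_param  # GB streamed per token
    return bandwidth_gb_s / weight_footprint_gb

# Example: a 7B-parameter model quantized to 8-bit weights (~7 GB of weights).
baseline_bw = 17.0            # GB/s, hypothetical baseline accelerator
boosted_bw = 4 * baseline_bw  # the "up to 4x DRAM bandwidth" scenario

for label, bw in [("baseline", baseline_bw), ("4x bandwidth", boosted_bw)]:
    tps = decode_tokens_per_sec(bw, params_billions=7.0, bytes_per_param=1.0)
    print(f"{label:>12}: ~{tps:.1f} tokens/s ceiling")
# A 4x increase in DRAM bandwidth raises the decode ceiling by 4x, which is
# why bandwidth matters for low-latency generative AI at the edge.
```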