The SAKURA-II AI Accelerator is an edge-focused generative AI accelerator that combines high efficiency with a compact form factor. Engineered for real-time inferencing, it targets applications that demand low latency and strong performance from small, power-efficient silicon. The accelerator runs multi-billion-parameter models, including Llama 2 and Stable Diffusion, at a typical power of 8W, and serves vision, language, and audio workloads alike. Its core advantage is higher AI compute utilization than competing solutions, which translates directly into better energy efficiency. SAKURA-II also supports up to 32GB of DRAM, with enhanced bandwidth for memory-intensive models. Sparse computing techniques reduce the memory footprint, while real-time data streaming and support for arbitrary activation functions enable more sophisticated applications in edge environments. The device further provides robust memory management and advanced precision support for near-FP32 accuracy; combined with advanced power management, it suits a wide array of edge AI deployments and positions SAKURA-II as a leader in generative AI at the edge.
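SAKURA-II's sparse computing is a hardware feature whose internals are not described here. As a rough, framework-level illustration of why sparsity shrinks memory footprint, the sketch below prunes a stand-in weight tensor in plain PyTorch and compares dense versus sparse (COO) storage; the tensor size and 90% sparsity level are arbitrary assumptions, not SAKURA-II specifics.

```python
import torch

# Illustrative only: shows how storing just the nonzero weights of a
# pruned tensor reduces memory, independent of any particular hardware.
dense = torch.randn(1024, 1024)               # stand-in weight matrix
dense[torch.rand_like(dense) < 0.9] = 0.0     # prune ~90% of entries

sparse = dense.to_sparse()                    # COO representation
nnz = int((dense != 0).sum())

dense_bytes = dense.numel() * dense.element_size()
# COO storage: nonzero float32 values plus two int64 indices per value.
sparse_bytes = nnz * dense.element_size() + nnz * 2 * 8

print(f"dense storage:  {dense_bytes / 1e6:.2f} MB")
print(f"sparse storage: {sparse_bytes / 1e6:.2f} MB for {nnz} nonzeros")
```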
The Dynamic Neural Accelerator II (DNA-II) from EdgeCortix is a neural accelerator architecture that delivers high efficiency and exceptional parallelism for edge AI applications. Its runtime-reconfigurable interconnects provide the flexibility to scale performance across varied AI workloads. Supporting both convolutional and transformer networks, DNA-II is integral to numerous system-on-chip (SoC) implementations and drives the performance and efficiency of EdgeCortix's SAKURA-II AI Accelerators. Patent-backed data path reconfiguration technology lets DNA-II maximize parallelism, minimize power consumption, and handle complex neural networks more capably, while significantly reducing reliance on on-chip memory bandwidth so tasks execute faster and more efficiently. DNA-II works in concert with the MERA software stack, which schedules and allocates computational resources for efficient AI model processing at the edge. Its adaptable architecture supports a wide spectrum of AI applications, making it a central component of EdgeCortix's commitment to advancing edge AI technologies.
The MERA Compiler and Framework by EdgeCortix streamlines deployment of neural network models across varied hardware architectures while maintaining efficiency and performance. MERA is a platform-agnostic toolset with comprehensive APIs, code generation, and runtime support for deploying pre-trained deep neural networks. It supports advanced AI applications in vision, audio, and language processing, letting developers optimize deployment workflows from familiar platforms. Built-in heterogeneous support eases integration across AMD, Intel, Arm, and RISC-V processors, simplifying adoption of the EdgeCortix AI platform in existing systems. Pre-trained models from Hugging Face or the EdgeCortix Model Library are optimized through post-training calibration and quantization, making MERA a practical foundation for AI inference development. Beyond the core software stack, MERA provides toolkits for runtime configuration and simulation. Together, these capabilities let developers scale AI inference from modeling to deployment while achieving market-leading energy efficiency when used alongside SAKURA-II modules.
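MERA's own calibration and quantization pass is part of its proprietary toolchain, and its API is not reproduced here. The sketch below instead uses standard PyTorch post-training static quantization to illustrate the calibrate-then-quantize flow that such a compiler automates; TinyNet and the random calibration batches are placeholders for a real pre-trained model and representative dataset.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a pre-trained model pulled from
# Hugging Face or a model library.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)            # fp32 -> int8 at the model boundary
        x = self.relu(self.conv(x))
        return self.dequant(x)       # int8 -> fp32 on the way out

model = TinyNet().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)

# Post-training calibration: run representative inputs so observers can
# record activation ranges used to choose quantization scales.
with torch.no_grad():
    for _ in range(16):
        prepared(torch.randn(1, 3, 32, 32))

quantized = torch.ao.quantization.convert(prepared)  # int8 inference model
print(quantized)
```

The same two-step pattern (observe activation statistics on sample data, then convert weights and activations to integer precision) is what a vendor toolchain performs automatically before generating code for the target accelerator.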
EdgeCortix secures Series B funding, bringing total funding to nearly $100M, supporting its strategic expansion in AI processing across diverse industries, with backing from key investors.
EdgeCortix and Renesas announce their collaboration on the RUHMI Framework, simplifying AI deployment on MCUs and MPUs.
EdgeCortix will advance the NovaEdge AI chiplet with NEDO funding for energy-efficient edge AI solutions spanning multiple technology sectors.