The memBrain™ neuromorphic memory solution addresses a core challenge of edge AI workloads, such as video and voice recognition: their demand for intensive data movement. It builds on SuperFlash® technology, optimizing it for neural network inference via vector-matrix multiplication (VMM). Unlike typical digital processors, memBrain™ takes a compute-in-memory approach, storing neural network weights directly in the memory cells' floating gates and thereby eliminating the latency of fetching weights from off-chip memory. This design cuts power consumption by up to 20 times compared with traditional digital approaches while also reducing cost and frame latency for AI inference.
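The compute-in-memory idea can be illustrated numerically. The sketch below is an idealized model, not the actual analog circuit: it treats stored weights as cell conductances and inputs as row voltages, so each column current is one VMM output computed in place, with no weight movement.

```python
import numpy as np

# Idealized model of an analog compute-in-memory array (illustrative only).
# Each floating-gate cell stores a weight as a conductance G[i, j]. Driving
# input activations as voltages V[i] on the rows makes column j accumulate a
# current I[j] = sum_i V[i] * G[i, j] -- a vector-matrix multiplication
# performed inside the memory, so weights never leave the array.

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))  # weights stored as cell conductances
V = rng.uniform(0.0, 1.0, size=4)       # input activations applied as voltages

I = V @ G  # column currents: the VMM result, read out in one analog step

# A conventional digital processor reaches the same result explicitly,
# moving every weight through the datapath one multiply at a time:
I_digital = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])
assert np.allclose(I, I_digital)
```

The point of the comparison is the data movement: the digital loop touches every weight individually, whereas the in-memory array produces all column sums in a single analog read.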
One of memBrain™'s key advances lies in its Multiply-Accumulate (MAC) operations, the workhorse of AI and deep neural networks (DNNs). By performing these MACs as analog operations inside the memory cells, it achieves notable energy efficiency, and its modular 'Tile' architecture allows designs to scale to large neural networks. Each Tile delivers substantial compute at low power, making the technology well suited to deeply embedded, battery-powered devices.
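The Tile-based scaling described above can be sketched as follows. This is a conceptual model under stated assumptions: the tile size and the `tiled_vmm` helper are hypothetical, and the actual memBrain™ Tile dimensions and partial-sum circuitry are not specified in the text. The sketch shows how a layer's weight matrix can be partitioned across Tiles, with each Tile performing MACs on its slice and the partial sums accumulated into the final output.

```python
import numpy as np

# Hypothetical number of input rows handled by one Tile (illustrative value,
# not the real memBrain Tile dimension).
TILE_ROWS = 256

def tiled_vmm(x, W, tile_rows=TILE_ROWS):
    """Compute x @ W by accumulating partial MACs from row-slices ("Tiles")."""
    out = np.zeros(W.shape[1])
    for start in range(0, W.shape[0], tile_rows):
        end = start + tile_rows
        # Each Tile multiplies its input slice by its weight slice and
        # accumulates into the shared output -- the MAC operation.
        out += x[start:end] @ W[start:end, :]
    return out

rng = np.random.default_rng(1)
W = rng.standard_normal((1000, 64))  # layer weights spread across 4 Tiles
x = rng.standard_normal(1000)        # input activation vector

# The tiled accumulation matches the monolithic matrix product.
assert np.allclose(tiled_vmm(x, W), x @ W)
```

Because each partial sum is independent, Tiles can operate in parallel, which is how a modular array scales to larger networks without redesigning the compute path.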
This technology enables seamless integration of AI capabilities into edge devices, adding functionality while staying within power and cost constraints. By combining memory and compute in a single array, it gives edge AI devices an architectural advantage: a fundamentally more efficient data-processing model.