From Compute-in-SRAM to Side-Channel Attacks: Highlights from Recent Semiconductor Research
Published September 23, 2025
In a rapidly evolving semiconductor landscape, staying abreast of cutting-edge research can provide a real competitive advantage. Several recent papers shed light on novel concepts ranging from compute-in-SRAM acceleration to undervolting-based static side-channel attacks. Here's a closer look at the highlights.
A notable contribution comes from researchers at Cornell University, USC, and MIT, who characterize realistic workloads on compute-in-SRAM devices. Traditionally, SRAM has served purely as storage; with compute-in-memory, the array itself also performs operations on the data it holds. Keeping computation inside the array sidesteps the memory-bandwidth bottleneck that limits many AI workloads and shortens data-movement paths. More on this research can be found at Semiconductor Engineering.
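To make the general idea concrete, here is a minimal software model of one common compute-in-SRAM primitive: activating two wordlines at once so the bitlines resolve to a bitwise function of the selected rows. The class, method names, and values are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Toy model of an SRAM subarray: each row is a word, each column a bitline.
# Real compute-in-SRAM hardware activates multiple wordlines simultaneously so
# the bitlines resolve to a logical function of the selected rows; here we
# simply emulate that behavior in software.

class ToySRAMArray:
    def __init__(self, rows: int, cols: int):
        self.cells = np.zeros((rows, cols), dtype=np.uint8)

    def write(self, row: int, bits: np.ndarray) -> None:
        self.cells[row] = bits & 1

    def read(self, row: int) -> np.ndarray:
        return self.cells[row].copy()

    def multi_row_and(self, row_a: int, row_b: int) -> np.ndarray:
        # With two wordlines active, a bitline stays high only if BOTH cells
        # store 1, which the sense amplifier reads out as a bitwise AND.
        return self.cells[row_a] & self.cells[row_b]

    def multi_row_nor(self, row_a: int, row_b: int) -> np.ndarray:
        # The complementary bitline yields NOR of the two selected cells.
        return 1 - (self.cells[row_a] | self.cells[row_b])


sram = ToySRAMArray(rows=4, cols=8)
sram.write(0, np.array([1, 1, 0, 0, 1, 0, 1, 1]))
sram.write(1, np.array([1, 0, 1, 0, 1, 1, 0, 1]))
print("AND:", sram.multi_row_and(0, 1))   # computed "inside" the array
print("NOR:", sram.multi_row_nor(0, 1))
```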
SCREME, a scalable framework for resilient memory design, was developed by researchers from the University of Central Florida and the University of Rochester, among others. As data volumes grow, resilience in memory devices becomes essential for consistent performance and data integrity, and because memory is susceptible to a broad spectrum of failure modes, approaches like SCREME are important for future-proofing data storage systems.
Another significant advance concerns RISC-V custom instructions, explored by Tampere University. By automatically retargeting both the hardware description and the compiler's code generation, this work streamlines the process of tailoring an instruction set to a specific application, which is especially relevant for IoT and embedded systems.
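As a rough illustration of what a custom instruction looks like at the encoding level, the sketch below packs an R-type instruction word into the custom-0 opcode space that the RISC-V ISA reserves for vendor extensions. The mnemonic and the funct3/funct7 values are hypothetical and not taken from the Tampere work.

```python
# Minimal sketch of encoding an R-type RISC-V custom instruction word.
# The custom-0 major opcode (0b0001011) is reserved by the base ISA for
# vendor extensions; the "mac3" mnemonic and funct values are made up.

CUSTOM0_OPCODE = 0b0001011

def encode_r_type(funct7: int, rs2: int, rs1: int, funct3: int, rd: int,
                  opcode: int = CUSTOM0_OPCODE) -> int:
    """Pack the standard R-type fields into a 32-bit instruction word."""
    return ((funct7 & 0x7F) << 25) | ((rs2 & 0x1F) << 20) | \
           ((rs1 & 0x1F) << 15) | ((funct3 & 0x7) << 12) | \
           ((rd & 0x1F) << 7) | (opcode & 0x7F)

# Hypothetical fused op "mac3 rd, rs1, rs2" (e.g. rd += rs1 * rs2).
word = encode_r_type(funct7=0b0000001, rs2=6, rs1=5, funct3=0b000, rd=10)
print(f"mac3 x10, x5, x6  ->  0x{word:08x}")
```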
Research from Forschungszentrum Jülich and RWTH Aachen integrates analog in-memory computing with attention mechanisms for large language models (LLMs). The approach aims to reduce power consumption and increase efficiency, a prerequisite for deploying LLMs on mobile and edge devices, and it underscores the broader trend toward energy-efficient, sustainable AI processing.
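For context, scaled dot-product attention is dominated by two matrix multiplications, which are exactly the kind of operation an analog crossbar can evaluate in place. The sketch below uses a noisy matmul as a stand-in for a crossbar to show the mapping; it is an assumption-laden illustration, not the scheme proposed in the paper.

```python
import numpy as np

def crossbar_matmul(x, w, noise_std=0.01):
    # Stand-in for an analog crossbar: the ideal matrix product plus a small
    # Gaussian perturbation modelling device and readout nonidealities.
    ideal = x @ w
    return ideal + noise_std * np.abs(ideal).mean() * np.random.randn(*ideal.shape)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    d = q.shape[-1]
    scores = crossbar_matmul(q, k.T) / np.sqrt(d)   # Q·K^T mapped to one crossbar
    weights = softmax(scores)                        # nonlinearity kept in digital CMOS
    return crossbar_matmul(weights, v)               # weights·V mapped to another

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 16)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (4, 16)
```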
Efforts at the Georgia Institute of Technology and associated universities have enabled hardware acceleration of the Kolmogorov-Arnold network (KAN) for large systems. By improving computational efficiency and scalability, this research has substantial implications for data-intensive applications, from scientific simulations to complex system modeling.
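As a reminder of what a KAN computes, each edge applies its own learnable univariate function and each output sums the results, in contrast to an MLP's fixed activations applied after linear weights. The toy layer below uses a simple polynomial basis purely for illustration; the original KAN formulation uses B-spline bases, and nothing here reflects the accelerator's actual design.

```python
import numpy as np

class ToyKANLayer:
    """Kolmogorov-Arnold-style layer: every edge (i, j) applies its own
    learnable univariate function phi_{j,i} to input i, and output j sums
    the results. Each edge function here is a small polynomial."""

    def __init__(self, in_dim: int, out_dim: int, degree: int = 3, seed: int = 0):
        rng = np.random.default_rng(seed)
        # One coefficient vector per edge: coeffs[j, i, :] defines phi_{j,i}.
        self.coeffs = 0.1 * rng.standard_normal((out_dim, in_dim, degree + 1))
        self.degree = degree

    def forward(self, x: np.ndarray) -> np.ndarray:
        # x: (batch, in_dim). Powers x^0..x^degree -> (batch, in_dim, degree+1).
        powers = np.stack([x ** p for p in range(self.degree + 1)], axis=-1)
        # phi_{j,i}(x_i) = sum_p coeffs[j,i,p] * x_i^p; output j sums over i.
        return np.einsum("bip,jip->bj", powers, self.coeffs)

layer = ToyKANLayer(in_dim=8, out_dim=4)
y = layer.forward(np.random.default_rng(1).standard_normal((32, 8)))
print(y.shape)  # (32, 4)
```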
Modern attacks against hardware demonstrate that security must remain a top priority. A new study from Worcester Polytechnic Institute and Ruhr University Bochum unveils Chypnosis, an undervolting-based method capable of extracting secrets via static side-channel attacks. This insight into security vulnerabilities emphasizes the continual need for robust protective mechanisms as hardware becomes more complex and interconnected.
In summary, these innovations reflect the dynamic nature of semiconductor research. Each paper addresses a different limitation of today's designs, and together they lay the groundwork for more efficient, resilient, and secure semiconductor technologies.