This tech could cut AI's energy consumption by a factor of 1,000
Researchers from the University of Minnesota Twin Cities in the US have developed a groundbreaking technology known as Computational Random-Access Memory (CRAM). The technology has the potential to cut the energy consumed by artificial intelligence (AI) computing by at least a factor of 1,000 compared with current approaches. The findings were published in a peer-reviewed study in npj Unconventional Computing.
A solution to high energy consumption
The research team points out that in current AI computing systems, the constant transfer of data between processing components and memory consumes up to 200 times more energy than the computation itself. To address this, they developed CRAM, which integrates a high-density, reconfigurable spintronic in-memory compute substrate directly into the memory cells, so data can be processed where it is stored and far less energy is spent moving it around.
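To see why data movement dominates the energy budget, a rough back-of-the-envelope model helps. The sketch below is purely illustrative: the per-operation energy figures are assumed placeholders, not measurements from the study, and the script only contrasts paying a transfer cost on every operation versus computing in place.

```python
# Illustrative back-of-the-envelope model of where energy goes in an
# AI workload. The per-operation figures are assumed placeholders,
# not measurements from the CRAM paper.

COMPUTE_ENERGY_PJ = 1.0      # assumed energy per arithmetic op (picojoules)
TRANSFER_ENERGY_PJ = 200.0   # assumed energy to move operands to/from memory

def conventional_energy(num_ops: int) -> float:
    """Every operation pays for both the compute and the data transfer."""
    return num_ops * (COMPUTE_ENERGY_PJ + TRANSFER_ENERGY_PJ)

def in_memory_energy(num_ops: int) -> float:
    """Data is processed where it is stored, so the transfer cost vanishes."""
    return num_ops * COMPUTE_ENERGY_PJ

if __name__ == "__main__":
    ops = 1_000_000
    conv = conventional_energy(ops)
    in_mem = in_memory_energy(ops)
    print(f"conventional: {conv / 1e6:.1f} uJ")   # 1 uJ = 1e6 pJ
    print(f"in-memory:    {in_mem / 1e6:.1f} uJ")
    print(f"improvement:  {conv / in_mem:.0f}x")
```

Under these assumed numbers the improvement is roughly 200x; the study's reported gains depend on the actual circuit and workload, not on this toy model.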
CRAM outperforms existing processing-in-memory solutions
CRAM's methodology differs from existing processing-in-memory solutions such as Samsung's PIM technology, which places a processing unit inside the memory core. With CRAM, by contrast, data is processed entirely within the computer's memory array and never leaves it. This approach yields an energy consumption improvement "on the order of 1,000x over a state-of-the-art solution," according to the research team.
CRAM's efficiency and speed surpass expectations
In tests, CRAM demonstrated exceptional efficiency and speed. On an MNIST handwritten-digit classification task, a standard benchmark used to train AI systems to recognize handwriting, CRAM was 2,500 times more energy-efficient and 1,700 times faster than a near-memory processing system built on a 16 nm technology node. These results highlight the technology's potential to boost AI processing performance while cutting energy consumption.
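For context, the benchmark task itself is easy to reproduce in software. The minimal sketch below trains a small MNIST digit classifier with scikit-learn on ordinary hardware; it is included only to show what the workload looks like and says nothing about CRAM's energy or speed figures, which come from the researchers' hardware-level evaluation.

```python
# Minimal MNIST handwritten-digit classifier, shown only to illustrate
# the benchmark task referenced above. It runs on conventional CPU
# hardware and is unrelated to the CRAM hardware itself.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Download the 70,000-image MNIST dataset (28x28 grayscale digits).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0  # scale pixel values to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10_000, random_state=0
)

# A simple multinomial logistic-regression classifier.
clf = LogisticRegression(max_iter=100, solver="lbfgs")
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```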
Research team seeks patents and industry collaboration
The research team, whose paper was led by postdoctoral researcher Yang Lv of the University of Minnesota's Department of Electrical and Computer Engineering, has filed several patent applications based on the technology. The researchers aim to collaborate with leaders in the semiconductor industry, including those in Minnesota, to run large-scale demonstrations and produce hardware that improves AI functionality while making it more energy-efficient.