DeepMind's AlphaTensor AI makes complex matrix multiplications look like games
We have all done matrix multiplication in school, haven't we? If you remember correctly, it wasn't so difficult. But did you know that outside the classroom, this humble mathematical operation powers everything from processing images to generating graphics for computer games, and much more? Google's DeepMind has developed an AI dubbed 'AlphaTensor' that can make these complex computations look like a game.
Why does this story matter?
Algorithms exist to make computational tasks easier. Human-made algorithms have gotten us this far, but we can't deny their inefficiencies. That's where AI-generated algorithms come into play. AIs like AlphaTensor can take computational tasks to a level that was once hard to imagine. It is matrix multiplication now, but it could be something else tomorrow. The potential of these AI-generated algorithms is enormous.
Multiplying larger matrices efficiently is still a big struggle
Matrix multiplication becomes computationally expensive at scale, and algorithms have long been used to find efficient ways to perform it. In 1969, mathematician Volker Strassen found a way to multiply a pair of 2x2 matrices in just seven multiplications, rather than the eight the standard algorithm requires. However, efficiently multiplying larger matrices, even a pair of 3x3 matrices, remains a huge struggle.
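Strassen's trick can be written out directly. The sketch below (plain Python; the function name and variable layout are my own) multiplies two 2x2 matrices using seven products instead of the standard eight:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (Strassen, 1969)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven products instead of the standard eight
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine the seven products into the four output entries
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

Saving one multiplication may look minor, but the payoff compounds when the same step is applied recursively to blocks of larger matrices, since multiplications dominate the cost.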
The AI is reinforced if it achieves a multistep goal
Researchers at DeepMind used two methods to enable AlphaTensor to find algorithms for matrix multiplications. Firstly, they used 'reinforcement learning,' a form of machine learning where an AI agent (usually a neural network) is rewarded for achieving a multistep goal. As reinforcement, the agent's internal parameters are updated to make future success more likely. AlphaTensor also uses a game-playing method called 'tree search.'
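To make the reward-update loop concrete, here is a toy sketch of reinforcement learning, not DeepMind's setup: tabular Q-learning on an invented task where reward only arrives after a multistep goal is achieved, and each reward nudges the agent's stored values toward future success.

```python
import random

random.seed(0)

# Toy multistep task: start at state 0, reach state 4 by moving left (-1) or right (+1).
# Reward arrives only when the goal is reached.
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the table, occasionally explore
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: reward propagates backward through the table,
        # making the successful multistep path more likely in future games
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right (toward the goal) from every state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

AlphaTensor replaces this lookup table with a deep neural network, but the principle is the same: rewarded outcomes update internal parameters.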
What is 'tree search'?
In 'tree search,' while planning its next action, AlphaTensor explores the outcomes of branching possibilities. The neural network helps it decide the best path by predicting the most promising outcomes at each step. As the AI learns, it uses outcomes from its games as feedback to improve the neural network. This feedback process improves tree search.
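The branching idea can be illustrated on an invented one-player game (reduce a number to zero by subtracting perfect squares): the search expands each allowed move, evaluates the resulting positions, and keeps the most promising branch. This toy version searches exhaustively with memoization; AlphaTensor cannot afford that, so its neural network predicts which branches are worth exploring.

```python
from functools import lru_cache

# Toy one-player game: reduce n to 0 by subtracting perfect squares,
# in as few moves as possible.
@lru_cache(maxsize=None)
def best_cost(n):
    """Fewest moves needed to reach zero from n."""
    if n == 0:
        return 0
    moves = [s * s for s in range(1, int(n ** 0.5) + 1)]
    # Branch on every legal move and keep the cheapest subtree
    return 1 + min(best_cost(n - m) for m in moves)

def best_move(n):
    """Pick the branch whose subtree promises the fewest remaining steps."""
    moves = [s * s for s in range(1, int(n ** 0.5) + 1)]
    return min(moves, key=lambda m: best_cost(n - m))
```

For example, from 12 the best first move is subtracting 4 (12 = 4 + 4 + 4, three moves), even though greedily subtracting the largest square, 9, would take four.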
Finding efficient algorithms is converted into a one-player game
Now let's look at what AlphaTensor does. The problem of finding efficient matrix multiplication algorithms is converted into a single-player game. Each game is a puzzle that starts with a grid of numbers filled in correctly, and the AI has to reduce every number to zero in the fewest possible steps, using only the moves it is allowed to make.
The agent has to choose from millions of moves
Each move made by the AI is a calculation. When inverted, these moves combine entries from two matrices to create an entry in the output matrix. This is a difficult process, as the agent has to pick from millions of possible moves. Researchers showed AlphaTensor some successful games so that it didn't have to start from scratch.
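The game can be made concrete. Multiplying two 2x2 matrices corresponds to a 4x4x4 grid (tensor) of 0s and 1s, and each move subtracts a rank-1 tensor, which stands for one multiplication, from the grid. A sketch in plain Python (indices and naming are my own) checks that Strassen's seven moves clear the board, which is exactly why seven multiplications suffice:

```python
from itertools import product

# The 4x4x4 "matmul tensor" for 2x2 matrices: T[i][j][k] = 1 when standard
# multiplication adds a_i * b_j into output entry c_k. Entries are flattened
# row-major: index 0 = (1,1), 1 = (1,2), 2 = (2,1), 3 = (2,2).
T = [[[0] * 4 for _ in range(4)] for _ in range(4)]
for r, m, c in product(range(2), repeat=3):
    T[2 * r + m][2 * m + c][2 * r + c] += 1

# Strassen's seven moves as (u, v, w) triples: each move removes the rank-1
# tensor u (x) v (x) w from the board. A legal game ends when T is all zeros.
moves = [
    ([1, 0, 0, 1], [1, 0, 0, 1], [1, 0, 0, 1]),    # m1 = (a11+a22)(b11+b22)
    ([0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 1, -1]),   # m2 = (a21+a22) b11
    ([1, 0, 0, 0], [0, 1, 0, -1], [0, 1, 0, 1]),   # m3 = a11 (b12-b22)
    ([0, 0, 0, 1], [-1, 0, 1, 0], [1, 0, 1, 0]),   # m4 = a22 (b21-b11)
    ([1, 1, 0, 0], [0, 0, 0, 1], [-1, 1, 0, 0]),   # m5 = (a11+a12) b22
    ([-1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 1]),   # m6 = (a21-a11)(b11+b12)
    ([0, 1, 0, -1], [0, 0, 1, 1], [1, 0, 0, 0]),   # m7 = (a12-a22)(b21+b22)
]

for u, v, w in moves:
    for i, j, k in product(range(4), repeat=3):
        T[i][j][k] -= u[i] * v[j] * w[k]

# Seven moves zero out the board, so seven multiplications suffice
solved = all(T[i][j][k] == 0 for i, j, k in product(range(4), repeat=3))
print(solved)
```

AlphaTensor's task is to find such sets of moves on its own, and for larger matrices the space of candidate moves runs into the millions.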
The AI came up with more efficient solutions in many cases
Researchers at DeepMind tested AlphaTensor on input matrices up to 5x5. At times, it rediscovered shortcuts devised by Strassen and other mathematicians, but at other times, it came up with more efficient solutions. It tackled larger matrix multiplications by creating a meta-algorithm that breaks them down into smaller problems. In many cases, the AI sped up matrix multiplications by several percent.
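The break-it-into-smaller-problems idea is not new in itself; a sketch of the classic recursive trick that AlphaTensor's meta-algorithms generalize (not AlphaTensor's own learned decomposition) applies Strassen's seven-product step to blocks of a larger matrix, assuming the size is a power of two:

```python
def add(X, Y, sign=1):
    """Entrywise X + sign*Y for equally sized matrices."""
    return [[x + sign * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two 2^n x 2^n matrices, applying Strassen's step recursively."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four blocks and recurse on the blocks
    q = lambda M, r, c: [row[c:c + h] for row in M[r:r + h]]
    a11, a12, a21, a22 = q(A, 0, 0), q(A, 0, h), q(A, h, 0), q(A, h, h)
    b11, b12, b21, b22 = q(B, 0, 0), q(B, 0, h), q(B, h, 0), q(B, h, h)
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, add(b12, b22, -1))
    m4 = strassen(a22, add(b21, b11, -1))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(add(a21, a11, -1), add(b11, b12))
    m7 = strassen(add(a12, a22, -1), add(b21, b22))
    # Recombine the seven block products into the four output blocks
    c11 = add(add(m1, m4), add(m7, m5, -1))
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(add(m1, m3), add(m6, m2, -1))
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bot = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bot
```

Because each level of recursion trades eight block multiplications for seven, the saving compounds as the matrices grow, which is why shaving even one multiplication from a small base case matters.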
AlphaTensor will surpass the realm of human intuition: DeepMind
"Through learning, AlphaTensor gradually improves over time, re-discovering historical fast matrix multiplication algorithms such as Strassen's, eventually surpassing the realm of human intuition and discovering algorithms faster than previously known," said DeepMind. "It has got this amazing intuition by playing these games," said Pushmeet Kohli of DeepMind.