Google's DeepMind AI can create 3D models from 2D images
Google's UK-based sister company DeepMind has developed an artificial intelligence (AI) system that can create 3D scenes after observing them only as flat, 2D images. Researchers have trained the AI to learn much as humans do, guessing what scenes might look like from different angles after observing them from a single perspective.
Developing a neural network that can learn from its surroundings
Scientists have developed a neural network called the Generative Query Network (GQN). It observes its surroundings and trains only on that data, without requiring humans to supply labeled data to identify what it is looking at. So the neural network makes sense of objects and scenes using only the data it is being presented with.
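To make that idea concrete, here is a minimal PyTorch sketch of the observation side of a GQN-style system: each observed image is encoded together with its camera viewpoint, and the encodings are summed into a single scene representation. The layer sizes, the 7-number camera pose, and names like RepresentationNet are illustrative assumptions for this sketch, not DeepMind's published architecture.

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one (image, viewpoint) pair into a scene-representation vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 7 numbers for the camera pose (position + orientation) is an assumption.
        self.fc = nn.Linear(64 + 7, repr_dim)

    def forward(self, image, viewpoint):
        return self.fc(torch.cat([self.conv(image), viewpoint], dim=-1))

# Each observation is encoded independently, then the encodings are summed,
# so the scene representation is built purely from the network's own
# observations -- no human-provided labels enter the pipeline.
f = RepresentationNet()
images = torch.rand(3, 3, 64, 64)   # three observed 64x64 RGB views of one scene
viewpoints = torch.rand(3, 7)       # camera pose for each view
scene_repr = f(images, viewpoints).sum(dim=0, keepdim=True)  # shape: (1, 256)
```

Because the encodings are simply summed, the representation does not depend on the order in which the views were observed.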
It observes only three images to predict a 3D version
Even with no prior data about a scene, GQN is able to imagine what it might look like in 3D and render the entire scene from different angles. The AI has shown results in controlled environments as well as with randomly generated scenes.
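In the same hedged spirit, this sketch shows the query side: given an aggregated scene representation (a stand-in random vector here) and a camera pose the network has never observed, a generation network renders the predicted 2D view. GQN's actual generator is a recurrent latent-variable model; the simple deterministic decoder below is only an assumed stand-in to illustrate the interface.

```python
import torch
import torch.nn as nn

class GenerationNet(nn.Module):
    """Renders an image of the scene from a previously unseen camera pose."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 7, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        h = self.fc(torch.cat([scene_repr, query_viewpoint], dim=-1))
        return self.deconv(h.view(-1, 64, 8, 8))  # decode to a 64x64 RGB image

g = GenerationNet()
scene_repr = torch.rand(1, 256)  # stand-in for a representation aggregated
                                 # from a handful of observed views
query = torch.rand(1, 7)         # a camera pose the network has never seen
predicted_view = g(scene_repr, query)  # (1, 3, 64, 64) rendered 2D view
```

Querying many different poses this way is what lets the system turn a handful of flat images into renderings of the whole scene from new angles.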
Working towards "fully unsupervised scene understanding"
So far, the AI has only been tested on synthetic scenes. For GQN to work with real-world images, it will have to be trained and tested on realistic scenes from photographs. In the future, DeepMind's AI might be able to recreate faithful 3D scenes from the real world on demand, using only their 2D photographs.