NVIDIA Taught an AI to Create Fully-Textured 3D Models from Just a Single 2D Image
And it only takes 100 milliseconds.
Graphics processing technology company NVIDIA has announced that its global team of researchers has taught an AI to generate 3D models from a single 2D image completely autonomously. The news marks an impressive technological advancement, as creating a 3D model previously required either multiple images showing the object from different angles or human intervention.
According to NVIDIA, the new tool — called a differentiable interpolation-based renderer (DIB-R) — can render a fully-textured 3D model from a single photograph in a matter of milliseconds. To achieve this, NVIDIA trained the neural network on several large datasets, including 3D models, images paired with the 3D models they depict, and collections of images showing a particular object from different angles. After two days of training, the AI was able to generate 3D models from previously-unseen images without any human intervention. The company says the technology could significantly change how robotics and other autonomous systems — such as self-driving cars — function, since they can adapt to their surroundings better if they are able to generate 3D models of the environment around them.
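To give a rough sense of what "differentiable interpolation" means here, the sketch below is not NVIDIA's code, but it illustrates the core idea behind a differentiable rasterizer: each foreground pixel's colour is computed as a smooth interpolation of vertex attributes (here, plain barycentric interpolation inside one triangle), so errors measured at the pixels can be traced back to the mesh's vertices during training. All function names are illustrative assumptions, not part of DIB-R's actual API.

```python
# Illustrative sketch only (not NVIDIA's implementation): a pixel's colour
# as a differentiable function of triangle vertices and vertex colours.

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2D point p w.r.t. triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    w_c = 1.0 - w_a - w_b
    return w_a, w_b, w_c

def shade_pixel(p, triangle, vertex_colors):
    """Pixel colour = weighted sum of the three vertex colours.

    Because this is a smooth function of the vertex positions and colours,
    a training loss on rendered pixels can send gradients back to the mesh.
    """
    weights = barycentric_weights(p, *triangle)
    return tuple(
        sum(w * color[channel] for w, color in zip(weights, vertex_colors))
        for channel in range(3)
    )

# At the centroid of a triangle whose corners are pure red, green and blue,
# the interpolated pixel is the even mix of all three.
triangle = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
colors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
pixel = shade_pixel((1 / 3, 1 / 3), triangle, colors)
```

In a full renderer this interpolation runs over every pixel and every visible triangle, with a soft treatment of silhouette boundaries; that softness is what the "differentiable" in DIB-R's name refers to.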
“This is essentially the first time ever that you can take just about any 2D image and predict relevant 3D properties,” says Jun Gao, one of the researchers who worked on the project.
Those who wish to learn more about the new technology can access NVIDIA’s full paper on its website.
Elsewhere in tech, Oculus’ latest update for the Quest allows it to track your hands without the need for controllers.