An artificial intelligence algorithm can transform still images into a high-resolution, explorable 3D world, with potential implications for film effects and virtual reality.
When fed a selection of images of a scene, along with a rough 3D model of the scene created automatically using off-the-shelf software, the neural network can accurately visualise what the scene would look like from any viewpoint.
The neural network, developed by Darius Rückert and colleagues at the University of Erlangen-Nuremberg in Germany, differs from previous systems because it is able to extract physical properties from still images.
"We can change the camera pose and therefore get a new view of the object," says Rückert.
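The geometry behind "changing the camera pose" is standard perspective projection: the same 3D points land at different image positions depending on the camera's rotation and translation. The sketch below is purely illustrative, not the team's actual neural rendering pipeline, which passes a learned point cloud through a network; the function name and camera intrinsics here are assumptions for demonstration.

```python
import numpy as np

def project_points(points_world, R, t, f=500.0, cx=320.0, cy=240.0):
    """Project 3D world points into a pinhole camera at pose (R, t).

    Illustrative only: shows how a new camera pose yields a new view
    of the same points, not the paper's learned rendering.
    """
    # Transform points into the camera frame: X_cam = R @ X_world + t
    cam = points_world @ R.T + t
    # Keep only points in front of the camera
    cam = cam[cam[:, 2] > 0]
    # Perspective divide, then apply focal length and principal point
    u = f * cam[:, 0] / cam[:, 2] + cx
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# A point 5 m straight ahead projects to the image centre
pts = np.array([[0.0, 0.0, 5.0]])
print(project_points(pts, np.eye(3), np.zeros(3)))  # [[320. 240.]]
```

Shifting the camera sideways (a non-zero `t`) or rotating it (a different `R`) moves the projected points, which is what produces a novel view of the scene.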
The system could technically create an explorable 3D world from just two images, but it wouldn't be very accurate. "The more images you have, the better the quality," says Rückert. "The model cannot create stuff it hasn't seen."
The demonstrations use between 300 and 350 images captured from different angles. Rückert hopes to improve the system by having it simulate how light bounces off objects in the scene to reach the camera, which would mean fewer still images are needed for accurate 3D rendering.
"Until now, creating photorealistic images from 3D reconstructions wasn't fully automated and always had perceptible flaws," says Field, founder of New York-based company Abound Labs, who works on 3D capture software.
Still images can be turned into a 3D world (Image: Darius Rückert et al.)
While Field points out that the system still requires the input of accurate 3D data and doesn't yet work for moving objects, "the rendering quality is unparalleled", he says. "It's proof that automated photorealism is possible."
Field believes the technology will be used for generating visual effects in films and virtual reality walkthroughs of locations. "It's going to accelerate the already-hot research field of machine learning-based rendering for computer generated imagery," he says.