Capturing and reproducing realistic, real-world objects for any virtual environment is complex and time-consuming. Imagine simplifying this task with nothing more than a conventional camera and its built-in flash, whether on a mobile device or an off-the-shelf digital camera. A global team of computer scientists has developed a novel method that replicates physical objects for virtual and augmented reality using just a point-and-shoot camera with a flash, without the need for additional, and often expensive, supporting hardware.
“To faithfully reproduce a real-world object in the VR/AR environment, we need to replicate the 3D geometry and appearance of the object,” says Min H. Kim, associate professor of computer science at KAIST in South Korea and lead author of the research. “Traditionally, this has been either done manually by 3D artists, which is a labor-intensive task, or by using specialized, expensive hardware. Our method is straightforward, cheaper and efficient, and reproduces realistic 3D objects by just taking photos from a single camera with a built-in flash.”
Kim and his collaborators, Diego Gutierrez, professor of computer science at Universidad de Zaragoza in Spain, and KAIST PhD students Giljoo Nam and Joo Ho Lee, will present this new work at SIGGRAPH Asia 2018 in Tokyo, 4-7 December. The annual conference features the most respected technical and creative members in the field of computer graphics and interactive techniques, and showcases leading-edge research in science, art, gaming and animation, among other sectors.
Existing approaches for the acquisition of physical objects require specialized hardware setups to model the geometry and appearance of the desired objects. Those setups might include a 3D laser scanner, multiple cameras, or a lighting dome with more than a hundred light sources. In contrast, this new technique needs only a single camera to produce high-quality outputs.
“Many traditional methods using a single camera can capture only the 3D geometry of objects, but not the complex reflectance of real-world objects, given by the SVBRDF,” notes Kim. SVBRDF, which stands for spatially varying bidirectional reflectance distribution function, is key to capturing an object’s real-world shape and appearance. “Using only 3D geometry cannot reproduce the realistic appearance of the object in the AR/VR environment. Our technique can capture high-quality 3D geometry as well as its material appearance so that the objects can be realistically rendered in any virtual environment.”
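For readers curious what “spatially varying” means in practice, the sketch below illustrates the idea: reflectance parameters are looked up per surface point from texture maps rather than being a single material for the whole object. The Lambertian-plus-Blinn-Phong model, the parameter maps, and the function names here are illustrative assumptions for this example only, not the reflectance model or code from the paper.

```python
# Minimal SVBRDF sketch: per-texel reflectance parameters (not the paper's model).
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def eval_svbrdf(albedo_map, spec_map, rough_map, uv, n, l, v):
    """Evaluate a simple spatially varying BRDF at texture coordinate `uv`.

    albedo_map, spec_map: HxWx3 parameter textures; rough_map: HxW roughness.
    n, l, v: unit surface normal, light direction, view direction.
    """
    h, w = albedo_map.shape[:2]
    # Nearest-neighbour texture lookup: this is the "spatially varying" part.
    row = min(int(uv[1] * (h - 1)), h - 1)
    col = min(int(uv[0] * (w - 1)), w - 1)
    kd = albedo_map[row, col]                                # per-texel diffuse albedo
    ks = spec_map[row, col]                                  # per-texel specular reflectance
    shininess = 2.0 / max(rough_map[row, col] ** 2, 1e-4)    # roughness -> exponent

    half = normalize(l + v)
    diffuse = kd / np.pi
    specular = ks * (max(np.dot(n, half), 0.0) ** shininess)
    return diffuse + specular  # BRDF value for this point and direction pair

# Example: a checkerboard of two materials across the surface.
H, W = 64, 64
albedo = np.zeros((H, W, 3))
albedo[::2, ::2] = [0.8, 0.2, 0.2]
albedo[1::2, 1::2] = [0.2, 0.2, 0.8]
spec = np.full((H, W, 3), 0.04)
rough = np.full((H, W), 0.3)

f = eval_svbrdf(albedo, spec, rough,
                uv=(0.25, 0.75),
                n=np.array([0.0, 0.0, 1.0]),
                l=normalize(np.array([0.3, 0.2, 1.0])),
                v=np.array([0.0, 0.0, 1.0]))
print("BRDF value:", f)
```

Recovering maps like these for every point on a real object, together with its geometry, is what the researchers' method estimates from ordinary flash photographs.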
The group demonstrated their framework using a digital camera, a Nikon D7000, and the built-in camera of an Android mobile phone in a series of examples in their paper, “Practical SVBRDF Acquisition of 3D Objects with Unstructured Flash Photography.” The novel algorithm, which does not require any input geometry of the target object, successfully captured the geometry and appearance of 3D objects with basic flash photography and produced consistent results. The examples showcased in the work included a diverse set of objects spanning a wide range of geometries and materials, including metal, wood, plastic, ceramic, resin and paper, as well as complex shapes such as a finely detailed miniature statue of Nefertiti.
In future work, the researchers hope to further simplify the capture process and to extend the method to handle dynamic geometry or larger scenes, for instance.
Story Source:
Materials provided by Association for Computing Machinery. Note: Content may be edited for style and length.