As the demand for realistic volumetric video for AR experiences begins to grow (along with the facilities and services available for capturing it), Google researchers have developed a way to improve the format.
The team has a system called "The Relightables", which consists of a spherical cage with 330 programmable LED lights and around 100 cameras designed to capture volumetric video.
According to the team, the Relightables system delivers three innovations. First, the system's cameras operate as depth sensors, capturing 12.4MP depth maps of the subject. Second, the team created a reconstruction pipeline combining geometric methods and machine learning to synthesize video from the recorded data.
Finally, the system also captures reflectance maps, synchronized with the depth maps, which allows the subject's lighting to be manipulated in an AR or VR scene rather than preserving the original studio lighting. This information is obtained by alternating between two color gradient illumination patterns, which the system uses to derive the subject's reflectance properties during video reconstruction.
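The paper does not spell out the math in this article, but gradient-pattern capture setups of this kind are commonly explained via gradient illumination photometric stereo: under a linear gradient light pattern along an axis, the ratio of the gradient-lit image to a fully-lit image encodes that component of the surface normal. The sketch below is an illustrative NumPy version of that general idea, not Google's actual pipeline; the function name and the use of three axis-aligned gradient images plus a full-sphere image are assumptions for the example.

```python
import numpy as np

def normals_from_gradient_patterns(full, grad_x, grad_y, grad_z):
    """Illustrative sketch: estimate per-pixel surface normals from
    images lit by gradient illumination patterns (not the actual
    Relightables pipeline). `full` is the image under uniform
    full-sphere lighting; `grad_*` are images under linear gradient
    patterns along each axis. All arrays are HxW float, same exposure."""
    eps = 1e-6
    # Under a linear gradient pattern, the ratio of gradient-lit to
    # fully-lit radiance gives the normal component remapped to [0, 1];
    # map it back to [-1, 1].
    nx = 2.0 * grad_x / (full + eps) - 1.0
    ny = 2.0 * grad_y / (full + eps) - 1.0
    nz = 2.0 * grad_z / (full + eps) - 1.0
    n = np.stack([nx, ny, nz], axis=-1)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, eps)
```

A surface facing the camera (normal along +z) would appear at half intensity under the x and y gradients and at full intensity under the z gradient, which this mapping recovers as the normal (0, 0, 1).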
Having recently published the study through the Association for Computing Machinery, the team presented its findings at SIGGRAPH Asia 2019.
"Although significant progress has been made in the field of volumetric recording systems, focusing on 3D geometric reconstruction with textures with high resolution, much less work has been done to restore the photometric characteristics needed for relighting, "the team wrote in its paper summary.
"By contrast, a large number of works have tackled relightable acquisition for image-based approaches, which photograph the subject under a series of basic lighting conditions and combine the images to show the subject as they would appear in a target lighting environment. However, these approaches are not yet adapted for use in context or a high resolution volumetric capture system, our method combines this ability to realistically re-illuminate people for arbitrary environments, with the benefits of free-view volumetric capture and new levels of geometric accuracy for dynamic performance. "
The volumetric video segment has grown steadily over the past two years. In addition to Microsoft's Mixed Reality Capture Studios, Sony has set up its own studio, while Verizon has acquired Jaunt for its push into volumetric capture technology.
The use cases for volumetric video in augmented reality experiences, both now and in the future, are plentiful. The New York Times has demonstrated how volumetric video can improve augmented reality content for compelling storytelling, while 8th Wall has expanded the format to its web-based AR platform.
Meanwhile, Magic Leap's acquisition of Mimesys and its real-time holographic video call technology shows how volumetric video capture will change the way people communicate in the near future.
But Google's Creative Lab has devised another use case in collaboration with Opera Queensland: virtual opera. The prototype experience, which debuts at SIGGRAPH Asia, features three performers who were captured with the Relightables system.
With Google's contribution, the added realism that Relightables' volumetric video brings to 3D content makes it even harder to distinguish AR from reality.