For most consumers, LiDAR sensors for accurate depth sensing remain the exclusive domain of Apple's iPhones and iPads, but Google is helping Android device makers close the depth gap on the software side through its ARCore toolkit.
After introducing new AR features for Search and Maps and previewing the stunning Project Starline 3D video conferencing concept during Tuesday's I/O keynote presentation, Google unceremoniously announced ARCore 1.24, which adds two new AR capabilities: the Raw Depth API and the Recording and Playback API, with the former enabling more realistic AR experiences and more accurate occlusion in the absence of LiDAR.
The Raw Depth API builds on the existing Depth API by capturing additional depth data so that apps can render realistic AR experiences through a standard smartphone camera. However, Android devices with depth-sensing time-of-flight sensors will provide higher quality experiences.
“The new ARCore Raw Depth API provides more detailed representations of the geometry of objects in the scene by generating ‘raw’ depth maps with associated reliability maps,” said Google AR product managers Ian Zhang and Zeina Oweis. “These raw depth maps contain raw data points, and the reliability maps provide confidence in the depth estimate for each pixel in the raw depth map.”
The result of the aggregated data is improved geometry recognition, which means greater precision in depth measurement and a better understanding of the surroundings for realistically anchoring AR content in physical environments.
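In practice, apps opt into raw depth through the session configuration and then pull matching depth and confidence images each frame. Here's a minimal Kotlin sketch using the calls as they shipped in ARCore 1.24 (acquireRawDepthImage and acquireRawDepthConfidenceImage; later releases renamed the former to acquireRawDepthImage16Bits), with buffer processing left out:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Opt the session into raw depth, if the device supports it.
fun enableRawDepth(session: Session) {
    if (session.isDepthModeSupported(Config.DepthMode.RAW_DEPTH_ONLY)) {
        val config = session.config
        config.depthMode = Config.DepthMode.RAW_DEPTH_ONLY
        session.configure(config)
    }
}

// Pull the raw depth map and its matching confidence map for one frame.
// Depth pixels are 16-bit distances in millimeters; confidence pixels are
// 8-bit values where 0 is the lowest confidence and 255 the highest.
fun readRawDepth(frame: Frame) {
    try {
        frame.acquireRawDepthImage().use { depth ->
            frame.acquireRawDepthConfidenceImage().use { confidence ->
                val depthBuffer = depth.planes[0].buffer
                val confidenceBuffer = confidence.planes[0].buffer
                // ... copy or sample the buffers here ...
            }
        }
    } catch (e: NotYetAvailableException) {
        // Raw depth isn't ready for this frame yet; try again next frame.
    }
}
```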
One of the first apps to take advantage of the Raw Depth API is TikTok. The app's Green Screen Projector effect wraps photos from the user's camera roll around objects where the Raw Depth API reports a high degree of confidence.
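TikTok hasn't published how the effect works internally, but the confidence-gating idea is straightforward: keep only the pixels the API is sure about. A hypothetical sketch, assuming the Y8 confidence plane has been copied into a byte array and using an arbitrary cutoff of 200:

```kotlin
// Hypothetical illustration: build a mask of pixels whose confidence
// clears a threshold, so content is only projected onto surfaces the
// depth estimate is sure about. The 0..255 scale matches the Y8
// confidence image; the cutoff of 200 is an assumed, tunable value.
fun highConfidenceMask(confidence: ByteArray, threshold: Int = 200): BooleanArray =
    BooleanArray(confidence.size) { i ->
        (confidence[i].toInt() and 0xFF) >= threshold
    }
```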
Other early adopters include the virtual Rube Goldberg machine game AR Doodads, the measurement app AR Connect, the 3D scan app 3D Live Scanner, and TeamViewer's lifeAR for remote assistance.
In addition, with the Recording and Playback API, ARCore gives apps the ability to capture video footage along with inertial measurement unit (IMU) and depth data. For developers, this means they can test AR apps against recorded sessions instead of traveling to different real-world environments.
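At the code level, this boils down to a pair of session calls. A rough Kotlin sketch using the APIs as introduced in 1.24 (RecordingConfig with setMp4DatasetFilePath, plus setPlaybackDataset; newer releases moved to URI-based variants), with error handling elided:

```kotlin
import com.google.ar.core.RecordingConfig
import com.google.ar.core.Session

// Record the session to an MP4 dataset: camera frames plus IMU and
// depth tracks. If called while paused, recording begins on resume.
fun recordSession(session: Session, mp4Path: String) {
    val recordingConfig = RecordingConfig(session)
        .setMp4DatasetFilePath(mp4Path)
        .setAutoStopOnPause(true)
    session.startRecording(recordingConfig)
}

// Replay a dataset: the path must be set while the session is paused,
// after which frames come from the recording instead of the live camera.
fun playBackSession(session: Session, mp4Path: String) {
    session.setPlaybackDataset(mp4Path)
    session.resume()
}
```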
The Recording and Playback API also creates a new type of AR experience for end users, allowing them to add virtual content to videos. For example, SK Telecom's JumpAR lets users interact with video from locations in South Korea and add AR content. Meanwhile, VoxPlop! by Nexus Studios gives users the ability to add 3D characters to videos and share them with others, who in turn can edit them with AR content.
Google will cover these capabilities in greater … ahem … depth during the New Capabilities in ARCore session, debuting Wednesday.
Ironically, it was Google that first introduced depth sensors through its Tango hardware platform, but adoption was limited to just two commercial devices. Apple responded with ARKit, whose 1.0 release could detect horizontal surfaces through a standard iPhone camera. Google then abandoned the hardware approach and adapted Tango's software into ARCore.
Now Apple has made depth sensors mainstream with its iPhone Pro and iPad Pro series, while Google relies on software to capture depth data.
Despite constant iteration on ARKit and ARCore, mobile AR apps haven't really taken the world by storm. Instead, the AR platforms of Snapchat (which also supports the ARCore Depth API) and Facebook/Instagram, with their many bite-sized AR experiences, are the ones grabbing the attention of developers, creators, and brands.
Of course, the four companies and their respective mobile platforms serve as essential public beta tests for the next era in mobile computing: smartglasses.