As brands and content creators build more augmented reality experiences, the demand for tools to create the underlying 3D content is growing.
And Apple may have just leveled the playing field for creating 3D content.
On Monday, at WWDC 2021, Apple introduced Object Capture, a photogrammetry tool built on the Swift programming language and coming to macOS Monterey via RealityKit 2, the next version of Apple’s AR engine.
Object Capture stitches a series of photos together to create a 3D model of the subject. Users can sequentially take photos with their iPhones, iPads, or other cameras and then import the images into RealityKit 2 to generate the 3D model. Users can also preview the model through AR Quick Look to confirm its accuracy.
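Based on the API Apple previewed, that workflow can be sketched in Swift using RealityKit's PhotogrammetrySession. The file paths, detail level, and asset names below are illustrative assumptions, not values from Apple's materials:

```swift
import Foundation
import RealityKit

// Folder of overlapping photos taken around the subject (illustrative path).
let inputFolder = URL(fileURLWithPath: "/Users/me/Captures/Sneaker/")

// Create a photogrammetry session over the image folder.
let session = try PhotogrammetrySession(input: inputFolder)

// Request a .usdz model at a reduced detail level, a size suited to AR Quick Look.
let request = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/Users/me/Models/sneaker.usdz"),
    detail: .reduced
)

// Observe progress and completion on the session's async output stream.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, let result):
            if case .modelFile(let url) = result {
                print("Model written to \(url)")
            }
        case .processingComplete:
            print("Reconstruction finished.")
        default:
            break
        }
    }
}

// Kick off reconstruction; results arrive on session.outputs above.
try session.process(requests: [request])
```

The resulting .usdz file is the same format AR Quick Look consumes, which is what makes the capture-to-preview loop possible without leaving Apple's toolchain.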
Content generated via Object Capture can then be used in AR experiences created via Reality Composer or Xcode, as well as in third-party platforms such as Unity MARS and Maxon’s Cinema 4D. It is unclear whether LiDAR via iPhone and iPad is required for Object Capture.
Arts and crafts marketplace Etsy and furniture retailer Wayfair are among the early adopters of the technology. The latter will use Object Capture to expand the range of products that customers can view through ARKit in its mobile app.
Along with Object Capture, Apple is adding a new set of APIs through RealityKit 2 for “more realistic and complex AR experiences with greater visual, audio and animation control, including custom render passes and dynamic shaders.”
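The dynamic shader support, for example, comes through RealityKit 2's new CustomMaterial type, which lets a Metal function take over surface shading for an entity. A minimal sketch follows; the shader function name "mySurfaceShader" and the "sneaker" asset are illustrative assumptions:

```swift
import Metal
import RealityKit

// Load the app's default Metal library (assumes shaders are compiled into the app bundle).
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!

// Reference a surface shader function by name; "mySurfaceShader" is illustrative.
let surfaceShader = CustomMaterial.SurfaceShader(
    named: "mySurfaceShader",
    in: library
)

// Build a custom material that runs the shader under RealityKit's lit lighting model,
// then apply it to a model entity loaded from a hypothetical bundled USDZ asset.
var material = try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .lit)
let entity = try ModelEntity.loadModel(named: "sneaker")
entity.model?.materials = [material]
```

Replacing an entity's materials array this way is also how a developer would swap in the physically based materials that Object Capture generates.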
Monterey will be out this fall as a free software update, but for those who want to live on the bleeding edge, it’s available as a developer beta today, with the public beta coming next month.
As important as ARKit has been in giving developers the ability to integrate AR into mobile apps, creating the 3D content that underpins those experiences is a different set of tasks.
Giving anyone with a Mac and an iPhone or iPad the ability to create their own 3D objects without third-party software or hardware could be a quantum leap in AR content distribution.