Apple is making the development of AR apps more accessible to the masses by letting developers create 3D models from 2D photos.
Apple has announced RealityKit 2, the latest version of its suite of technologies for AR development. Apple has the largest AR platform in the world, with 1 billion AR-enabled iPhones and iPads in circulation globally running RealityKit, the rendering, animation, audio, and physics engine built for AR.
Now, the RealityKit 2 update brings an easier way for developers to create 3D models from 2D images in a matter of minutes.
The process works by photographing an object from all angles (including the bottom) and feeding the 2D pictures to the Object Capture API in macOS Monterey. A few lines of code later, the three-dimensional model is ready.
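To give a sense of how few lines are involved, here is a minimal sketch of the Object Capture workflow built on RealityKit's PhotogrammetrySession in macOS Monterey; the folder paths, output file name, and detail level are illustrative placeholders.

```swift
import Foundation
import RealityKit

// Builds a .usdz model from a folder of 2D photos (paths are placeholders).
func captureObject() async throws {
    let imagesFolder = URL(fileURLWithPath: "/Users/me/Captures/Sneaker", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/Users/me/Models/Sneaker.usdz")

    // Create a photogrammetry session over the folder of input images.
    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: PhotogrammetrySession.Configuration())

    // Ask for a medium-detail .usdz file and start processing.
    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

    // Watch the session's async output stream for progress and completion.
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, let result):
            if case .modelFile(let url) = result {
                print("Model written to \(url.path)")
            }
        case .processingComplete:
            return
        default:
            break
        }
    }
}
```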
The images themselves can be shot with the Qlone camera app or any other image-capture app on an iPhone or iPad, or with a DSLR or drone.
This is a great update, as it will let developers skip the time-consuming and costly asset-creation stage in the development of their AR apps.
The news comes alongside a few other RealityKit updates. For example, RealityKit 2 improves the visual, audio, and animation controls used to build AR experiences, and the new version adds custom shaders that give developers more control over the rendering process.
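That shader support is exposed through RealityKit 2's CustomMaterial type, which wraps shader functions written in Metal. As a rough sketch, assuming "mySurfaceShader" and "myGeometryModifier" are placeholder functions you would define in your own .metal file:

```swift
import Metal
import RealityKit

// Builds a box entity shaded by custom Metal functions from the given library.
func makeShadedBox(in library: MTLLibrary) throws -> ModelEntity {
    // Wrap a Metal surface shader and geometry modifier for RealityKit.
    // The function names are placeholders for shaders compiled into the library.
    let surfaceShader = CustomMaterial.SurfaceShader(named: "mySurfaceShader", in: library)
    let geometryModifier = CustomMaterial.GeometryModifier(named: "myGeometryModifier", in: library)

    // Build the custom material and apply it to a simple box entity.
    let material = try CustomMaterial(surfaceShader: surfaceShader,
                                      geometryModifier: geometryModifier,
                                      lightingModel: .lit)
    return ModelEntity(mesh: .generateBox(size: 0.1), materials: [material])
}
```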
The updates also offer a more detailed look and feel for AR objects, dynamic loading of assets, an Entity Component System for organizing assets in an AR scene, and the ability to create player-controlled characters so users can jump, scale, and explore AR worlds in games built on RealityKit.
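The Entity Component System piece means developers can define their own components and per-frame systems instead of scattering update logic across individual entities. A minimal sketch, with made-up names, might look like this:

```swift
import RealityKit
import simd

// A custom component that tags entities we want to spin (name is illustrative).
struct SpinComponent: Component {
    var radiansPerSecond: Float = .pi
}

// A custom system that RealityKit runs every frame for the whole scene.
struct SpinSystem: System {
    // Query for every entity that carries a SpinComponent.
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        let dt = Float(context.deltaTime)
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] as? SpinComponent else { continue }
            // Rotate the entity around its local Y axis.
            entity.transform.rotation *= simd_quatf(angle: spin.radiansPerSecond * dt,
                                                    axis: [0, 1, 0])
        }
    }
}

// Call once at app startup so RealityKit knows about the new types.
func registerSpinning() {
    SpinComponent.registerComponent()
    SpinSystem.registerSystem()
}
```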
RealityKit is already seeing heavy use in online retail, with big names like Wayfair and Etsy using Object Capture to create virtual representations of their merchandise.