If you’ve been developing AR apps for a while now, you’re probably using Vuforia, Wikitude, or ARCore/ARKit, depending on which platform you’re a part of. All these SDKs are great and have really awesome features for both marker and markerless AR, but is that enough?
Of course not, nothing is ever enough. We humans are always looking for the next best thing, also known as shiny object syndrome. So what does this mean for AR? How could we possibly make it better?
For one, we could have a better understanding of our environment. I’m sure you have experienced a case where your AR game object goes behind a wall but you can still see it. This happens because current SDKs do not handle occlusion: the game object should temporarily vanish whenever any object in the real world comes between that virtual object and the camera.
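At its core, occlusion is a per-pixel depth comparison: if a real surface is closer to the camera than the virtual object along the same line of sight, the virtual object should be hidden. Here is a minimal conceptual sketch of that idea, with hypothetical depth values standing in for a real depth map from an SDK (none of these names belong to any actual AR API):

```python
# Conceptual sketch of depth-based occlusion (hypothetical values,
# not a real SDK API). A virtual object is hidden when the real world
# is closer to the camera than the object along the same line of sight.

def is_occluded(virtual_depth_m: float, real_world_depth_m: float) -> bool:
    """True if a real surface sits between the camera and the virtual object."""
    return real_world_depth_m < virtual_depth_m

def visible_objects(objects, depth_at):
    """Keep only objects whose line of sight to the camera is unobstructed.

    objects:  list of (name, screen_xy, virtual_depth_m) tuples
    depth_at: function mapping screen_xy -> real-world depth in metres
              (e.g. sampled from a depth map the SDK would produce)
    """
    return [name for name, xy, depth in objects
            if not is_occluded(depth, depth_at(xy))]

# Example: a wall 2 m away should hide a character placed 3 m behind it,
# while a character in open space (nearest surface 10 m away) stays visible.
depth_map = {(100, 200): 2.0, (300, 200): 10.0}
objects = [("character_behind_wall", (100, 200), 3.0),
           ("character_in_open", (300, 200), 3.0)]
print(visible_objects(objects, depth_map.__getitem__))  # ['character_in_open']
```

Real SDKs do this comparison per pixel on the GPU using a dense depth map, but the decision rule is the same.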
6D.ai has released an SDK which features an occlusion algorithm, but at the moment it is taking a while to obtain a license to use it. Niantic, the maker of the popular game Pokémon Go, also plans to support occlusion in its SDK. However, it is not publicly available, and there is no word on when it will be released.

Another much-needed feature is multiplayer. ARCore has Cloud Anchors, which enable multi-view sessions, but it is far from an easy-to-implement, full-fledged multiplayer API like Photon. I cover Cloud Anchors and multiplayer AR gaming in a video at this Link. Niantic states that it is working on a one-click solution for enabling multiplayer functionality within games and apps. This would be really nice to have: because AR blends with actual reality, it makes sense to share that space and your experience with others.
Beyond these forward-thinking companies, I’m quite sure that both Google and Apple will add occlusion to their AR arsenals, ARCore and ARKit respectively.
Earlier I mentioned that AR would work much better, or rather create a more immersive experience, if it were paired with artificial intelligence to obtain a better understanding of the world. Think about it: what if you were able to identify a table, a chair, and other everyday objects in your surroundings? The AR SDK could then identify not just plain objects but also smart objects, and connect via the Internet of Things (IoT) to show you stats about the object in Augmented Reality, as well as give you control of the appliance, just by looking at it.
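The flow described above could be sketched as: classify what the camera sees, look the label up in a registry of known smart devices, and surface that device’s controls in AR. The sketch below is purely illustrative; names like `classify` and `IOT_REGISTRY` are assumptions for this example, not part of any AR SDK or IoT standard:

```python
# Illustrative sketch: pairing object recognition with IoT control.
# `classify` and `IOT_REGISTRY` are hypothetical stand-ins, not real APIs.

IOT_REGISTRY = {
    "smart_lamp": {"endpoint": "lamp-01.local", "actions": ["on", "off", "dim"]},
    "thermostat": {"endpoint": "thermo-01.local", "actions": ["set_temp"]},
}

def classify(camera_frame_label: str) -> str:
    """Stand-in for an ML classifier labelling the object you are looking at.

    A real app would run an image model on the camera frame here; for this
    sketch we simply pass the label through.
    """
    return camera_frame_label

def lookup_smart_object(label: str):
    """Return IoT info for a recognised smart object, or None for a plain one."""
    return IOT_REGISTRY.get(classify(label))

# Looking at a smart lamp surfaces its endpoint and available actions in AR;
# looking at an ordinary chair surfaces nothing.
device = lookup_smart_object("smart_lamp")
if device:
    print(f"Connected to {device['endpoint']}, actions: {device['actions']}")
```

A real pipeline would replace `classify` with an on-device vision model and the registry with device discovery on the local network, but the identify-then-look-up structure would stay the same.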
Of course, the sky is the limit with regards to how to improve AR. It’s great that we have smartphones to bridge the gap until we are able to have affordable and ergonomic AR glasses sometime in the future.
What will most likely stay constant, in my view, is that we will still be using Unity. Our AR SDK may switch a couple of times depending on which vendor offers the most state-of-the-art features.
Cheers for now!