Augmented Reality For Home Automation

The smart home is one of the most exciting developments going on right now, but it has yet to really hit a home run. Most of the devices are cumbersome to configure and use, and often require an app to be installed. A phone app to turn your light on, for instance, is one of the most roundabout and slow ways to control a light bulb, and it doesn't improve on the generic light switch. I can get up from the couch and flip the switch faster than I can unlock my phone, find and launch the app, and wait for a connection, just to turn off the light or change the brightness. There are other advantages, like scheduling, but we're giving up an awful lot of simplicity in the process. I believe that lights and other ordinary devices we put "smarts" into need to add value without taking away the existing value.

But even the good old light switch isn't the perfect solution. How often have you walked into a room you haven't been in before and tried to figure out which switch turns on which light? Perhaps there's a better, more natural way, taking some cues from Natural User Interfaces.

As a comparison, think of how we interact with people when we ask them to do something. The most natural way is to look at a person and make the request, for instance, "Could you please pass the salt?" The voice carries the action, and the "who" is determined by who you're looking at. It's a completely natural way of interacting. A switch, by contrast, is very indirect: you move a lever, and perhaps it connects some wires that ultimately run through a bulb somewhere else in the room. Wouldn't it be much better if we could just look at a light and tell it to turn on or change its brightness, the way we're used to interacting with each other? Anyone entering a room would know exactly how to turn on any device simply by looking at it and saying "on". No need to guess at light switches or hunt for the app that works with that particular bulb. Farfetched? Not really.

We can use augmented reality to show which devices in the smart home can be interacted with and what their status is by overlaying them with information (for instance, the song currently playing on a speaker), but also to understand what you're looking at and use that as the context for voice commands. This makes voice control much more natural and greatly improves how well the user's intent can be interpreted.
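To make that idea concrete, here is a minimal sketch in Python (purely illustrative: the device names, room coordinates, and the 10-degree tolerance cone are assumptions, and a real HoloLens app would use its own gaze-tracking and speech APIs instead). It resolves "what you're looking at" by picking the device whose direction from your head makes the smallest angle with your gaze ray, then routes the spoken command to that device:

```python
import math
from dataclasses import dataclass

@dataclass
class Device:
    """A hypothetical smart-home device with a known position in the room."""
    name: str
    position: tuple  # (x, y, z) in metres, room coordinates
    is_on: bool = False

    def handle(self, command: str) -> str:
        # Toy command set; a real device would expose a richer API.
        if command == "on":
            self.is_on = True
        elif command == "off":
            self.is_on = False
        return f"{self.name} is now {'on' if self.is_on else 'off'}"

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def gaze_target(devices, head_pos, gaze_dir, max_angle_deg=10.0):
    """Return the device closest to the gaze ray, within a tolerance cone.

    A headset gives us a head position and gaze direction; here we
    approximate "what you're looking at" as the device whose direction
    from the head makes the smallest angle with the gaze ray.
    """
    gaze = _normalize(gaze_dir)
    best, best_angle = None, max_angle_deg
    for device in devices:
        to_device = _normalize(
            tuple(p - h for p, h in zip(device.position, head_pos)))
        cos = max(-1.0, min(1.0, sum(a * b for a, b in zip(gaze, to_device))))
        angle = math.degrees(math.acos(cos))
        if angle < best_angle:
            best, best_angle = device, angle
    return best

# Hypothetical room: a lamp ahead of the user, a speaker off to the right.
devices = [Device("ceiling lamp", (0.0, 2.0, 3.0)),
           Device("speaker", (2.0, 1.0, 1.0))]

head = (0.0, 1.7, 0.0)   # standing near the doorway
gaze = (0.0, 0.1, 1.0)   # looking slightly up, straight ahead

target = gaze_target(devices, head, gaze)
if target is not None:
    print(target.handle("on"))  # -> ceiling lamp is now on
```

On an actual headset the same pattern applies, with the head pose coming from the device's tracking and the command string from its speech recognizer; the key point is that gaze, not an app or a switch, selects the target.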

In this project, we'll be using Microsoft's HoloLens to augment real objects in the smart home with information and make them controllable with gestures and voice commands.

Now, I just said that launching an app on a phone was a slow, cumbersome way to control a light. Surely putting on a HoloLens, wearing it around the house, and launching apps to flip light switches isn't much better, and I agree that is the state today. However, it's safe to assume these types of devices will shrink down to a practical wearable; it's not farfetched to think that in the near future we would rather wear a pair of lightweight smart glasses than carry our phones around in our pockets all day. So keep an open mind and think of this project as what could be in the near future, while marvelling that we can already use today's technology to make interacting with the devices in your home truly natural.


Ritesh Kanjee has over 7 years of experience in Printed Circuit Board (PCB) design, as well as in image processing and embedded control. He completed his Master's degree in Electronic Engineering and published an IEEE paper, "Vision-based adaptive cruise control using pattern matching" (available on Google Scholar), with the work implemented in LabVIEW. He works as an embedded electronics engineer in defence research and has experience in FPGA design, programming in both VHDL and Verilog.