MIT Fluid Interfaces Reality Editor

The Reality Editor is a new kind of tool for empowering you to connect and manipulate the functionality of physical objects. Just point the camera of your smartphone at an object built with the Open Hybrid Platform and its invisible capabilities will become visible for you to edit. Drag a virtual line from one object to another and create a new relationship between these objects. With this simplicity, you are able to master the entire scope of connected objects.

It has three main functionalities:
The first functionality of the Reality Editor is a directly mapped interface that allows one to interact with real objects right on the spot. The benefits of such direct mapping can be explored with a simple example scenario: imagine a house with a kitchen, living room, bedroom, and bathroom, where every room contains at least one light. A typical graphical user interface for controlling the lights on a computer screen or smartphone would represent the lights by numbers, lists, or a categorization of symbols, but never map them to their actual positions in one's home.
With the Reality Editor, one can simply hold a device over the light that needs to be controlled, and a virtual object is displayed that can be manipulated to change the light's settings. No mental mapping between the object and its virtual interface needs to be remembered. A minimum of abstraction and mental demand is achieved when a user has a direct view of the object of interest and manipulates it on the spot. The Reality Editor provides this direct mapping and thereby turns complex operation and programming into more intuitive tasks.

Aside from operating an object, another functionality of the Editor is visual editing. This lets one reprogram the behaviour of devices and combine the functionalities of different physical objects. In this mode, every knob, button, speaker, or screen of a smart object has a virtual tag. These tags can be used to connect the functions of an object with other tags of the same or other objects. For example, the virtual object for a desk light can consist of a switch tag and brightness, hue, and saturation tags. The switch provides the on and off functionality; connecting the switch tag to the brightness tag represents the basic functionality of the desk light.
By connecting tags of different objects, the user can program multi-object functionality. For example, a radio has a tuning knob, a volume knob, and a speaker. By disconnecting the tuning knob from the volume knob, connecting the output of the tuning knob to the input of the light switch, and connecting the output of the light switch to the input of the volume knob, the radio turns on and off whenever the light is turned on and off. This simple example shows how editing everyday objects and their functionalities can be flexible, creative, and personalised. (The Reality Editor was presented at UbiComp'13, September 8–12, 2013, Zurich, Switzerland.)
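The tag-and-connection model described above is essentially a small data-flow graph: each tag is an input/output endpoint, and a connection routes one tag's output into another tag's input. The sketch below illustrates this idea in Python; the class and method names are illustrative assumptions, not the Open Hybrid platform's actual API.

```python
class Tag:
    """An input/output endpoint on a smart object (e.g. a switch or a volume knob)."""

    def __init__(self, name):
        self.name = name
        self.value = 0
        self.listeners = []  # input tags wired to this tag's output

    def connect(self, other):
        """Route this tag's output into another tag's input."""
        self.listeners.append(other)

    def write(self, value):
        """Set this tag's value and propagate it along all connections."""
        self.value = value
        for tag in self.listeners:
            tag.write(value)


# Recreate the paper's example: the light switch drives the radio's volume,
# so the radio turns on and off together with the light.
light_switch = Tag("light.switch")
radio_volume = Tag("radio.volume")
light_switch.connect(radio_volume)

light_switch.write(1)      # turning the light on...
print(radio_volume.value)  # 1 -- ...also turns the radio on
```

The same mechanism scales to chains of objects, since `write` propagates recursively through every connection, which mirrors how the Editor lets a user compose behaviour by drawing lines between tags.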
As a third function, the Reality Editor provides the ability to freeze the image of an object so that all interactions can be performed on a still image. This makes it possible for a user to investigate and program smart objects from anywhere, even when not in the vicinity of the object. Since the interface is still shown on top of a real image of the device being controlled, a strong connection between the real object and the Reality Editor remains. For example, one can freeze an image of the radio, take it into the living room, and still operate the radio from a distance. These "frozen" images can be placed in a memory bar located on the side of the Editor. To save an image, the user taps and holds the screen for 3 seconds in an area not occupied by the graphical interface of the shown object, then moves the image to a free spot in the memory bar. Tapping that spot later reopens the stored interface for editing.

The Reality Editor demonstrates how a directly mapped interface can be used to interact with a varied range of physical objects, providing a very simple way to program the behaviour and interactions of the physical world.



Ritesh Kanjee has over 7 years of experience in Printed Circuit Board (PCB) design as well as in image processing and embedded control. He completed his Master's degree in Electronic Engineering and published an IEEE paper titled "Vision-Based Adaptive Cruise Control Using Pattern Matching" (on Google Scholar). His work was implemented in LabVIEW. He works as an Embedded Electronic Engineer in defence research and has experience in FPGA design, programming in both VHDL and Verilog.