The Kinect exploded onto the gaming and natural user interface scene. Within days of its release, people had hacked it, and a collective desire to see what a depth-sensing camera could do was born. Caught up in the same energy, the UM3D Lab began experimenting with the hacks as they appeared and exploring how they could be combined with other technology. After some initial tests, and the release of the official SDK from Microsoft, we dove into deeper development with the device.
In an effort to improve interactivity in the MIDEN, the Kinect has been applied as a way of representing the physical body in a virtual space. By analyzing the data received from the Kinect, the UM3D Lab's rendering engine creates a digital model of the body. This model serves as an avatar that tracks the user's location in space, allowing them to interact with virtual objects. Because the MIDEN gives the user perspective and depth perception, interaction feels more natural than maneuvering an avatar on a screen; the user can reach out and directly "touch" objects.