Last week, Microsoft announced the general availability of an SDK for its Kinect platform. For those of you who have been living in a cave for the past year, Kinect is a motion-sensing controller for the Xbox 360 that can interpret actions as fine-grained as the movements of individual fingertips. Now Microsoft has made an SDK available so that developers can (legally) build their own non-Xbox applications that leverage the platform.
Nintendo’s Wii recently achieved widespread commercial success by intertwining physical activity and video gaming. Kinect looks to take that experience one step further by removing the need for a controller altogether. Is the “Natural User Interface,” a term coined by Microsoft and offered through Kinect, the next game-changer for user experience?
Four years ago, the introduction of the iPhone revolutionized the mobile phone industry with its sleek design and intuitive touch interface. We soon saw its advanced capabilities percolate into other device classes, reshaping customer requirements and expectations.
Imagine technology that would allow an engineer to control a humanoid robot for scientific research, a firefighter to quickly launch a rescue mission in a harsh environment, or even a way to engage the physically disabled in activities that a lack of precise tactile control would previously have precluded. We expect that Kinect and similar gaming platforms offering pervasive interactions with technology and media will help drive the next generation of embedded systems, not only in terms of device specifications, but also in terms of entirely new use cases.
So if this type of human-to-machine interaction is on the horizon, how will it impact the embedded community? Which stakeholders in the embedded device supply chain will be expected to provide this functionality: embedded OS providers, software development or HMI modeling tool vendors, or the OEMs themselves?