When Will We See Tony Stark’s (AKA Iron Man) Tech In The Real World?

RASPUTIN


In the last two decades, we’ve seen great shifts in how we interact with our computers. Before the 80s, most if not all user input came through the keyboard. Then Apple copied and improved Xerox PARC’s graphical interface, pairing the keyboard with a then-innovative pointing device: the mouse, which shipped with the original Apple Macintosh in 1984.

Much later, in the early 2000s, we started moving toward natural user interfaces in the form of touchscreens on cell phones (really bad touchscreens, I might add). Touchscreens in mobile phones were, once again, vastly improved when Apple introduced the iPhone (2007) with a capacitive touchscreen that accepted multi-touch input.

Now, we’re on the brink of a new kind of user interface: one that replaces the mouse and physical keyboard with the human hand and the human voice. We’ve been seeing it in movies for a long time. The most memorable such interface was in Tom Cruise’s Minority Report, where users manipulate on-screen items with a flick of the wrist, zoom in and out with gestures, and transfer data from external memory to a personal terminal simply by dragging and dropping items with their hands.

A more recent example of touch- and voice-based user interfaces comes in the Iron Man movies. The UIs in Iron Man build on the ones shown in Minority Report, adding voice commands and the manipulation of digital items (e.g. 3D holograms) as if they were part of the real world.

Pretty amazing, huh?

At first glance, you might think we’re still a few decades away from being able to interact with our computers like the genius/billionaire/playboy/philanthropist in Iron Man or Anderton in Minority Report, but the truth is: the future is already here!

Advanced Gestures (Almost!)

Heard of the Kinect? It’s the $149 full-body motion-sensing input device you can get for the Xbox 360. While its main use is playing brainless video games, developers have been able to pull off Minority Report-like gestures with it using the Kinect SDK for Windows PCs. The latest Xbox experience lets you feel a little like Anderton, with basic swiping via Kinect. You can also use voice commands like “Xbox, play disc!” to make the Xbox do things for you without moving a limb.
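Under the hood, gesture recognition like this is conceptually simple: the sensor streams tracked body positions to your app, and your code watches for characteristic motion patterns. As a rough, simplified sketch (this is an illustration of the idea, not the actual Kinect SDK, which delivers full skeletal joint data), here’s how a basic rightward swipe could be detected from a stream of hand x-positions:

```python
def detect_swipe(positions, min_distance=0.3, max_samples=15):
    """Detect a left-to-right swipe from a stream of hand x-positions
    (in metres, as a skeletal tracker might report them).

    A swipe here means near-monotonic rightward movement covering at
    least `min_distance` within the last `max_samples` frames."""
    window = positions[-max_samples:]
    if len(window) < 2:
        return False
    # Movement must be consistently rightward (allow tiny jitter).
    steps = [b - a for a, b in zip(window, window[1:])]
    if any(step < -0.02 for step in steps):
        return False
    # Total travel must be large enough to count as a deliberate swipe.
    return window[-1] - window[0] >= min_distance

# A hand moving steadily right by ~0.36 m over 10 frames registers a swipe.
print(detect_swipe([0.1 + 0.04 * i for i in range(10)]))  # True
```

Real systems add smoothing and timing constraints on top of this, but the core is the same: thresholds over tracked positions.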

More recently, we talked about a new kind of input device called the Leap. The Leap uses 3D motion-sensing technology focused on closing the gap between the real world and the digital world.

Like Iron Man’s UI, it lets you manipulate digital objects as if they were real-world objects, and vice versa: you can drag and rotate digital items just as you would in real life, pick up a pencil and write on whatever’s on your display, pinch to zoom with a movement of your whole arm (instead of two fingers, as on iOS devices), slice and dice fruit in Fruit Ninja, or play shooters with your hand held as if it were clutching a gun. Developers looking to incorporate the Leap into their own software can use its APIs.
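The pinch-to-zoom behaviour described above boils down to simple geometry on the tracked positions. A minimal sketch, assuming the sensor reports two 3D points (two fingertips or two hands) per frame — again, an illustration of the technique, not the Leap’s actual API:

```python
import math

def zoom_factor(prev_points, curr_points):
    """Zoom factor implied by two tracked points moving apart
    (zoom in, factor > 1) or together (zoom out, factor < 1)
    between two frames. Each argument is a pair of (x, y, z) tuples."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    before = dist(*prev_points)
    after = dist(*curr_points)
    return after / before if before else 1.0

# Two hands move from 20 cm apart to 40 cm apart: roughly a 2x zoom in.
prev = ((-0.1, 0.0, 0.5), (0.1, 0.0, 0.5))
curr = ((-0.2, 0.0, 0.5), (0.2, 0.0, 0.5))
print(zoom_factor(prev, curr))  # ~2.0
```

The app then multiplies its current zoom level by this factor every frame, which is what makes the gesture feel continuous.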

Advanced Voice Control (Still Lots Of Catching Up To Do)

If you’ve seen Iron Man, you’ll know about Stark’s personal assistant, JARVIS. JARVIS is an advanced artificially intelligent computer program that assists Stark in his daily life. It takes all sorts of reasonable commands to complement the touch-based interface (“download this”, “show me that”). JARVIS also acts as a friend, giving Stark technical advice whenever it can.

We have, to some extent, already achieved this. iPhone 4S owners are already familiar with Siri, the personal assistant. It is essentially a voice-based interface to your device’s built-in apps, letting you make calls, send texts and emails, and set reminders, alarms and calendar events. Siri even does small talk!

Voice control and interaction with AIs is where we are far behind what’s been portrayed in Hollywood sci-fi flicks. Voice-based assistants like Siri are still nowhere near as seamless as JARVIS.

Completing a single command requires you to long-press the Home button, utter the command, and then wait while your voice recording is sent to Apple’s servers, processed, and returned. Depending on your accent, the command may or may not work. Initiating the next command requires tapping the purple mic button.

It’s much better than the voice assistants of yesteryear, but it is nowhere near JARVIS or HAL 9000. To match its sci-fi equivalent, Siri needs to be hyper-aware of its user, grow alongside its master, and accept a far wider variety of commands. While Apple slowly and steadily improves Siri, we have our hopes pinned on Google’s Majel, which is expected to become part of Android later this year.

For Our Readers

I’ve mentioned three devices so far: the Kinect, the Leap, and Siri on the iPhone 4S. Each separately offers part of what makes up the user interfaces in Minority Report and Iron Man. I just hope some company comes along and merges them into the ultimate user interface, one in which the boundary between the digital world and the real world ceases to exist.

There are many devices and services offering novel forms of user interaction that I either don’t know of or have intentionally left out. That’s where you, the reader, come in. Our comments sections over on Facebook and Google+ are open and waiting for your thoughts on the topic.

What do you think? How far are we from having our own Iron Man-like computers?

For discussion on this topic: Check out the threads on Facebook or Google+.

You can follow us on Twitter, add us to your circle on Google+ or like our Facebook page to keep yourself updated on all the latest from Microsoft, Google, Apple and the web.