With teleoperated robots it is relatively easy to experience telepresence--just put a wireless camera on a radio-controlled truck and you can try it. Basically, you feel like you are viewing the world from the point of view of the radio-controlled vehicle.

This clip from a James Bond movie is realistic in that he is totally focused on telepresence, remotely driving a car via his cell phone, with only a few brief local interruptions.


It's also interesting that the local and remote physical spaces intersect, but he is still telepresenced to the car's point of view.

Humans cannot process more than one task simultaneously--but they can quickly switch between tasks (although context switching can be very tiresome in my experience). Humans can also execute a learned script in the background while focusing on a task--for instance, driving (the script) while texting (the focus). Unfortunately, the script cannot handle unexpected problems like a large ladder falling off a van in front of you on the highway (which happened to me a month ago). You have to immediately drop the focused task of texting and focus on avoiding a collision.

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would be in a control station or a Hummer, or would have a suitcase-style control system set up near a Hummer with somebody standing guard. You can't operate the robot and effectively observe your own situation at the same time. If somebody shoots you, it might be too late to task switch. Also, people under stress can't handle as much cognitive load. When under fire, just like when giving a public presentation, you are often dumber than normal.

But what if you want to operate a robot while dismounted (not in a Hummer) and mobile (walking or running around)? Well, my robot interface (for a Small Unmanned Ground Vehicle) enables that. The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot--but the user can switch between those situations almost instantly. However, this essay is not about the interface itself, but about an interesting usage in which you can see yourself from the point of view of the robot. So all you need to know about this robot interface is that it is a wearable computer system with a monocular head-mounted display.

An Army warfighter using one of our wearable robot control systems

One effective method I discovered while operating the robot at the Pentagon a few years ago is to drive the robot so that it follows me. This allows me to stay in telepresence and still walk relatively safely and quickly. Since I can see myself from the point of view of the robot, I can spot any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases you and the robot will not even be within line of sight of each other.

As interaction design and autonomy improve for robots, they will more often than not autonomously follow their leaders, so a human will not have to manually drive them. However, keeping yourself in the view of the cameras (or other sensors) could still be useful--you might be cognitively loaded with other tasks, such as controlling arms attached to the robot, doing high-level planning for the robots, or viewing information, all while being mobile yourself.
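To make the leader-following idea concrete, here is a minimal sketch of the kind of control loop such a robot might run. It is illustrative only: the LeaderObservation fields, the gains, and the person tracker that would feed it are all assumptions for the sake of the example, not part of the actual SUGV system.

```python
# Illustrative sketch only -- not the actual SUGV interface code. It assumes a
# hypothetical person tracker that reports the leader's horizontal offset in
# the camera image and an estimated range (e.g., from stereo or lidar).

from dataclasses import dataclass

@dataclass
class LeaderObservation:
    bearing_px: float   # leader's horizontal offset from image center, in pixels
    range_m: float      # estimated distance to the leader, in meters

def follow_command(obs: LeaderObservation,
                   image_half_width_px: float = 320.0,
                   standoff_m: float = 3.0,
                   k_turn: float = 1.0,
                   k_speed: float = 0.5,
                   max_speed: float = 1.5):
    """Compute (turn_rate, speed) that keeps the leader centered in the
    camera view and near a fixed standoff distance.

    A simple proportional controller: turn toward the leader in proportion
    to how far off-center they appear, and drive forward in proportion to
    how far beyond the standoff distance they are.
    """
    # Normalize the pixel offset to [-1, 1]; positive means the leader
    # appears to the right of center.
    bearing = obs.bearing_px / image_half_width_px
    turn_rate = k_turn * bearing             # turn toward the leader

    # Close the gap to the standoff distance, clamped to the speed limit.
    speed = k_speed * (obs.range_m - standoff_m)
    speed = max(0.0, min(speed, max_speed))  # never back up in this sketch
    return turn_rate, speed

if __name__ == "__main__":
    # Leader is slightly right of center and 5 m away: expect a gentle
    # right turn and a moderate forward speed.
    print(follow_command(LeaderObservation(bearing_px=80.0, range_m=5.0)))
```

A real system would of course need obstacle avoidance and a way to reacquire the leader when the track is lost, but even this simple loop shows why the operator staying in the sensors' field of view matters: the whole behavior depends on the tracker keeping you in frame.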

This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.