    Following Myself With Robots
By Samuel Kenyon | July 10th 2010
With teleoperated robots it is relatively easy to experience telepresence--just put a wireless camera on a radio-controlled truck and you can try it. Basically you feel like you are viewing the world from the point of view of the radio-controlled vehicle.

This clip from a James Bond movie is realistic in that he is totally focused on telepresence via his cell phone while remotely driving a car, with only a few brief local interruptions.


It's also interesting that the local and remote physical spaces intersected, yet he remained telepresent at the car's point of view.

Humans cannot process more than one complex task simultaneously--but they can quickly switch between tasks (although context switching can be very tiring, in my experience). Humans can also execute a learned script in the background while focusing on a task--for instance driving (the script) while texting (the focus). Unfortunately, the script cannot handle unexpected problems like a large ladder falling off of a van in front of you on the highway (which happened to me a month ago). You have to immediately drop the focused task of texting and focus on avoiding a collision.
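This serial-focus-plus-background-script idea can be sketched as a toy model. Everything here (the `Operator` class, its fields, the task names) is an illustrative assumption, not a claim about how attention is actually implemented: one slot for conscious focus, a queue of deferred tasks, and an interrupt that preempts whatever was in focus.

```python
from collections import deque

class Operator:
    """Toy model of serial conscious processing: one complex task
    in focus at a time, a queue of deferred tasks, and a learned
    background script that runs until an unexpected event fires."""

    def __init__(self):
        self.focus = None      # the single complex task in conscious focus
        self.queue = deque()   # tasks waiting for attention
        self.script = None     # learned routine running subconsciously

    def attend(self, task):
        # Only one complex task can hold focus; any current task
        # is pushed onto the queue (serial processing, fast switching).
        if self.focus is not None:
            self.queue.append(self.focus)
        self.focus = task

    def interrupt(self, emergency):
        # An unexpected event (e.g. a ladder falling off a van)
        # preempts the focused task immediately.
        if self.focus is not None:
            self.queue.appendleft(self.focus)
        self.focus = emergency

op = Operator()
op.script = "driving"          # learned script runs in the background
op.attend("texting")           # texting holds conscious focus
op.interrupt("avoid collision")  # emergency preempts texting
```

After the interrupt, `op.focus` is the emergency and "texting" sits at the front of the queue--the queue of deferred tasks is exactly the cost of context switching described above.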

In the military, historically, one or more people would be dedicated to operating a single robot. The robot operator would be in a control station, a Hummer, or have a suitcase-style control system set up near a Hummer with somebody guarding them. You can't operate the robot and effectively observe your own situation at the same time. If somebody shoots you, it might be too late to task-switch. Also, people under stress can't handle as much cognitive load. When under fire, just like when giving a public presentation, you are often dumber than normal.

But what if you want to operate a robot while being dismounted (not in a Hummer) and mobile (walking/running around)? Well, my robot interface (for Small Unmanned Ground Vehicles) enables that. The human constraints are still there, of course, so the user will never have complete awareness of their immediate surroundings while simultaneously operating the robot--but the user can switch between those situations almost instantly. However, this essay is not about the interface itself, but about an interesting usage in which you can see yourself from the point of view of the robot. So all you need to know about this robot interface is that it is a wearable computer system with a monocular head-mounted display.

    An Army warfighter using one of our wearable robot control systems

    One effective method I noticed while operating the robot at the Pentagon a few years ago is to follow myself. This allows me to be in telepresence and still walk relatively safely and quickly. Since I can see myself from the point of view of the robot, I will see any obvious dangers near my body. It was quite easy to get into this out-of-body mode of monitoring myself.

Unfortunately, this usage is not appropriate for many scenarios. Oftentimes you want the robot to be ahead of you, hopefully keeping you out of peril. In many cases neither you nor the robot will be in line-of-sight with each other.

As interaction design and autonomy improve for robots, they will more often than not autonomously follow their leaders, so a human will not have to drive them manually. However, keeping yourself in the view of cameras (or other sensors) could still be useful--you might be cognitively loaded with other tasks such as controlling arms attached to the robot, high-level planning of robots, viewing information, etc., while being mobile yourself.
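A minimal sketch of such a follow-the-leader behavior, assuming simple 2D positions and a proportional controller (this is an illustration, not the actual SUGV software): the robot closes on the leader but holds a standoff distance, which also keeps the leader inside the camera's view.

```python
import math

def follow_step(robot, leader, standoff=2.0, gain=0.5):
    """One control step: move the robot toward the leader, stopping
    short by `standoff` meters so the operator stays in camera view.
    `robot` and `leader` are (x, y) tuples; returns the new robot
    position. Toy proportional controller for illustration only."""
    dx, dy = leader[0] - robot[0], leader[1] - robot[1]
    dist = math.hypot(dx, dy)
    if dist <= standoff:
        return robot  # close enough; hold position
    # Move a fraction of the excess distance along the bearing to the leader.
    step = gain * (dist - standoff)
    return (robot[0] + step * dx / dist, robot[1] + step * dy / dist)

pos = (0.0, 0.0)
leader = (10.0, 0.0)
for _ in range(50):
    pos = follow_step(pos, leader)
print(round(math.hypot(leader[0] - pos[0], leader[1] - pos[1]), 2))  # → 2.0
```

Each step halves the excess distance beyond the standoff, so the robot settles at the 2-meter standoff rather than colliding with its leader--the same behavior a "follow me" mode needs whether the leader is driving the robot or ignoring it entirely.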

    This is just one of many strange new interaction territories brought about by mobile robots. Intelligent software and new interfaces will make some of the interactions easier/better, but they will be constrained by human factors.

    Comments

    Stellare
    Cool story. I love robots. :-)

However, I will protest against one of your statements - a very friendly protest, though. :-)

    "Humans cannot process more than one task simultaneously"

First I was reminded that I am able to follow multiple trains of thought - while simultaneously being fully aware that I am doing exactly that. I used to be able to handle more levels when I was younger. Now, I have a feeling my 'hard disk' is about to crash just about anytime. :-)

If we move on to physical kinds of tasks, I can still handle that. Horrified by the thought of being bored or missing out on something, I often relax by reading, watching TV, and crafting at the very same time. You could argue that I shift between the tasks very quickly, but that would only partly be true. Knitting and reading, or knitting and watching TV, happens definitely literally at the same time. No switch.

    I also eat, talk on the phone and drive my car at the very same time (manual gear, mind you and no hands-free). Not so legal perhaps, but physically possible.

I'm sure I am not unique. Jet pilots must be pretty good at these things - and much faster than me.

    Still I'd love to have one of those follow myself robots. :-)
    Bente Lilja Bye is the author of Lilja - A bouquet of stories about the Earth
    SynapticNulship
    I'll admit that my statement "Humans cannot process more than one task simultaneously" is pretty crude.  Perhaps it would be better for me to reframe that in terms of focus, since we definitely have multiple trains/threads (pick a metaphor :) going on.

    And of course physically you are constrained by modes, e.g. if your eyes are pointed at a TV then you're not looking at knitting unless you hold the knitting up in front of the TV screen, and then you wouldn't be able to see all of the TV pixels.
    Stellare
    Ah, so you are talking about eye movements. I am able to knit without looking at the knit-work so I do not need my eyes to be two places at the same time. Hence I can simultaneously conduct two things.

    However, we can use side-views and observe several things at the same time. Good racing car drivers etc are probably good at doing this. I know I look for police cars hiding in the vicinity while driving fast....:-)

    In your follow me robot context you are referring to eye processing I guess, rather than any kind of task processing.
    SynapticNulship
In your follow me robot context you are referring to eye processing I guess, rather than any kind of task processing.

    No.  I just added that in as another related issue in my previous comment, and I probably have just confused matters by doing so.  My essay was in fact referring to task processing in general--specifically problem solving and other cognitively complex tasks that require conscious attention.  But, even complex tasks--like driving or knitting--can be learned and then done subconsciously (at least until an unexpected problem occurs). 

    As Missy Cummings puts it, "In complex problem solving tasks, humans are serial processors in that they can only solve a single complex problem or task at a time [13], and while they can rapidly switch between tasks, any sequence of tasks requiring complex cognition will form a queue..." [1].

    Nothing you have described goes against this concept.  It's also important to consider how interruptions from the various threads and sensory modalities automatically refocus your conscious attention.

[1] Cummings, M.L., & Mitchell, P.J., Predicting Controller Capacity in Remote Supervision of Multiple Unmanned Vehicles, IEEE Systems, Man, and Cybernetics, Part A: Systems and Humans, (2008) 38(2), pp. 451-460.
    Stellare
    Maybe it is a confusion about terms and definition (me not knowing the professional language here), but if knitting and driving are complex tasks, then I'd say that Cummings statement is inaccurate.

    If she means learning complex tasks, she might be more correct. But we might find examples that prove this wrong too. And I think it is a smooth transition between being extremely focused and subconscious.

    In either case it is probably a good approach to things related to robotics.

    So, are you building advanced robots, then? :-)
    SynapticNulship
    So, are you building advanced robots, then? :-)

    Yes, although "advanced" is not actually my focus.  That should come about as a side effect of making usable robots that enable people to achieve goals.