    What Is A Room?
    By Samuel Kenyon | February 5th 2013 01:05 AM | 34 comments
    We all share the concept of rooms. I suspect it's common and abstract enough to span cultures and millennia of history.


    The aspects of things that are most important for us are hidden because of their simplicity and familiarity. (One is unable to notice something because it is always before one's eyes.)
    --Wittgenstein

    Rooms are so common that at first it seems silly to even talk about rooms as an abstract concept. Yet, the simple obvious things are often important. Simple human things are often also quite difficult for computers and artificial intelligence.

    Is there such a thing as a room? It seems to be a category. Categories like this are probably the result of our minds' development and learning.

    What I mean by "room" is an enclosed (or mostly enclosed) space with doorways that allow a human to enter and leave it. Typically there is a sense of a floor, a ceiling, and walls.

    My definition segues into the room frame built from direction-nemes, as hypothesized by Minsky [1]. A direction-neme is a way to organize a wall-like area into nine sub-areas.
    [Figure: a wall-like area divided by direction-nemes into nine sub-areas, after Minsky; see Image Credit 2.]
    However, you actually need a direction-neme for each of the six surfaces of a 3-D room (it's basically a cube). But there might be a mental type just for rooms, which may or may not be composed of those surface concepts.
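    To make that more concrete, here is a minimal sketch (in Python) of one way such a room frame might be represented. The 3x3 reading of the nine sub-areas, and every name below, is my own illustration rather than anything specified by Minsky.

        from dataclasses import dataclass, field

        # Nine sub-areas of one wall-like surface, read as a 3x3 grid.
        DIRECTION_NEMES = [
            "upper-left",  "upper-center",  "upper-right",
            "middle-left", "middle-center", "middle-right",
            "lower-left",  "lower-center",  "lower-right",
        ]

        @dataclass
        class Surface:
            """One wall-like area, divided into nine direction-neme slots."""
            name: str
            slots: dict = field(default_factory=lambda: {d: None for d in DIRECTION_NEMES})

        @dataclass
        class RoomFrame:
            """A room as the six surfaces of a cube, each with its own direction-nemes."""
            surfaces: dict = field(default_factory=lambda: {
                s: Surface(s) for s in ("floor", "ceiling", "north", "south", "east", "west")
            })

        room = RoomFrame()
        room.surfaces["north"].slots["middle-center"] = "doorway"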



    I found a paper from the 1970s [2] about natural categories of "objects." It states that we make categories based on:

    • Similar features ("clusters of co-occurring attributes").

    • How our bodies interact with the objects.

    • Similarity in shape.

    • Averages of object shapes, which can themselves be identified as the object category.
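    The last point suggests prototype-based categorization. Here is a toy sketch of that idea; the features and numbers are invented for illustration and are not from Rosch et al. [2].

        # Toy prototype categorization: a category is summarized by the average
        # of its members' feature vectors; a new object goes to the nearest prototype.
        def prototype(members):
            n = len(members)
            return [sum(xs) / n for xs in zip(*members)]

        def categorize(obj, prototypes):
            def dist(a, b):
                return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
            return min(prototypes, key=lambda name: dist(obj, prototypes[name]))

        # Hypothetical features: [height, width, enclosedness]
        prototypes = {
            "room":    prototype([[3.0, 4.0, 0.9], [2.5, 5.0, 0.8]]),
            "hallway": prototype([[3.0, 1.5, 0.6], [2.5, 1.0, 0.5]]),
        }
        print(categorize([2.8, 4.5, 0.85], prototypes))  # -> "room"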


    Object categories should be extendable even to those objects which surround our human bodies. We don't think of the outside surfaces and appearance of a single room as we do with many other objects. We think of the inside surfaces, and how it feels to be in there.

    So, am I just rambling, or is there a point to this? The point, such as it is, is unsettling: some of our most basic categories do not really exist. They are merely constructed by our minds with some simple rules.



    If you are remotely interested in the computers you interact with all the time--phones, tablets, pads, desktops, cars, etc.--something to think about is whether we have shortchanged rooms as a user interface metaphor.

    References

    [1] M. Minsky. The Society of Mind. New York: Simon and Schuster, 1986, pp. 249-250.

    [2] E. Rosch, C. B. Mervis, W. D. Gray, D. M. Johnson, and P. Boyes-Braem, "Basic objects in natural categories," Cognitive Psychology, vol. 8, pp. 382-439, 1976.

    Image Credits
    1. Gatsby
    2. S. Kenyon, based on Minsky [1], which was illustrated by Juliana Lee.
    3. Matthew Paulson
    4. sigma

    Comments

    Hi, we do have the concept of a room used on the internet: the chatroom.
    It's an enclosed space where a group of people talk. Maybe it's not every room in the house, it's more like the living room, but it's clear that we're using the word.
    Also, you can think of Facebook's "wall" as the internal wall of a room, maybe the bathroom, where everyone writes, or maybe like a big chalkboard covering a whole wall in a classroom.
    So we're certainly not lacking room metaphors on the internet. Maybe you're thinking of some other use of this metaphor, and that's possible, but why would we need it?

    I think the most interesting part of what you wrote is the reference to the cognitive psychology paper; I think that's worth a read.

    SynapticNulship
    Thanks for pointing out the chatroom. By "shortchanged" I wasn't implying that rooms have never ever been used as a metaphor in computer interfaces. But there aren't that many examples I can think of.

    I would guess that humans often use rooms or room-like structures as metaphors or mental models (without even realizing it) for certain interfaces that weren't purposely designed with room-like features or behaviors. Of course, such an interface might be much more usable if it did accentuate its room-like features and behaviors, to resolve the mental-model conflicts.

    Gerhard Adam
    Hmmm ... not sure I have "room" in my brain for all these ideas.  Oh no ... another metaphor.
    Mundus vult decipi
    MikeCrow
    I think Watson did this by brute force; I think our brains do it more with a content-addressable memory sort of architecture.
    Much like playing catch, which we do without conscious thought, most of us can sort through the different contexts of the word "room" without much thought. That makes me think brute force isn't the way to solve such problems.
    Never is a long time.
    Gerhard Adam
    I agree it's not a matter of brute force.

    It seems to me that what's overlooked is that we recognize such concepts because of our own experience.  Our own experience is predicated on the fact that we are biological organisms that can't tolerate cold, wet, etc. for very long.  We like shelter.  We like to divide up living space.  We like to keep stuff, but we don't want to trip over it.  Every one of these things occurred throughout human evolution, and eventually we got to where we could construct our own living quarters, and the modern day concept of a room signifies all these different things to us.

    I expect that's why it is used as a metaphor in computing [i.e. chat rooms] because it denotes some degree of privacy or separation [i.e. not everyone is in a room].  Similarly, rooms signify how our living arrangements are divided up [i.e. kitchen, bathroom, bedroom, etc.].  This is one reason why most people are somewhat incredulous at large mansions because it seems like there are far more rooms than can reasonably be used.  We have a kind of sense that all that space is wasted.

    Even the phrase I used earlier, about having "room", is used to denote having someplace to keep our stuff. 

    None of these concepts will ever make sense to a computer, because a computer doesn't share the same values about our living arrangements, privacy, stuff, etc. 
    Mundus vult decipi
    MikeCrow
    I've been more inclined to think it's a hardware architecture issue, but more to your point, I think it's not so much the limits of our biology as a lack of experiences. I think the first AIs will have a more "biological" architecture (sensory input, structure, content-addressable memory, and response to stimulus) and will be "raised". And while we'll be able to copy the state of the networks and memory, we won't understand them.
    Never is a long time.
    Gerhard Adam
    But that's my point.  This isn't about inputs or having some arbitrary set of definitions.  These things mean something to us because they have a "value" which is rooted in our biology.  Unless and until you are uncomfortable from being cold, or from being wet, or from being hungry you can never understand what these concepts mean.  This is no different than the experiences we already share with others that lack that element of actually having done or endured something.

    How does one reconcile such "values" with an entity [or organism] that cannot share the experience?

    It would be like my asking you to imagine being an alpha wolf or a dolphin.  You can anthropomorphize it to whatever degree you like, but in the end, you cannot imagine nor understand what it means.  Our motivations are different from theirs and there isn't any mechanism that will allow that gap to be resolved. 

    How much more difficult is it to imagine an entity like a robot or a computer that simply doesn't have our requirements, nor faces our risks, nor shares our objectives to experience the world in the same way we would?  If the argument is that it doesn't need to, I have no quarrel with that and that's precisely my point.  However, that doesn't solve any problems, because you will be just as disconnected from that machine's experience as you are from the dolphin.  In short, you can never know what, if anything, has been accomplished in assigning such values to a machine.
    Mundus vult decipi
    MikeCrow
    I don't know if we need all of them, and some we can emulate. For instance, we could tie the power level to a "hunger" circuit, temperature to a feeling of cold, etc.
    But I also want to point out that an infant doesn't know all of them to start with, either.
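    As a toy sketch of that kind of mapping (Python; the readings, names, and thresholds here are all made up for illustration):

        # Toy "drive" circuits: map raw machine readings onto hunger/cold signals.
        def hunger(battery_level):  # battery_level in [0.0, 1.0]
            # The agent starts to feel "hungry" below 80% charge (arbitrary threshold).
            return max(0.0, 1.0 - battery_level / 0.8)

        def cold(core_temp_c, comfort_c=35.0):
            # Discomfort grows as core temperature drops below a comfort point.
            return max(0.0, (comfort_c - core_temp_c) / comfort_c)

        drives = {"hunger": hunger(0.25), "cold": cold(20.0)}
        urgent = max(drives, key=drives.get)  # the drive the agent attends to first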

    But I do see your point, and whatever we raise, would not be human no matter how much we make it look and act like us.
    Never is a long time.
    None of these concepts will ever make sense to a computer, because a computer doesn't share the same values about our living arrangements, privacy, stuff, etc.
    Mine does. 
    Gerhard Adam
    ...and it still allows you to bang on its keys?
    Mundus vult decipi

    Yes, I sometimes wonder about that :)
     
    But in all seriousness,  your claim was that our concepts (of a room) would not make sense to a computer because it doesn't share our values. And yet a computer could be given any values the programmer chooses. I know you will say "Ah, but they wouldn't be grounded in biological purpose!" But why on earth shouldn't a computer be given the same values as a biological system? Why would the way it gets its values make any difference to what the computer understands?

    I think you have too low an opinion of computers! By your argument, a Boltzmann brain would not be able to understand a room because it hasn't experienced one. That is the silliness of Hilary Putnam, although his point is more philosophical, concerning the meaning of "thinking about" rather than shades of possible experience. (A brain in a vat could only imagine a room; it could not think about a real one. Something like that. I don't think the ontological status of the imagined room impinges on whether the brain can experience it.)

    Gerhard Adam
    But why on earth shouldn't a computer be given the same values as a biological system? Why would the way it gets its values make any difference to what the computer understands?
    It makes a difference in terms of whether something has a meaning or not beyond simply being an abstraction.  It's like looking at the pictures in the article.  We can intuitively interpret what each of those rooms represents to us, because we have values associated with the imagery.  One suggests wealth and formality, one suggests rusticity, and the other dilapidation.  These concepts extend well beyond something as simplistic as a "room" and are relevant because of our experiences [and our biology: comfort, risk, etc.].

    As another example, it's like the difference between flying a flight simulator versus a real aircraft.  Even if you were flying completely on instruments in both situations, so that there were no particular visual/spatial cues, the knowledge that you aren't really in the air makes all the difference in the world.  Certainly you might argue that we could create a situation in which the individual can't distinguish between the two; however, that would simply mean that an individual has to choose to believe one scenario over the other.  So, if they were in a flight simulator but thought that to "play it safe" they would behave as if they were in a real airplane, then their behavior would still be predicated on the "values" or concerns of what can happen to them, physically/biologically, if they make a mistake.

    Individual experience sets the boundaries of our behavior.  This is precisely why we find that people's behavior is different in video games versus "real life".  It's one thing to play a game in which you are charging on an enemy fortification, it's quite another to have real bullets rushing by your head.

    So, if human behavior can be influenced simply by the knowledge of whether an experience is "real" or "virtual", then how does one rationalize that a machine without any such experience will be capable of "experiencing" anything?  I don't accept the notion of imparting values to a machine, since that sounds too much like saying ... "OK .. now's a good time to be scared".  If you can't tell, then what's the point?
    Mundus vult decipi
    It makes a difference in terms of whether something has a meaning or not beyond simply being an abstraction.  
    Why should a computer not put its ideas into a wider context?
     I don't accept the notion of imparting values to a machine, since that sounds too much like saying ... "OK .. now's a good time to be scared".
    So what do you think evolution has been doing since life began?
    Gerhard Adam
    So what do you think evolution has been doing since life began?
    Not to machines, else we wouldn't be asking the question.
    Mundus vult decipi
    In what way is a brain *not* a machine, then?
    Gerhard Adam
    As you know, you're simply getting into semantics here, where a machine is something that is, by definition, man-made, whereas an organism is "machine-like" by comparison since it also consists of functioning, operating components.

    The difference is also obvious, in the sense that while biological organisms evolved, and follow specific rules of chemistry [utilizing resources] to stay "alive", there is no such corollary for machines.  So, without getting into the "origin of life" issues, there is a reason why biological organisms evolved and robots didn't.  Regardless of how similar the two may be by analogy, they aren't the same.

    No matter how sophisticated a machine may be, it will never be mistaken for something that is alive, or vice versa.
    Mundus vult decipi
    you're simply getting into semantics here
    Very well, I shall rephrase the question.
     
    In what way is a brain which is like something man-made any different from a computer which is man-made?

    Remember, the only differences that count here are ones such that evolution is able to impart values to the brain but programmers are not able to impart values to the computer. 
    No matter how sophisticated a machine may be, it will never be mistaken for something that is alive, or vice versa
    That is pretty well what you are doing. Mistaking a system's (past) origins for its (present) nature. What definition of "alive" do you use that it is impossible for life to be man-made? 
     
    Gerhard Adam
    I didn't say that "life" couldn't be man-made, I said a machine wasn't alive and consequently it will never be capable of experiencing the things that living organisms experience.  Put in whatever values you like.  Emulate to your heart's content, but it isn't alive and "life" has no meaning to it.
    Mundus vult decipi
    it isn't alive and "life" has no meaning to it
    Why not?

    You like horses; I have no experience of them. I can still recognize a stable and have some feel for what a stable is, even though I personally have no use for one.
     
    Gerhard Adam
    Well, that's actually not true, since you can recognize a horse.  It's not an unknown creature to you, so regardless of your direct experiences, you have many pseudo-experiences, information that you've acquired over the years.  In fact, the mere mention of a horse will elicit images of what horses look like, what you consider their behavior to be, and various contexts in which you've seen them.  In short, you will literally have hundreds of pieces of information pertaining to horses, regardless of whether it is accurate or not.

    However, the issue isn't recognizing a horse, the issue is experiences.  So, if I asked you to describe your feelings when you ride, or when you jumped creeks, or what it felt like to run at high speed, you'd be absolutely incapable of expressing anything useful, beyond what you might simply make up.  More importantly, whatever you do learn in real experience will be influenced by your own concerns regarding your safety and the risk of injury.  If you avoid the back end of a horse, you'll do so to avoid getting kicked.  You intuitively understand what it means when a horse gets spooked.  Again, even without direct experience, you'd be able to stand next to a real horse and approximate what its attitude might be; calm, excited, angry, etc.


    In the end, the problem is that you would have no concept of fear, excitement, concerns, etc.  Without experience you couldn't simply decide that you were going to interact with the horse until you gathered enough data to figure out its behavior.  You would proceed based on your experience with other animals, and a wariness directly related to the horse's size [and your own sense of injury/risk].  These aren't ideas that can simply be conveyed as if they were abstract arithmetic problems.  They occur as a natural consequence of how we deal with things, because of our life's experiences.
    Mundus vult decipi
    Well, make up your mind! :) Either you are talking about my lacking direct experience, so that I can't understand, or you are saying that by extrapolating from other experience, I can understand.

    Which of these is supposed to support your claim that "None of these concepts will ever make sense to a computer, because a computer doesn't share the same values about our living arrangements, privacy, stuff, etc."? 
     
    If I can extrapolate from a natural fear of being kicked to an idea about being careful near the back end of a horse, then I don't see why a computer cannot extrapolate from whatever infrastructure the programmer has given it.

    If you are talking about having had first-hand experience to draw on, then you are going way beyond saying "None of these concepts will ever make sense to a computer, because a computer doesn't share the same values about our living arrangements, privacy, stuff, etc.": you are effectively saying "The only thing that counts as proper understanding is to have had personal experience of the object as seen from being human."  In that case I simply disagree. The concepts (qualities of "roominess") would make perfect sense to a computer even if it hadn't experienced them itself.
     
    Besides which, why would a computer-in-a-vat not understand a room?
     

    Gerhard Adam
    It's not an either-or situation.  You can never truly understand something without direct experience, but you can certainly anticipate what some of it may be like from comparable experiences.  Neither of which has anything to do with computers.

    All your experiences are governed by the context of your being human.  Whatever infrastructure a programmer provides to a computer is arbitrary and represents the values of the programmer not the computer [as an independent entity].

    In short, it makes no sense to even consider a computer being concerned about being hurt.  Such a "value" would be ridiculous to a machine, so if a programmer used something like that in a program, it would simply be an emulation;  a pretense of the actual experience.

    A computer-in-a-vat would understand a room in the same way that Google or a Webster's dictionary does.
    Mundus vult decipi
    John Hasenkam
    These aren't ideas that can simply be conveyed as if they were abstract arithmetic problems. 
    This is something I came across long ago and it demonstrates how perceptions can be conditioned by our experience.

    1.

    Pygmies raised in the jungle environment do not learn about distance and size constancy. One Pygmy, upon seeing a herd of buffalo in the distance, thought they were ants.

    2.

    Another Pygmy jumped out of a fast-moving vehicle, unaware of the speed factor. Dogs do that too; I wonder if they learn after a few attempts of jumping out the car window. I think this phenomenon has been addressed with respect to human learning (that Piaget stuff).

    3. 

    Visual illusions vary in different visual environments. 

    -----
    The above helps illustrate why Edelman chose a developmental approach to creating artificial intelligence. 

    Gerhard Adam
    That reminds me of another phenomenon I've observed.  My wife had a cockatoo that she would take to work with her at times.  So she would set the bird on the front seat [back support] so that the bird could see out the window.

    What was interesting is that the whole time she was driving, the bird would lean forward with his wings spread, as if he was experiencing the illusion of flying [presumably from the visual cues about speed].

    Mundus vult decipi
    John Hasenkam
    OT but interesting ... 
    A friend of mine, Jane, had to be away from home for a couple of weeks, so she asked a friend to look after her parrot and house. The friend decided to try a trick and train the parrot to say "You're an asshole" whenever someone walked into the room. The parrot never did it. Some months later Jane's friend dropped around to visit her, and for the first time, upon his entering the door, the parrot proclaimed, "You're an asshole!"
    John Hasenkam
    I found a paper from the 1970s [2] about natural categories of "objects." It states that we make categories based on:


    Incomplete. We also make categories based on social messages. An appellation doesn't have to be consistent; it is about communicating a perception of something, which can be more important than the something itself. What is a room? That will vary across time, culture, and place.
    In hacker culture the concept of a room has been stretched very far, quoting from the jargon file (http://catb.org/jargon/html/B/Big-Room.html):

    Big Room: n.

    (Also Big Blue Room) The extremely large room with the blue ceiling and intensely bright light (during the day) or black ceiling with lots of tiny night-lights (during the night) found outside all computer installations. “He can't come to the phone right now, he's somewhere out in the Big Room.”

    SynapticNulship
    Ha! Of course in the cheap version of reality, the big room is just a skybox.
    Please tell me you haven't been waiting for the chance to say that :) 
    This is a continuation of the thread with Gerhard, which is getting a bit cramped!
    GA: In short, it makes no sense to even consider a computer being concerned about being hurt. Such a "value" would be ridiculous to a machine, so if a programmer used something like that in a program, it would simply be an emulation; a pretense of the actual experience.
    Umm pardon me but evolution has used "something like that" in my brain. I am concerned about being hurt. It is not ridiculous to me.  

    Unless we go off into theology and metaphysics, there is no inherent value to whether a person lives or dies; it is simply what organisms do.  You rationalize a creature's fear by saying that it has an even deeper level of "being concerned about being hurt". However, there is no a priori value to this concern (or would you suggest even deeper levels ad infinitum?); it is just a survival mechanism. Of course it is there because evolution put it there, and evolution does depend on survival.  However, survival as part of the evolutionary process is not the same as an individual's resultant desire to survive.

    The question of whether fear makes sense is thus decoupled from the biology, the only thing that remains is that there is a mechanism for perceiving fear as sensible, which in turn must entail a perception that survival is desirable. Why should a computer not be given as many of these desires as the programmer wishes? The fact that *you* do not value the computer's life and are willing to unplug it is irrelevant. If the computer values itself and is concerned not to suffer damage, in what way is that not the same as you feeling the same thing about yourself? 
     
    Gerhard Adam
    The fact that *you* do not value the computer's life and are willing to unplug it is irrelevant. If the computer values itself and is concerned not to suffer damage, in what way is that not the same as you feeling the same thing about yourself?
    Not the same thing, because "unplugging" is not death.  You may consider it "unconsciousness" or whatever, but it isn't death.  Similarly, since a system can be fully backed up and even modified with additional program changes, it is meaningless to talk about such fears, since a damaged system could be rebuilt and modified and never be "aware" that such occurred.

    These are radically different than the values and motivations experienced by living organisms, where the underlying premise is that there is no recovery from such damage.  That is a fundamental difference in the states of awareness between biology and machine. 

    If you feel concern about being harmed when you can't actually be harmed, we consider that neurosis or hypochondria.  Such concerns aren't merely subjective because evolution has "programmed" us with those values.  They exist because failure to consider them really will bring about your demise.

    If humans could be assured survival of their bodies, and if their brains/thoughts, etc. could be fully backed up and re-integrated into a damaged body [as is often imagined in science fiction] do you really think that we would still cling to our evolutionary past attitudes?  Do you think that we wouldn't display different attitudes towards injury and death?  So, if we would change as a result of these technological changes, then why would you presume that a computer [which doesn't share such an evolutionary history] would need to emulate such concerns?
    Mundus vult decipi
    "Need to emulate"??? What are you talking about? The question is whether a computer could ever experience what we do, not whether it would need to.
    the underlying premise is that there is no recovery from such damage. That is a fundamental difference in the states of awareness between biology and machine
    Now you're just winging it. Why on earth would the potential for replication - never guaranteed - undermine a system's fear of damage? If I were offered the opportunity of a holiday in paradise I'd want my body to be moved there physically. I would not be reassured if I was told that we would be going by Gerhard's Apparator which would replicate me at the destination and destroy the old copy left behind. Would you quietly wait to be destroyed if you knew you had been uploaded successfully? I am 100% sure I wouldn't.

    It doesn't make any difference if you argue that I ought not to care, that my concern is an irrational legacy from the way I have evolved. I am programmed to care very much about the survival of *this* mind in *this* body. A doppelganger doesn't cut it.  
     
    Am I the same person as I was yesterday? A philosophical question to amuse those who talk about brains in a vat! Biology knows no such sophistry, it equips organisms with fear of damage because such organisms as have the fear tend to survive and pass it on. Clearly survival of the fittest would not work without mortality but there is no memory of that tautology built into the fear mechanism.

    Try again :)

     
    Bonny Bonobo alias Brat
    Samuel, you are asking us to think here about 'what is a room?' You are wondering if it is just a man-made category that is probably the result of our minds' development and learning? You then describe a room as an enclosed space with doorways, a floor, ceiling and walls, each surface organised by direction-nemes into 9 sub-areas, or really 54 for the whole cube, and then ask if there is a mental type of a room that is not composed of those surface concepts?

    Well, as someone then pointed out, there are chat rooms on the Internet; in fact we're sort of in one now. So I think that a 'room' is just probably a real or imagined space, contained within real or imaginary boundaries, that hypothetically can be entered or departed from through a real or imaginary entrance located somewhere in that boundary. I was also thinking about the wide variety of rooms with 'real' surfaces that do exist; would you include a womb or a coffin as a room? If so then these are the first and last rooms we enter, and caves are probably the oldest rooms in the world.
    My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http://www.science20.com/forums/medicine
    SynapticNulship
    So I think that a 'room' is just probably a real or imagined space, contained within real or imaginary boundaries that hypothetically can be entered or departed from through a real or imaginary entrance located somewhere in that boundary.
    That's certainly a sensible category. But do our minds typically have such an abstract room concept? Is it a parent class which has various child classes (physical rooms, chat rooms, etc.)? Or perhaps one abstract class and then a new custom instance of that class for every room-like experience?

    Or are chatrooms, etc. just flimsy metaphors based on the concept of room which is grounded in experience and how one's body relates to a room? Are all user interface metaphors alike (desktop, windows, dashboards, buttons, tools, etc.) or are there some that are really loose metaphors (windows) and others that are more similar to the original, grounded, concept (rooms)?
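    To pin down the class metaphor, here is a throwaway sketch (the structure and names are just one guess at the analogy, not a claim about how minds actually do it):

        from abc import ABC, abstractmethod

        class Room(ABC):
            """Abstract 'room': a bounded space that can be entered and left."""
            def __init__(self):
                self.occupants = set()

            def enter(self, who):
                self.occupants.add(who)

            def leave(self, who):
                self.occupants.discard(who)

            @abstractmethod
            def boundary(self):
                """Whatever encloses this room (walls, a channel name, ...)."""

        class PhysicalRoom(Room):
            def __init__(self, walls):
                super().__init__()
                self.walls = walls
            def boundary(self):
                return self.walls

        class ChatRoom(Room):
            def __init__(self, channel):
                super().__init__()
                self.channel = channel
            def boundary(self):
                return self.channel

        den = PhysicalRoom(walls=["north", "south", "east", "west"])
        lobby = ChatRoom(channel="#lobby")
        for r in (den, lobby):
            r.enter("sam")

    If the grounded physical room really is the parent, the loose metaphors (chat rooms, Facebook walls) inherit only some of its affordances, which may be why they feel flimsy.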

    and caves are probably the oldest rooms in the world.
    One would assume so. I haven't done any literature research to see if there are any other kinds of natural rooms or what the first artificial rooms might have been.