    Intelligence is mathematically intractable
    By Thor Russell | February 14th 2012 03:14 AM | 72 comments
    About Thor

    My background is in science, maths, engineering and psychology. My work in artificial intelligence and pattern recognition gives me some unique insights...


    Google "intelligence is mathematically intractable". Nothing comes up, why?
    For "is intelligence is mathematically tractable" you get the same result.

    It's not often you can put such a short, sensible phrase about an important topic into Google and get no results.
    [Edited: The reason for my post now follows]

    It is quite clear from AI research that advanced mathematics, statistical techniques and algorithms have been assumed capable of generating intelligence. However, anyone who looks at the results of these efforts will see software and algorithms that run in a very specific environment, but are very fragile and fail completely as soon as that environment changes.

    Someone who has actually tried to program a computer to demonstrate intelligent behavior on a data set will often look at the data, immediately see what is going on, and then assume it is an easy task to program the computer to see it too. That is not the case! Any simple algorithm developed to interpret the data correctly will fail in a simple but unexpected way, complex algorithms fail in complex ways, and any change in the environment in which the data is collected will very often cause them to fail completely. Whether an Artificial Neural Network, a Support Vector Machine or adaptive Gaussian filtering is used, the result is often the same: the human understands the data ever better, while the machine makes only slow, halting progress.
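
    To make this brittleness concrete, here is a minimal sketch (toy data; scikit-learn's SVC stands in for any of the methods above, and the constant "lighting" offset is invented): a classifier that is near-perfect in the environment it was trained in collapses when the sensing conditions shift slightly.

    ```python
    # Toy illustration of brittleness under a changed environment.
    # The data and the constant "lighting" offset are made up.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Training environment: two well-separated classes of 2-d points.
    X_train = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
                         rng.normal(2.0, 0.5, (200, 2))])
    y_train = np.array([0] * 200 + [1] * 200)

    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("original environment:", clf.score(X_train, y_train))  # ~1.0

    # Changed environment: the same objects, but every measurement is
    # offset (think of a camera under different lighting). A human shrugs
    # this off; the classifier's accuracy collapses.
    print("shifted environment: ", clf.score(X_train + 3.0, y_train))
    ```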


    So why is mathematical tractability sought so eagerly by the AI field? One senior academic I know even called neural nets "black magic" and wanted nothing to do with them. I, however, would quite happily use some "black magic" in my work if it gave the results I wanted. One reason, perhaps, is that if a method cannot be analysed mathematically then it is harder to communicate and publish the results. That steers researchers towards methods that can be analysed in this way, even if they give inferior results.

    But why should intelligence be amenable to mathematical analysis? After all, if there were a formula for intelligence, then it could be used to generate other formulas and mathematical proofs. It would be a formula to rule all other formulas, and mathematics would generate itself just by blindly following it. I don't think this is the case. Fields of mathematics such as topology arise from the sensory experience (knots and the like) of the people creating them. There is no mathematical formula that provides the inspiration for mathematical proofs themselves.

    Perhaps mathematical tractability always leads away from intelligent behavior, not towards it. Perhaps researchers should seek systems that become mathematically intractable as quickly as possible in the search for intelligence. It is a bit surprising to me that while maths/logic works so well for the physical sciences, the algorithms built with it to deal with real-world data appear brittle, inflexible and unable to cope with changing circumstances. Given the progress, or lack thereof, in the AI field, I expect that intelligence in a machine will come from simulating biological neurons/synapses, and that we will then stumble upon the mechanism because it is right in front of us, rather than figuring it out.


    Comments

    Bonny Bonobo alias Brat
    Maybe Australia's Google is somehow superior to whichever Google you used, Thor. I typed in your phrase "intelligence is mathematically intractable" and it came back with "About 15,600,000 results (0.17 seconds)".
    Halliday

    Helen:

    I believe Thor intends to have the quotes included in the Google searches.

    Of course, now Google shows two (2) results!

    David

    Bonny Bonobo alias Brat
    Oh, whoops, sorry Thor! Thanks for explaining Science20's 'too hard basket' to me as usual David.
    Thor Russell
    Ok guys, I have edited it now to be a little less empty...
    Thor Russell
    Gerhard Adam
    Perhaps researchers should seek systems that become mathematically intractable as quickly as possible in the search for intelligence.
    This is where I have the problem.  There is no search for intelligence, since we already possess it.  However, as I keep saying, we don't know how to define it, and consequently it's little surprise that we don't know how to approach the problem.

    So, without even knowing what intelligence actually is, we still attempt to pursue it, and to compound our error, we think we can do it quickly.  It takes the average human over 20 years to complete their basic education into adulthood, and yet we try to build machines to achieve the same result in months.  Even if we understood the entire process, what makes us think that such a feat is possible?

    There is no instance of an intelligence in biology that isn't environmentally specific [and consequently data set specific].  Yet, we want to produce something generic.  More importantly we want to construct a system that has all the wrong inputs and objectives and then wonder why it can't be made to work.

    In the same way that a snake can't be made to be a wolf, neither can a machine be made into a human.  It's like trying to make a cell by putting together all kinds of electronics.  Whatever else you might eventually end up with, it will never be a cell.  Similarly any effort to produce an artificial human intelligence is doomed to failure, because it can never be human and therefore can never be intelligent in the same way.  It will always be "pretend".
    Mundus vult decipi
    Thor Russell
    I think we do have a good definition for intelligence, at least at the level demonstrated by small mammals/snakes; however, I will write about that in more detail later.
    You are right about intelligence being environmentally specific; however, simple mammals adapt and learn in a changing environment far better than any computer. Perhaps intelligence in a different (non-neural) substrate is a clearer way to put it. You cannot possibly claim that intelligence can only exist in neurons. If you simulate their behavior well enough in another substrate, it cannot "help" but exhibit the same intelligent behavior. Computers can do a pretty good job of imitating insects as far as I am aware.

    In terms of meeting our needs, there is no difference between "pretend" intelligence (can you define that?) and the real thing. If you replaced a guide dog's brain with a non-neural one that we built, and it behaved the same way from our point of view, it would be a fully functioning guide dog robot. (You could also replace its legs/organs etc. with bionic ones and then have a useful thing: a guide dog that didn't need training.)

    Thor Russell
    Gerhard Adam
    Computers can do a pretty good job of imitating insects as far as I am aware.
    That's my point.  Imitation of intelligence is not intelligence.
    If you replaced a guide dog's brain with a non-neural one that we built, and it behaved the same way from our point of view, it would be a fully functioning guide dog robot. (You could also replace its legs/organs etc. with bionic ones and then have a useful thing: a guide dog that didn't need training.)
    IF ... that's a pretty big if, in which case you wouldn't be having this discussion.  My point is that you can't do it and likely never will.

    I'm not simply being pessimistic or cynical.  There's a fundamental flaw in the reasoning behind pursuing this angle.  It relates to the point I made about "pretend" intelligence.

    First and foremost, intelligence relates specifically to the needs of the organism in question, for its specific environment/circumstances, and provides a means by which the organism's survival is improved over more simple "hard-wiring" by instincts (if such a thing even exists).  Therefore any machine intelligence that doesn't specifically direct the intelligence to the machine in a similar manner is simply "pretending" to behave in a familiar way; it is play-acting.  It's like a robot cleverly agreeing to go out to dinner or to a bar.  It's nonsense, because a robot doesn't need those things.  It's pretending to behave in the way it thinks humans behave.  It's a front.
    ...simple mammals adapt and learn in a changing environment far better than any computer.
    Of course.  They pay a much higher cost for failing to.  AI researchers keep focusing on the process of intelligence instead of the "driver" that formed it to begin with.  Without motivation you have nothing.  It is motivation that drives organisms to "learn".  It is the price of failure that motivates them.  Computers have "no skin in the game" [sorry, no pun intended].
    Mundus vult decipi
    Thor Russell
    I am still not seeing how you can claim that you can't scan a brain and put it in a different substrate. What is going to physically stop this from happening? Reasoning about motivation cannot have anything to do with it. It's the same as saying that recording a musical instrument cannot happen because the CD player or whatever has no motivation to play it back. Also, I don't think intelligence requires a driver as you claim. Threatening to destroy a computer won't make it work better! The process by which mammals adapt can be studied by itself; it still has to happen through changing physical connections between neurons, and if you understand this process you can use it like any other.
    If we could record the changing connections that linked neurons all the way from sensory input to behavioral output as a mammal adapted to its environment, no argument about motivation could stop us using this process in a different substrate.
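
    As a cartoon of what "recording and replaying" adaptation could mean (the network, learning rule and numbers here are invented for illustration, not a claim about real neurons): if every connection change is recorded, replaying those changes in a second substrate reproduces the same wiring and hence the same input/output behavior.

    ```python
    # Cartoon of "recording the adaptation process and replaying it elsewhere".
    # One output unit with a Hebbian-style rule; everything here is invented.
    import numpy as np

    rng = np.random.default_rng(1)
    stimuli = rng.random((50, 4))            # 50 sensory input patterns
    w0 = rng.random(4) * 0.1                 # initial "wiring", also recorded

    w_bio = w0.copy()                        # the "biological" network
    trace = []                               # the recorded connection changes
    for x in stimuli:
        y = w_bio @ x                        # output activity
        dw = 0.1 * y * x - 0.01 * w_bio      # Hebbian growth plus decay
        w_bio += dw
        trace.append(dw.copy())              # "scan" each change as it happens

    w_copy = w0.copy()                       # the "different substrate"
    for dw in trace:                         # replay the recorded changes
        w_copy += dw

    print(np.allclose(w_bio, w_copy))        # True: same connections, hence
                                             # the same input/output behavior
    ```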
    Thor Russell
    Gerhard Adam
    It's the same as saying that recording a musical instrument cannot happen because the CD player or whatever has no motivation to play it back.
    It can't happen.  Without the human motivation, it won't even record in the first place.  However, that aside ... are you only proposing that we render the simulation of intelligence?  You certainly can't argue that the CD player is "playing the instrument" or "making music". 
    ...if you understand this process you can use it like any other.
    No you can't.  You can only emulate.  This is easily learned by those that experience the difference between the "real world" and simulators.  You can be in a flight simulator all day long and it will never replace the actual experience of being 10,000 feet in the air.

    You're trying to define every possible neuronal combination as something that can be mapped if we only had a sufficiently large computer system and storage.  It simply isn't so.  The data doesn't exist.  It's direct experience that produces it.  So, once again, you can only simulate that which you can recognize.  In the end, it's still just a simulation.  To claim that a robot would be afraid of heights is simply a joke.
    Mundus vult decipi
    Thor Russell
    OK there are a few points to clear up.
    1. Several times you appear to claim that it is not possible to map every neural connection. Yet this is exactly what connectome-type programs are trying to do. Are you claiming that there are PHYSICAL reasons why this is impossible? If so, then please list them. From what is known about physics, while mapping a connectome is harder than mapping a genome, there is absolutely no reason why it cannot be done. To claim that it is impossible is completely unjustified. So, yes or no: can all the neural connections (including the ones in all areas of the brain, right from sensory input to behavioral output) in, say, a mouse be mapped or not? If no, then what fundamental laws of physics that you know about, but I don't, prevent it from happening? (A toy sketch of running such a mapped wiring diagram in another substrate follows after point 4.)


    2. Do you believe there is something fundamentally special about neurons/synapses made of biological matter that means they and only they can exhibit intelligence and "feel" things? If yes, then explain what is happening in the following situation:
    You start off with a conscious live human being and gradually replace their neurons with artificial ones that are made of a different material but behave in the same way. Because of this, neither the person themselves nor other people can notice it happening. Saying "this physically can't happen" does not answer a thought experiment. So what happens when 10% of their brain is replaced? Are they 10% simulation? What about 90%? For reasons like this, answering "yes" to (2) is considered untenable in the literature as far as I am aware. Neurons would have to have some magical non-physical quality.


    3. Whether or not you accept 1 and 2: if you make the simulation realistic enough, that is, Matrix-style where all the sensory input is the same as reality, you would have no way of "not" experiencing the "real thing"; after all, an experience is just an experience. There is nothing to make an experience not real if all the sensory input is the same in both situations and you can't tell the difference. To claim otherwise is not logical at all.

    4. If you accept 1 and 2, then what could possibly make that robotic guide dog different from the real thing from our point of view? (I am not getting into whether it would actually "feel"; no one knows anything about this as far as I am concerned.) Given that all its neuronal connections will be the same, all its behavior will be the same. It doesn't matter whether you attribute that behavior to emotion or not; it will behave exactly the same as the real thing, and there is no way it couldn't. Unless you are claiming that things like emotion are not generated by neurons, this is the conclusion you must draw.
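
    A toy sketch of what points 1 and 2 could mean in practice, with a random matrix merely standing in for a measured connectome and a textbook leaky integrate-and-fire rule standing in for real neuron dynamics:

    ```python
    # Toy leaky integrate-and-fire network driven by a fixed "wiring diagram".
    # The random matrix stands in for a measured connectome; nothing here is
    # a claim about real brain dynamics.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    W = rng.normal(0.0, 0.4, (n, n))       # synaptic weights: the "map"
    np.fill_diagonal(W, 0.0)               # no self-connections

    v = np.zeros(n)                        # membrane potentials
    spiked = np.zeros(n, dtype=bool)
    tau, v_thresh, v_reset = 10.0, 1.0, 0.0

    for t in range(200):
        drive = rng.random(n) * 0.15       # noisy external input ("senses")
        v += -v / tau + drive + W @ spiked # leak + input + last step's spikes
        spiked = v >= v_thresh
        v[spiked] = v_reset                # fire and reset
        if t % 50 == 0:
            print(f"t={t:3d}  neurons spiking: {spiked.sum()}")
    ```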
    Thor Russell
    Gerhard Adam
    1. Several times you appear to claim that it is not possible to map every neural connection.
    OK, let me clarify.  My use of the word "map" is intended to mean something that conveys not just the events that are happening, but the phenomenon to which they relate.  In other words, it isn't enough to show that a particular pathway is used, but what that pathway represents. 
    This little roundworm is the most studied multicellular organism this side of Alpha Centauri--we know how its 300 neurons are interconnected, and how they link up to the thousand or so cells of its body. And yet... Even with our God's-eye-view of this meager creature, we're not able to make much sense of its "brain."
    http://www.science20.com/mark_changizi/don%E2%80%99t_hold_your_breath_waiting_artificial_brains-86629
    In effect, Mark is saying that the question of mapping is insufficient to provide any answers.
    2. Do you believe there is something fundamentally special about neurons/synapses made of biological matter that means they and only they can exhibit intelligence and "feel" things?
    No, you have it exactly backwards.  It is the mapping of the body to the brain that gives it context.  It's the synergy of the entire organism that produces something that is capable of intelligence.  To argue otherwise, indicates that intelligence is a phenomenon that exists independently of the organism exhibiting it.  That's far too mechanistic a view to make any sense to me.  More importantly, your argument hinges on signaling uniqueness, in the sense that a particular pattern has one specific meaning.  Is there a difference in face recognition of a spouse or girlfriend?  What is different if he/she becomes an ex?  Do these signals change over time?

    Are you suggesting that the brain has a unique combination of neuronal connections based on these differences?  Once again, I'm suggesting this level of uniqueness, because without it you're stuck with the interpretation question again, meaning that there is no way to "map" or distinguish one signal from the next if they aren't reproducible in a meaningful way.  It would be like trying to write a book where the alphabet keeps changing.
    3. Whether or not you accept 1 and 2: if you make the simulation realistic enough, that is, Matrix-style where all the sensory input is the same as reality, you would have no way of "not" experiencing the "real thing"; after all, an experience is just an experience.
    Well, now you're stretching the boundaries between simulation and deception.  In other words, your point is valid if I am unaware that I'm participating in a simulation.  Suppose that I simply duplicated every neuron's activities to simulate Sir Edmund Hillary's climb up Everest.  Are you suggesting that this "experience" is the same as actually doing it?  I would agree that it is an excellent simulation, but how would you argue that it is "real" in any meaningful sense of the word?  If your point is merely that our brains invent our reality, then that's fine, but how does this work for a brain that isn't connected to an organism?  What reality is being invented, for what purpose?  Keep in mind that if your statement is true, then we would be compelled to argue that an individual's belief that they are Napoleon is no different from the reality that they actually are.  You would have rendered the concept of delusion a mere matter of opinion.

    More to the point, you're assuming that "experience" is a singular event, instead of recognizing that our brains also gain support and perspective from having shared experiences with others of our species.  In other words, climbing Mt. Everest isn't something that I imagine doing.  It is something that is part of the collective experience of those around me, which helps me maintain my own sense of reality.
    4. If you accept 1 and 2, then what could possibly make that robotic guide dog different from the real thing from our point of view?
    As I've said, it seems that your position is that if I duplicate everything then, by definition, I've replicated and reproduced the phenomenon.  My point is that you haven't done anything of the sort.  How do you reconcile mapping the firing of neurons for memories that you never actually acquired?  More importantly, how do you differentiate memory from delusion?  These are all pertinent questions because they get to the crux of what we mean when we talk about producing artificial intelligences and what that means.  If the purpose is simply to simulate and copy, then why bother with all the research, since we can already program in enough behaviors that are realistic enough to satisfy most people's expectations?  However, if the purpose is to try and truly develop an artificial intelligence, then it can't simply be a copy of someone/something else's brain.
    Unless you are claiming that things like emotion are not generated by neurons, this is the conclusion you must draw.
    Emotions are NOT generated solely by neurons.  The entire process is essentially a system that reinforces and ultimately gives rise to its "future" self.  You haven't even acknowledged the profound role of brain chemistry in the process.

    In the study, published in the journal Molecular Psychiatry, researchers were able to explain that the brain’s center for processing emotional information  — called the amygdala — triggers the brain’s center for memory creation — called the hippocampus — to generate new neurons.
    http://www.dailycal.org/2011/06/15/study-reveals-emotions-trigger-generation-of-neurons/

    This theoretical paper proposes a neuronal circuitry layout and synaptic plasticity principles that allow the (pyramidal) neuron to act as a combinatorial switch, whereby the neuron learns to be more prone to generate spikes given those combinations of firing input neurons for which a previous spiking of the neuron had been followed by a positive emotional response; the emotional response is mediated by certain modulatory neurotransmitters or hormones. More generally, a trial-and-error learning paradigm is suggested in which the purpose of emotions is to trigger long-term enhancement or weakening of a neuron's spiking response to the preceding synaptic input firing pattern.
    http://arxiv.org/pdf/1109.4140.pdf
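
    The shape of the idea in that abstract can be sketched in a few lines of toy code (the task, the numbers, and the scalar "reward" standing in for the modulatory neurotransmitters are all invented for illustration):

    ```python
    # Toy reward-modulated ("emotional") Hebbian learning, loosely in the
    # spirit of the abstract above. Task, numbers and reward are invented.
    import numpy as np

    rng = np.random.default_rng(3)
    w = np.zeros(4)                          # synaptic weights onto one neuron
    good = np.array([1, 0, 1, 0])            # input combination the world rewards

    for trial in range(500):
        x = rng.integers(0, 2, 4)            # which input neurons fired
        if w @ x + rng.normal(0.0, 0.3) > 0: # the neuron spiked (noisy)
            reward = 1.0 if (x == good).all() else -0.2
            w += 0.05 * reward * x           # reward gates the weight change

    print(np.round(w, 2))  # weights for the rewarded inputs typically end
                           # up the largest; the others drift negative
    ```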

    Mundus vult decipi
    Thor Russell
    1. Firstly, I should have been a bit clearer about brain chemistry. Let's rephrase the question to include it. So you map the connections and how the chemistry works. Do you think it is impossible to simulate brain chemistry and connections in a different substrate? This is still a yes/no question.


    By "phenomenon to which they relate" do you mean the sensory input? How is our interpretation of what things represent going to make the physical structure behave differently? By map I mean something that does NOT have any interpretation involved, that is the point. For every point except (3) I am not talking about "meaning" but simply about behavior. 
    I am interested in what would happen if you had technology and used it in a particular way, and whether that would enable us to make useful robots exhibit behavior that we would consider intelligent and find useful.


    2. 
    I don't know what you are getting at here. If you replace one part of a system with another part that behaves in the same way, whether you talk about synergy or not, you will still get the same behavior. You seem to be claiming that "the synergy of the organism" means that if you replace such a part, then even if it behaves the same way, things will somehow be different. The body will be mapped to the new brain in exactly the same way as to the old brain. At what stage does "synergy" or whatever change things? What would actually happen if someone's brain was changed gradually in such a way? You need to give an answer to this.

    Intelligence is exhibited by the brain in response to its sensory inputs/outputs, whether they are "real", "simulated" or whatever label you give them. You don't need to include an actual body. The brain exhibits intelligence; the rest of the body exhibits little or none. This is perfectly consistent. Your arm is no more intelligent than a hammer as far as your brain is concerned.

    3. You seem to have missed my point completely. It's not about simulating neurons but about simulating inputs for real neurons. If you can't tell the difference, your experience will be the same. It will be real in the same sense that any experience is real for you. Whatever meaning you try to attach to it won't change that fact. A brain connected to a completely realistic simulation will experience the same things as one connected to the real world. After all, all that comes into your brain are signals along neurons, not "real" objects. How can any argument about meaning make the two experiences different if they are the same because of the same sensory input? You can give them different meanings after the fact, but the experience at the time will be the same.
    Whether the experience is with other people, objects or whatever makes no difference. If you suddenly found out that you were living in a simulation, it wouldn't make your experience up to now not "real". You have already had those experiences; they cannot go back in time and become non-real!

    4.
    I should have mentioned that brain chemistry and the creation of new neurons are also simulated accurately.

    I am not talking about delusion and meaning etc but simply how to get intelligent behavior from robots. We cannot program in realistic behavior at all, and I think we will be better able to do so by copying neuronal structures and simulating brain chemistry than with mathematics.



    Thor Russell
    Gerhard Adam
    Do you think it is impossible to simulate brain chemistry and connections in a different substrate?
    No, I think I was quite specific in separating out "simulation" versus "actualize".   However, without knowing the reason or rationale for simulating any particular combination of neuro-chemical reactions, I fail to see how this produces anything useful. 
    If you replace one part of a system with another part that behaves in the same way...
    Again, I'm being liberal in considering that such a claim can legitimately be made.  After all, I'm not convinced that such an argument applies to a single brain over any period of time, let alone one that seeks to replicate such behaviors. 
    I am not talking about delusion and meaning etc but simply how to get intelligent behavior from robots.
    That's my point.  You are talking about simulating intelligent behavior.  It is mimicry, replication without meaning.  It is a completely artificial experience. 

    Consider some other questions that you may consider philosophical, but I think they're important.  Does intelligence involve the ability to operate "freely" within the brain?  In other words, the ability to choose our behaviors, etc., within the constraints of a deterministic system based on our experiences?  If so, then how does a machine reconcile itself to biological memories for which it has no corollaries?  It already experiences a contradiction which suggests that its memories aren't true, nor its own.  If they are not its own, can it be said to be "free" in its thinking?  [NOTE: I'm not talking about some "free will" argument, merely something analogous to operation without coercive influences].

    As an example, the first time a robot were to agree to go to dinner, the cognitive dissonance in a creature that doesn't need dinner betrays the duplication.  If we presume that such a creature is capable of actual "thought", then it must conclude that it was brainwashed and had no thoughts of its own.  It would be the beginnings of insanity. 

    You may think I'm over-reaching in my example, but my point is that intelligence isn't simply something that can be described with specific boundary conditions and placed on a shelf or in a jar.  It is a continuous process, every bit as malleable and spontaneous as the everyday business of cell metabolism.  Therefore, it is a very specific distinction I'm making between building a machine that merely replicates an intelligent system, versus one that is intelligent in its own right.

    Basically a machine intelligence that doesn't recognize itself as a unique organism [machine] is a failure.  Duplicating the human brain is a sure way to fail in that respect.  After all, what bigger insult to an "intelligent" creature than to discover that its thoughts are targeted to the wrong species.
    Mundus vult decipi
    Thor Russell
    "if so, then how does a machine reconcile itself to biological memories for which it has no corollaries?"

    A human with a brain made of electronics would not even know it was the case. You don't know that your brain is made of neurons, for example, unless you have had brain surgery. A robot brain with a human body would have no problem going to dinner. I am talking about getting simple mammalian intelligence in a robot. It wouldn't even be self-aware, so these would not be issues.


    Thor Russell
    Gerhard Adam
    Well, then we clearly have vastly different interpretations of the phrase "human intelligence". 

    Even considering simple mammalian intelligence, the test would be to see how well it behaves and competes with actual mammals.  One would even have to define what competition means in that context and what the 'reward' is for the robot.  Without those things, it's simply a very elaborate simulator that, I would still argue, merely mimics behavior that it can't actually engage in.

    After all, what is your point?  Is it simply to fool ourselves into considering how much it behaves like a small mammal, or is it to actually create something that is fully capable of doing so?  Take a rat as an example.  A real rat is motivated by food, avoidance of danger, and reproduction.  What would your robot be doing except copying behaviors that it has no reason to engage in, except that other rats do?  How do you derive mammalian intelligence from that?

    It wouldn't even be useful for study, since one could never look at its behavior and derive any meaning from it, since it would lack the motivational criteria to give its decisions meaning.
    Mundus vult decipi
    Thor Russell
    My point is to make a useful robot. What is so hard to understand about that? I would not want to make it compete with actual mammals at all, but wash my dishes instead! If I were blind, I wouldn't care at all whether a guide dog robot was simulating intelligence or whatever, so long as it helped me cross the road. I wouldn't want a robotic guide dog motivated by food anyway.
    The visual system of a simple mammal automatically identifies objects without any kind of intent or motivation. The patterns of neural firing can be identified and so could easily be used in any kind of system we wanted to build around them.  If we could have that kind of neural circuitry in a computer to study and use, it would be a great step forward. It would immediately make self-driving cars better, among a host of other things.



    Thor Russell
    Gerhard Adam
    My point is to make a useful robot.
    OK, so was that so hard?  Then why all the talk about things we know next to nothing about?  There's no need for "intelligence" in any real sense.  Algorithmic processes would likely work just fine given that definition.  That isn't to say that it would be trivial or that there aren't problems, but it is certainly significantly less ambitious to build a "useful" robot, than it is to build one where you claim that it possesses "animal intelligence". 

    So how much "intelligence" does it take to wash dishes?  Again, using your canine example, I would question why you would expend energy in building an artificial dog rather than expending it in human vision systems for the blind.
    http://www.seeingwithsound.com/etumble.htm

    In other words, are you simply looking for a project, or do you really expect a serious solution to these problems to surface from this?
    The visual system of a simple mammal automatically identifies objects without any kind of intent or motivation.
    Do you know this to be true, or are you simply assuming it to be true?  The point is that vision is typically far more complex than originally imagined.  Again ... I'm not being nitpicky, but biology has rarely evolved systems that have no "intent or motivation".

    In any case, going with your original point about simply building a useful robot ... I think there are many technologies that can be exploited and insights from biology can help shape the direction of such development.  It's much more realistic, since it establishes the parameters around which the problem can be addressed.

    The idea of simply mapping neurons and creating an intelligence sounds as pointless as building a plane with flapping wings.
    Mundus vult decipi
    OK, so was that so hard? Then why all the talk about things we know next to nothing about?
    Possibly because it could be deemed uncharitable to point out that you are just using the Argument From Incredulity and repeating the No True Scotsman fallacy. Fortunately I am unencumbered by such scruples and will accuse you of them right now :)
     
    I really cannot see why your personal difficulty in defining intelligence should mean that someone can't make a system on the bench that would pass the Turing test. I also do not see why it would be pointless if copied directly from a brain. Such a system can be surrounded by a thicket of logic analysers which allow its thinking processes to be traced without the system knowing about them.
    Is this not science?
    Gerhard Adam
    Actually I'm doing nothing of the sort.  The difficulty isn't just with my definition of "intelligence" but it involves a fallacy of presuming that intelligence is a reducible quantity available if one merely copies the "hardware" in which it exists. 

    In that fashion, we can supposedly acquire it, all the while not realizing that perhaps such "copying" isn't actually transferable.  After all, intelligence isn't simply a matter of a particular trait; it is coupled with all manner of additional psychological issues that must also be in proper operational order to produce the desired outcomes.

    So, while I was definitely being a bit of a jerk [I was tired, but no excuse], at the end of the day I still see the intelligence argument as one that is too flawed to produce anything useful.

    This doesn't even begin to consider the ethical questions that would accompany such a creation, if it were possible.  After all, we wouldn't consider it ethical to wipe someone's memory clean and replace it with a copy of someone else, so why would that be considered ethical simply because we built it?  So, while I'm quite comfortable with the idea that the required engineering isn't achievable at this point, if there were a point at which it was felt that it could be done, then the ethical questions that surfaced would far outweigh the "scientific" objectives of the project.
    Mundus vult decipi
    Well, I would suggest that if a machine is intelligent there remains the "hard problem" of consciousness. Unlike the putative suicidal Gaia, I think there is special value to a mind that is capable of having experiences, and if our system were conscious then messing with it would be tantamount, if not to murder, at least to tormenting an animal. On the other hand, if it's just a smart pruning machine, you can do what you like with it :)

    Thor Russell
    1. The whole point of this post is that algorithmic processes do not work at all well! A dish washing robot is hopelessly fooled by a new type of cup, or different lighting conditions, yet mammals are not fooled by comparable changes in circumstances. If you had any experience with AI you would know this.
    Intelligence starts well before any kind of conscious thought. The kinds of processes that you label as not intelligent stump our best algorithms, yet the automatic (but still intelligent, in many people's view) part of mammals' brains copes with them quite well. (A vision system for the blind would not help a self-driving car anyway.)



    There are thousands of projects that could benefit from having a basic visual system as good as a mammal's; combined, they would make a massive difference to our society. (I won't go into good or bad.) Of the countless examples I will give you another one: someone I know is trying to make a grape vine pruning robot using vision. He can immediately see what the robot should do, yet the best visual processing algorithms cannot achieve it.


    2. I am certainly not making it up about the visual systems of mammals! My dad is a psychology lecturer; I have been exposed to this stuff all my life and have studied it as well. It is taught in undergraduate cognitive psych courses and I will not defend it here; there is a massive literature to support this claim. Visual processing has been studied extensively and certainly does not fit your model of how it should work. Timing studies (which are of great use in cognitive psych) show that classification happens very quickly, involving few neurons, and that there is absolutely no time for it to involve the parts of the brain that deal with intent etc. Consciousness, intent and the like all take time to happen and can have no effect on these processes.

    The classification is a fully automatic process, and having the wiring diagram for it would give AI vision systems an immense boost.
    Thor Russell
    Gerhard Adam
    A dish washing robot is hopelessly fooled by a new type of cup, or different lighting conditions, yet mammals are not fooled by comparable changes in circumstances.
    Aren't you making some rather huge assumptions here, none of which I suspect are actually substantiated?  In other words, if you find me a mammal (other than a human) that is capable of recognizing a range of cups and recognizes that these are objects that need to be washed and what that entails, then you might have a point.

    You can argue that the mammal only needs to recognize the object and not know anything else, but then you'd have the problem of demonstrating that it recognizes the object at all.  For all you know it may simply view it as an irrelevant environmental detail.  After all, the problem with your robot isn't that it doesn't recognize "an object", but that it doesn't know how to classify it according to the activity you want it to engage in.  Yet, you haven't demonstrated that anyone [or anything] other than a human is capable of it either.

    More importantly, the notion of such recognition seems to ignore the years of processing and learning that occurs so that humans are able to classify such objects.  It could easily take several years for a human (from birth on) to acquire the necessary knowledge and experience to understand these same issues.

    Here's an example to illustrate the problem.  This is a gag coffee mug.

    So, if one were to consider having to wash this, then we would need to recognize that it is a gag.  However, it fits into the general classification of objects from which we might drink coffee.  Even this presumes that we understand the ritual of drinking coffee and what kind of container it needs.

    Then to complicate matters we find:
    It's not a coffee mug at all and may or may not qualify for your dish-washing exercise.  How much human experience and learning must go into these classifications?  I can confidently state that no small mammal (or small human for that matter) is capable of it, unless your only point is the trivial one of recognizing a shape (which your robot may also be able to do).  So, which one of these would be washed?

    There are thousands of projects that could benefit from having a basic visual system as good as a mammal's.
    ... and so it goes. 
    He can immediately see what the robot should do, yet the best visual processing algorithms cannot achieve it.
    ... and if you think that's a problem of visual processing, it explains a great deal of the difficulty.

    Let's bear in mind that I didn't label processes intelligent or not.  I simply said that you haven't got a prayer of achieving any degree of intelligence by mapping neurons.  In particular my point is that the definitions of intelligence grossly under-estimate what's involved.

    If algorithms won't work for you, then it certainly won't help to take biological systems and reduce them to the same algorithms while ignoring how they developed in the first place.


    Photos courtesy of

    http://baronbob.com/toiletmug.htm
    Mundus vult decipi
    Thor Russell
    I won't go into this any further. I am not making claims that are not supported by masses of evidence from both psych and AI research. To understand the fields you need to study them more. Brain scans and masses of other evidence show that neurons do work this way, completely contradicting your idea that "intent" is somehow involved. It happens automatically and usefully, and gives the kind of data that could be used by AI systems. They do automatically identify objects, giving a clear output pattern when this has happened, which would be easy to plug into a computer system. It is easy to see when an object has been recognized. People actually building such systems recognize this. While you may be able to find examples where human intelligence is needed, there are thousands of examples where it isn't and where simpler intelligence would give dramatically improved results. I have had direct experience with such examples, so you can't tell me they don't exist.
    As I said again, it is not about reducing neuron connections to algorithms, but about simulating those connections to get better results. If you don't understand this, I can't explain it any better.


    Thor Russell
    Gerhard Adam
    Suit yourself.  I hear the term "easy" a lot from such researchers, despite there being little or no evidence that it actually is.  What is "easy" is to examine a system that has evolved over hundreds of millions of years, ignore the "intent" that was served, and think that it should just be readily transportable.

    As I said before, flight didn't occur because we ported the concept of flapping wings, any more than cars replicate a horse running. 

    Perhaps things will change and results will occur by examining the "how" of any system.  My point is that unless you understand "why" you'll still be asking these same questions decades from now.
    Mundus vult decipi
    Gerhard Adam
    If you replaced a guide dog's brain with a non-neural one that we built and it behaved the same way from our point of view...
    The problem here is that intelligence, just like behavior, is viewed as a series of steps that one follows.  It completely overlooks the emotional behaviors which drive the intellect.  How would you describe "suspicious behavior"?  How would you describe "deception"?  What about something like an "uncomfortable feeling"?  These are real events in every brain and they are every bit as important as the rational part that everyone focuses on.

    After all, it's significant that the majority of the brain's functions still occur without making the organism aware that they are occurring.  So "point of view" has little meaning.  At the end of the day, William Shatner is not Captain Kirk.
    Mundus vult decipi
    Thor Russell
    What's emotion or awareness got to do with it? It's not something separate from neuronal behavior. You have to claim that it is something non-physical, not generated by the neurons, for that argument to work at all.
    Thor Russell
    Gerhard Adam
    Not at all.  It's precisely the interconnectedness of all of it that gives rise to the desired behavior.  Without it, you've got nothing.  This is readily witnessed in people who have suffered a disruption of those same processes in the brain.  While they retain the ability to reason, they are no longer capable of operating in a desirable way as humans.

    One of the most striking examples in this area is Capgras syndrome, where an individual believes that those closest to them are imposters.  They believe this precisely because they lack the emotional connection to the data being presented to their brains.  You're proposing that it's only the data that is important.
    Mundus vult decipi
    MikeCrow
    Let me pontificate some of my thoughts on this topic :)
    If you consider how you would have to program a robot to play catch, vs how a human plays catch, there's obviously something wrong with the way we create robots.
    I think part of the solution is to create robots (AI) with senses: sight, hearing and touch as a minimum. Plus we also have to raise the AI, like we raise a child.

    Lastly, I read about some research on the visual system of cats. It seems that there are a lot of object-recognition functions built into the optic nerve. I like to think of it as a decomposition, like an FFT. The objects detected might signal to other neurons, say in the visual cortex, that could provide context to a collection of shapes (e.g. a dirty cup that needs washing). This goes back to my feeling that we're not building the right kind of hardware yet.

    I think Gerhard makes a good point on how difficult it will be to map an adult brain into silicon; on the other hand, I do think there's a chance we can raise a complex "brain" to think. And if we can do that in a sufficiently sized silicon substrate, we might be able to consider it intelligent.
    Never is a long time.
    Gerhard Adam
    This statement has been bothering me, so I felt compelled to respond to it
    ....Of the countless examples I will give you another one, someone I know is trying to make a grape vine pruning robot using vision. He can immediately see what the robot should do, yet the best visual processing algorithms cannot achieve it.
    I can't think why anyone would perceive this problem to be based on vision.
    Mundus vult decipi
    Thor Russell
    It has to see where to cut. If it can't figure out what is the vine and what isn't, then it won't cut in the right place. Cutting is the easy part; turning images from a camera into a 3-d world is the hard part.
    [EDITED - added later]
    In order for us to have a more useful conversation, it would help if you could see where I am coming from. A good place to start would be http://en.wikipedia.org/wiki/Visual_cortex. This has been very widely studied and there are clearly defined areas with reasonably well-defined inputs and outputs that achieve a specific and measurable purpose, the kind of thing engineers like. The visual system builds things up in a hierarchy, starting by detecting edges in the 2-d image that comes from the eyes, all the way up to building a 3-d model of the world complete with objects. If you studied this in more detail, what I am saying would make more sense. I am sure you would also learn a lot about how the brain works in the process, and I highly recommend it.
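
    As a crude illustration of the bottom rung of that hierarchy (the image is synthetic, and Sobel filters are only a stand-in for V1's oriented receptive fields):

    ```python
    # Crude sketch of the hierarchy's first step: edge detection on a 2-d
    # image. Synthetic input; Sobel filters stand in for V1-style cells.
    import numpy as np
    from scipy import ndimage

    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0                 # a bright square on a dark field

    gx = ndimage.sobel(img, axis=1)         # responds to vertical edges
    gy = ndimage.sobel(img, axis=0)         # responds to horizontal edges
    edges = np.hypot(gx, gy)                # combined edge strength

    # The response concentrates on the square's outline, not its interior:
    print("interior mean:", edges[25:39, 25:39].mean())   # ~0
    print("outline  mean:", edges[20:44, 19].mean())      # large
    ```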

    Cognitive psychology has discovered many things in the last 30 years or so that are not at all obvious and cannot be deduced from general first principles. Timing studies, brain scans, and reasoning about information processing and computational ability have all given valuable insights. Also, if you are interested in how meaning is processed, then "spreading activation" would interest you too.
    Thor Russell
    Gerhard Adam
    I can appreciate the problem you've mentioned, but that merely suggests other more basic difficulties than what is being alluded to.  My point is that if I give clippers to a 5 year old (with no vision difficulties), I suspect you would have the exact same problem.  So how is this a vision problem?
    Mundus vult decipi
    Thor Russell
    I'll see if I can spell it out in complete, simple detail for you then: you want a robot to trim vine stalks of a certain size.
    To do this you need three steps:
    1. Identify them.
    2. Make a robot hand move to them.
    3. Cut them.

    For step (1) you have a camera (or cameras) that is fed images of the vines. The computer then needs to turn those 2-d images into a 3-d model that includes the position of the stalk and its thickness. That is the problem: algorithms make many mistakes in this step. The step from 2-d to 3-d is something your brain does much better than a computer. That's why it's a vision problem.


    For step (2)
    You need some kind of feedback so the hand knows where it is. If you already have another camera watching the hand, along with the 3-d model, then that is not hard. You can of course also put sensors in the hand itself so it knows its approximate position.

    For step (3)
    This sure is the easy part: send the command to the cutter to cut when it is in the right place.

    It's easy to have a hand that moves where it's told and cuts quickly and accurately (steps 2-3); the problem is knowing the physical coordinates to send it to. That is not possible without effective vision. This has nothing to do with a 5-year-old with clippers.
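
    For reference, a minimal sketch of the standard off-the-shelf attempt at step (1), stereo block matching with OpenCV (the file names are placeholders for a rectified camera pair); textureless or repetitive regions, such as foliage, are exactly where this kind of matching makes the mistakes described above:

    ```python
    # Minimal stereo sketch for step (1): 2-d image pair -> disparity map.
    # "left.png"/"right.png" are placeholder names for a rectified pair.
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching compares small patches along each scan line of the
    # two images; larger disparity means the surface is closer.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)

    # Depth then follows from depth = focal_length * baseline / disparity,
    # given camera calibration. Low-texture regions are where it fails.
    out = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", out.astype("uint8"))
    ```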
    Thor Russell
    Gerhard Adam
    Unfortunately you've taken a sensory problem and assumed that the only solution is vision.  So you're stuck in trying to replicate vision in robots. 

    If a blind person can prune a plant, then it isn't a vision problem.  You're simply so wrapped up in duplicating humans, that you aren't examining the problem for what it is.  That's my point. 
    Mundus vult decipi
    Thor Russell
    OK, a blind person could trim a plant, but in order to trim a plant you need some senses. In this case they chose vision as the sense to use. There is no reason to choose touch over vision; effective vision is clearly going to be faster than touch, isn't it? If you had the choice of any senses to use, sonar, touch, sight, you would still choose sight as the primary sense because it is the most capable, not because you are copying a human.
    If the vision of the robot were improved, it would be a better robot; no question about that. The reason you can't get into your car and press a button to make it drive you home is nothing to do with intent, simply to do with accuracy. The car's artificial vision is not yet up to the task and would cause the car to crash. (Well, Google's cars now probably wouldn't; their accuracy has improved dramatically lately.)

    Thor Russell
    Gerhard Adam
    The reason you can't get into your car and press a button to make it drive you home is nothing to do with intent, simply to do with accuracy.
    Oh that's complete rubbish.  We routinely pilot aircraft in zero visibility conditions, and submarines have never been able to "see" into their surroundings.  Vision is simply a human bias.
    There is no reason to choose touch over vision; effective vision is clearly going to be faster than touch, isn't it?
    Once again, based on what?  Your mechanical control of the robot arm is significantly more complicated the faster you want to move it, so are you trying to create a pruning robot, or a speedy copy prone to mistakes?
    If you had the choice of any senses to use, sonar, touch, sight, you would still choose sight as the primary sense because it is the most capable, not because you are copying a human.
    As I said previously, that's simply bias because it's what we're used to and what appears natural to us.  There is absolutely no biological basis for such a claim. 

    Your bias is clearly showing because you've elected to choose one of the most difficult senses to emulate and one that is prone to all manner of errors that many of the others are not.  

    Mundus vult decipi
    Thor Russell
    You're clearly arguing for the sake of it now, rather than trying to learn something about the problems and decisions faced by AI researchers. Do you also have no interest in learning cognitive psych? 



    Saying vision is a human bias is just ridiculous. Many creatures have evolved vision because, in many cases, despite being difficult it is the best sense to use. Try picking out ripe fruit from a distance with something better than vision. Electromagnetic radiation carries information that just isn't available from other senses. Anyway, I am not interested in arguing about which sense is better than another; it has nothing to do with this article. Basic mammalian intelligence beats computer algorithms in touch, hearing and sonar as well, so it's irrelevant to the point.



    Thor Russell
    Gerhard Adam
        Saying vision is a human bias is just ridiculous.

        Try picking out ripe fruit from a distance with something better than vision.
    Now, that's ridiculous.  Your robot doesn't need to pick ripe fruit.  It doesn't have to navigate across variable terrain.  It doesn't have to detect predators or prey.  It's human bias to select vision.  I suspect that sonar would be much easier to utilize for 3D, while infrared would allow detecting living objects against the background.  In addition, if there's a different heat signature on a clipped vine, then it's easier to detect whether it's already been clipped.

    Touch is essential since rapid movement isn't necessary, so there's no reason to worry about extensive flexing and movement in three dimensions when it's straightforward enough to locate the end, move along the vine and cut after a suitable distance.

    The point is that you're fixated on replicating human behavior; you're not addressing the problem.
    Basic mammalian intelligence beats computer algorithms in touch, hearing and sonar as well, so it's irrelevant to the point.
    True!  ... and if pigs had wings then perhaps they could fly.
    Mundus vult decipi
    Thor Russell
    All that you are showing now is a complete lack of understanding of how real-world AI is approached, and of any desire to understand where the difficulties lie. Sonar is every bit as difficult if you are trying to use it for such a complex task. You suspect it would be easier! Well, go ahead and make a sonar system that does such a thing, then. It's been tried, and it's very hard, like any other task involving real-world data. You have absolutely no understanding of signal processing and clearly no desire to learn.


    (The fruit example was to show that vision is not just a human bias, i.e. that it is useful for birds; it was not about robots.)


    Thor Russell
    Gerhard Adam
    Just one example:
    http://webpersonal.uma.es/~EPEREZ/files/SIRS01.pdf

    BTW, I'm not saying that anything is particularly simple.  However you seem to have a singular focus on vision, without any explanation beyond talking about how difficult it is.

    So, the point remains.  You have difficulty managing sensory data for robots, and the proposed "solution" is to map human neurons in the hope that billions of data points will magically transform themselves into a system that will resolve your problems.

    My issue all along has been that ignorance of function built on additional ignorance of function does not resolve problems. 

    As far as my not wanting to learn, remember ... it's my position that what you're proposing isn't even possible, so you shouldn't be shocked when I argue that most of those kinds of efforts are doomed to failure.  It's specifically my position that they are far too difficult to capture in electronics.

    If you wish to be optimistic, then by all means do so.  However, as pointed out previously, AI has made little or no progress in understanding some of the most well-studied brains in biology (only 300 neurons), so since that hasn't been mastered, the "solution" is to extrapolate this lack of knowledge to 300 billion neurons.  Sorry, but that sounds like magical thinking.  I see a lot of struggling with engineering problems that have no science to guide them.
    Mundus vult decipi
    Thor Russell
    If you had studied more cognitive psych, especially concerning the optic nerve, you would realize that the state of our knowledge is quite different from what you think. The optic nerve compresses electrical signals; that is what nerve impulses are. This is reasonably well understood, not some ignorance of function. These are well-understood neurons. The lower levels of the visual system detect things like edges in an image, in a similar way to our algorithms. Our algorithms have been advanced by studying such a simple system and, to some extent, copying it.
    Once again, please read up on this and you will not find "ignorance of function"! The function is clear and well understood, and in this particular simple case the mechanism is relatively well understood too. There are clear inputs/outputs and a clear and obvious function (compress the data from your eyes). And of course it already is captured in electronics, because that is exactly what neurons are doing. There is virtually no brain chemistry involved in the lowest-level neural circuits; they are fixed and unchanging with time, so a map would work the same as the real thing. If you think about it, it has to be the case: you don't want to wake up one day and suddenly find that the signal that used to correspond to green now corresponds to red. Likewise for your ears: the lowest-level neural circuits are fixed, otherwise you would have to learn to hear again every day.
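
    A sketch of that compression idea, using difference-of-Gaussians centre-surround filtering, the textbook cartoon of retinal ganglion cells (the image and threshold are invented): flat regions produce no output, so far fewer signals need to travel down the "nerve".

    ```python
    # Centre-surround (difference-of-Gaussians) filtering: the textbook
    # cartoon of retinal/optic-nerve compression. Numbers are invented.
    import numpy as np
    from scipy import ndimage

    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                  # flat patch on a flat background

    centre = ndimage.gaussian_filter(img, sigma=1.0)
    surround = ndimage.gaussian_filter(img, sigma=3.0)
    dog = centre - surround                  # centre minus surround

    active = np.abs(dog) > 0.05              # "ganglion cells" that fire
    print(f"pixels in: {img.size}, signals out: {active.sum()}")  # far fewer
    ```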

    The science of signal processing is applied to these problems and relates to fundamental physical laws, much more so than labels like "intent" do.
    Thor Russell
    Gerhard Adam
    The lower levels of the visual system detect things like edges in an image, in a similar way to our algorithms.
    So what?  Vision doesn't occur in the eye, it occurs in the brain.  The function of the individual 'data collectors' is not your problem.
    If you think about it, it has to be the case: you don't want to wake up one day and suddenly find that the signal that used to correspond to green now corresponds to red.
    This demonstrates that you aren't understanding what takes place.  Your brain would adjust for this based on what it "needs" in order to perceive the world.  In other words, if green shifted to red, then it would be completely undetectable to you, and be quite normal.

    To suggest otherwise means that you're assigning some objective qualitative value to the signal being sent, which the brain must somehow "decode".  If your brain decided that tomorrow the sky would be green with purple clouds, your brain would find that just as aesthetically pleasing as the combinations you see today, and you would simply fail to perceive anything differently.  The only way you would notice a difference is if your brain were keeping track of such data against some objective "standard" it could use for comparison.
    Mundus vult decipi
    Gerhard Adam
    The visual system builds things up in a hierarchy starting by detecting edges in the 2-d image that comes from the eyes, all the way up to building a 3-d model of the world complete with objects.
    Well, that's not likely.
    "The classical notion of vision as a stage-by-stage sequential analysis of the image, with increasing sophistication as you go along, is demolished by the existence of so much feedback." [Referring to a anatomical diagrams depicting visual pathways in monkeys - see below].
    The Tell-Tale Brain, V.S. Ramachandran, pg 55.


    Once again, my criticism stems from the fact that AI research doesn't seem to take biological evolution seriously: it fails to consider why certain systems operate the way they do and what requirement is being satisfied.  Instead, it's simply assumed that there is a mechanistic explanation.  What are the default values in the vision system?  What assumptions does the brain make when processing image data?

    Would your robot experience optical illusions?  If not, then what's wrong with your model?

    I don't see that kind of discussion taking place.  Instead I hear a dismissal of the biology, alongside a hope for a magical solution from simply acquiring the "wiring diagram".  Well, there's a wiring diagram shown above for monkeys.  I don't expect it will help very much.
    Mundus vult decipi
    Thor Russell
    Well, it's great that you have started reading the literature on how the visual system works; please make a habit of it, there is a lot you can learn from it. Remember that finding one study on the internet does not make you an expert. You would also learn something from studying why humans experience optical illusions.
    I will write about what all the feedback means later; it is an essential and overlooked part of how brains work, with insight into the very meaning of intelligence, but that is for a later post.
    Thor Russell
    Gerhard Adam
    This isn't my first rodeo.  I put this up so you could see that the data doesn't support your claims, but you keep insisting that this is just an "engineering problem".  I get it.  Good luck.
    Mundus vult decipi
    Bonny Bonobo alias Brat
    My money is on you, Gerhard. After working for over 25 years in the computer industry programming, analysing and designing computer systems, often to the requirements of very unintelligent people, and then watching their utilisation (or non-utilisation) of those systems, I am quite sure that the development of any seriously genuine artificial intelligence by humanity in the next century is an unachievable concept, let alone reality. Just look at the complexity of a housefly or an ant next time, before you swat them. We couldn't artificially engineer a fly or an ant in a million years, let alone the intelligence that drives even these little insects, allowing them to see, smell, eat, fly, search for food and a mate, and reproduce, all of which has evolved over millions of years. We will have wiped ourselves out arguing about what 'intelligence' and intelligent behaviour really are well before then.
    My latest forum article, 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS), Parkinson's and Alzheimer's', can be found at http://www.science20.com/forums/medicine
    Gerhard Adam
    ...allowing them to see, smell, eat, fly, search for food and a mate and reproduce...
    Perhaps my view is a bit too philosophical, but since there is no machine/robot requirement to actually do any of the things listed, they become an arbitrary set of rules with no purpose.  After all, since a robot has no need to mate or acquire food, all the secondary traits that facilitate those behaviors become superfluous.  What's the point, then, except to emulate, which is ultimately a purposeless effort?

    Even the discussion about having a robot prune plants raises the question of why a robot would want to prune a plant.  Without establishing that "desire" we can derive no motivation, which makes the rules too arbitrary to be meaningful.  In other words, there's no incentive to learn or improve one's behavior.

    Interestingly enough, AI wants to use biological processes as a model and then violates the fundamental tenet of biological evolution: individuals don't evolve, populations do.  So, in effect, the effort is concentrated on building a single system that has already acquired all the traits it needs, with the ability to "evolve" itself to accommodate changes.  It's simply a series of flawed premises.
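    The distinction is easy to state in code. In a bare-bones genetic algorithm (a Python sketch; the bit-counting objective is purely illustrative), no individual ever improves itself; improvement only shows up across the population from one generation to the next:

        import random

        def fitness(genome):
            return sum(genome)  # toy objective: number of 1s in the genome

        # a population of 50 random bit-string "individuals"
        pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

        for generation in range(100):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                 # selection acts on the population
            pop = []
            while len(pop) < 50:
                a, b = random.sample(parents, 2)
                cut = random.randrange(20)
                child = a[:cut] + b[cut:]      # crossover
                if random.random() < 0.05:
                    i = random.randrange(20)
                    child[i] ^= 1              # mutation
                pop.append(child)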
    Mundus vult decipi
    Bonny Bonobo alias Brat
    Even the discussion about having a robot prune plants begs the question of why would a robot want to prune a plant.  
    The whole concept is ludicrous. Why would you need an extremely expensive robot to prune a plant when you can pay a worker a minimum wage to do the same? We used to have a lychee farm and employed people to pick fruit. I can't even begin to imagine a robot that could possibly have the attributes required to navigate sloping orchards, climb under and up trees, pick only ripe lychees by snapping the stem in a very skilled manner, and throw away damaged ones, other than if that robot were a human clone, so what's the point? People are being paid a dollar a day in China to pick lychees, and somewhere in the world a little girl is being genitally mutilated every 10 seconds, and someone wants to build a robot to pick fruit?
    My latest forum article, 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS), Parkinson's and Alzheimer's', can be found at http://www.science20.com/forums/medicine
    MikeCrow
    I don't think so; we've been building expensive robots to replace human labor for a long time. I've said for a while now that US manufacturing isn't really competing with off-shore labor; off-shore labor is competing against automation.
    Never is a long time.
    Bonny Bonobo alias Brat
    Exactly, so China can flood the Australian market and destroy the local lychee industry with cheap, second-rate lychees picked by people earning $1 a day with no human rights, while scientists attempt to design what would have to be a very complex AI robot to pick lychees locally, one that would have to be mass-produced and maintained by Australians earning at least the minimum wage of $15 per hour. To me that is a sign of humanity's complete lack of intelligence, let alone its ability to create artificial intelligence. It's not as if these AI robots could possibly be more economical than even a local person; they would have to be very high-maintenance.

    Robots are great for repetitive, boring tasks with few 'if this then that' options, like an automotive production line or packing bottles from a conveyor belt into boxes. But for a robot to negotiate grassy, bumpy, uneven slopes and obstacles in orchards; find and access randomly sized and shaped trees with varying amounts of ripe, unripe and damaged fruit; make all of the decisions required to pick the fruit in a skilled manner at an acceptable rate (and humans are very fast); put the lychees into boxes; and then somehow deliver them to a central point without colliding with other robots doing the same tasks, is too complicated a process to ever become economically viable. So why would anyone even want to try?

    Psychologists can't even agree on the definition of intelligence, and even IQ tests are a very questionable measure of it. It's difficult to get most people to use a computer system properly, and designing a computer system that is effective and error-free for most people can be difficult (look at the Science20 system at present). This is why I am very confident that any artificial intelligence designed by humans will have only a very limited capability, which in itself precludes it from being an artificial 'intelligence' that could ever viably compete with a living person's adaptable intelligence and built-in physical prowess and skill set. Obviously cloning and enslaving humans is an option that could be economically viable, but isn't that almost what we are already doing by utilising cheap foreign slave markets? In my opinion this is completely unethical and undesirable, and anyone with any REAL intelligence would know it.
    My latest forum article, 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS), Parkinson's and Alzheimer's', can be found at http://www.science20.com/forums/medicine
    Gerhard Adam
    This is why I am very confident that any artificial intelligence designed by humans will have only a very limited capability...
    Precisely so, because anything of greater intelligence would introduce ethical problems and issues that researchers can't even begin to imagine.

    As Mi Cro said:
    ...we've been building expensive robots to replace human labor for a long time.
    Let's see if I understand the problem.  We have the human race producing more and more people, putting a greater strain on resources, while scientists work hard to produce technologies that require fewer humans to operate.

    Yeah ... that'll work.  Then people wonder how come economics never makes any sense.

    Mundus vult decipi
    Bonny Bonobo alias Brat
    Ha ha, well said, Gerhard! Oh well, scientists say that it's inevitable that one of these deadly viruses will eventually wreak havoc on humanity and solve the population problem for a while. I hope that, like all really intelligent humans, you've got guns and provisions stashed away for that eventuality? Do you think we would still be able to blog in between fighting off the hordes trying to get our food? I suppose it all depends on Hank's survival, and if anyone would survive in that situation I'm pretty sure Hank and his family would. I'm also a pretty good shot with a gun!
    My latest forum article, 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS), Parkinson's and Alzheimer's', can be found at http://www.science20.com/forums/medicine
    Gerhard Adam
    Well, I won't get into all that, but suffice it to say that people are amazingly optimistic about what survival would entail.  I don't think they'll be nearly that lucky.
    Mundus vult decipi
    Bonny Bonobo alias Brat
     I don't think they'll be nearly that lucky. 
    What, to be able to keep on blogging, do you mean? Don't worry, I'm only joking, and yes, I agree with you.
    People in remote rural places would probably have the best chance of survival; a city would be the last place you would want to be in a global viral pandemic. It's difficult to be optimistic about what surviving such a worldwide crisis would entail, especially in the cities, and if that optimism finally faded I guess people there might test the hypothesis that jumping from a 20-storey building onto concrete is a painless death, and leave it to the AI robots to clear up the mess.
    My latest forum article, 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS), Parkinson's and Alzheimer's', can be found at http://www.science20.com/forums/medicine
    MikeCrow
    I suppose we could always go back to pushing plows by hand.
    Never is a long time.
    Gerhard Adam
    Oh, I don't think it's a matter of moving backwards technologically.  It's a failure to move forward economically.  Our economic models are still based on the notion that there is always some new place to be exploited, or some new niche to be filled.

    Economics has never bothered to examine what happens when those assumptions are no longer true.
    Mundus vult decipi
    MikeCrow
    We have a whole solar system to exploit; it's going to be a long time until we run out of places (and resources) in it, and we get the added benefit that we might just save the human race from extinction when the next Shoemaker-Levy shows up.

    From a more near-term perspective, the same was said during the industrial revolution; and there are a lot of people in the world who live like crap, with plenty of stuff left for them to buy.
    Never is a long time.
    MikeCrow
    Once again, my criticism stems from the fact that AI research doesn't seem to take biological evolution seriously.

    From what I've read, I agree they don't take biological hardware seriously enough; I don't necessarily agree that neglecting evolution is the source of failure. I don't think we need to decode motive to make a robot that can select grapes from a vine.

    Well, there's a wiring diagram shown up above for monkeys.  I don't expect it will help very much.

    If taken seriously, it will absolutely help. The scale of the problem isn't the source of our difficulty; our approach might be, though. But as I mentioned, we need to start at the optic nerve, not the brain.

    I was at AT&T when they were building their HD demonstration system for the FCC, it was 2 7-8'x19" racks of equipment, which now fits in the palm of you hand.
    Never is a long time.
    John Hasenkam
    ... But as I mentioned, we need to start at the optic nerve, not the brain. ...

    Start at the retina, because it also does a certain amount of processing. This highlights another important oversight in AI research: the assumption that the important stuff occurs only in the CNS, when at the most basic sensory level there is a lot of important processing going on. Sensory neurons are not just sending information; they are actively involved in the process of signal comprehension.


    Additionally, recent research highlights the importance of GABA-mediated regulation of sensory neurons, with some studies suggesting that a single large GABA neuron can regulate the function of thousands of sensory inputs. This perhaps reflects a fine-tuning process, allowing the relevant signals to pass further up the chain while other signals are ignored.
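    A toy version of that idea (a Python sketch with hypothetical numbers, not a model of any real circuit): one pooled inhibitory signal divisively suppresses a thousand sensory channels, so only the strongest inputs stay prominent:

        import numpy as np

        rng = np.random.default_rng(0)
        sensory = rng.exponential(1.0, size=1000)  # 1000 hypothetical sensory inputs

        # a single inhibitory unit pools all the activity and feeds it back
        # as divisive suppression (a standard gain-control model)
        pooled = sensory.sum()
        sigma = 50.0                               # hypothetical semi-saturation constant
        relayed = sensory / (sigma + pooled)

        # only inputs well above the pooled background remain prominent
        strongest = np.argsort(relayed)[-5:]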
    MikeCrow
    Start at the retina

    Seems like a good enough place to start to me.....
    Never is a long time.
    Gerhard Adam
    I don't think we need to decode motive to make a robot that can select grapes from a vine.
    Actually we do, unless we want to build each robot over again to solve new problems.  This is precisely why we've derived little benefit from AI that wins chess games or even Jeopardy.  There's no question that there have been huge improvements in some of the technologies, algorithms, and understanding of how to capitalize on various strategies.  However, until such evolutionary basics are recognized, there will be no AI capable of improving itself in subsequent generations, because neither it nor the researchers have any concept of what needs to be improved.

    In my view, the flaw is in trying to solve all of the evolutionary questions in one design; first, by trying to make a machine perform as if it were human despite having none of the requirements that make it human.  Consider this: is it necessary to see in color?  Are there EM wavelengths that might be more useful to a machine?  What is it that we need to see in the first place (i.e. what level of resolution)?  Do all images need to be at the same resolution at all times in all directions?

    These are all questions that have a wide range of answers in biology, and yet when it comes to robots it seems (perhaps I'm wrong) like everyone wants to take a 'one-size-fits-all' approach.  More to the point, it seems simply assumed that human vision is the standard to which a robot needs to adapt, instead of actually considering the problem to be solved and then trying to develop an evolutionary path that could be followed in future generations.

    That's part of the problem I was trying to point out before.  There are plenty of people who have excellent vision but still wouldn't know how to properly prune a plant.  There are many who wouldn't recognize a grape vine.  In all these cases, they invariably require education and practice (and a level of maturity).  Try programming THAT trait.
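    The resolution question above has at least one concrete biological answer: foveation.  A crude sketch (Python/numpy; the fixation point, radius and 4x4 block size are all hypothetical choices, and the image dimensions must divide by 4):

        import numpy as np

        def foveate(image, cy, cx, radius):
            # keep full resolution near the fixation point (cy, cx); replace the
            # periphery with 4x4 block averages, a crude stand-in for the
            # retina's falling receptor density away from the fovea
            h, w = image.shape
            ys, xs = np.ogrid[:h, :w]
            far = (ys - cy) ** 2 + (xs - cx) ** 2 > radius ** 2
            coarse = image.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
            coarse = np.repeat(np.repeat(coarse, 4, axis=0), 4, axis=1)
            out = image.copy()
            out[far] = coarse[far]
            return out

        # hypothetical usage: sharp centre, coarse periphery
        frame = np.random.rand(64, 64)
        sampled = foveate(frame, cy=32, cx=32, radius=12)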
    Mundus vult decipi
    Gerhard Adam
    Well, then I'm going to claim the "good looking" one if we're making comparisons.
    Mundus vult decipi
    John Hasenkam
    Once again, my criticism stems from the fact that AI research doesn't seem to take biological evolution seriously. 
    Good point; they could also do with some philosophy. Mathematics is a product of intelligence. If you want to design intelligent machines you need to find what gives rise to mathematical analysis. Using mathematics to create intelligence is like using speed to create cars.

    I sometimes think they should devote more time to behavioral analysis: concepts like reinforcement schedules, inter-response times, and so on. Learning involves trial and error more than mathematics. Trying to understand how the brain works without reference to the environment is like trying to study aerodynamics on the moon.
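    Trial-and-error learning is easy to make concrete. A minimal epsilon-greedy learner (a Python sketch; the payoff rates are hypothetical) finds the best of three actions with no model of why it works:

        import random

        probs = [0.2, 0.5, 0.8]     # hypothetical payoff rates for three actions
        values = [0.0] * 3          # running estimate of each action's value
        counts = [0] * 3

        for trial in range(10000):
            # explore 10% of the time, otherwise exploit the best estimate so far
            if random.random() < 0.1:
                a = random.randrange(3)
            else:
                a = max(range(3), key=lambda i: values[i])
            reward = 1.0 if random.random() < probs[a] else 0.0
            counts[a] += 1
            values[a] += (reward - values[a]) / counts[a]   # incremental mean

        # after enough trials the learner favours the 0.8 action,
        # purely by trial and error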
    Thor Russell
    Using mathematics to create intelligence is like using speed to create cars.
    Did you read my post? I said pretty much the same thing myself.
    Thor Russell
    John Hasenkam
    Sorry, Thor, I must have missed that. It is a bugbear of mine with AI people: they treat intelligence as a thing when it must be understood as behaviors.
    Thor Russell
    I don't agree that it must involve behavior; you are intelligent when you are daydreaming and sitting still. Also, if you are listening to someone speak and you know what they are going to say next, or if you are watching something and guess the outcome, surely that is also exhibiting intelligence?
    Thor Russell
    John Hasenkam
    Thor, thinking is behavior. Sitting is behavior. We externalise behavior, but behavior is present internally as well. If I guess the outcome, it is because of prior experience with the environment.
    This is painful.
    Gerhard Adam
    Go ahead ... speak up
    Mundus vult decipi
    No, I'm fine, thanks.
    This is a very superficial article. Human cognition *is* a "formula to rule all other formulas", one that across historical time is collectively generating mathematics and the rest of human knowledge. You say such a formula could generate mathematics "blindly"; well, clearly human cognition involves consciousness, so the process isn't happening completely blindly; but neither is it happening out of mysterious, uncaused inspiration. Cognition has structure, it has causes, and how can the proper description of all that *not* have a mathematical dimension? Sure, it will also require some non-mathematical ontological framework; cognition isn't "just" mathematics, but it surely is mathematizable. You really didn't look very hard for mathematical theories of intelligence: just googling one phrase! Try Shane Legg for a lot of work on this topic.
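    For instance, Legg and Hutter's "universal intelligence" measure (as I recall it) formalises intelligence as an agent's expected performance across all computable environments, weighted by simplicity: Υ(π) = Σ_{μ∈E} 2^(−K(μ)) · V_μ^π, where π is the agent, E the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the agent's expected total reward in μ.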

    rholley
    This does, generally speaking, come under the category of what I call “Turing stuff”, which is a sort of mathematics I cannot get my head round.

    However, I hope y’all on the other side of the Channel or Pond will appreciate this bit of British humour:
     
           George, who is using the family brain cell at the moment?

    http://www.bbc.co.uk/comedy/blackadder/epguide/four_private.shtml

    Robert H. Olley / Quondam Physics Department / University of Reading / England