    First Working Artificial Intelligence (AI) Unveiled
    By Gerhard Adam | April 1st 2010 04:07 PM | 37 comments

    Today, computer science research has crossed a major threshold in producing a system that can be considered to display true intelligence. The system is purported to possess the ability to engage in conversations and even to use language creatively to express novel ideas. It runs on about 20 quad-core processors for processing speed and links together several terabytes of memory to maintain an active working set of functions. There is no appreciable delay in responses, and unlike systems of the past, this system displays behavior that can only be described as "self-motivating" in initiating actions.

    Dr. Charles P. Unger suggested that perhaps this use of artificial intelligence still displayed many unresolved issues and clearly warranted significant future research. "Perhaps we should've focused more on artificial personality rather than intelligence, because this system is particularly cranky!"

    After a grueling morning of questions and intense conversations, the research team responsible for this development is relieved that the system has finally "shut up". "Nothing was good enough and we were all dolts," according to an account by one of the researchers, who has chosen to remain anonymous. "Left up to me, I'd unplug the bloody thing."

    Despite their overwhelming success, the team isn't sure whether anyone can tolerate being in the same room with this system long enough to validate their results.

    Comments

    ROFLMAO!!!
    AdamRetchless
    Nice. I think The Onion can take the day off.
    logicman
    Well done!
    Steve Davis
    I must take more care looking at dates on articles!
    Nice work.
    "The system is purported to possess the ability to ... invoke creative use of language to express new and novel ideas. "

    In other words, it's going to die alone.

    Would it not be better if it were called Virtual Intelligence (VI) rather than Artificial Intelligence (AI)? As I have read somewhere before, these two are very different from each other. VIs are programs that mimic intelligence, while AIs do not merely mimic intelligence; rather, they show sentience, if I'm not mistaken. If it's an AI then you should not turn it off, because that is tantamount to murder, while a VI may be turned off any time it's not needed.

    Gerhard Adam

    You can't murder a machine.  Whatever else you might want to call it, it isn't murder.

    Mundus vult decipi
    Oh, this reminds me of that movie with Will Smith. For the life of me I can't remember the title. Can someone help me out here please?
    Thanks, Henry. The answer came to me before I saw your post. But, I appreciate your help, nonetheless. : )
    Technically speaking, we are bio-mechanical machines. So ... ?
    I, Robot, I believe, is the title of the movie.
    Now tell me! Which of you would have the heart to "terminate" this "machine"? ROFL!!!

    Believe it or not, that choice is not too far into the future! It is a legitimate philosophical question.
    Gerhard Adam
    The first philosophical question that needs to be answered is what the status of a creator is relative to his creation.  If we accept the religious position, then the creator is entitled to do as he chooses, so I'm not clear what the basis would be for a creation becoming independent of or having equal rights to the creator.

    The second question is what the criteria are for life, because you can't murder something that isn't actually alive. That leads to the third question: is sentience (and how would such a thing be determined?) sufficient to establish something as being alive?

    I realize that many people have extended the concept of life in all sorts of artificial ways, so that machines that can build other machines are said to satisfy the criteria for reproduction, etc. To me this seems like stretching the boundaries of these meanings, but it does raise the question of what separates us, as living organisms, from the non-living.

    The trickiest question, though, is if we grant all these previous points to a machine because of sentience, then what criteria should be applied to other animals that are sentient? Does that convey rights? And what does that mean within our social context?
    Mundus vult decipi
    Aitch
    The first philosophical question that needs to be answered is what the status of a creator is relative to his creation. If we accept the religious position, then the creator is entitled to do as he chooses, so I'm not clear what the basis would be for a creation becoming independent of or having equal rights to the creator.
    Gerhard,
    I've read this thread down to Eric's 3:24 in the morning comment, and can't help but feel that Eric's comment ...
    "So, your first point, Gerhard, is that if a sentient being has the power to create another sentient being, then it has the right to destroy it?"
    ...which I think I broadly agree with, yet it doesn't take you to task on the notion of a creator's entitlement, or on a creation's equal rights to its creator.

    I think you malign a creator badly. I see a creator as perhaps knowing it has the capacity and capability to do as it chooses, but I don't see the need to ascribe a right to do what you are able to do, since that seems to imply no need of discriminatory choice as to whether or not to do it.

    Hence Eric's notion that you are saying 'the creator has a right to destroy it' went unchallenged by you.

    To me, it is this fundamental misunderstanding of being a creator which gives rise to the 'cruelty and brutality' Eric refers to, which pervades our societies, instead of the 'kindness and compassion' for which we have an equal opportunity of choice, and yet no such apparent 'right' as is ascribed to the former.

    Why do you paint a creator as so morbid? And why do you feel that, if we misunderstand a creator, we must follow the misunderstanding and claim a right to do so?

    It seems patently bizarre to me, and I neither seek nor want such a 'right', whether from a creator or from some peer-group thinktank. It seems to border on psychosis, and is a possible explanation of why 'religions' start wars with each other, if this misunderstanding of a creator is widespread.

    If we do build sentient robots in our own image, I hope we never forget the three laws of robotics, as I would feel more affinity with robots who obey Rule 3:
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    ...than with a robot who is programmed to have a creator's right to do as he chooses (a rough sketch of that precedence follows the link below).

    http://www.auburn.edu/~vestmon/robotics.html
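
    Just to make that precedence concrete, here is a rough sketch of my own - purely hypothetical, not taken from Asimov's text or the linked page, with the flag names invented for illustration - of how the three laws might be checked in priority order, the Third Law yielding to the first two:

        # Hypothetical sketch only: these flags are invented for illustration,
        # not taken from Asimov's text or the linked page.
        from dataclasses import dataclass

        @dataclass
        class Action:
            harms_human: bool = False             # would injure a human, or let one come to harm
            disobeys_order: bool = False          # refuses an order given by a human
            order_would_harm_human: bool = False  # obeying the order would violate the First Law
            endangers_self: bool = False          # puts the robot's own existence at risk
            required_by_order: bool = False       # the risk is needed to carry out an order
            protects_human: bool = False          # the risk is needed to protect a human

        def allowed(a: Action) -> bool:
            # First Law: never permit harm to a human.
            if a.harms_human:
                return False
            # Second Law: refusing an order is only permitted if obeying it
            # would violate the First Law.
            if a.disobeys_order and not a.order_would_harm_human:
                return False
            # Third Law: self-endangerment is only permitted when the First or
            # Second Law requires it.
            if a.endangers_self and not (a.protects_human or a.required_by_order):
                return False
            return True

        print(allowed(Action(endangers_self=True, required_by_order=True)))  # True: Third Law yields
        print(allowed(Action(endangers_self=True)))                          # False: Third Law applies

    A robot programmed instead with an unconditional creator's 'right to do as he chooses' would, in effect, have that last check deleted - which is exactly what worries me.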

    I can thus understand why....
    If we accept the religious position, then the creator is entitled to do as he chooses, so I'm not clear what the basis would be for a creation becoming independent of or having equal rights to the creator.

    .....you are not clear about this. Applied to our robot creation, they would have no need of the third rule, and we're bang in trouble!

    I think we may need to consult Ed Neumeier and ask him how humanity will create RoboCop to rid us of OCP, whose take on what it means to be a creator seems remarkably similar to yours.

    According to this 2007 article,

    http://innovationwatch-archive.com/choiceisyours/choiceisyours-2007-02-1...

    .....we are falling behind schedule, so maybe this conundrum is still relevant?

    Still, all's not lost, and I think I prefer this take on robotics....

    I’m not worried about a future where sentient robot dogs that feed on the dead stalk the streets at night.

    I’m worried about a now where corporations trick humans into paying as much for a liter of bottled tap water as they do for a liter of milk.


    http://www.allartburns.org/2009/07/29/fear-of-sentient-robots/

    Aitch
    Gerhard Adam

    The problem with Asimov's three laws is that they treat robots as either machines or slaves, which is fine for dealing with sophisticated machinery, but it leaves open the question of a robot that is truly an intelligent, independent entity.  While I don't personally believe the latter objective is achievable, if we simply assume it for a moment, it introduces all manner of problems, because such an entity would be another species, machine or not.

    As a separate species, we would expect that it would compete for its own survival, and it would be presumptuous to suggest that it would grant humans a higher status than its own survival.  Whether a creator has such a right or not is largely irrelevant to me because I don't see it ever happening, but from a purely philosophical perspective, I don't see how such a position can ever be overcome.

    As for painting a creator morbidly, I don't see how you reached that conclusion.  The point is that most religions consider suicide a sin precisely because it is felt that you don't have a right to terminate the life given to you by God.  Therefore it is concluded that only God has the right to give and/or terminate life.  I don't see that as morbid; rather, it states a fundamental philosophical position regarding the one bona fide case where an act of creation is asserted.  Since we have no other models for such a situation, that's the one I pointed to.

    As for brutality and cruelty, I'm not sure where that comes in or why you would conclude that it is fundamental to my point.  Is it brutal or cruel when we inundate bacteria with antibiotics to kill an infection?  Is it brutal or cruel to kill animals for food?  It may seem arbitrary in my example of a creator destroying their creation, but it certainly isn't brutal or cruel.

    As I also mentioned, how would our responsibility or culpability be viewed if we created a sentient machine that, in turn, was viewed as brutal or cruel?  After all, it is entirely plausible that such a machine, in an attempt to assure its own survival or future, may well act in a manner which other humans think fits that bill.

    As for your comment regarding corporations ... I agree completely and then some.

    Mundus vult decipi
    So, your first point, Gerhard, is that if a sentient being has the power to create another sentient being, then it has the right to destroy it? Isn't that along the lines of "might makes right"? Power alone, no matter how great, does NOT a God make!

    As far as being sentient, strictly speaking many species besides ourselves are sentient. Look up the definition of 'sentient' and you will find that it only means an organism that is capable of "sensing" its environment with the same "five" senses to which we're all so accustomed. Contrary to the popular connotation of the word, 'sentient' does not mean self-aware. By definition an insect is sentient. But, I understand that is not what you mean.

    Where do we draw the line, Gerhard? Obviously other species have emotions and can experience pain--both physically and mentally. And yet some of these species which are not physically capable of creating artifacts (i.e. technology equivalent to our own) appear to be more intelligent and compassionate than we humans.

    Case in point: both dolphins and whales have a much larger neocortex with more convolutions than human beings, and have had one a lot longer than we have. We have hunted and slaughtered these creatures mercilessly for centuries. Dolphins are more than capable of killing a great white shark and do so quite easily when threatened. And yet, throughout all of recorded history, dolphins and whales have been noted for helping human beings in trouble while in the water, despite the way that we have treated them. So, if whales and dolphins can kill a great white shark with little effort, can you imagine what they could do to us if they so chose? They don't seem to harbor any grudges the way human beings do. So, you tell me who is more intelligent and advanced on the evolutionary scale.

    As far as religion goes, and in this culture it usually comes down to Judeo-Christianity and the Cartesian/Platonic duality of body and soul ... well, all I can say is that I don't know that I have a soul that will survive the death of my body. Do you? I may believe that is the case, but beliefs are not facts! So, if I don't know whether I have a soul, how am I to judge whether another creature--artificial or natural--has one?

    I don't have the answers, Gerhard, but neither do you or anyone else. And that is why I say it is a legitimate philosophical question which I would categorize as metaphysical in nature. Better to err on the side of caution, I say.
    Please wait until I am finished editing what I have written, Gerhard before you respond.
    OK....I'm finished editing. Thank you for your patience, Gerhard. Now you may respond if you wish. ;-)
    Gerhard Adam
    Sorry, sometimes I can be a bit quick on the response end. 
    Mundus vult decipi
    logicman
    Why would a data manipulator want to obey orders coming from a hydrocarbon heat engine?

    ;-)
    Steve Davis
    Are you trying to stir me up Patrick?
    You know how much I hate reductionism!
    I did laugh though.
    Gerhard Adam
    So, your first point, Gerhard, is that if a sentient being has the power to create another sentient being, then it has the right to destroy it? Isn't that along the lines of "might makes right"? Power alone, no matter how great, does NOT a God make!
    While I understand your sentiment, the only philosophical base we have for this problem is religion, where this is precisely the conclusion drawn.  I'm not suggesting that religion is correct, but that creates a different problem, because if a creator does NOT have such a right, then it would be legitimate to challenge what the basis for God's right is over his "creation".

    Of course, if we step away from religion, then the problem is simply reduced to establishing the relationship of creator to created. 


    Contrary to the popular connotation of the word, 'sentient' does not mean self-aware.


    As you correctly pointed out, the problem of sentience is a bit more dicey.  For example, could a biologist in a lab create a single-celled organism that is certainly aware of its environment and then be prohibited from destroying it?  What if the organism were a deadly disease; would that change anything?

    We already kill innumerable organisms on a regular basis, so does the act of our having created another being become relevant?  Since our laws are intended to protect humans, would that apply to a machine, regardless of how intelligent it might be?

    They don't seem to harbor any grudges the way human beings do. So, you tell me who is more intelligent and advanced on the evolutionary scale.
    Once again, I understand your sentiment, but there are more definitional complications here.  What is intelligence?  What makes us think that we are more intelligent (and what is the basis for comparison)?  As for being "advanced", I have a problem with that because it suggests a directionality to evolution (especially in a moral direction).

    So that I don't ignore the moral part: I don't believe that any species can impose its morality on another, since its entire basis for existence is species-specific.  Therefore, how can our morals be applied to a machine any more than the machine's can be applied to humans?  If we did build an intelligent machine and it wanted to destroy all of humanity, could we really argue that it was behaving immorally?

    In truth, this is the fundamental problem with all AI research anyway and why I don't believe it can ever succeed.  We aren't interested in building intelligent machines.  We're interested in seeing if we can build a machine that behaves like a human.  The latter will always just be a simulation, since there is no reasonable argument to suggest a machine should behave like a human.

    Mundus vult decipi
    While I understand your sentiment, the only philosophical base we have for this problem is religion, where this is precisely the conclusion drawn. I'm not suggesting that religion is correct, but that creates a different problem, because if a creator does NOT have such a right, then it would be legitimate to challenge what the basis for God's right is over his "creation".
    I do! And if omnipotence is the only criterion for being a god, then I challenge God! And, I say, if this is the case, then I assert that God is not God!
    The latter will always just be a simulation, since there is no reasonable argument to suggest a machine should behave like a human.
    How do we know that, Gerhard? We haven't crossed that threshold yet.
    Gerhard Adam
    How do we know that, Gerhard? We haven't crossed that threshold yet.
    We have the evidence of millions of species that exist, and none behave like humans except humans.  There may be similarities, but they can't be considered human behavior.  When we consider that human behavior isn't simply arbitrary but rather is predicated on our biology, it becomes clear that any entity that doesn't share a common biology and heritage would have no basis for acting in a human manner.

    A machine built in the fashion we're discussing, if it satisfied all our criteria, would be a new species.
    Mundus vult decipi
    A machine built in the fashion we're discussing, if it satisfied all our criteria, would be a new species.


    Indeed it would be, Gerhard. And that in turn would make us its God? And as gods, how are we to treat our creation ... with kindness and compassion, or the cruelty and brutality that we have shown each other for millennia?
    Gerhard Adam
    That would depend ... how do you expect your creation to act?  What if it were cruel and brutal?
    Mundus vult decipi
    No brainer! Then it would be a simple matter of self-defense and we would in such an instance have demonstrated that we are not gods, but just the simple human beings that we truly are. ;-)
    Food for thought. It's 3:24 am here and I need to get some sleep. It's been a pleasure as usual, Gerhard. ;-)
    I know I'm joining in late here,
    My question is:
    Would unplugging the thing be murder, if you can just plug it in again?

    Gerhard Adam
    Well, that's precisely the problem because one can't really kill a machine.  Even if I were to take parts from it and leave it shut down for a month, if I replaced those parts and started it up again, it would still be there.  The point is that it is a machine and therefore not alive.  Since it's not alive, it cannot be murdered.
    Mundus vult decipi
    Aitch
    Gerhard said,"........if we simply assume it for a moment, it introduces all manner of problems because such a entity would be another species, machine or not. As a separate species, we would expect that it would compete for its own survival and it would be presumptious to suggest that it would grant humans a higher status than it's own survival." And later takes the opposite stance, "Well, that's precisely the problem because one can't really kill a machine......." I don't see the consistency in the logic, Gerhard - Just a machine or a species, which is it? Whilst we may be awhile away from a functional AI robot, I believe we are heading in that direction Aitch
    Gerhard Adam
    I think the whole concept is contradictory.  One can't claim intelligence and independence of thought and expect that it would think and reason as a human.  Therefore, whatever thoughts it would have, must be unique to itself as a "species" or however you want to define a supposedly unique sentience.

    However, from a physical perspective, it's still a machine and consequently isn't subject to the same considerations of death and murder that apply to a living organism.

    The primary contradiction comes from the fact that the whole issue of AI and the topic of "murder" arises only because it is assumed that AI represents a new kind of sentience and therefore warrants special consideration.  The problem with this is that it suggests that sentience, in and of itself, is a sufficient condition to establish something as being alive.

    As for AI, and a viable direction, I'm too skeptical to give it much credence.  In particular, I don't believe that intelligence is a separate entity, like some feature that can be installed on a particular system.  Intelligence is integral to the survival of a species, so if you remove the requirement of survival, I don't believe you can have anything left over that you could realistically call intelligence.

    Since a machine can't die or reproduce or do any of the things humans do, it can't possibly feel any of our pressures or motivations to do things.  Therefore, if a machine emulates a human it is artificial and can't be indicative of a real need.  It would be like trying to read the mind of a bear or an alligator.  While we may guess at the reasons for their actions, we can't imagine what the world looks like from their perspective.  Similarly any AI cannot view the world as a human would, since it would have no basis for such a context to exist.

    Mundus vult decipi
    Steve Davis
     Similarly any AI cannot view the world as a human would, since it would have no basis for such a context to exist.
    Exactly so, Gerhard, and it would be most unlikely that any AI would closely resemble any organism that has undergone eons of evolution.
    To be honest, it's epic that they finally created a VI. Still, amid the trolling between you two, I think this is a VI; it just mimics us, that's all.
    But if it were an actual AI, I believe it would be entitled to rights, because it can think on a human level. It may not be flesh and blood, but apart from that it is the same as us; it just looks and works a little differently.

    Sorry for my bad spelling in advance; also, I am not going to argue my point. Sorry.

    This reminds me of the '5th generation computer', a $400M project started by Japan's MITI in the eighties. The aim was to construct a naturally conversing computer. By 1990 little was heard of it, but I remember some rumour that the project actually made substantial progress. The machine passed the Turing test for a 16-year-old Japanese girl. Whatever you asked it, it would giggle.