In watching a recent discussion about "free will", I was surprised at how quickly the discussion became confused by conflating "free choice" with "free will".
[Day 2, Afternoon, First Session:  Free Will/Consciousness]

In this article, I will attempt to better define some of these concepts to illustrate why "free will" is an illusion.

To begin, let's define "choice" as an event in which a decision point is reached.  Regardless of how many apparent options one has, they always reduce to one decision.  In addition, we may recognize that certain choices may eliminate other choices from further consideration or selection.

We can also extend this to basic technologies to see that "choices" are routinely made by computer programs and numerous devices that we encounter.  Some input signal arrives, and some "choice" is made.  

The point here is that even non-thinking processes are capable of taking inputs and exercising "choices" based on those inputs.  At the simplest level, it is only a matter of some mechanism being capable of determining "if A then do B".
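To make this concrete, a few lines of code already constitute a decision point in this sense.  The thermostat below is a hypothetical illustration [the function and its thresholds are invented for this sketch, not a real device API]:

```python
# A minimal "choice" mechanism: if A then do B.
# Hypothetical thermostat logic, purely illustrative.
def thermostat(temperature_c):
    """Decide an action from a single input signal."""
    if temperature_c < 18:      # if A...
        return "heat on"        # ...then do B
    return "heat off"           # otherwise do something else

print(thermostat(15))  # -> heat on
print(thermostat(22))  # -> heat off
```

Nothing here thinks, yet an input arrives and a "choice" is made, which is precisely the sense of "choice" being defined above.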

This can be extended further by imagining a much more sophisticated system employing exactly the same processes, yet being more readily credited with making a "real" choice.

From our simple computer program, let's scale this up to IBM's Watson system competing on Jeopardy.  While we don't have to credit it with intelligence, it clearly had to be capable of taking plain language, converting it into something recognizable to the software, and then applying it to search for the answers.  Numerous choices undoubtedly had to be made by the software to arrive at a correct answer.  As a result, there is little room to argue that "choices" were not actually occurring.

Again, it doesn't matter whether one argues that these "choices" were originally provided by a programmer or whether they were somehow intrinsic in the system.  The point is simply to define what we mean by choices; i.e. a decision point.  Since these machines are not truly cognitive, perhaps we can't really claim that they are making "choices".  Yet I would argue that, because they are machines, the difference is merely that we are in a better position to recognize all the rules [i.e. motivations] governing the "choices" being made.  After all, at the purely physical level, it is difficult to argue that a particular setting of 1's and 0's is materially different from a particular neuron firing.

If we consider other animals, we can observe them making choices as well.  They may decide to eat, to run, to play, etc.  Based on whatever motivates them, they choose to respond in certain ways.  Some choices may be driven by physical requirements, while others may be cognitive, but again, the point is simply to illustrate that decision points are being acted on.

If a particular event occurs and they sometimes act one way and sometimes another, then a "choice" is being made, whether we understand all the underlying motives or not.  As a result, if the "freedom to choose" is part of the "free will" discussion, then we must be prepared to argue that such creatures also possess it.  If not, then we must concede that choice is insufficient to establish any kind of cognitive freedom.

Of course, humans do the same things.  However, with humans we have the added issue of our brain, which doesn't simply make choices and respond to situations.  It is also capable of rationalizing "why" we did certain things (2).

On this point we must be cautious, because the brain is notoriously good at rationalizing anything we do, regardless of whether there was actually a self-motivated reason for doing it.  It is already well understood that the brain will simply "make up" information when it needs to fill in gaps.  Similarly, as has been demonstrated through hypnosis, humans are quite good at rationalizing why they did something, even when it is clearly evident that their response was due to an external suggestion.  In addition, we have seen just how extreme such rationalization becomes in cases of "recovered memories" (3).

So, we have the problem that our rationalizing, or simply having a reason for doing something, isn't a reliable indicator of our motivation to choose.

As a result, when we make a choice to have chocolate versus vanilla ice cream, we may feel that we are freely choosing, but we can't actually differentiate whether we are freely choosing or simply rationalizing a choice that has already been made.

However, if true, the last sentiment invokes images of humans as being mere robots and is generally unpalatable as an explanation for our behavior.  In addition, it raises the specter of moral responsibility if we truly can't control our actions.

Getting back to the issue of choices.

If we accept that numerous factors go into making choices, ranging from exceedingly simple signals to highly complex considerations and conditions, then we have to ask: what do we mean by "free choice"?

Free from what?

In my view, the only way to explain this is to acknowledge that all choices are subject to all the inputs available to the system, whether it is a human or otherwise.  Every input will therefore carry some information that influences the choice that is made.  This suggests that "free", in this context, is meant to convey the idea that a choice can be made without any influences; internal or external.  In short, the argument of being "free" presumes that somehow we can behave free of any determinism.

I would submit that such a situation is impossible.  Such a state would truly require that humans are a blank slate, on which all their history, influences, knowledge, and experience can somehow be set aside and a choice can be made free of any influences.

If we can't reasonably remove these effects, then it brings us back to the question of "free will", which is simply an extension of the same idea: that our choices are supposedly free of any influence, and that we can exercise something called the "will" which can supersede any such influences that are present.

So the illusion stems from the fact that we believe our choices are somehow rational and consciously derived.  Should I eat or not?  What should I eat?  Yet, we are bounded by both the physical reality of our existence and the logical reality of our brains [mind].  We cannot venture outside of that domain and declare ourselves to be free of that existence.

I can no more will myself to stop breathing than I can will myself to ignore the information in my mind.  Yet this is precisely what is required if one is to claim access to "free will".

The point is quite simple.  Unless we can divorce ourselves from our minds, then any claim of "free will" is nothing more than the rationalization of that same mind that we somehow possess it.  It is simply a circular argument.

As a result, we cannot claim to be "free" of the very same organ and process that is making the claim of freedom.  Everything we do originates in our brain, including the notion that somehow we can behave independently of that same brain.  To suggest otherwise would require some independent "seat of power", a homunculus [or "little man"] that is capable of overriding the decisions originating in our brains.

It is an illusion.  It is illusory in the same way that IBM's Watson system suggested a computer possessing knowledge.  When a system becomes complex enough, and its choices are generated by a wide enough range of inputs [many of which we may not even be aware of], the illusion becomes sufficiently convincing that we believe we are capable of acting as independent agents.

(2) Dennett argues that the difference in choices between humans and other animals is:

(a) Future abstractions [imagination and moral reasoning about outcomes].

(b) The ability to be talked into or talked out of things.

In my view, these arguments are quite weak.

Future abstraction depends entirely on how far into the future one contemplates extending these ideas.  Many animals are certainly capable of envisioning future outcomes, even if only short-term.  Nor is there strong evidence that humans necessarily envision long-term moral outcomes for their choices.

The concept of morality is too ill-defined to make much difference here.  It's simply hand-waving.

As an example, an abused animal will exhibit two distinctive behaviors:

The first is the expression of fear in many situations.  This clearly indicates an ability to anticipate an unpleasant future based on past experience.  As a result, we find that the animal is not simply a blank slate that accepts every situation it finds as a new one.  In extreme cases, fear gives rise to panic and the animal is simply in a perpetually defensive/aggressive state.  In this state, it is easy to see that the animal is incapable of choosing an alternative behavior.  This is very similar to humans who are incapable of choosing alternative behaviors when faced with a phobia.

Secondly, one often has to engage in "talking" an animal into something as an incentive to cooperate.  Just as before, with an abused animal, one has to spend a fair amount of time reassuring it [i.e. talking the animal into accepting] that its circumstances have changed and that future situations will be different.

So, the idea that incentives can be created is a reasonable one, but the notion that humans can be arbitrarily "talked into" things is quite a stretch.  This can only occur for ideas that are already viable options within the existing belief system.  So, it isn't a matter of talking someone into something, as much as it is weighting a particular choice more strongly than another.  However, both choices had to have been present in the brain already.
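This re-weighting idea can be sketched in a few lines.  The function and its names below are hypothetical, invented purely to illustrate the argument: persuasion can only adjust the weights of options already present in the system; it cannot introduce an option that was never there.

```python
def choose(options, persuasion=None):
    """Pick the most strongly weighted option already in the system.

    options: dict mapping an existing option to its current weight.
    persuasion: optional dict of weight adjustments ["talking into"].
    """
    weights = dict(options)
    if persuasion:
        for option, delta in persuasion.items():
            if option in weights:   # only existing options can be re-weighted
                weights[option] += delta
    return max(weights, key=weights.get)

# Left alone, the stronger existing weight wins.
print(choose({"stay": 2.0, "go": 1.0}))                          # -> stay
# Persuasion can tip the balance toward "go", but "fly" is ignored
# because it was never an option within the system to begin with.
print(choose({"stay": 2.0, "go": 1.0}, {"go": 2.0, "fly": 10.0}))  # -> go
```

On this sketch, "talking someone into" something is just the persuasion dictionary: it shifts the balance between choices that already exist, which is the point being made above.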

(3) The problem with recovered memories is that a false connection to emotions lends a strength to the memory beyond simple recall.  The fact that false emotions can be associated with false memories bears consideration, since there is a syndrome that already operates in this area: Capgras syndrome.  In this case, the individual believes that people close to them are imposters, and the explanation is that it occurs because of a failure to attach emotional significance to the memories and persons.  What is more interesting in Capgras is that the individual will readily believe a totally irrational view that their family or loved ones have been replaced by imposters rather than consider that there is something wrong with their perception.  It is important to note that these individuals are not delusional or psychotic.  They are quite normal in every other respect, but there is a disconnect between the visual centers and the emotional responses associated with the images.  It is also useful to note that Capgras sufferers seem to be affected only through the visual pathway, so a conversation without visual contact occurs completely normally.