In other words, while an animal may view a problem and review possible solutions, humans have the ability to “create problems” that aren’t really there and then abstract solutions to them. It is the fundamental ability to ask “why?” and to construct “what if” scenarios. It is this trait that gave rise to the concept of long-range planning (and, I suspect, fostered the need for language). It would also have provided the framework for philosophy and science to develop as a direct consequence of human brain evolution.
Language would suddenly have become critically important and a unique advantage in learning, since participants could exchange experiential knowledge without having to deal with events directly. This would eventually give rise to the written word and to a collective knowledge pool that became part of the entire human social structure. In modern society, it is virtually impossible to separate any innate intelligence an individual possesses from the knowledge provided by the social group. In effect, each generation is the beneficiary of thousands of years of experience, a phenomenon that could never be replicated by any other species in existence. I would suggest that this is why human intellectual achievement looks so radically different from that of other organisms.
The frontal lobes of the human brain are the primary functional drivers, being responsible for reward, attention, long-term memory, planning, and drive. Coupled with the brain’s extensive memory systems of implicit (that which requires no conscious control) and explicit (consciously accessible short- and long-term memory) storage, these elements provide the means of storing information, relating it, and ultimately providing the framework against which abstract problem solving can occur.
The role of memory in learning is well recognized, and in humans memory is fundamentally associative. This simply means that information isn’t stored randomly; rather, it is associated with other pieces of information to create data relationships. Therefore, when something is remembered, it isn’t retrieved directly; instead, it is reconstructed from all the associated data with which it is stored. This also suggests that our memories will only be as good as the number of information sources to which new data can be anchored.
In addition, the executive functions of the frontal lobes involve the ability to recognize the future consequences of current actions, to choose between good and bad actions (or better and best), to override and suppress unacceptable social responses, and to determine similarities and differences between things or events.
In other words, despite the desire to focus on brain size and relative processing power, it would appear that the operative difference really lies within the executive functions of the frontal lobes. These are the elements most often impaired by drugs and alcohol, which provides another strong indicator of their role in intelligent behavior.
Since these functions don’t fully mature until about the age of 25, it stands to reason that I.Q. tests may really be measuring the effectiveness of our executive functions and not our brain capabilities at all. Since these executive functions are also heavily involved in emotions and social behaviors, they play a tremendous role in differentiating apparent “intelligence” between cultures.
Given all these elements, there are two ways in which the brain processes problems: convergent and divergent thinking. In divergent thinking, the emphasis is on generating ideas or solutions to problems; it is generally considered the more creative aspect of human thought. Convergent thinking takes existing information and applies it to the solution of a problem. It is this latter aspect of human brain function that is tested in conventional intelligence tests.
When it comes to measuring intelligence, the idea of something like an I.Q. test is so fundamentally flawed it’s amazing that it ever gained any traction. No one would think to give the same test to an individual at different stages of their life (i.e., at 5, 10, and 20 years old) and expect meaningful results. Yet why shouldn’t the results be meaningful, if the test actually measured something that is supposed to be an innate property?
After all, if we can’t get consistent results when measuring the same individual, then how much can we rely on the scores measured between different individuals?
Interestingly, the developer of the I.Q. test, Alfred Binet, had no such illusions regarding the validity of testing intelligence, indicating that the purpose of his testing was strictly to determine the placement of schoolchildren who needed special attention.
So once again, we are faced with the problem of misusing a metric for a purpose it was never intended to serve. When existing information is used, it becomes impossible to distinguish innate brain power from the knowledge that can be brought to bear on a problem. The more experience and education an individual has, the more likely they are to bring additional information to such a test and consequently score higher. This reinforces the bias that better-educated individuals are more intelligent. It becomes a self-fulfilling prophecy, since individuals who do well on intelligence tests are generally viewed as having greater potential for success. However, this is only logical if they are already better educated and have more experiential information available.
Ironically, when we classify an individual as a genius, we tend to focus on their divergent thinking capabilities, since it is ultimately the creative aspect of problem solving that counts. In effect, there is likely no correlation between an individual being a genius and having a high I.Q., although we can’t distinguish between the two modes of thinking based on these tests anyway (1).
In the end, intelligence consists of too many elements to draw simplistic conclusions. The effects of memory, the executive functions and their roots in culture and social convention are clearly influential factors.
Ultimately, the one aspect that truly separates human thought processes from those of animals isn’t being measured, and the part that is measured is more indicative of human social evolution and collective knowledge than of intelligence. In other words, the result of an intelligence test is primarily an indicator of how well the individual has absorbed the experience and lessons of thousands of years of human development, and it tells us virtually nothing about their innate intelligence.
(1) Consider that Richard Feynman supposedly had an I.Q. of 124 when measured in school.