A Science Of Human Language - Part #9

This series, which commenced here, is about quistic grammar, a semantic grammar.  It is called quistic grammar because it is based on the notion that all ideas can be reduced to simple questions, or the equivalent: simple statements which answer simple questions.  It is a grammar in which semantics - the meaning conveyed - is given primacy over syntax - the rules of word and sentence modification.  The semantic component of language is, I suggest, coded in mental models: neural structures which we manipulate and which we compare to external realities in order to plan our actions.
the idea of a worldview or belief system is not optional. All humans have one, since it is a requirement to provide a minimal framework against which data is acquired and classified.
Gerhard Adam, The Luxury Of Belief

Asking why leads to the places opinion is born. Asking why an opinion exists leads to measuring its value. Knowing the value will help you decide if it’s time to change your mind or attempt to change someone else's. Logic and reasoning can help us along the way to bypass the traps of language and work only with concepts.
Brian Taylor, Anti-Social Engineering


Reprise

The previous eight parts of this series have introduced the core concepts of quistic grammar.  Before moving on to the topic of the bootstrap problem - how babies learn to use language - I will give a brief overview of the essentials.  At the core of quistic grammar is the idea that it is rare for people to discuss things in the actual presence of those things.  This means that, for the most part, when we use words, they are not pointers to some real thing, but pointers to ideas in the mind - models, abstractions or paradigms of reality.

These ideas, the primary referents of words, are mental models.  A mental model is a set of neurons which collate everything which is known, remembered or felt by a language user when a word is used.  A 'content word' is produced by a speaker when the relevant mental model is active.  When the hearer recognises that word, the hearer's equivalent mental model is activated.

The words or tokens which label mental models are called nuons in quistic grammar, as are the mental models themselves.  The parts of language which are seen as syntax, and which serve as cues to meaning and as error-handling codes, are called quons.  The relationship between things and words is entirely an accident of the evolution of language.  We are born into a world of language users.  We learn to use language in a way that conforms to the norms of society, within a range of unavoidable variation which is due to our biological variability.  The rules of conformity of a language evolve naturally.  The range of natural and unavoidable variation in a language - its tolerance range - explains the drift in language use over time.


How Babies Learn To Talk

A baby is born with no language abilities, other than the ability to make a few simple sounds.  Within a few years it has learned to hold a conversation.  How is it that a baby is able to learn such a skill?  There are a number of theories about this, relating to whether or not babies have a basic 'built-in' grammar, and to what extent they imitate the speech that they hear.

I have stated that in quistic grammar there are only two kinds of words, or parts of words: nuons and quons.  Given that language has evolved so that the sounds which we use as nuons and the sounds which we use as quons carry no sign of their nature, how can a baby distinguish them so as to place them in their correct categories?

I suggest that the distributions of nuons and quons in natural speech in all languages obey statistical laws.  This is not to suggest that human brains are designed to make statistical, mathematical calculations about word use.  Rather, when a person speaks, there is an element of stochastic process mixed with the semantic-logical speech production process.  Speech is structured in such a way that a hearer can frequently predict the next word in a chain, or at least predict the remainder of a partly heard word.
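This predictive structure can be illustrated with a toy sketch - a minimal bigram model over a handful of invented sentences.  The corpus and the `predict` function are my own illustrative constructions, not a claim about how brains actually compute:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for overheard speech (illustrative only).
corpus = (
    "i was waiting for the bus . "
    "i was waiting for the train . "
    "she was waiting for the bus ."
).split()

# Count bigrams: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev):
    """Return the most frequent continuation of `prev` in the corpus."""
    return bigrams[prev].most_common(1)[0][0]

print(predict("waiting"))  # 'for' - the only word ever heard after 'waiting'
print(predict("the"))      # 'bus' - heard twice, versus 'train' once
```

Even this crude model captures the point: a hearer who has merely counted co-occurrences can often fill in a missing or half-heard word.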

In all languages, there are recurring patterns of use of word variants and sequences.  The most commonly used words, variants and sequences form a pattern from which details may be extracted.  I suggest that in general, for all languages, word, sentence and phrase endings exhibit strong regularities.  Rather than being 'rules of grammar', these regularities are 'rules of disambiguation'.

As an example, in English many sentences end with 'the' followed by a noun.  In terms of pure expression of meaning, a sentence such as: "I was waiting for bus." is as good as: "I was waiting for the bus."  However, when viewed as error-handling codes, quons are seen to perform a vital role in a noisy environment and in a child's acquisition of language.  Consider the following errors of transmission:

"I was waiting for _."

"I was waiting for the _."


In the first instance, the hearer's expectation, in the absence of 'the', is a noun of a class of people or events.  The gap might be filled with the name of any person, e.g. 'John', or any period of time, e.g. 'morning'.

In the second instance, the hearer's expectation is the name of an object.  There are many such patterns of sentence-final word pairs, where a nuon is paired with a quon.  In all cases, the quon is in a class of words which have a very high frequency of occurrence.  Indeed, in English, the word 'the' has the highest frequency of all.
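The frequency claim is easy to check against any sample of English.  A minimal sketch - the sample sentence is invented for illustration, but any sizeable English text gives the same winner:

```python
from collections import Counter

# A small invented English sample; real corpora show the same pattern.
text = ("the cat sat on the mat while the dog watched the door and "
        "the cat ignored the dog")

counts = Counter(text.split())
print(counts.most_common(3))  # 'the' dominates, here with 6 occurrences
```

In large reference corpora of English, 'the' likewise sits at the top of the frequency list by a wide margin.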

Based on my many experiments in computational linguistics, I suggest that babies first acquire quons, the most frequent words or affixes in a language.  This is counter-intuitive: we observe that a baby's first use of language is of nuons, the 'content words' of language, and it is only much later that children begin to use quons.  But acquisition precedes production: a baby may recognise and store quons long before it produces any word at all.  I suggest that the theory that babies acquire some quons first is a sound one.  The suggested sequence of learning accounts for some aspects of language acquisition which are language-independent and which need no 'hard-wired' grammar.

A few quons are acquired by matching sound patterns to memories of sounds.  They are not at first linked to mental models.  In a very real sense, for a baby, these quons, these 'grammar words', are entirely meaningless.  The infant brain simply stores significantly frequent speech sounds.  In the case of English, 'the' will be at the top of the list.  The infant brain creates a template of 'the' plus a significant silence: the end-of-sentence marker.  The only 'assumption' made by the brain is that high-frequency words are cues, quons, and the lower-frequency words bracketed by quons and by silence are nuons.
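That 'assumption' can be sketched as a simple procedure: rank words by frequency, take the top few as quon candidates, and treat the remainder - the words bracketed by quons and by silence - as nuon candidates.  The utterances, the rank threshold and the names are my own illustrative choices, not a model of actual infant processing:

```python
from collections import Counter

# Toy utterances standing in for overheard speech (illustrative).
utterances = [
    "i was waiting for the bus",
    "i was looking for the dog",
    "i was reaching for the cat",
]

words = [w for u in utterances for w in u.split()]
counts = Counter(words)

# Classify the few highest-frequency words as quon candidates;
# everything else, bracketed by quons or silence, is a nuon candidate.
QUON_RANK = 4  # take the top 4 most frequent words (arbitrary threshold)
quons = {w for w, _ in counts.most_common(QUON_RANK)}
nuons = set(counts) - quons

print("quons:", sorted(quons))  # the high-frequency 'grammar words'
print("nuons:", sorted(nuons))  # the residue: candidate content words
```

On this toy input the procedure cleanly separates the frequent function words from the content words, with no grammatical knowledge built in - only counting.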

If particular nuons exhibit regular associations with objects or events in a baby's environment, they will be attached to the baby's primitive mental models of those events or objects.  As the baby comes to develop motor, cognitive and language skills, some mental models, already labelled by nuons due to the stochastic processes of 'data gathering' from overheard speech, will be put into use.  The models, matching some thing in the baby's environment, will trigger a language output.

In the early stages, errors of application will be made.  The mental models will be far too coarse.  Perhaps any two-legged tall thing with a deep voice will trigger 'daddy'.  But as the baby modifies its learned categories, and as it more correctly labels those categories with words stochastically pre-acquired, the baby will gradually learn to more accurately use speech to signify objects and events.


Related reading:
Hauser, Chomsky, Fitch: The Faculty of Language: What Is It, Who Has It, and How Did It Evolve? free pdf
Introduction to the Study of Language THE HUMAN LANGUAGE SERIES #2
Acquiring the Human Language: "Playing the Language Game" free pdf

Other materials:
A wealth of links for the student of linguistics: Philippe Schlenker's home page.

The next article in the series, A Science Of Human Language - Part #10, further explains the concept of error-handling in human language.

If you have enjoyed this article, you may enjoy other articles, mostly about language, in my blog, The Chatter Box.