Methodological Stuff:
  1. Introduction
  2. Patterns
  3. Patterns, Objectivity and Truth
  4. Patterns and Processes

The Pattern Library:

  1. A Pattern of Difference
  2. A Pattern of Feedback
  3. The Hourglass Pattern
Convergence Inducing Process

This pattern is another personal favourite. It is basically an abstraction of a large class of optimising processes that are studied in computational intelligence. It is also sometimes called global search.

Pattern
 Convergence Inducing Process
 aka
 Global Search, Problem Solver

 Notes
  • The actor typically maps the evaluated variables with the goal function, which contains a securing mechanism that stores high-ranked variables; these are used for further evaluation.
  • The securing mechanism is represented by the white triangle.

 Expert Domains
 Computational Intelligence, Cybernetics

The basic idea is that the process has to find optimal values in a large, unknown set of data, according to a certain criterion (or criteria) that can rank the data. The only data this process can use for its search strategy is the data it has already collected (sampled), and the strategy has to 'guess' new samples based on this information, which will hopefully yield better results.
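This sample-rank-guess loop can be sketched minimally. The names here (`converge`, `propose`, `rank`) and the goal function are my own illustrative assumptions, not part of the pattern itself:

```python
import random

def converge(rank, propose, seed, iterations=200):
    """A minimal convergence inducing process.

    rank:    goal function that scores a sample (higher is better).
    propose: strategy that guesses a new sample from the secured ones.
    seed:    initial, randomly collected samples.
    """
    # Securing mechanism: keep only the high-ranked samples.
    secured = sorted(seed, key=rank, reverse=True)[:5]
    for _ in range(iterations):
        candidate = propose(secured)   # guess based on the collected data
        secured.append(candidate)
        secured = sorted(secured, key=rank, reverse=True)[:5]
    return secured[0]

# Toy goal: maximise -(x - 3)^2, i.e. find a value near 3.
rank = lambda x: -(x - 3.0) ** 2
propose = lambda secured: random.choice(secured) + random.gauss(0.0, 0.5)

random.seed(1)
best = converge(rank, propose, seed=[random.uniform(-10, 10) for _ in range(5)])
```

The `propose` step is where a concrete implementation (genetic operators, gradient steps, and so on) would differ; here it is simply a random perturbation of a secured sample.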

The interesting thing about convergence inducing process is that it is one of the simplest processes that combines order (patterns in the sampled data) and uncertainty (the data that has not been sampled yet). The uncertainty is usually tackled with a process of exploration, which in computers is often implemented with random functions. The optimisation component can take many different forms, and usually determines the specifics of the overall process, such as genetic algorithms, hill climbers, or the learning phase of neural networks. The quality of the various implementations of convergence inducing processes can therefore lie in the quality of optimisation, the quality of exploration, or both.
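The split between exploration and optimisation can be illustrated with a hill climber that uses random restarts; this particular combination and the two-peak goal function are illustrative choices of mine, not the only possible implementation:

```python
import random

def hill_climb(f, x, step=0.5, tries=200):
    """Optimisation: exploit the pattern near the current sample."""
    for _ in range(tries):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def global_search(f, restarts=20):
    """Exploration: random restarts probe the unsampled, uncertain region."""
    best = None
    for _ in range(restarts):
        start = random.uniform(-10.0, 10.0)  # exploration via a random function
        x = hill_climb(f, start)             # optimisation from that guess
        if best is None or f(x) > f(best):
            best = x
    return best

# Two peaks: a global one at x = 2 and a lower, local one at x = -5.
# Hill climbing alone can get stuck on the local peak; exploration escapes it.
f = lambda x: -min((x - 2.0) ** 2, (x + 5.0) ** 2 + 1.0)

random.seed(2)
best = global_search(f)
```

A pure hill climber started near x = -5 would converge to the local peak; only the random restarts make the process likely to find the global one.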

For the purposes of complexity, there is a good reason why the unsampled data has been categorised as 'uncertain'. It means that the process is not able to make any predictions about that portion of its life-world. At best it can hope to find certain patterns in the sampled data, which allow it to make predictions about this data, thereby reducing the uncertainty. Exploration is, as was mentioned earlier, often a matter of random sampling, which makes these processes, according to a graph by Gerald Weinberg, one of the simplest forms of a complex system, because they combine analytical and statistical approaches. It sounds rather paradoxical, I know. And this brings us to the lessons learned from the so-called 'No Free Lunch' theorems (NFL-theorems), De Jong functions and multi-armed bandit searches.

There has always been a hope that there would be global search strategies that perform above average in any given environment. AI pioneer John Holland aimed to prove this for genetic algorithms with his schemata theory, but research on the No-Free-Lunch theorems seems to suggest that global search strategies can only perform above average if the environment already provides certain clues (patterns) for optimisation, which are then integrated into the strategy. Kenneth De Jong developed a number of benchmark functions (De Jong functions) that are often used to test the quality of optimisation strategies.
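Two of De Jong's classic benchmark functions, as they are commonly stated, can be written down directly; both are minimisation problems with known optima, which is what makes them useful as tests:

```python
def dejong_f1(xs):
    """De Jong F1 (sphere): smooth and unimodal; minimum 0 at the origin."""
    return sum(x * x for x in xs)

def dejong_f2(x1, x2):
    """De Jong F2 (Rosenbrock's valley): a narrow, curved valley;
    minimum 0 at (1, 1)."""
    return 100.0 * (x1 ** 2 - x2) ** 2 + (1.0 - x1) ** 2
```

F1 rewards almost any sensible strategy, while F2's curved valley punishes strategies that cannot follow a non-linear trend, so comparing performance across the suite reveals which environmental clues a strategy depends on.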

John Holland's initial research on optimisation strategies basically consisted of two distinct parts. The latter part was the afore-mentioned schemata theory, but this was built on his proof concerning multi-armed bandit problems, which deal with multiple search strategies that operate simultaneously. The proof shows that a search strategy is optimal if the best search strategies receive an exponentially increasing share of trials during the search, with respect to strategies that perform poorly.

The classical description is that you are in Las Vegas, and you are testing a large number of one-armed bandits to see which ones cash out the most frequently. Initially all you can do is make some random tests, but as you proceed, some one-armed bandits seem to cash out more regularly than others. So you devise a strategy that incorporates this knowledge. Now you have two strategies (exploration and optimisation) that you have to perform simultaneously. As you proceed however, you will gradually spend more time on optimisation, and less on exploration. At least, according to the proof of the multi-armed bandit, this will be the optimal strategy.
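The gradual shift from exploration to optimisation can be sketched with an epsilon-decreasing play strategy; the decay schedule and payout probabilities below are my own illustrative assumptions, not Holland's construction:

```python
import random

def play(payouts, pulls=5000):
    """Epsilon-decreasing bandit play: exploration shrinks as evidence grows."""
    arms = len(payouts)
    wins = [0] * arms
    tries = [1] * arms  # start at 1 to avoid division by zero
    for t in range(1, pulls + 1):
        epsilon = 1.0 / t ** 0.5  # gradually spend less time exploring
        if random.random() < epsilon:
            arm = random.randrange(arms)  # exploration: a random test
        else:
            # Optimisation: pull the arm that has cashed out most often so far.
            arm = max(range(arms), key=lambda a: wins[a] / tries[a])
        tries[arm] += 1
        wins[arm] += random.random() < payouts[arm]
    return max(range(arms), key=lambda a: wins[a] / tries[a])

random.seed(3)
best_arm = play([0.2, 0.5, 0.8])  # the third bandit cashes out most often
```

Early on, nearly every pull is a random test; after a few thousand pulls, almost all time goes to the arm that has paid out best, mirroring the shift the proof describes.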

As was mentioned earlier, it seems that there are no search strategies that will achieve this optimal situation under any given circumstance. I once devised a hypothetical search that consists of three strategies: one that yields better results with each sample, one that yields results comparable to what has already been sampled, and one that yields poorer results. I found that if the strategy that yields poorer results gets exponentially fewer trials with each iteration, you also get an optimal search. With this, I believe that any given set of search strategies can be mapped to these three, and as long as the poor strategies are filtered out exponentially, the global search will behave optimally. In other words, there is no real difference between the various implementations of search strategies, apart from practical suitability and characteristics; you can never say that one type of global search will be better than the others in complex environments. For my own research, I use the following set of characteristics to define a complex environment:

Characteristic      Description
 Known/Unknown      The process is designed to incorporate specific knowledge about the environment in its optimisation strategy. Often only certain aspects of the environment are known, such as its structure (tree, graph, etc.).
 NP-hard (complete) Results in a combinatorial explosion of the search space.
 Linear/non-linear  Non-linearity usually increases the uncertainty in trends, such as curves.
 Static/Dynamic     'Static' means that the environment does not change while it is being processed. A dynamic environment can change during optimisation.
 Reactivity         Implies that the environment is influenced by the actor. A reactive environment is always dynamic as well.
 Contingent         Unexpected, singular events or circumstances can occur.

A lot of research on global search will only address one, or a few, of these characteristics, most notably NP-hardness and (in agent systems) reactivity.
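Returning to the hypothetical three-strategy search described earlier, its core mechanism can be sketched as a trial-allocation loop. The strategy distributions, the decay factor, and all names here are my own toy assumptions, intended only to show the exponential filtering of the poor strategy:

```python
import random

def three_strategy_search(strategies, rounds=60, decay=0.5):
    """Allocate trials over a 'better', a 'comparable' and a 'poorer'
    strategy; the worst performer is filtered out exponentially."""
    shares = {name: 1.0 for name in strategies}
    for _ in range(rounds):
        scores = {}
        for name, strategy in strategies.items():
            trials = max(1, int(round(10 * shares[name])))
            scores[name] = max(strategy() for _ in range(trials))
        worst = min(scores, key=scores.get)
        shares[worst] *= decay  # exponentially fewer trials for the poor strategy
    return shares

# Toy stand-ins for the three kinds of strategy (illustrative distributions).
strategies = {
    "better":     lambda: random.gauss(1.0, 0.2),   # yields improving results
    "comparable": lambda: random.gauss(0.0, 0.2),   # comparable to past samples
    "poorer":     lambda: random.gauss(-1.0, 0.2),  # yields poorer results
}

random.seed(4)
shares = three_strategy_search(strategies)
```

After a few dozen rounds the poor strategy's share of trials is vanishingly small, while the other two keep their full allocation, which is the filtering behaviour the argument relies on.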

As convergence inducing process is a pattern, it can potentially scale up to more complex forms. I have used this to argue that we humans perform this pattern on a daily basis and at many different levels. In true self-referential fashion, I have argued that science itself conforms to this pattern, through the experimental method and the formation of theories from hypotheses. However, even at these levels of complexity, the wisdom of the NFL-theorems and the like constrains science. Science is not about finding truths; rather, it is about reducing uncertainty.