    Eye Computer: Turning Vision Into A Programmable Computer
    By Mark Changizi | March 25th 2010 11:37 AM
    Our everyday visual perceptions rely upon unfathomably complex computations carried out by tens of billions of neurons across over half our cortex. In spite of this, it does not “feel” like work to see. Our cognitive powers are, in stark contrast, “slow and painful,” and we have great trouble with embarrassingly simple logic tasks.

    Might it be possible to harness our visual computational powers for other tasks, perhaps for tasks cognition finds difficult? I have recently begun such a research program with the goal of devising ways of converting digital logic circuits into visual stimuli – “visual circuits” – which, when presented to the eye, “trick” the visual system into carrying out the digital logic computation and generating a perception that amounts to the “output” of the computation. That is, the technique amounts to turning our visual system into a programmable computer.

    This is not the first time scientists have attempted to make use of biological computation; this was first tried with DNA, tapping into the computational prowess inside cells. My research is the second kind of biological harnessing that has been attempted for computation, aiming to commandeer our very brains. Because this new kind of computation – “visual computation,” or “eye computation” – is carried out in people’s brains, its outputs can be directly and immediately fed into humans, making for true human-computer interaction, all in one head! 

    Why with the Eye?

    People are notoriously poor reasoners, whether in the probabilistic or the logical domain – something I have also personally witnessed when teaching logic and computer science. That’s one of the reasons we all appreciate computers. Although our reasoning and logic powers are poor, we are all walking around with computers in our heads that are far more powerful in many respects than any computing device ever built, or likely to be built in the foreseeable future.

    There are several reasons why the visual modality is a promising one for biological computation. First, the computations underlying our elicited perceptions are extraordinarily powerful, with the visual system taking up about half our cortex. Second, our eyes and visual system can input and process large amounts of information in a short period of time. Third, in spite of the billions of calculations carried out at each glance, it feels effortless to perceive. Fourth, visual neuroscience is by far the best-understood subfield of neuroscience, both at the level of neurobiological mechanisms and at the level of perceptual phenomenology. Finally, vision is a much easier modality for presenting stimuli – e.g., on paper – whereas audition, say, requires a computer or playback device.

    The idea of tapping into vision for computation is not new. Mathematical notation itself is a visualization technique and aid to cognition. The invention of writing, more generally, moved language from the auditory modality to a visual one, and enhanced our reasoning capabilities.

    Visualization within science in the form of graphs, charts, etc. has been crucial for understanding complex phenomena. Visualization has also been employed for over two hundred years in logic.

    For example, Leonhard Euler invented diagrams to visually represent the contingent relationships among concepts, John Venn utilized visual diagrams to show the logical structure relating concepts, and Charles Sanders Peirce invented existential graphs (Figure 1a) for depicting logical formulae. For digital logic there has long been a standard visual notation scheme that depicts each logical formula as if it were a physical electrical circuit, as shown in Figure 1b. Figure 1c shows a preview of the kind of visual circuit I have been designing; we’ll see more of these later.

    [Figure 1: (a) one of Peirce’s existential graphs; (b) standard digital circuit notation; (c) a preview of a visual circuit.]

    Vision has, then, long been harnessed for computation, in particular with the aim of facilitating human reasoning. 

    The “visual computation” technology I am designing is something altogether different. The aim is not simply to use visual stimuli to aid one’s cognitive computations. Rather, the aim is to get our visual system itself to obediently carry out the computations.

    The broad strategy is to visually represent a computer program in such a way that, when one looks at the visual representation, one’s visual system naturally responds by carrying out the computation and generating a perception that encodes the appropriate output to the computation. That is, there would be a special kind of image that amounts to “visual software,” software our “visual hardware” (or brain) computes, and computes in such a way that the output can be “read off” the elicited perception.

    Ideally, we would be able to glance at a complex visual stimulus—the program with inputs—and our visual system would automatically and effortlessly generate a perception that would inform us of the output of the computation. Visual stimuli like this would not only amount to a novel and useful visual notation, but would actually trick our visual systems into doing our work for us. Other visual notation systems for logic and computation – such as Charles Peirce’s existential graphs or standard digital circuit notation (see Figures 1a and 1b above) – cannot do this.

    Escher Circuits

    In my attempt to hack into our visual system and program it as I wish, I have experimented with hundreds of varieties of visual circuit instantiations, for much of that time with little success.

    Figure 2 illustrates some of the broad classes of visual circuit types that failed, for any of a number of reasons. Unsuccessful stimuli include: (i) circuits where the bistability was figure-ground (here a large problem was that the “wire” was not “insulated,” and the state would quickly leak and spread everywhere in the circuit); (ii) stimuli that looked like pipes and tubes (although NOT was easy to achieve, AND and OR were not even conceivably computable); (iii) circuits that tried to affect the probable illumination direction, thereby modulating the perceived convexity of a bump or crater; (iv) dynamic stimuli with dots alternating positions, giving ambiguous motion signals; (v) a similar idea applied to ambiguously grouped pairs of objects; and (vi) attempts that used color spreading.

    [Figure 2: Broad classes of failed visual circuit designs for computing logic.]

    Finally, though, I found a variety of visual circuit that (kind of) works. The design relies on depth ambiguity, and was the first that enabled perceptual AND and OR operations, as well as satisfying the other early constraints for the simplest digital visual circuits. Because these circuits can often look vaguely Escherian, I call them Escher circuits.

    To understand Escher circuits, we must start with the most fundamental part in them: wire.

    Wire: Circuits need wire in order to transmit signals to different parts of the circuit, and an example of “visual wire” is shown on the left in Figure 3. It is bistable, and can be perceived either as tilted away from you (0) or tilted toward you (1). Stimuli of this sort serve as wire because your perception of the tilt at the top propagates all the way down to the bottom. This kind of stimulus also serves as insulated wire, because state changes tend to be confined to the wire itself. Many circuit varieties I experimented with before the current one suffered from leaky wires, where the state would spread across the page: for example, this was a key problem when trying to use figure-ground perceptual ambiguity for digital state. In this style of Escher circuit, wire has a canonical form, directed down and to the left, as in the orientation shown at the input and output of the wire on the left in Figure 3.

    Wire can also be bent as in the case shown, which – with the increased junction information – can make the perception of depth more pronounced and stable. But these circuits are designed so that any such bends must eventually “unbend” when being input into another component of the circuit. This feature of visual circuits is important in understanding the design difficulties in building a NOT gate, something I’ll discuss in a moment.

    [Figure 3: Visual wire (left) and unambiguous input boxes (middle and right).]

    Inputs: An input to an Escher visual circuit is an unambiguous cue to the tilt at that part of the circuit. Here I utilize simple unambiguous boxes as inputs, as shown in Figure 3, middle and right. One advantage to inputs of this kind is that differential depth cues lead to pop-out: in larger circuits there will be many inputs, and it will be crucial for the pattern of tilts-toward and tilts-away – i.e., the binary input – to stand out as a perceptible pattern so that it can induce the computations in the circuit.

    [Figure: Escher circuit inputs and NOT gate.]

    Negation: NOT gates are crucial for digital circuit computations, inverting the signal from a 0 to a 1 or vice versa. Figure 4a shows one kind of visual NOT gate for Escher circuits. It begins as a special kind of wire – roughly a wire-frame box – which undergoes a “break.” The portion of wire below the break tends to be perceived as having the opposite tilt to that above it. The curvy portion further below is required in order to bring the wire back into the down-and-leftward canonical orientation for wire in these circuits. Another variety of NOT gate is shown in Figure 4b, this one relying on an ambiguous prism-like shape to correct the circuit orientation, or handedness. A third type is inside the circuit shown in Figure 6.

    [Figure: Escher circuit NOT, AND, and OR gates.]

    Disjunction and conjunction: Escher circuits allow ORs and ANDs as shown in Figure 5. The visual OR gate in Figure 5a is designed with transparency cues so that the tilted-toward-you, or 1, interpretation is favored, and tends to be overridden only when both inputs are 0s. A similar idea works for an AND gate, but with a distinct kind of transparency cue. That is, the OR and AND gates are designed so that, without inputs, 1 and 0 output interpretations are favored, respectively.
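    To make the gate designs concrete, here is a minimal sketch in Python of the principle just described – each gate as a default interpretation plus an override condition. This is only my illustration of the logic, not the perceptual circuits themselves, and the function names are mine:

        def visual_not(a):
            # The "break" in the wire tends to invert the perceived tilt.
            return 1 - a

        def visual_or(a, b):
            # Transparency cues favor the tilted-toward (1) reading;
            # it is overridden only when both inputs are 0.
            return 0 if (a, b) == (0, 0) else 1

        def visual_and(a, b):
            # A different transparency cue favors the 0 reading;
            # it is overridden only when both inputs are 1.
            return 1 if (a, b) == (1, 1) else 0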

    These gates – NOT, AND, and OR – are sufficiently powerful that any digital circuit can, in principle, be built from them. (In fact, {NOT, AND} and {NOT, OR} are each universal.) In the circuit shown in Figure 7b are two NAND gates (relying on similar transparency cue tricks as for OR and AND), and a NAND gate is, by itself, universal. Figure 6 shows an example larger circuit, an exclusive-OR (XOR).
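    In code, that universality claim is easy to verify. Continuing the sketch above (again, illustrative Python only – whether Figure 6 uses exactly this XOR arrangement, the principle is the same):

        def visual_xor(a, b):
            # One standard decomposition of XOR into NOT, AND, and OR.
            return visual_or(visual_and(a, visual_not(b)),
                             visual_and(visual_not(a), b))

        def nand(a, b):
            return 1 - (a & b)

        # NAND alone is universal: NOT, AND, and OR all reduce to it.
        def not_from_nand(a):    return nand(a, a)
        def and_from_nand(a, b): return nand(nand(a, b), nand(a, b))
        def or_from_nand(a, b):  return nand(nand(a, a), nand(b, b))

        # Sanity check over all input combinations:
        for a in (0, 1):
            for b in (0, 1):
                assert visual_xor(a, b) == (a ^ b)
                assert and_from_nand(a, b) == (a & b)
                assert or_from_nand(a, b) == (a | b)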

    [Figure 6: An exclusive-OR (XOR) built as an Escher circuit.]

    Most of the interesting computations possible with digital circuits require feedback, and Figure 7 gives two examples, including a simple variety of flip-flop for memory storage.
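    To see what feedback buys, here is a sketch of a flip-flop in ordinary Python – a cross-coupled NOR latch, one standard design; the circuits in Figure 7 may differ in detail. A brief pulse on the “set” input flips the stored bit, and the bit persists after the pulse is removed:

        def nor(a, b):
            return 1 - (a | b)

        def latch_step(s, r, q, q_bar):
            # One pass through the feedback loop: each output is fed
            # back as an input to the other gate.
            return nor(r, q_bar), nor(s, q)

        q, q_bar = 0, 1                        # stored bit starts at 0
        for s, r in [(1, 0), (0, 0), (0, 0)]:  # brief "set" pulse, then nothing
            for _ in range(3):                 # iterate until the loop settles
                q, q_bar = latch_step(s, r, q, q_bar)
            print(s, r, "->", q)               # q flips to 1 and stays there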

    [Figure 7: Escher circuits with feedback, including a simple flip-flop for memory storage.]

    What’s the Point?

    Why do any of this? I can imagine a variety of possible long-term benefits (some of them quite fantastic).

    Enhanced computation: One general potential payoff concerns the possibility that some programs could be run more quickly on an “eye computer” than on an electronic computer. These would be programs that critically rely on the visual system’s specialized “GPU” (graphics processing unit), something unparalleled by computational vision algorithms. This is analogous to the original hopes for DNA computation.

    Computation that interacts with the brain: DNA computation did not end up useful for carrying out computations faster than electronic circuits. Instead, it was realized that the advantage of molecular computation was that it allowed direct communication and interaction with the cell’s biology. Analogously, whether or not eye computation can ever be employed to carry out computations more efficiently than an electronic computer, the benefit may be that visual circuits can directly interact with the neural machinery – because the neural machinery is the computer here.

    State-dependent perceptions in static stimuli: One of the directions of interaction can be from brain to computation, where different observers – having different brain states – may react differently to a given visual circuit. For example, one can imagine a circuit component whose perceptual resolution is modulated by, say, the observer’s thirst (such stimuli actually exist, something from research of mine in 2001). The visual circuit would be designed to communicate (via its perceptual resolution) one thing to thirsty observers and something else to non-thirsty observers.

    Diagnostic Rorschach-like tests: One of the hopes of molecular computation is to have molecular computers that can interact with cells, and whose output will depend upon the state of the cell. In this way, molecular computation hopes to be a diagnostic tool. Similarly, eye circuits may potentially have value as a diagnostic tool for neurology and psychiatry: the patient reports the perceptual output to the doctor, and this output is diagnostic about which condition the patient likely has. Like a Rorschach inkblot test, eye computation relies upon ambiguity; but unlike Rorschach tests, visual circuits carry out specific algorithms, and can be explicitly designed.

    Treatment: The other direction in the brain-computation interaction is from computation to brain. For molecular computing, the idea would be that the molecular computer can selectively affect the cellular environment. For eye computation, the goal would be to develop circuits that can leave a particular lasting impact on the visual system and brain. Just as flip-flop circuits make it possible to create and control a long-lasting state change (used for memory storage), visual circuits can potentially induce perceptual states in such a way that, even once the inducing input stimulus is removed, the perceptual state remains “frozen in”. There may, then, someday be routes by which visual computation could not only diagnose psychiatric and neurological disorders, but also be involved in treatment.

    Programmable perceptions: Visual computation could provide powerful tools for manipulating an observer’s perception, even while much or all of the visual stimulus remains identical. For example, a three-input visual circuit can have up to eight different perceptual states, and which perceptual state the observer is in can be controlled by modulating the three unambiguous input visual stimuli. It is also possible to program for arbitrary kinds of perceptual ambiguity: for any visual circuit without inputs, one’s perceptions will tend to settle only on the logically satisfiable solutions to the circuit, and so one can purposely engineer which of multiple perceptions are possible for a viewer.
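    That last claim – that an input-free circuit settles only on its logically satisfiable states – can be checked by brute force for any small circuit. A toy illustration in Python, with a hypothetical three-wire circuit:

        from itertools import product

        # Hypothetical input-free circuit over wires a, b, c, with the
        # gate constraints c = a AND b, and a = NOT c.
        def consistent(a, b, c):
            return c == (a & b) and a == (1 - c)

        stable = [s for s in product((0, 1), repeat=3) if consistent(*s)]
        print(stable)   # [(1, 0, 0)]: the only self-consistent state, and so
                        # the only percept the circuit should permit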

    Enhancement of human logical capabilities: Despite the presence of computers, people rely more and more on visual displays aimed at aiding our thinking. For example, digital circuit notation is used more than ever among engineers, and visual notation for mathematicians and scientists is likely always to be with us. Visual computation makes new inroads into visual displays, and radically extends their horizons, so that the visual modality is not just a medium for the interaction of vision and cognition but lets loose the computational dynamics of the visual system itself. For example, rather than thinking his or her way through traditional digital circuit notation, an engineer’s visual system could be harnessed by visual circuits, allowing him or her to much more quickly see – literally see – the computational steps.

    Manipulation of perceptual memory: Manipulation of computer memory (in RAM) relies upon digital circuits like flip-flops, where a brief signal to one of the inputs leads to a state change (a bit flip) at one of the outputs, and this new state remains even after the brief input signal is removed. Such digital memory circuits can be implemented via visual circuits as well, allowing a short presentation of an input stimulus to cause a long-term shift in the perceptual output. Circuits like this rely upon feedback, which in the case of visual computation amounts to one’s own perceptual state being fed back to earlier parts of the circuit, affecting the perceptual state there. I foresee memory circuits such as these eventually being crucial building blocks for visual circuits, helping to maintain greater circuit perceptual stability. In the long run one hope would be that visual circuits for bit storage could be utilized as an aid to working memory, allowing us to artificially enhance our working memory limits by tapping into visual working memory.

    Mnemonic device: One common technique for enhancing recall is to create imagery connected to the list of terms to be recalled. The imagery is more easily recalled than the list all by itself, and the imagery then helps one recall each of the terms. Visual computation allows something like this. The list of terms to be recalled is now the input to a visual circuit, and the visual circuit is designed so that there is a one-to-one correspondence between the possible inputs and the resultant visual circuit perceptual state. The list of terms now computationally induces a particular imagery, and the person just needs to remember the look of the induced imagery. To recall the list, the visual circuit is presented without inputs, the observer recalls the imagery, the imagery helps induce the visual circuit into the earlier perceptuo-computational state, and this state leads to perceptual states at the inputs (now empty), which can be read off one’s perception to attain the original list.
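    In code, the requirement on such a mnemonic circuit is simply injectivity: distinct input lists must induce distinct overall states, and the inputs must be recoverable from the state. A minimal sketch, with a hypothetical two-input circuit of my own invention:

        from itertools import product

        def circuit_state(a, b):
            # Full internal state of a hypothetical circuit: two NOT
            # gates feeding an OR gate (i.e., a NAND of the inputs).
            w1 = 1 - a
            w2 = 1 - b
            w3 = w1 | w2
            return (w1, w2, w3)

        states = {inp: circuit_state(*inp) for inp in product((0, 1), repeat=2)}
        assert len(set(states.values())) == len(states)   # one-to-one

        def recall_inputs(state):
            # Reading the original "list" back off the perceived state.
            w1, w2, _ = state
            return (1 - w1, 1 - w2)

        assert all(recall_inputs(s) == inp for inp, s in states.items())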

    Just the Beginning

    Although the Escher visual circuits I just described are a great improvement over my many earlier attempts (see Figure 2), there are serious technical difficulties to overcome.

    First, the larger circuits currently appear to require – at least without training – “perceptually walking through the circuit” from the inputs downward toward the output. One does not yet immediately and holistically perceive the output. Second, the visual logic gates do not always faithfully transmit the appropriate signal at the output. For example, although AND gates tend to elicit perceptions that are AND-like, it is a tendency only, not a sure-fire physical result as in real digital circuits. Third, even if a logic gate works well, in the sense that it unambiguously and robustly cues the perception at the output, our perception can be somewhat volatile, capable of sudden Escher-like flips to the alternate state. The result is that it can be difficult to perceive one’s way through these visual circuits. And, fourth, building larger and more functionally complex circuits will require smaller and/or more specialized visual circuit components in order to fit the circuit on an image (analogous to the evolution of electronic circuits).

    A major problem to overcome is how to miniaturize and specialize components while still ensuring that the visual system reacts to the circuit as intended.

    The current visual circuit design is only the first step, demonstrating the basic concept. It should be thought of as analogous to the early research stages in DNA computation, an idea that was “miles away” from the ideal promise at the inception…and still is.

    Comments

    Aitch
    Wow, Mark...Very interesting
    I'll have to come back to this later, when I'm less busy...but Escher was one of my favourite thought provokers, and you've definitely got the touch, here
    Good piece

    Aitch
    adaptivecomplexity
    I like the idea of computation that interacts with the physiological machinery. There is a lot of potential in getting biology to do new cool things.
    If this works well, you could harness this visual computing power in a way analogous to SETI@home or Folding@home, by having millions of users view these circuit diagrams. The advantage there is that you could use probability in your favor to overcome the problem of individual logic gates not being interpreted perfectly.
    Mike
    jtwitten
    Wasn't this the plot of Snow Crash?
    I think the idea of re-defining a “problem” to be immediately “computed” by the visual cortex is a fantastic concept (and routinely used today). However I see deep flaws in this article.

    The first problem is fundamental – the very concept of a black-line-on-paper “3D” line drawing is an entirely learned symbol system. It is not a biological given.

    I can force myself, with substantial effort, to see it as pointing "in" but for me, they point "out" unconditionally. The shaded box at the top (fig. 3) is supposed to lead vision to one sense or the other, I guess, but that utterly fails for me. There's nothing wrong with my vision or my seeing. I'm a sample of one here, but the implication that this "in" vs. "out" business is universal or consistent is clearly not true. It's learned, trained, cultural. Why not just learn/train in a system with inherent rigor?

    Second, what's with the binary logic? What a bizarre mis-use of the visual cortex! It's like working out a way for a horse to ride a bicycle. If we're solving combinatorial logic problems, it will take absurdly more energy to re-represent the circuit in a visual system than to simply execute it in some logic apparatus. Isn't digital logic cheaper than anything? If the goal is to solve problems of worthy weight, e.g. not combinatorial logic, well, show us the stuff.

    In the 'what's the point?' section, every single example of the word "computation" explicitly or implicitly refers to the sort of work done by stored-program binary logic computers, not "computation" in any broad sense, as it would have to be to include the biological. The comparison of DNA to stored-program computers is spurious. And Rorschach tests don't simply rely on "ambiguity" – they are hugely dependent on learned cultural and social experiences, and experts subjectively judge the results. Sheesh!

    Here's an example of visual cortex computing for you: Take a handful of pennies and dump them on the table. How many are there? Arranged chaotically: you must tediously count them. Arrange them symmetrically: number-pattern either jumps out immediately or you count sub-patterns (4 x 4, rows x columns, etc). There's still "culture" in there...

    I have to reiterate here that the point of continuously re-drawing electronic schematics (my canonical trivial example from the Microcontrollers class) is that the *form* of them transforms understanding, even if all of the drawings in a series are for the exact same circuit. There are ways to re-draw something that make it abundantly clear – to you, or someone else trained to parse schematics – that entirely rely on your visual cortex doing precisely this sort of "work".

    Mark Changizi

    Thanks, Tom, for your comments.  First, the current "visual circuits" don't work very well, something I explicitly mention in the piece; they are just the first step.  Second, although visualization is used all over the place for notation and thinking-aids -- and in eliciting the perception our brain carries out computations -- the visual circuits are something fundamentally different, modulating visual perception in a manner that makes them genuine cases of computation implementing the visual software. Third, the point is not for the visual system to go head-to-head against the computer; as I mention, by being able to implement specific computations inside the head, the computations can interact with the brain, both taking inputs from the internal variables, and outputting directly to the brain.
    logicman
    Mark: I really like this approach to visual computing.  I think you are, as people say, 'on to something'.

    Your figure 3 presents me, personally, with a problem.


    I see both 'boxed' wire frames as having the same orientation in space, and have to make a mental effort to perceive that the orientations as a whole are different.  I mentally obscure the bottom part to focus on the top, otherwise the bottom folds predominate for me.

    Another 'trick' I use when looking at images like these is to imagine my hand following the lines.  In effect, I consciously haptualise the shapes.

    Have you considered using a robotic device to store haptual representations of shapes?  I think that a visual computation method informed by haptual data could lead to better disambiguation of visual inputs.

    Just thinking aloud, because thinking is allowed.
    Mark Changizi

    Placing them directly side by side doesn't help, of course. ...although that's one of the problems to overcome: as circuits get more complex, neighboring wires which must behave independently may have to be placed close to one another. One would like better "insulation" than I have now, although it's MUCH better than some of my early attempts, where the state would spread entirely across the page!

    Also, any real attempt at an implementation could fine tune the stimuli, to ensure that they are balanced (i.e., equally likely to be a 1 as a 0).

    On haptics, I'd have to think about it!

    Your thinking's topped, because your thinking stopped. :)
    logicman
    Your thinking's topped, because your thinking stopped. :)
    Ouch!  That really hertz!
    I don't see the utility here. Hard upper bounds on this linear path-following are easily calculated from mechanical properties of the eye (field sweep speed, lines of resolution, etc.) and would likely never exceed that of a c. 1973 calculator. By contrast, topologists gain novel mathematical insight through internal visualization in ways that take many billions of binary computations to replicate.

    Mark Changizi

    It's not obvious what the upper limits are at this point. My point is not to suggest that those specific Escher circuits are the only way forward. They are only meant to illustrate the kind of thing I'm getting at. It may be possible to have much, much better stimuli, ones that don't require eye movements through the stimulus. For example, our stereoscopic ability happens "all at once" over the entire visual field, without needing eye movements. Could we somehow harness that for arbitrary computation? I don't know. But hopefully I'm getting more people thinking about how to use visual stimuli to harness our visual system in new ways.
    logicman
    our stereoscopic ability happens "all at once" over the entire visual field, without needing eye movements.
    Conscious eye movements.  Saccades: would we be blind without them? Muscle freezing experiments suggest - yes.

    My own occasional thoughts on this: a mental haptual model of a 3D world onto which each eye maps its input.  The 3-way agreement is re-modelled as a 3D mental visual image.

    I suppose I'd like a clearer argument that there exists something to harness. We have some power-efficient analogue pattern-matching machinery, but no indication that it will be any better for arbitrary computation than a GPU for single-threaded scalar math. All the impressive bits of cognition are happening elsewhere, to my mind.

    Rich Shull
    Perhaps humans of some type have been doing this for years on end now? I grew up learning autism, a different kind of human thought process that has never been in a book before. The essence of autism is that our optic and brain-generated vision are interchanged all the time. It seems our optic nerve is a "switch" between the actual optic vision we all have and our internal thoughts, something like the daydreams we think with by default. The internal thoughts that happen during the lack of eye contact are Einstein material as well as the mundane.

    These internal picture thoughts, again, have never been in a book before and are the building blocks of the mind. We all have them and use them but never realize it, as they are also known as normal thoughts. I grew up starting life below 1-2-3 and the ABCs, learning those internal thoughts by happenstance. If they are the building blocks of the mind, man's mind is not very advanced at all; it is nothing more than one big talking photo album. Honestly, Einstein was not all that smart; it is just that his ideas were timed "just right" and his different point of view was thankfully seen as smart.

    If my road-map-of-the-mind theory of my figured-out autism is ever published and followed, it will show that every human has the Einstein ability in them, and it will also expose the mind as a really simple, shallow picture-comparison machine and nothing more. Tasks like talking, movement and motion, and what we term thinking are all picture thoughts. If science could forget about the brain waves and literally watch the picture thoughts form, compile, and translate to words, speech, and action, the mind would be figured out – and again, it is not all that 'pretty'. Just imagine a computer monitor hooked to our brains, watching hundreds upon hundreds of thought pictures compile, and then taking a piece of each one of those thoughts to form a new thought or to make a sentence. When you're STUCK you see a mind's-eye picture of "Uncle Joe": you can see him but can't place the name (your optic vision is off). If I am right (if our autism anthropology is right), the Uncle Joe thought is just one of the hundreds upon hundreds of thought pictures that play below the surface of the mind all the time, and we never know it UNLESS we are stuck and forced to say, "I can picture him but can't place the name."

    Note that the Uncle Joe picture you do see in your mind's eye when you're stuck is just a sample – in this case an exposé – of what your mind is stuck on. If our autism theory is right and holds water, there are hundreds of Uncle Joe thoughts going on all the time, and they make our mind work. Yes, the mind is that primitive and that simple. Hook those monitors to our brain and see the one-by-one thoughts compile and form; see that and see what autism has figured out; see that and witness the next 1000 chapters in psychology for yourself.

    Sorry to be so bold and big and a know-it-all, but I'm just the messenger, not the designer, of this big talking photo album we call the mind. If you really want to be shocked, witness how the picture thoughts we don't know we use "short-cut themselves" – when this happens we get emotions, pretty powerful stuff. Rich Shull
    Mark Changizi

    Thanks, Rich, for your comment. I do tend to agree that we humans are much less smart than we take ourselves to be, relying on very ape-like, fairly-non-plastic mechanisms. -Mark
    Rich Shull
    Funny you mention animals and humans. Autistic author Temple Grandin was noted for her work in the cattle industry making humane slaughterhouses. Of course she understood her cattle as well. Personally (lucky or stupid), I have walked right up to the meanest dogs, even pit bulls, and they melt like butter; we seem to be connected. Some of my "luck" has been the keen senses and decibel-meter hearing associated with autism. I am able to stop and wait if I make a high-pitched noise that scares the dog (or sets it off).

    Years ago after school, on one of my bike rides – I lived rural and often rode 30 miles a night – I ran across a deer stuck in a barbed-wire fence. I was able to gently walk up and greet her and then pull the wire from her leg and side. She moaned a bit and hobbled away. Like me and other autistic people she has a big pain tolerance and probably was over the minor barbs in her side in a few minutes. Old autism was noted for our pain tolerance, from the Boy from Aleverz, a 1600s character, a feral child caught and "tested", to Alan Turing (father of the computer) running without walls; we miss the big stuff. (So does new autism; they just miss it.)

    SHOCKINGLY

    I was once seated beside a parrot while visiting a friend's home. He stepped out to go next door, and I am picture thinking (an autism thing) and the parrot starts to talk, and WE ARE ON THE SAME INTERNAL thought-picture page. I could follow the internal thoughts the bird was using to form his words and the "SAME" jump-of-faith internal thought process (not in a book yet) that converts thoughts to words. I predict every thought the bird speaks is original in the moment to him, but he simply re-thinks the same one over and over again. In other words, he doesn't remember what he just said. I know that is putting words in his mouth, but it is just a thought.

    This post is just wild, and I think it is a grand effort to really explain the mind in terms of the nitty-gritty. It seems to be the "exact but opposite" same thing we do autistically to convert our thoughts to yours. Indeed our eyes seem to be just tools, and our brains convert, compare, and compromise the information to make us 'function'. rich shull
    Mark Changizi

    BTW, Rich, here's a piece I wrote (kind of) on autism:
    http://www.scientificblogging.com/mark_changizi/power_brain_through_wind...

    Best, Mark
    Hi, I just read this great post and immediately ordered The Vision Revolution via Amazon!

    This idea about the awesome power of visual computation reminded me of the great mathematician Michael Atiyah's distinction between algebraic and geometric thinking. I quote:

    "It is I think no accident that geometry, in the hands of the Greeks, was the first branch of mathematics to reach maturity. The fundamental reason is that geometry is the least abstract form of mathematics: this means that it has direct applicability to everyday life and also that it can be understood with less intellectual effort. By contrast algebra is the essence of abstraction, involving a dictionary of symbolism which has to be mastered by great effort...

    "If geometry is not just the study of physical space but of any abstract kind of space does this not make geometry coincide with the whole of mathematics? If I can always think of n real variables as giving a point in n-space what distinguishes geometry from algebra or analysis?

    "To get to grips with this question we have to appreciate that mathematics is a human activity and that it reflects the nature of human understanding. Now the commonest way of indicating that you have understood an explanation is to say "I see". This indicates the enormous power of vision in mental processes, the way in which the brain can analyse and sift what the eye sees. Of course, the eye can sometimes deceive and there are optical illusions for the unwary but the ability of the brain to decode two- and three-dimensional patterns is quite remarkable.

    "Sight is not however identical with thought. We have trains of thought which take place in sequential form, as when we check an argument step by step. Such logical or sequential thought is associated more with time than with space and can be carried out literally in the dark. It is processes of this kind which can be formalised in symbolic form and ultimately put on a computer.

    "Broadly speaking I want to suggest that geometry is that part of mathematics in which visual thought is dominant whereas algebra is that part in which sequential thought is dominant. This dichotomy is perhaps better conveyed by the words "insight" versus "rigour" and both play an essential role in real mathematical problems."

    Burhan

    Mark Changizi
    Thanks, fascinating, and I hope you enjoy! -Mark