Leaving the singing, dancing munchkins aside, I could not help but think of the Yellow Brick Road as a great metaphor for the accumulation of knowledge in a self-documenting system. A self-documenting system is a system that organizes its information exponentially along a timeline: the system has greater organization at T+1 than at T, and greater organization at T than at T-1. It is reasonable to ask whether there is any organization (or information) at T = 0. The problem with Zero (0) is that in a closed system there is no outside information. Either the system does not exist at T = 0, or the information is organized in such a way that it begins to unravel once T = 0 is passed, in which case the exponentially increasing organization (information) along the timeline from T = 0 to Infinity must represent the decay of either Negative (Inverse?) Information or Entropy.

But what if there is no Infinity? What if the timeline terminates somewhere down the road, where the organization of information becomes nearly infinite but not quite? At that point, how much Negative (Inverse) Information is there as opposed to Positive (Converse) Information? We would want the timeline to terminate at a time T such that the absolute value of (Converse T) equals 0. In other words, if you represented the growth of organization (information) in the system from time T = 0 to time T = Terminus as an outwardly expanding spiral surface, it would stop at precisely the point where a reverse, inwardly expanding spiral surface (equal in area to the first) begins.

In other words, the closed system's Information (organization) and Negative Information (entropy) each occupy one half of the surface area demarcated by an Euler Spiral on the surface of a Prolate Spheroid. A football surface is a closed system with a beginning point and an end point for both directional surfaces.

The Internet is a closed system, although it is a Large System (as I define the term: a system you cannot measure completely given current measuring tools and methods). I know intuitively that the Internet is a closed system because it can only host as much information as our combined worldwide computing technology can hold, and that is finite, though very, very robust.

It should be possible, at some point, to measure the information on the Internet in some meaningful way. I don't mean "estimate how much information the Internet holds", but rather to *measure it* via some algorithm or tool we have not yet developed. That algorithm or tool would be the ultimate search engine, always able to return an answer to any query about whether specific information exists (although the time required to satisfy such a query might be very large, if not infinite).

A measurable Internet becomes a *small system* (again, by my definition of *large systems*). A small system's knowledge should be mappable in terms of *atomic pieces of information*, and the data aggregates we derive from those *atomic pieces of information* would grow exponentially (as in our outwardly expanding spiral curves). Data aggregates are what I call data superstructures, and I have posited that they can grow infinitely in both directions of complexity.

These two concepts contradict each other, so either my models have failed or something is preserving the finiteness of the closed system. By thinking about the timeline which maps the growth of complexity (organization) in a system as an Euler Spiral-guided surface, it occurred to me that such a surface could be terminated in a closed system if it ended at a point where a reverse form of information began; in other words, Entropy and Complexity must be equal over the timeline, but in reverse symmetry to each other.

We can easily demonstrate this by constructing an algebra that consists of all the pairs of positive integers on a number line segment that, when added together, equal some integer greater than the largest integer on the segment. The integers 1 to 99 almost form such a segment for the sum 100, except for the irksome number 50, whose partner is itself: either 50 does not exist in this algebra or it exists twice. It's my algebra, so I'll go ahead and say that it must exist twice within the data set.

The only rule in the algebra is that every pair of numbers included in the set must equal 100 when added together. Obviously, if you instead define an algebra whose rule requires the two numbers to sum to 101 (over the integers 1 to 100), there is no duplicate-value pair, because (50, 51) and (51, 50) straddle the center of the forward and reverse progressions. But in both the Even and Odd algebras you do have symmetrical or opposing pairs, such as [(1, 99) and (99, 1)] and [(1, 100) and (100, 1)].
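The two pair algebras are small enough to enumerate directly. Here is a minimal sketch (the function name `pair_algebra` is mine, purely for illustration) confirming the duplicate at 50 in the Even algebra, its absence in the Odd one, and the mirror symmetry of both:

```python
def pair_algebra(largest, target):
    """All ordered pairs (a, b) with 1 <= a, b <= largest and a + b == target."""
    return [(a, target - a)
            for a in range(1, largest + 1)
            if 1 <= target - a <= largest]

# Even algebra: integers 1 to 99, every pair sums to 100.
even = pair_algebra(99, 100)
# 50 must pair with itself -- the "irksome" duplicate.
assert (50, 50) in even

# Odd algebra: integers 1 to 100, every pair sums to 101.
odd = pair_algebra(100, 101)
# No self-pair exists, since 101 is odd.
assert not any(a == b for a, b in odd)

# Every pair has a mirror: perfect reverse symmetry in both algebras.
assert all((b, a) in even for a, b in even)
assert all((b, a) in odd for a, b in odd)
```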

If the Internet matches this model, then there must be something we are missing. There must be Inverse or Negative Information. The more information we can measure across the Internet, the less of this UnInformation (or Non-Information) there is left to measure. Maybe that is best labeled Ignorance, but Ignorance, if it exists, must comprise a quantifiable (measurable) aspect of the Internet.

This is all rather oddly disturbing to me because it implies a paradox of information in a closed system. As our knowledge of the closed system grows, something else must decrease; otherwise our measurements are meaningless. But semantically, Ignorance is the opposite of Knowledge, not the opposite of Information. In other words, our Ignorance should be a quantifiable measurement (we do not know 51% of what is in the closed system at time T) just as our Knowledge is a quantifiable measurement (we DO know 49% of what is in the closed system at time T). So the reverse of Information is not Ignorance but, perhaps, Mystery.

If we can measure Mystery, then we can estimate the size of a closed system without being able to measure Information. One way to do this would be to estimate Capacity. But Capacity is capricious because there is no rule that says every atomic piece of information has the same dimensions. With no frame of reference for the size of atomic pieces of information, our measurements of Capacity become meaningless. That's all rather annoying.

Add to this the fact that Complexity is measured in different layers. Hence, even if you can measure all the atomic information in a closed system (thus reducing the Highest Complex Mystery to a quantified value of 0) you would still have to measure each layer of Complexity all the way up until the Simplest Complex Mystery has a quantified value of 0 (there are no more unknown or unknowable atomic pieces of mystery).

If all the atomic pieces of information and mystery at a given level of complexity/simplicity comprise an algebra, then there must be a superset of these algebras that describes all the layers of symmetrical complexity/simplicity. For both Complexity and Simplicity, then, the number of algebras in the superset represents the limits of Information/Mystery.

And yet according to my concept of Data Superstructures there is no limit to either Complexity or Simplicity; hence, we are faced with another paradox. And I find that annoying because I know very well that layers of complexity evolve in all information systems; furthermore, I know that if you map the growth of knowledge in reverse you see the layers of simplicity evolving (in reverse) in all information systems. It's exactly like running a movie in reverse.

Thus the Paradox of the Football Surface is that a Large System is essentially infinite even if it is a Closed (Finite) System. This is a bit like Zeno's Paradox, in that you can continue to divide each unit of distance in half in an infinite progression, and yet if you move an object across a given distance it eventually crosses the "last" unit of measurement.
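The halving argument can be sketched in a few lines. The smallest measurable unit here (2**-40) is an arbitrary assumption of mine, standing in for the "last" unit of measurement; with any such finite unit, the traversal takes finitely many steps even though the halvings could in principle go on forever:

```python
distance = 1.0
total, unit, steps = 0.0, distance / 2, 0

# My assumed "last" unit of measurement; the halving stops once units
# become smaller than this.
smallest_unit = 2.0 ** -40

# Sum successive halvings of the distance: 1/2 + 1/4 + 1/8 + ...
while unit >= smallest_unit:
    total += unit
    unit /= 2
    steps += 1

# The partial sums approach the full distance but never exceed it,
# and the count of steps is finite.
print(steps)             # 40
print(total < distance)  # True
```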

Information in a closed system cannot be infinite, but in a large enough system we may be able to treat it as though it is. In that case our attempts to measure the system will fail unless we resort to approximations that only map part of the system's information. By pairing up as many of these approximations as we can, we gradually paint a picture of what the system looks like, thus improving our understanding of it, but we'll never get a complete picture as long as the system remains large. And once you can measure the entire system, your approximations become irrelevant (so I suppose I just described something like a simple Calculus).
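One concrete (if loose) analogy for pairing partial measurements is mark-and-recapture estimation, which is my own illustration rather than anything from the argument above: two independent partial samples of a system too large to measure directly can, via their overlap, yield an estimate of the whole.

```python
import random

# An illustrative sketch, not a definitive method: pair two partial
# "approximations" of a large system and use their overlap to estimate
# the whole (the Lincoln-Petersen mark-recapture estimator).
random.seed(42)

true_size = 1_000_000            # the "large system" (unknown in practice)
universe = range(true_size)

sample_a = set(random.sample(universe, 50_000))  # one partial measurement
sample_b = set(random.sample(universe, 50_000))  # an independent second one
overlap = len(sample_a & sample_b)

# If both samples are unbiased: true_size ~= |A| * |B| / |A intersect B|.
estimate = len(sample_a) * len(sample_b) / overlap
```

Each additional pair of samples tightens the picture, but as the essay notes, once you can enumerate the entire system directly, the estimator becomes irrelevant.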

Why does this bother me? Because every day I see people struggling to quantify the information on the Internet. Whole industries have grown up around these algorithms and engines, and yet I know intuitively that they cannot possibly be accurate. But hundreds of millions of dollars are spent on these quantifications every year.

And I suppose I have just described something like Economics.

It would just be nice if I could figure out what the Mystery is while I'm collecting Information, although I suspect we might have to invent whole new sciences to make sense of any reasonable quantification of what we do not know.