And never ask a mathematician to define "system", because he (or she) may attach a name to it and respond with, "Would that be [so-and-so]'s system, or the system of [something-I-never-heard-of]?" That just underscores the problem: when it comes to systems, we can't agree on what we're talking about except in the vaguest of terms. Without a proper definition of "system" you're really lost if you want to discuss LARGE "systems"; the best you can say is that a large system is larger than a not-large one. I can almost see a class full of "systems" students dutifully writing down the definition "a LARGE system is any system sufficiently large not to be small or normal, such that the definitions for 'small' and 'normal' systems are insufficient to describe the LARGE system" (append some obscure mathematician's or computer scientist's name here, and add "law/axiom/theorem" as required by the level of the course).
Woe betide the student who raises his hand from the back of the room and asks for the definitions for "small" and "normal" systems. Hopefully he will have passed through the Recursion course by that point.
I decided to define a system as "something which possesses scope and boundaries". To me that is pretty simple, almost elegant (I promise, I will try NOT to utilize all the mathy/computer sciency buzzwords; I doubt I know them all anyway). I feel my definition is elegant, though, because it covers just about everything. Scope = identifiable properties that help us recognize things as components of a system; Boundaries = limits which determine whether things are part of the system or not.
My feeling is that we should treat systems as algebras. Let each system define what its properties and limits are. It can bring in all the fancy mathematical formulae, the sociological nomenclature, the astronomical star index of choice - every system has scope. And there is a limit to every system, even if we cannot see those boundaries. After all, who would argue that the universe is NOT a system? (Okay, I probably should not ask THAT question, but I say the universe is a system.) But we still ask if there might not be something beyond the universe, which implies that we can imagine boundaries for the universe.
It's the Boundaries that lead me to see a difference between a "small" or "normal" system and a LARGE system. What makes a system LARGE is our inability to observe everything within the system. The boundaries of the LARGE system extend beyond our metrics or metrical tools in some ways.
This is a very flexible definition. For example, some decades ago I remember being totally amazed at holding a $15,000 disk pack in my hands. It had a capacity of about 30 megabytes. (My numbers may be a little off, but that should be okay.)
Today I can email a 30-megabyte file that takes up so little space on my hard drive that it is hardly worth computing a cost for it; certainly it would come to less than one cent. In the old days a stack of magnetizable platters was a pretty big "system"; today I hold a larger system in my smart phone. So system size is relative to the methods and metrics we use to observe or measure the system. A large system may become very small given a change in the context through which the system is studied. I'm good with that. I hope other people are, too.
But what makes a system LARGE is not just that its boundaries extend beyond our ability to measure it. That explanation is too simple: as soon as you settle for it, someone will devise a formula for combining a montage of measurements that we can use to evaluate the entire system. To resolve that problem, I decided to tell my readers that "a large system has unexpectedly large scope and boundaries". In other words, we cannot measure the large system (yet); to measure it we have to create new tools. The system won't become smaller as we learn how to measure it; it will simply become more measurable, more quantifiable. In order to carry my discussion forward, however, I also postulated two assertions I cannot prove. I can't disprove them either, so I feel safe enough working with them for now.
My first assertion is that "large systems are more complex than small systems". Now, I can think of some pretty simple large systems. In my article I used 1,000,000 candy bars placed end to end on a highway. You know what the ingredients of the candy bars are (they are all the same kind) and you know where they lead (as far along the highway as they can extend).
Beyond knowing that there are a million candy bars on the road, you really cannot measure them. So it is a large system that is nevertheless simplistic. I am not sure that is a good example, but I think it works. However, I offered another example to illustrate complexity: a handful of sand. A handful of sand is a pretty small system until you start looking at it on a microscopic, even a sub-atomic, level. Then you introduce complexities that are not apparent to anyone who just sees a handful of sand.
This seeming paradox led me to my next unproven assertion: "large systems are relative to the states and properties of their contexts". Let us suppose there is an intelligent non-human species somewhere in the universe (it could even be here on Earth) that understands the universe to an order of magnitude greater than mankind does today. Such a body of knowledge and comprehension would encompass things that today we find to be too complex and large for our own understanding. So, given these two perspectives, there are things we know about which seem like large systems but which to another intelligence may seem like normal systems. And we can change places with that other intelligence: we may understand the universe better than, say, chimpanzees.
I would say that means the context in which a system is evaluated is commutative. It should also follow that, regardless of context, a system's definition points are associative: you can group them any way you wish and they remain components of the same definition. That should mean a system is immutable, because it is the context that changes, not the system itself. And therefore the context itself cannot be part of the system (though I haven't really considered the implications of that: can a context be part of the system it is used to evaluate?).
ON EDIT: Let me explain my idea of commutativity better. Suppose you define a system called MAXIMUM = (a,b,c). If your definition can be modified such that MAXIMUM less a = (b,c) behaves very similarly to MAXIMUM = (a,b,c), then your definition is commutative. A commutative definition would allow us to work with incomplete system definitions in order to learn more about the complete systems.
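The ON EDIT idea can be sketched in a few lines of code. This is a toy illustration only; the behavior() function, the component values, and the closeness threshold are all my own inventions, not part of the definition itself:

```python
# Toy sketch of a "commutative definition": a system is a set of named
# components, and its observable behavior is some function of them.
def behavior(components):
    # A stand-in behavior metric: the mean of the component values.
    return sum(components.values()) / len(components)

MAXIMUM = {"a": 10.0, "b": 10.2, "c": 9.8}

# MAXIMUM less a = (b, c): drop one component from the definition.
reduced = {k: v for k, v in MAXIMUM.items() if k != "a"}

full = behavior(MAXIMUM)
partial = behavior(reduced)

# If the two behaviors stay close, the incomplete definition still tells
# us something useful about the complete system.
print(f"full={full:.2f} partial={partial:.2f} close={abs(full - partial) < 0.5}")
```

Here the reduced definition happens to behave identically (both means come out to 10.0), which is the friendliest possible case; a single component that dominated the behavior would break the "commutativity".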
It seems to me that Large Systems must also be unpredictable if only because we cannot measure them accurately. This unpredictability is simply due to our ignorance. A Large System might be closed and experience no random events, but in a context that views it as a Large System there is no way to know that there are no random events, especially given that the Large System may include localities that seem to behave in unexpected ways.
It might be that Percolation Theory offers a way to test whether a system is Large or Predictable. If you can percolate at will through a system, then it must not be large, since the consistent ability to percolate implies that you can reach any edge through a predictable pathway from any starting point. If there is at least one edge you cannot reach no matter where you start, then your knowledge of the system is incomplete; hence, it must be a Large System.
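One way to make that test concrete is to model a system as a grid of open and blocked cells and ask whether every open edge cell is reachable from every open cell. The grid model, the breadth-first search, and the looks_small() name are my own toy rendering of the idea, not standard Percolation Theory:

```python
from collections import deque

def reachable(grid, start):
    """Breadth-first search over open cells (True = open); returns every cell reachable from start."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def looks_small(grid):
    """Roughly the test above: from every open cell, can we reach every open edge cell?"""
    rows, cols = len(grid), len(grid[0])
    open_cells = [(r, c) for r in range(rows) for c in range(cols) if grid[r][c]]
    edges = {(r, c) for (r, c) in open_cells if r in (0, rows - 1) or c in (0, cols - 1)}
    return all(edges <= reachable(grid, cell) for cell in open_cells)

open_grid = [[True] * 3 for _ in range(3)]            # fully open: percolate at will
split_grid = [[True, False, True] for _ in range(3)]  # a wall hides one side from the other
print(looks_small(open_grid), looks_small(split_grid))  # → True False
```

In the split grid there are edges you can never reach from the left-hand cells, so by the argument above your knowledge of that system is incomplete.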
If that is correct, then I think you would find only limited application for Probability Theory in a Large System. You could define variables within certain localities, but your estimates could not be confirmed for the entire system. We might even find a local subsystem that behaves exactly like the entire system, but our incomplete knowledge of the system prevents us from knowing whether our calculations are correct. There is no way to confirm all predictions within a large system.
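A quick simulation shows why a local measurement cannot be confirmed for the whole. The population, its hidden locality, and every number here are fabricated purely to illustrate the point:

```python
import random

random.seed(0)

# A toy "Large System": most of it behaves one way, but it contains a
# hidden locality that our instruments never reach.
system = [random.gauss(0, 1) for _ in range(9000)] + \
         [random.gauss(5, 1) for _ in range(1000)]

locality = system[:2000]  # the only region our metrics can observe

local_mean = sum(locality) / len(locality)
true_mean = sum(system) / len(system)

# Nothing inside the locality warns us that its estimate disagrees
# with the system as a whole.
print(f"local estimate: {local_mean:.2f}, whole system: {true_mean:.2f}")
```

The local estimate comes out near 0 while the whole-system mean is near 0.5; from within the locality, the local estimate looks perfectly trustworthy.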
And so Probability Theory may be the tool we use to determine when a system (viewed from a given context) is no longer Large. When you can confirm all the predictions you know all the variables and can reach all the edges from any starting point within the system.
I remember my Computer Science professors discussing President Reagan's Strategic Defense Initiative in the 1980s. They said it was a really dangerous concept because our computer science at the time could not prove the correctness of the concurrency algorithms that the system would require to function safely (knocking 1,000 hostile missiles out of the sky without harming innocent objects like aircraft, friendly satellites and missiles, and air balloons). I don't know what our current state of knowledge in that area may be. I hope we don't have to find out the hard way.
Knowing that a system is Large in our present context may not seem immediately useful, but it may have applications in AI and the theory of learning algorithms; if your software can determine that it is studying a Large System, it may be able to prioritize its resources better (or see a logical trap before becoming stuck in one).
Anyway, that is today's thought.