    Mapping a Complex System With a Nested, Emergent Vector
    By Michael Martinez | August 26th 2013 02:43 AM | 6 comments
    I don't know enough math to know if this has been precisely defined but I know enough about my ignorance of math to know that if there is such a definition I probably won't understand it.  Mathematics fails to be a universal language in most respects because mathematicians can rarely articulate their concepts in layman's language that actually makes sense.  A universal language is only universal iff the common folk can grok it too.

    I have waited nigh on 50 years to say that publicly.  But that's not what I want to say with respect to my rambling about the definition.  My random thought for today is about what (for lack of a better name) I think of as a Complex Emergent Vector (of Sets) that defines a progressive system.  Let's call that Martinez Gobbledy-gook for short.

    In my Complex Emergent Vector I define a first element that consists of an atomic system.  The system is as small a system as you can derive from the progression.  To be a system it must have a unique architecture and features that its constituent parts do not possess for themselves.  A termite is a constituent part of a termite colony; the colony is an atomic system in a progressive system.

    The second element in my vector consists of a set of 1 or more systems whose constituent parts are atomic systems as defined by the first element.  Maybe this is a field filled with termite colonies, each of which is unique and distinct from the others (although they may all be genetically related).

    This second element in the vector may include a very different system from that which I just described.  For example, we could include in it all the termite colonies that are descended from a Prime Colony.  Those descendant colonies do not have to reside together in the same field (they could, but we are assuming that the system consisting of the descendant colonies is unique and distinct from the system consisting of termite colonies residing in the same field).

    The third element of my vector consists of a set of 1 or more systems whose constituent parts are atomic systems as defined by the second element.  An atomic system could be a field of termite colonies or a genetic family of termite colonies.  A level-three system might be an entire species of termite (colonies), or it might be the maximum geographical area containing all the termite colonies of a specific type.

    This vector model has an emergent property in that each member of a set has the potential to spawn a sub-vector that evolves in a different direction from the primary vector.  Rather like an alternate universe every sub-vector develops its own rules and sensibilities, creating unique and distinctive complex nested systems that do not occur in other sub-vectors branching out from the same element.  It is plausible to allow for the possibility that some future sub-vector might very closely resemble an earlier or past sub-vector, even though both branch out from different elements.
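    The nesting and branching described above can be sketched as a small data model.  This is only an illustrative sketch of my informal definition, not a formal one; the class names, the `spawn` method, and the termite example are all my own labels for the ideas in the preceding paragraphs.

```python
from dataclasses import dataclass, field

@dataclass
class System:
    """A system with its own architecture, built from parts one level down."""
    name: str
    parts: list  # constituent parts: atomic systems from the previous element

@dataclass
class EmergentVector:
    """Element k of the vector holds systems whose parts come from element k-1."""
    levels: list = field(default_factory=list)    # levels[k] is a list of System
    branches: list = field(default_factory=list)  # spawned sub-vectors

    def add_level(self, systems):
        self.levels.append(list(systems))

    def spawn(self, level_index, seed_system):
        """Any member of any level can seed a sub-vector that evolves
        in its own direction, with its own rules and sensibilities."""
        sub = EmergentVector(levels=[[seed_system]])
        self.branches.append((level_index, sub))
        return sub

# The termite example: colonies -> a field of colonies -> a branch for
# the genetic family descended from a Prime Colony.
colony = System("colony A", parts=["termite"] * 3)
field_of_colonies = System("north field", parts=[colony])
vec = EmergentVector()
vec.add_level([colony])
vec.add_level([field_of_colonies])
sub = vec.spawn(1, System("descendants of prime colony", parts=[colony]))
```

    Note that a sub-vector is itself a full `EmergentVector`, which is what makes the whole thing a tree rather than a flat list.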

    If we look at the Periodic Table we can classify all the elements within "a Complex Emergent Vector (of Sets) that defines a progressive system".  Start with Hydrogen atoms and fuse them together into Helium atoms, and then fuse Helium atoms together to produce heavier elements.  You quickly start spawning sub-vectors in the tree as you might produce, say, Carbon here or Lithium there.  And so on and so forth.  Stars live and die by this rule, don't they?
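    One way to picture the branching is with mass numbers alone.  This is a deliberately crude sketch (real stellar nucleosynthesis is constrained in ways simple addition is not), but the helium-capture chain shown here (two alphas to beryllium-8, the triple-alpha process to carbon-12, then carbon plus helium to oxygen-16) is a real branching path:

```python
def fuse(a, b):
    """Combine two mass numbers; a toy stand-in for a fusion step."""
    return a + b

He = 4  # mass number of helium-4 (an alpha particle)

# Each level of helium capture spawns a branch toward a heavier element.
branches = {
    "Be-8": fuse(He, He),                      # two alphas -> beryllium-8
    "C-12": fuse(fuse(He, He), He),            # triple-alpha -> carbon-12
    "O-16": fuse(fuse(fuse(He, He), He), He),  # carbon + alpha -> oxygen-16
}
```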


    The Internet is "a Complex Emergent Vector (of Sets) that defines a progressive system" in oh-so-many ways my head starts to throb when I attempt to map it all out.  We spawn sub-vectors at the drop of a hat -- or when the dust raised by the dropped hat splashes down to displace other things.  I am pretty sure that you cannot have "a Complex Emergent Vector (of Sets) that defines a progressive system" that fails to cascade.  Why else say it is Emergent, right?


    But I say it's Emergent because I only see trees of sub-vectors branching out from prime vectors in every complex system I observe.  It is almost as if the universe is driven to divide and expand and populate its possibilities with more complex things.  We can always add or subtract 1 from any numerical quantity; we can always change the makeup of any collection of objects; we can always fuse objects together and create new objects.


    One of the random thoughts that led me to write this was the notion that mountains move.  Ask any random person on the street if mountains can move and they'll probably look at you like you are crazy.  Ask a geologist if mountains can move and he just might respond by asking, "in which way?"  Mountains move in many ways.  They just need a little more time than the average dust speck.


    That random thought also made me think of the sounds that mountains make.  I know that mountains make sounds because their constituent parts make sounds.  Just as we can record perturbations in starborn data to produce "music from the stars" we can record perturbations in mountainborn data to produce "music from the mountains".  It's all about vibrations and oscillations, is it not?


    If a mountain can vibrate and oscillate and create sound then it must be moving; it's just moving on a scale we cannot perceive (without help).  In a way it is reasonable to say that mountains move in a dimension of sound and space that we don't share with them.  Mountains and men are sub-vectors in the same tree but we share little in common even though we can trace our nested complexity back to the same atomic elements.


    This is all so very fractal to me.


    What ultimately led me to thinking about these things was a growing concern I have about autonomous systems.  Specifically, I have been thinking about the threat that autonomous robotic systems represent to us, our society and culture.  Google is developing robotic, self-driving cars and wants very badly (it would seem, based on recent news reports) to bring them to market.  And various companies are feverishly working on humanoid robotic systems that can help our elderly and invalid members of society achieve greater independence than ever before.


    Rosie the Robot may one day be reality, taking our kids to school, buying our groceries, and cleaning our homes.  She may use our driverless car to run her errands.


    When that day comes we may need something like Isaac Asimov's Three Laws of Robotics to protect us.  When I was a young lad and first heard about the Three Laws I thought they were stupid.  Why would anyone want to create a robot, I asked, that could not harm a human being?  Wouldn't we want to build armies of robots to fight our wars?  And what if we are invaded by space aliens?  Would you rather send your sons and daughters into combat with a species of unknown intellect or would you prefer to let a machine take the first hits?

    We are now using remotely controlled systems in warfare.  Soon we will introduce the first autonomous robotic systems into warfare.  Meanwhile, hackers and cybercriminals are already proving what Asimov knew to be true: that autonomous robotic systems are a threat to humanity on an unbelievable scale.  Earlier this year -- according to security experts -- about 90,000 WordPress-powered Websites were hacked/compromised in the space of about a day to create 1 or more entirely new "botnets".  I believe that many thousands more Websites have since been infected, although surely many of the originals have been cleaned up.

    Botnet Creep is not a term we are accustomed to using, but I believe that it (or something similar) will soon enter our lexicon, for these dynamic self-sustaining botnets will become more prevalent as the Internet grows and becomes more sophisticated.  A botnet is a complex system, even if it is controlled by a small number of machines.  A botnet that is constantly seeking out new hosts to replace the old hosts that have been cleaned up will crawl around the Web like an antibiotic-resistant disease ravaging a once-healthy body.
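    That churn can be shown with a minimal toy model.  Every rate and host count below is an invented parameter, not a measurement: cleaned hosts leave the botnet each step while fresh infections replace them, so the botnet persists even as its individual membership turns over completely.

```python
import random

def simulate_botnet_creep(total_hosts=10_000, initially_infected=500,
                          clean_rate=0.2, infect_per_step=150,
                          steps=30, seed=42):
    """Toy model of 'Botnet Creep': the botnet crawls onward by replacing
    cleaned-up hosts with newly compromised ones."""
    rng = random.Random(seed)
    infected = set(range(initially_infected))
    ever_infected = set(infected)
    for _ in range(steps):
        # Some infected hosts get cleaned up...
        cleaned = {h for h in infected if rng.random() < clean_rate}
        infected -= cleaned
        # ...while the botnet seeks out fresh hosts to take their place.
        candidates = [h for h in range(total_hosts) if h not in infected]
        new = rng.sample(candidates, min(infect_per_step, len(candidates)))
        infected.update(new)
        ever_infected.update(new)
    return len(infected), len(ever_infected)

current, cumulative = simulate_botnet_creep()
```

    The point of the sketch is that the cumulative count of ever-infected hosts keeps climbing even when the botnet's size at any instant stays roughly level, which is what makes the "antibiotic-resistant disease" comparison apt.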

    These botnets are relatively simple systems but the technology to create them and use them for harmful purposes can easily be transferred to new autonomous systems such as driverless cars and personal assistant robots.  These machines can conceivably be weaponized in three ways:

    FIRST, they can be armed with destructive technologies and used to remotely kill or harm people or to damage infrastructure.  The only difference between such machines and current "drones" is that these things will be able to "sleep" until awakened at the proper moment.

    SECOND, an autonomous machine can be passively infected by a destructive technology.  It would be as simple as duct-taping a bomb to the side of a driverless car and activating that bomb with a cell phone.  Whereas this is already possible with human-driven cars, driverless cars (at least in their first few generations) won't have the means to recognize foreign devices and remove them (or call 911 for help).

    And THIRD, any group of autonomous machines could be used to disrupt our system without using weapons technology at all.  The group itself is the weapon and the weapon's function is to delay, hinder, or cause "natural" damage and congestion.  Even if we have the capacity to recognize that a driverless car has a bomb strapped to its hood -- and the good sense to call 911 to report that -- no one will know if the driverless car passing by is on its way to run down a crowd of people on a sidewalk across town, or to block the entrance to a hospital emergency room, or to help create a traffic jam that traps police cars several blocks away from a bank robbery.

    The systemic threat is, I think, the greatest threat because it will be the one most difficult to recognize and manage.  If we don't enact legislation providing for the monitoring and managing of autonomous robotic systems before they hit our streets, we are only inviting an inevitable evolution of the system whose sub-vectors cannot be accurately and completely foreseen.

    Put 100,000 vulnerable autonomous vehicles on the nation's highways and you are only 1 virus away from a science fictionesque nightmare.  It's already happened inside the relatively benign bubble of the Internet many times.  We're just asking for trouble by ignoring these possibilities as the public cheers on Google and other innovative robotics companies.

    The prime vector has already been seeded.  The tree is growing.

    Comments

    Thor Russell
    "Put 100,000 vulnerable autonomous vehicles on the nation's highways" We had them long ago, they were called horses. They weren't connected to the internet so couldn't be hacked. If safety features are hard-coded into such cars so that, even if you can browse in the car, there is no physical connection to the basic safety features, then such things couldn't happen. The whole three laws thing is much more of a problem because it's a human who tries one way or another to make their brain run on silicon (or graphene perhaps) that can make things get out of hand. Once consciousness doesn't require neurons then all sorts of constraints are lifted, e.g. someone makes endless copies of themselves (like Agent Smith in The Matrix, I think) or something similar and you really have a problem, far more than a few botnets.
    How are you going to stop this? Well perhaps you can't but the most common way to deal with such issues even by seemingly smart people is to deny that such things are possible and use quite feeble reasoning to do so.
    Thor Russell
    Gerhard Adam
    The first problem is in defining what is meant by an autonomous system. It is my contention that there is no such thing. While I understand that it is typically used to mean autonomous "operation", no system is actually autonomous, since changes must obviously be introduced and therefore there is zero opportunity for true independence.

    Thor is incorrect in asserting that there is no physical connection since many errors occur without malicious intent and unless such systems are impervious to upgrades and changes there will be physical connections.

    Again, there may not be any "operational" controls, but there is plenty of opportunity for hacking.

    Presumably such "autonomous" vehicles are still subject to repair and maintenance, so again, more opportunities present themselves.

    Your description of the interconnectedness of all systems is precisely the problem, and we see that with our increased technology, not only do we not eliminate problems, but we simply modify the scope and impact that such problems can produce. Many of our modern medical capabilities exist precisely because our modern technologies are capable of producing injuries previously unimaginable.

    This is simply the coevolutionary nature of our existence, and the one thing we can absolutely be sure of, there is no way to prevent these things from occurring. Our problems will get bigger and bigger, because the complexity of the systems will enable such problems even when there is no malicious intent involved.
    Mundus vult decipi
    Thor Russell
    I don't think you understood exactly what I meant. It can be designed to be resilient and not connected, but of course it may not be, and may be done badly from a resilience viewpoint. At present there isn't much incentive against that in our economic setup; the short-term cheapest wins even if there is the potential for a single global disruption.
    My point is you cannot necessarily assume that advancing tech gives more complex systems. For example better batteries/distributed generation/fuel cells will make the grid much less fragile and in many cases redundant altogether, removing the possibility of an attack on it entirely. With a cheap building-integrated solar panel and battery being cheaper than the grid, there wouldn't be a complex grid anymore because it would be more expensive than the alternative for many situations. That's just one example; 3D printing is perhaps another.
    Thor Russell
    SynapticNulship
    Soon we will introduce the first autonomous robotic systems into warfare.
    That's not entirely accurate, at least for the US. Even "The Role of Autonomy in DoD Systems" task force report from 2012, which pushes back against the traditional military establishment's fear of high levels of autonomy (the DoD had a levels-of-autonomy scale, though the report suggests it is inadequate), takes a practical human-machine interaction approach. Here is a quote from that report:
    Another negative consequence of framing autonomy as levels is that it reinforces fears about unbounded autonomy. Treating autonomy as a widget or “black box” supports an “us versus the computer” attitude among commanders rather than the more appropriate understanding that there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines. Perhaps the most important message for commanders is that all systems are supervised by humans to some degree, and the best capabilities result from the coordination and collaboration of humans and machines.
    SynapticNulship
    As an addendum, in regards to the concept of thousands of elements being hacked, such as botnets, I also think as you do that this concept is a potentially powerful weapon--even if in an accidental Sorcerer's Apprentice type of way.

    But there is something equally important to consider. Even humans are not fully autonomous in a context (as the quote I posted about from the Role of Autonomy report indicated). And groups of humans, or autonomous elements, or "dumb" elements like websites, can all be maliciously hacked and repurposed.

    We've already been struggling to deal with small populations of dangerous autonomous botnets conducting asymmetrical warfare. They are the terrorist groups and extremist violent religious groups. In both cases the autonomous elements were hacked (brainwashed) either on purpose or through accidents of society/family.
    Michael Martinez
    I can readily agree that definitions of "autonomy" will be reviewed and argued for years to come; I use the term conveniently only to refer to systems that operate without direct human control (allowing for preprogramming as well as AI-based independent decision-making).  I don't see them forming a Skynet but swarm theory indicates that a group of "autonomous" objects can behave quite differently from the individuals.
    Swarm weapons may act very differently from concussion weapons but they can have equally devastating or perhaps even more devastating impacts.  A swarm of independently acting, predirected robots could damage our water supplies, food supplies, manufacturing systems, transportation systems, etc. just by wandering around the landscape and distributing malware, disrupting communications and transportation, and otherwise engaging in non-lethal activities that individually produce little to no impact but collectively could throw our entire economy, health, and life support systems into total disarray.