A good Business Basic language offered a lot of powerful features for processing data at the byte and bit level, but object-oriented programmers almost always had a heart attack when they learned we did not work with strong typing. In Business Basic an integer could be processed as a string of text and vice versa. Floating-point numbers were also vulnerable to reclassification on a whim. This was necessary for historical reasons: Business Basic was originally designed for business computing on mini-computer systems in the late 1960s and early 1970s. Memory and disk space were horrifically expensive in those days, and there were times when you just had to reclassify data structures in order to improve throughput.
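In a modern language you can approximate that kind of reinterpretation by working on raw bytes. A minimal Python sketch (the byte values are invented for illustration):

```python
import struct

# Four raw bytes can be read either as text or as a 32-bit integer --
# roughly the kind of reinterpretation loose typing allowed.
raw = b"1234"

as_text = raw.decode("ascii")          # the bytes read as the string "1234"
as_int = struct.unpack("<i", raw)[0]   # the same bytes as a little-endian int

print(as_text)   # 1234
print(as_int)    # 875770417 (0x34333231)
```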
Another cool thing about the MAI Business Basics was that you could write self-modifying programs. For example, when you updated a program you could tell it to modify itself with the date on which the last change was made. This way other programs could ask the program when it was last changed. Another cool trick was to write "hard" data into the program and then use the program itself to change that hard data. This was usually in the form of constants that might change on a monthly or annual cycle. The program would check the date every time it ran and, if necessary, would invoke some closely monitored self-modification routines.
Some companies wrote self-diagnostic modules; a program would replace portions of its own code with these modules in order to dump data into a diagnostic file. But these kinds of routines became less common as memory and disk space became cheaper and more plentiful, not to mention processing power.
All the while, as the years passed, I continued to find a need to cross-type my data. The problem with changing data types, however, is that you also have to change the rules and the code that use the data, which meant including two or more sets of code. We could also invoke overlays (we named these "CALLed" programs) and pass data between the layers of overlays. Hence, you could pass data to one overlay as TEXT and then to another as NUMERIC data, but this was inefficient.
Business Basic data structures evolved to use templates that redefined allocated data spaces. Now we could switch between binary and string format data pretty easily, but you still had to write separate blocks of code for the different data types. And, of course, your data's integrity was always vulnerable to the weakness of your logic and the looseness of your code.
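A rough analogue of those redefining templates can be sketched in Python with the `struct` module, laying two different field layouts over the same buffer (the layouts are invented for illustration):

```python
import struct

# One 8-byte record, two "templates" over the same allocated space:
# template A reads it as two 32-bit integers, template B as raw bytes.
buf = bytearray(struct.pack("<ii", 1970, 2024))

a, b = struct.unpack_from("<ii", buf)   # binary view: (1970, 2024)
raw = bytes(buf)                        # string/byte view of the same space

# A write through template A is immediately visible through template B.
struct.pack_into("<i", buf, 0, 9999)
print(struct.unpack_from("<ii", buf))   # (9999, 2024)
```

The separate blocks of code for each data type correspond to the separate `unpack` format strings; the data's integrity still depends entirely on the two templates agreeing about the layout.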
In practice I usually had to reclassify my data to fix corrupted records. How did they become corrupt? Maybe someone kicked a computer at the wrong moment. That often happened in the world of tower PCs. Power fluctuations were also a common cause of corrupt data. In the business world you can't leave corrupt data in place for very long, especially if it's someone's payroll data. And there were plenty of programmer errors that would accidentally rewrite data that was never meant to be touched again. Sometimes you would need to scan several hundred thousand, or several million, records for a specific string of binary data that might occur across multiple defined fields.
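That kind of scan ignores the field definitions entirely and treats the file as a flat run of bytes. A small sketch, with an invented record length and pattern:

```python
# Scan a file of fixed-length records for a byte pattern that may
# straddle field (or even record) boundaries. RECORD_LEN is invented.
RECORD_LEN = 32

def find_pattern(data: bytes, pattern: bytes):
    """Return (record_number, offset_in_record) for every match."""
    hits = []
    start = data.find(pattern)
    while start != -1:
        hits.append((start // RECORD_LEN, start % RECORD_LEN))
        start = data.find(pattern, start + 1)
    return hits

data = b"\x00" * 30 + b"\xde\xad\xbe\xef" + b"\x00" * 30
print(find_pattern(data, b"\xde\xad\xbe\xef"))  # [(0, 30)] -- crosses into record 1
```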
The problem with reclassifying data is that it is almost always done without foresight. There is no mechanism in place to ensure that you don't change something else when you write new code that looks at your old data in a different way. We devised all sorts of standards and procedures to protect that data, but at the end of the day you still have to reclassify it and turn your code loose.
The great thing about object-oriented programming is that your data is encapsulated. You might be able to define a class or sub-class that allows you to reclassify the data and use different methods on it, but that is supposed to be hard to do. You need to pass the data from one object to another and in some languages you have to use two passes. That, of course, is inefficient.
But things would become much simpler (and more interesting) if you could actually "spin" the object so that it transforms into a different kind of object. In other words, the object disassociates itself from one class and associates itself with another class. People sometimes ask in OOP support forums how this can be done, and they are inevitably told "you can't do that because it's an object-oriented programming language".
A spinning object offers some advantages because you have to restrict your use of the object to whatever methods come with that class. So, for example, you could instantiate an object of type TEXT STRING that allows you to populate the string with data in the form of "1234". Then you spin the object so that it attaches to the INTEGER class. Now you can perform numeric operations on the object and it will only recognize methods (procedures and functions) that work with integers, not with string/text data.
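Python happens to allow an object to reassign its own `__class__`, which makes for a rough sketch of this idea (the `TextCell` and `IntegerCell` names are invented for illustration):

```python
class TextCell:
    """Holds data as text; only string-style methods apply."""
    def __init__(self, value: str):
        self.value = value

    def append(self, more: str):
        self.value += more

    def spin_to_integer(self):
        # "Spin": reinterpret the same data space under integer rules
        # and attach the object to the INTEGER-style class.
        self.value = int(self.value)
        self.__class__ = IntegerCell

class IntegerCell:
    """Holds data as an integer; only numeric methods apply."""
    def add(self, n: int):
        self.value += n

cell = TextCell("12")
cell.append("34")         # text operation: "12" + "34" -> "1234"
cell.spin_to_integer()    # the object now belongs to IntegerCell
cell.add(1)               # numeric operation
print(cell.value)         # 1235
print(hasattr(cell, "append"))   # False -- string methods are gone
```

After the spin, only `IntegerCell`'s methods resolve on the object, exactly the restriction described above.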
Why do this? Why not simply pass the data to a new object that reclassifies the data for you? Because if you can compile the object with the ability to detach from one class and attach to another class you can trust the integrity of the code (the data may still become corrupted in the process). In other words, you can only spin an object across a limited set of predefined classes if the language supports spinning. The application-level programmer would still have to go to an extra degree of effort to create an object that performs non-standard tasks.
Mathematically this would be similar to defining a branching algebra. But instead of the two branches defining separate pathways extending from the same base the spinning object represents parallel algebras mapped across the same data space. Let us say we have a set of elements ECHO and we define two algebras (ECHO, alpha-1,...,alpha-n) and (ECHO, beta-1,...,beta-n) across that set. There must be a perfect correspondence between (ECHO, alpha-rules) and (ECHO, beta-rules) in order for us to spin an object from alpha-state to beta-state or back.
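The correspondence requirement can be illustrated with a toy pair of algebras on the same set (the rules and the mapping below are invented): a map phi is a "perfect correspondence" if phi(x alpha y) = phi(x) beta phi(y) for every pair of elements.

```python
# Two rule sets over the same data space ECHO, linked by a mapping phi.
ECHO = [0, 1, 2, 3]

alpha = lambda x, y: (x + y) % 4   # alpha-rule: addition mod 4
beta = lambda x, y: (x * y) % 5    # beta-rule: multiplication mod 5
phi = {0: 1, 1: 2, 2: 4, 3: 3}     # phi(n) = 2**n mod 5

# Check the correspondence holds for all pairs: spinning from the
# alpha-state to the beta-state (or back) loses nothing.
ok = all(phi[alpha(x, y)] == beta(phi[x], phi[y]) for x in ECHO for y in ECHO)
print(ok)   # True
```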
Each algebra offers us a perspective on ECHO but they don't necessarily share all the same output values. We should treat the algebras as if they are entangled, because whatever happens in one algebra causes something to happen in the other algebra. It's just that, in our software object, there is only ever one set of rules working on the data space.
In order to spin the object you have to define a trigger event, either when it encounters a specific value (or range of values) in the data or when an operation produces a specific result (or range of results). For example, an object could spin itself if it attempts to perform an illegal operation for its class, provided it has been told to spin if it encounters this kind of operation. You don't want to allow a divide-by-zero to happen in either perspective of the data space but you may want to allow the object to perform a non-integer computation on an integer value.
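A minimal sketch of such a trigger, again using Python's `__class__` reassignment (class names invented): an integer cell spins to a float class when a division would produce a non-integer result, while divide-by-zero stays illegal in both states:

```python
class IntCell:
    def __init__(self, value: int):
        self.value = value

    def divide(self, n: int):
        if n == 0:
            raise ZeroDivisionError("illegal in every class")  # never spin on this
        if self.value % n != 0:
            # Trigger event: the result would not be an integer -> spin.
            self.value = float(self.value)
            self.__class__ = FloatCell
            return self.divide(n)   # retry under the new class's rules
        self.value //= n
        return self.value

class FloatCell:
    def divide(self, n):
        if n == 0:
            raise ZeroDivisionError("illegal in every class")
        self.value /= n
        return self.value

c = IntCell(7)
print(c.divide(2))        # 3.5 -- the object spun itself to FloatCell
print(type(c).__name__)   # FloatCell
```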
The spin-capable object must include appropriate data space to handle operations in both algebras. In other words, if you perform an operation using alpha-rules that would produce an overflow using beta-rules, your object must be able to store the overflow so that the beta-rules can work with that part of the data. There are practical limits to what you would be able to accomplish with a spin-capable object.
Proving the correctness of algorithms using spin-capable objects must be different from proving the correctness of algorithms using single-class objects. I imagine you would have to force correctness in some places. By that I mean that the spin-capable object cannot transform or create data that is not allowed in at least one of its classes, and it must only create or transform data that can be managed in an alternate format by all of its classes. If there is a state of spin where the data cannot work then the algorithm is not correct.
A spinning object would be useful in a situation demanding fuzzy or flexible logic. If the object encounters an exception event while processing data it can make a choice between aborting the operation or spinning to a new class; maybe the new class will be better equipped to handle the data.
For example, suppose the object is asked to look up an address in a database. The address may be entered in any of several variations: "123 Simple Street", "123 Simp. St.", "123 Simple St", "123 S. Street", etc. The object may fail to find the address using the first example but if it knows that it is dealing with text that can be abbreviated or that looks like an abbreviation it can spin itself into a new class that handles the data in a different way (normalizing it, for example).
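A sketch of that address example, with a toy database and invented expansion rules; the exact-match class spins itself into a normalizing class when the literal lookup fails:

```python
ADDRESSES = {"123 simple street"}   # toy database (contents invented)

class ExactLookup:
    def find(self, address: str):
        if address.lower() in ADDRESSES:
            return address.lower()
        # Trigger: the literal lookup failed and the text may be
        # abbreviated -> spin to the normalizing class and retry.
        self.__class__ = NormalizedLookup
        return self.find(address)

class NormalizedLookup:
    EXPANSIONS = {"st": "street", "st.": "street", "simp.": "simple"}

    def find(self, address: str):
        words = [self.EXPANSIONS.get(w.lower(), w.lower()) for w in address.split()]
        candidate = " ".join(words)
        return candidate if candidate in ADDRESSES else None

finder = ExactLookup()
print(finder.find("123 Simp. St."))   # 123 simple street
```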
Another example would be where an object receives encrypted data without knowing precisely which encryption scheme was used to transform the data. It can spin itself through multiple encryption classes until it either fails or properly decrypts the data (presumably there is some discrete data that is compared to the decrypted data to determine success).
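A sketch of the decryption example, using simple encodings as stand-ins for real encryption schemes; the `MARKER` check stands in for the "discrete data" compared against the decrypted result:

```python
import base64

MARKER = b"PAYLOAD:"   # known plaintext used to confirm success (invented)

class HexTry:
    def decrypt(self, data: bytes):
        try:
            out = bytes.fromhex(data.decode())
            if out.startswith(MARKER):
                return out
        except ValueError:
            pass
        self.__class__ = Base64Try   # spin to the next scheme and retry
        return self.decrypt(data)

class Base64Try:
    def decrypt(self, data: bytes):
        out = base64.b64decode(data)
        if out.startswith(MARKER):
            return out
        raise ValueError("no known scheme decrypts this data")

obj = HexTry()
blob = base64.b64encode(b"PAYLOAD:42")
print(obj.decrypt(blob))   # b'PAYLOAD:42'
```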
An object that expects uncompressed data but which receives compressed data can spin itself to handle the compressed format. The possibilities are endless and the benefit of using spin-capable objects is that programming becomes more efficient when dealing with highly probable exception events that can be easily resolved by applying a different set of rules to the data.
As long as the object can only modify itself according to a hardened set of rules it is an instantiation of a superclass, thus preserving most of the principles of object-oriented programming. Application programmers would be incapable of (or unlikely to) create unpredictable algorithms that produce unwanted results, and the object would still have to report an error if it cannot spin itself into an appropriate state.
There would also be false-positive situations. These might be limited by adding integrity rules to the superclasses which force a last check before the object completes its task. By introducing spin-capable objects we reduce the need for defining child classes that inherit the rules of parent classes.
People who may be concerned about breaking encapsulation should look at this as a form of compartmentalized encapsulation. The encapsulation rules are more flexible but they are not limitless. And there are implications beyond mere programming. Entangling algebras may allow us to develop new models for analysis. For example, (ECHO, alpha-rules) might define a system of atomic elements and (ECHO, beta-rules) might define the constituent parts of an atomic element in a larger supersystem (see the first part of my August 26, 2013 article "Mapping a Complex System with a Nested, Convergent Vector" on Science 2.0 for more detail on this).
We may be able to map a large system (see my October 10, 2014 article "Why You Will Never be able to 'See' a Large System" on Science 2.0) using entangled algebras. Each algebra would be able to interpret some portions of the system more effectively than the other algebras. The entanglement isolates points of failure in the theory of what the large system is and how it works. When you can collapse the entangled algebras into a single algebra you know you have mapped the whole system and it is therefore no longer "large" (as defined in the October 10 article).