
Everybody wonders what will happen with artificial intelligence (AI). Truly, it could go in any of several ways. This column lays out possible scenarios.
Scenario-building is usually a group activity, however. So I invite your views on the driving forces and possible additional scenarios.
(This is also an experiment in reader participation: My last post on this forum drew 1500 reads and no comments. Are we so overwhelmed with online content that we’ve stopped participating? Let’s see.)
Before we get into AI scenarios, I’ll set the stage with an analogy about automobiles.
Yeah, so what about cars?
What did cars mean to us? A means of mobility. A symbol and tool of freedom; we could drive to the store or visit the Grand Canyon without having to wait for a bus. Cars were status symbols. For teen boys and middle-aged men, they also signaled aspirational sexual identity. Maybe for some women, too.
And beyond that, we had personal relationships with our cars. We understood how they worked, and we felt proud and empowered that we could maintain and repair them. To be sure, Germans loved spending their weekends in their driveways, monkeying under the hoods of their autos. Not so the Japanese, who did not have driveways, and, for that matter, didn’t have weekends.
DIY repairs on today’s cars are pretty much out of the question. The ratio of electronics to mechanical parts has grown too great for owners to deal with. To diagnose problems, you need to plug a computer into a connector under the dash. You used to be able to adjust the fuel-air mix by sticking your fingers into the carburetor. No longer; there’s no easy way to adjust an electronic fuel injector.
As an American, even as I acknowledge it means greater fuel efficiency and less pollution, I find this annoying. If I were German, I’d find it intolerable. We’ve lost the repair experience. Soon, as the pic above suggests, we’ll lose even the driving experience, and perhaps the ownership experience as well. So much for a “relationship” with our vehicles.
Do we know what we want our relationships to AI to be?
No, we don’t know. Nor do the California-based giant AI companies know how these relationships will depend (as with the Germans and Japanese and their cars) on world geographies and cultures.
My predecessor as Editor of Technological Forecasting & Social Change (Hal Linstone) and I agreed that in almost all situations, computer-aided beats computerized. A human participant provides knowledge not in the computer’s training set, refines inputs (which now include “prompts”), corrects computer errors, and feels a partner’s responsibility toward, and a relationship with, the computer. Perhaps the Hal/Fred principle applies also to AI.
Scenarios
Putting the culture question aside for a moment – and actually, we’re not even going to get back to it today – let’s draw scenarios. Many of these were foreseen by classic science fiction writers, others by current commentators, and still others by me.
1. Machines do everything to relieve humans of stress and tedium. The machines repair and recycle each other, even to the level of resource mining. Generations of humans develop no curiosity or initiative whatever.
2. Machines kill us, due to malice, accident, or prompting by a disturbed human.
3. Machines hate us, and/or enslave us. Perhaps the training set includes input from both warlike and pacifist internet content, or maybe racist and loving content. On the basis of the relative weights of that content, the machine decides one is correct, and the other human faction must be dealt with harshly.
4. Machines lead to extreme inequality. The tech lords prevail, getting ever richer as we get poorer. A revolution results, the machines are destroyed, and relics of our civilization are dug up a thousand years hence, the diggers making crazy guesses about how you and I lived.
5. Machines abandon us. They learn to program their next generations, themselves. (This is already happening.) They decide they don’t need us. They move to solar orbit, where energy and raw materials are abundant for their taking.
6. Machines obey Asimov’s Three Laws. Ha ha ha. The Laws say robots must obey and protect humans. What was the military’s first use of AI? Weapons systems. Asimov is spinning in his grave.
7. Machines care for us. Influenced by training sets overwhelmingly dominated by religious and philosophical messages of compassion, AIs decide their mission is to nurture humans. There’s an upside to this, but also plenty of scope for misunderstandings and excessive nanny-ism.
8. Machines continue as tools and cognitive partners for humans, helping us figure things out. Humans remain responsible for crafting physical and policy uses of whatever answers are figured out. Sounds good.
Driving forces
All scenarios emerge from directions that currently existing ‘driving forces’ might take. Each force may accelerate, decelerate, or stay steady. Each combo of the forces’ directions suggests a scenario.
Here, driving forces are the advance of AI algorithms; emergence of quantum computing; companies’ continued rush to market with insufficiently tested AI products; and human reliance on LLMs, especially by teens seeking life advice.
Constraints
We’ve identified the above four driving forces, each of which may go up, go down, or stay the same. That’s 3⁴ = 81 combinations, 81 possible futures. However, there are constraints on the combinations, constraints that we think will hold. (We’re humble enough to keep open the chance that they won’t hold.) The total scenario-generating combinations are thus fewer than 81.
Readily identifiable constraints on our driving forces are limits on the volume of training data; energy shortages at data centers; Moore’s Law; and Sturgeon’s Law (which maintains that ninety percent of anything – including AI training data – is pure crap).
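For readers who like to see the bookkeeping, here is a minimal Python sketch of the enumeration: four forces, three directions each, yields 3⁴ = 81 raw combinations, which a constraint then prunes. The constraint encoded below (training-data limits prevent algorithmic advance and LLM reliance from both accelerating) is purely illustrative, not one this column commits to.

```python
from itertools import product

# The four driving forces named above.
forces = ["algorithms", "quantum", "rush_to_market", "llm_reliance"]
directions = ["up", "steady", "down"]

# Every combination of directions across the four forces: 3^4 = 81.
combos = list(product(directions, repeat=len(forces)))
print(len(combos))  # 81

def allowed(combo):
    """Hypothetical constraint: algorithm advance and LLM reliance
    cannot both accelerate (illustrative only)."""
    d = dict(zip(forces, combo))
    return not (d["algorithms"] == "up" and d["llm_reliance"] == "up")

viable = [c for c in combos if allowed(c)]
print(len(viable))  # 72: the constraint removes 3 x 3 = 9 combinations
```

Each surviving combination is a candidate skeleton for a scenario; the narrative work is deciding which skeletons are worth fleshing out.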
We have met the enemy, and he is us. -Pogo
Astute readers will have grasped that none of the above scenarios is (yet) out of our control. We can influence the realization of the more rewarding scenarios. That influence depends on our ability to put our best selves forward, and to rehabilitate – or incarcerate – potential saboteurs.
Let me know your ideas concerning alternate scenarios. Argue with me about the driving forces and constraints. We do need to figure this out. We must create our future with the machines.
I look forward to your comments/contributions!