With the development of the new generation of large language models (ChatGPT, Bard, etc.) we are seeing the first hints of an accelerating pace in the progress of artificial intelligence. These innovations may turn out to be good for humanity (with all the problems we have already identified: bias, misinformation, job displacement, and so on), but their possibly exponential (or even faster-than-exponential) rate of change is something we cannot fathom, and it is in and of itself a source of large risks.
As my friend David Orban likes to put it, these are "jolting technologies": jolt (also called jerk) is the third derivative of a quantity with respect to time, that is, the rate of change of acceleration. We are accustomed to increases in speed (acceleration, the second derivative), but we are not used to increases in acceleration (jolt, the third). What we might be looking at, in the case of technological developments in the AI area, is something that starts as a straight-looking, innocuous slope, soon turns into an upward-curving trend, and then becomes a real wall in no time. We cannot comprehend this behavior, because we have no everyday analogue of it in real life, but such processes do exist.
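(To put it in formulas, taking x(t) as some rough, hypothetical measure of a technology's capability at time t: dx/dt is the speed of its growth, d²x/dt² its acceleration, and d³x/dt³ its jolt. A jolting technology is one for which that third derivative stays positive, so the acceleration itself keeps growing.)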

This potential AI-powered disruption of the way human society works, with its complex balances and imbalances, might one day be looked back upon with a warm feeling: evolutionary pushes at work toward a better society, where benign AGI helpers solve our huge problems. But this is only one scenario, and we have indications that it is not the most likely one. So we need to ponder deeply what means we have to steer these changes in the right direction. A sudden, uncontrolled expansion of the application of AI-powered tools is very, very problematic.

For the above reason, I signed the recent open letter calling for a six-month moratorium on further AI development, and I am happy to see that many AI experts agree with its contents. But not everybody agrees, in truth. Andrew Ng, a world-leading expert, says he does not see what the "existential risks" of AI development are, and in a tweet I read today he invites input:

(original tweet here).

This prompted me to try to answer his question, and I wish to share that answer with you here. You can find it below; feel free to add your comments in the thread, as I would very much appreciate hearing your point of view on the matter - an urgent and compelling one for everybody, if you ask me.

-------------------------

Hi Andrew, since you asked for feedback: consider the impressive potential for change - in finance, society, politics, and human behavior - that AI applications have; we are only seeing the tip of the iceberg at this juncture. Then consider the fact that humanity does not cope well with disruptive changes happening on too short a time scale: we simply cannot adapt so quickly, because our infrastructures are not built for such flexibility. What you get is a huge risk of global warfare triggered by those socio-political earthquakes.

Imagine, e.g., somebody in China developing an AI that breaks the stock market overnight, emptying the pockets of rich corporations in the West. That would already be, in the minds of many, reason for waging war. And that is only a silly example.

We simply cannot model the events that may take place when powerful, fast-acting innovations are introduced into global markets. Hence we must equip ourselves with such modeling capabilities, in order to be prepared before these hypotheses become real risks.

When you hear about "existential risk" you should put it into context: it is not the risk of being turned into paperclips by a narrowly tasked AGI. It is, rather, the potential chain reaction of phenomena too fast to be comprehended and coped with.

Just my 2c.


-------------------------------------------------