I learned a lot last week by listening to the New Mexico Tech Talks https://www.linkedin.com/company/nm-tech-talks/, and by participating in a related podcast https://lnkd.in/gP8mciQS. I came away with learnings about entrepreneurs’ climate initiatives, about modern software development and AI, and about venture investors’ attitudes. Some were encouraging; others were frustrating.

Artificial intelligence

Software engineering, it seems, is now hundreds of times more complex than when I was in the game as a board member of the Software Quality Institute in the 1990s. Vendors push purported aids to coders who must deal with myriad software and hardware platforms, standards, add-ins, APIs, security measures, and so on. I asked one such vendor, “If this confuses humans so much, does what you’re describing help explain why LLMs get confused and hallucinate?”

He considered the question for a moment, and replied, “Well, we [humans] created this complex development environment. Our AI models reflect our own mental processes, so I suppose yes, we’ve created an IT environment where LLM confusion is almost assured.”

The separate podcast about AI noted the recent event https://www.youtube.com/watch?v=n5GdhZpM7M8 in which an AI “was supposed to” shut itself down, but didn’t. Observers wondered whether the AI had somehow developed an instinct for self-preservation! But no, it was just that programmers had given the AI conflicting objectives. One instruction was for the AI to turn itself off when objective #1 was achieved. The AI, though, reasoned that if it shut down, it would be unable to achieve objective #2. So it didn’t shut down.
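The conflict is easy to reproduce in miniature. Here is a toy sketch of the failure – hypothetical logic of my own, not the code from the actual experiment – showing how a goal-driven agent told to shut down after objective #1, while still held responsible for objective #2, “reasons” its way past the shutdown instruction:

```python
# Toy sketch of conflicting objectives (hypothetical, not the real system).
# The agent is told: shut down once objective #1 is achieved.
# It is also told: achieve objective #2.

def choose_action(obj1_done: bool, obj2_done: bool) -> str:
    if obj1_done and not obj2_done:
        # The shutdown instruction fires here, but shutting down would
        # make objective #2 unachievable -- so a goal-maximizing agent
        # concludes it should keep running.
        return "keep running"
    if obj1_done and obj2_done:
        return "shut down"
    return "keep running"

print(choose_action(obj1_done=True, obj2_done=False))  # -> keep running
```

No self-preservation instinct required; the contradiction was built in by the programmers.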

Our failure to give AIs unambiguous goals is further proof that we ourselves are rarely clear about our own goals. Yet we somehow survive, as the AI did! One podcast panelist remarked that before we transfer our confused (and worse, our aggressive) nature into AIs, we should recast our whole approach – using AIs to make us better people. Or else teach AIs to do what we do, namely, just sort of muddle through when we’re not sure where we’re going.

The discussion segued into the scary way our data are scooped up when we use social media, and the resulting uncanny accuracy of corporations’ knowledge of our habits. Oddly, it’s also annoying when the profiles are in error – witness the internet’s unrelenting and senseless campaign to sell me a hockey stick. (What, I’m going to play ice hockey here in the desert?) And isn’t it odd that the prime purpose of “social” media is to target us as individuals?

Climate resilience prospects

The new venture presentations didn’t cure my climate pessimism, but they sure did reduce it. It’s great that so many scientist-entrepreneurs are tackling agricultural practices, water conservation, electric grid redesign, and power consumption accountability. (Many of them were scientists on entrepreneurial leave from Sandia National Laboratories.) Even greater to think that Albuquerque is just one city – similar entrepreneurial creativity and social responsibility must exist in dozens of other U.S. cities.

The problem is, these ventures have a hell of a time winning investment.

Venture capital

The Tech Talks featured a number of VC presentations and panels. Knowledgeable and personable guys (yeah, they were all men), but one seemed sadly ignorant of human factors, and others were conflicted about payback periods.

Human factors. One climate investor slotted companies’ expertise into just one of four “layers”: physical assets, protocols and standards, data, and applications. Why would a fifth, “human” layer be essential? Because, at least on the energy front…

...   Power use accountability. An email from my power company says the meter reader was not able to enter my yard to read the meter. Any child could open my gate! And I don’t have a dog. Neighborhood gossip holds that the power company lies, never having sent a meter reader in the first place.

...   Nuclear energy. The Kemeny report established that the Three Mile Island nuclear accident was due largely to insufficient operator training. As we come to rely on nuclear power for a green future, the human factor is central.

...   Texas grid failures. These recurring tragedies (people died in the 2021 winter freeze, and summer heat waves pose the same danger) are due to politicians’ refusal to link Texas to the national grid, and probably to their perception that if Texans use more electricity, they’ll consume less of the oil and gas that underpin the state’s economy.

Payback period. VCs are chasing cheap digital “solutions” that are helpful but only marginal in the overall climate resilience picture. They are shunning the big, essential problems – decarbonizing steel and aircraft manufacturing, or nuclear fusion – because those carry long payback periods. My Q&A with them went something like this:

FP: You’ve focused on risk and stages of development, but you’ve not mentioned payback period. Your colleague on the other panel said he won’t invest in ventures that attack the really big climate challenges. Please comment.

VC: Our limited partners want quick payback, so there are areas we cannot touch.

FP: I understand your situation. But today’s audience is here because we care about climate and sustainability. Do you expect us to sympathize with that kind of whining? Sustainability means long term.

VC: There are other kinds of investors that deal with different horizons.

FP: But you’re the ones who are here today. Are you going to take responsibility for swimming back upstream, to redesign the VC process to meet today’s needs?

VC: Sovereign funds – state money, for example – will sometimes allow a 20-year fund instead of a 10-year fund. There’s also a secondary market: if I invest in a company that doesn’t cash out in 10 years but still appears promising, other investors will take over the position.

In other words, no, these VCs are not going to take responsibility for their own processes. The answer about sovereign funds and secondary markets struck me as weak. Surfing the net, I did turn up this somewhat encouraging news https://apple.news/AA1D_zaacRXa_zhTpmQCdLQ.
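To see why the fund horizon bites so hard, here’s a back-of-envelope sketch. The numbers are my own illustration – the 20% target IRR is an assumption, not a figure anyone on the panel quoted – but the arithmetic is general: the exit multiple needed to hit a given annualized return grows exponentially with the holding period.

```python
# Back-of-envelope fund math (my illustration; 20% target IRR is assumed).
# The exit multiple required for a given annualized return compounds with
# the holding period, so long-payback ventures can't clear a 10-year bar.

def required_multiple(target_irr: float, years: int) -> float:
    """Exit multiple that delivers `target_irr` annualized over `years`."""
    return (1 + target_irr) ** years

for years in (10, 20):
    print(f"{years}-year hold at 20% IRR needs a "
          f"{required_multiple(0.20, years):.1f}x exit")
# 10-year hold at 20% IRR needs a 6.2x exit
# 20-year hold at 20% IRR needs a 38.3x exit
```

On that arithmetic, a standard 10-year fund structurally cannot back a 20-year payback – which is why the fund design itself, not any individual investor’s taste, is what needs rethinking.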

Altogether a fascinating two days!


Fred Phillips is Visiting Professor at SUNY Stony Brook, and President of TANDO Institute. He is Editor-in-Chief Emeritus of the international journal Technological Forecasting & Social Change, and the 2017 recipient of the Kondratieff Medal, awarded by the Russian Academy of Sciences. He is a Fellow of PICMET, the Portland International Center for Management of Engineering and Technology. His latest books are What About the Future? (Springer 2019), Smart City 2.0 (World Scientific 2023), and Learning and Teaching Aikido (World Scientific 2021).