Is AI going to kill us or not?

Even as people tell us all of the amazing things AI is going to do for us, just as many people are telling us how AI is going to destroy us. Which is confusing enough (I mean, can’t something just be good for a change?), except that in many cases, the people leading the doomsayers are some of the very same people who made AI in the first place. Which is like Henry Ford saying, “Sure, cars are great, but they’re probably gonna kill over 42,000 people a year, you know, so…”

To get some clarity, I dug into the open letter that the Future of Life Institute (FLI) published (which has over 30,000 signatories, including everyone from Steve Wozniak to Yuval Noah Harari to Elon Musk), along with a supporting document of “principles” the organization created (5,700 signatories), some FAQs they provided, and information from a few other places, all in order to figure out what the hell was going on. As you might imagine, there was a lot to digest and a significant amount that went over my head (if I may mix those body metaphors). But a few things leapt out at me.

And the short answer is yes. AI is going to kill us or not.

The longer answer is, um, longer. And it starts with slaughterbots.

Number 18 in the FLI’s principles is the following:

“An arms race in lethal autonomous weapons should be avoided.”

I don’t even know what that means and I’m already afraid.

“Autonomous Weapons Systems (AWS) are lethal devices that identify potential enemy targets and independently choose to attack those targets on the basis of algorithms and AI.

“The U.S. Department of Defense described an autonomous weapons system as a ‘weapons system that, once activated, can select and engage targets without further intervention by a human operator.’

“Lethal autonomous weapons and AWS currently exploiting AI, under development and/or already employed, include autonomous stationary sentry guns and remote weapon stations programmed to fire at humans and vehicles, killer robots (also called ‘slaughter bots’), and drones and drone swarms with autonomous targeting capabilities.”

(From “The weaponization of artificial intelligence: What the public needs to be aware of” by Birgitta Dresp-Langley, Director of Research at the Centre National de la Recherche Scientifique)

Okay, now I know and I don’t feel any better.

Obviously I concur with the FLI – an arms race in these things should definitely be avoided. Will it be? Well, if the history of the world is any indication, no. Look, Alfred Nobel created dynamite and thought it would end war. Oppenheimer felt the same way about the bomb. You think this is gonna be any different? Me neither. Chalk one up for AI killing us – or at least, for giving us humans one more new way to kill ourselves.

(Oh, and did you notice how I blithely passed over the fact that these things already exist? Pretty slick, huh? And, you know, terrifying. So maybe chalk two up for AI killing us.)

How about our jobs? Is AI going to take our jobs?

Well, maybe, kinda, sorta, no?

“Frey and Osborne (2013) estimate that 47% of total US employment is at risk of losing jobs to automation over the next decade.

“Bowles (2014) uses Frey and Osborne’s (2013) framework to estimate that 54% of EU jobs are at risk.”

(From “The impact of artificial intelligence on growth and employment” by Ethan Ilzetzki, Associate Professor, London School of Economics (with Suryaansh Jain))

“Artificial intelligence (AI) could replace the equivalent of 300 million full-time jobs, a report by investment bank Goldman Sachs says.”

(From “AI could replace equivalent of 300 million jobs – report” by Chris Vallance)

That all looks bad, but it’s actually a mixed bag. The general consensus – with a healthy dose of caveats – is that the net-net is to the good: there will be more employment, more jobs, and more revenue. But that’s overall.

“The World Economic Forum concluded in October 2020 that while AI would likely take away 85 million jobs globally by 2025, it would also generate 97 million new jobs in fields ranging from big data and machine learning to information security and digital marketing.” (https://cepr.org/voxeu/columns/impact-artificial-intelligence-growth-and-employment)

That’s obviously a net gain of 12 million jobs, but it is ridiculous to think that all of the 85 million job losers will get jobs in the new fields. Or even that the new jobs will be spread evenly across the geography of the job losses. AI is a disruptive technology. It will require people to retrain themselves for new jobs in a new economy. You know, like the internet did. (*cough* *cough*) And if you think that’s going to be a simple and obvious and automatic thing, I suggest you ask your local coal miner how his new career installing solar panels is going.

In short, yes, it will take the jobs of some of us. But it will also employ others of us. And, if any of the projections are accurate, it will employ more of us than are employed now. Which is a good thing, right? Unless you’re not able to retrain or get a new job. Which would make it a bad thing.

Oh, and speaking of “bad things”, the FLI weighs in on an aspect of “retraining” to be afraid of that I hadn’t thought about. Number 22 on their list of principles is:

“Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.”

This starts to get to the heart of what motivated the FLI and its signatories in the first place, and why they’ve asked all of the labs for a six-month “pause on development”. They are concerned about the ability of AI systems to self-improve in ways that are good for them (that is, for the AI) but harmful for humans. In other words, if humans are not there to actively monitor the “improvements”, the AI could “improve” itself right out of needing humans around at all. Think of this as a charming combination of “autonomous weapons systems” and “job insecurity” applied to every aspect of your life.

“Oh Martin, you’re exaggerating. You old copywriters are so dramatic, looking at every exciting technological advancement as a disaster. You probably would have complained about fire when the cavemen discovered it. You just need to calm down.”

Okay, but there’s this thing called AGI – Artificial General Intelligence:

“Current AI systems are becoming quite general and already human-competitive at a very wide variety of tasks. This is itself an extremely important development.

“Many of the world's leading AI scientists also think more powerful systems – often called ‘Artificial General Intelligence’ (AGI) – that are competitive with humanity's best at almost any task, are achievable, and this is the stated goal of many commercial AI labs.

“From our perspective, there is no natural law or barrier to technical progress that prohibits this. We therefore operate under the assumption that AGI is possible and sooner than many expected.”

(From “FAQs about FLI’s Open Letter Calling for a Pause on Giant AI Experiments”)

In other words, AI is just the tip of the iceberg – the cute, funny thing we use to make goofy memes that make our friends laugh on social media – and it’s opening the door for the 800-pound gorilla that will tear your head off.

And this comes literally from the guys who made the “cute funny thing” in the first place. Chalk another one up for the doomsayers.

But here’s why you should not be a doomsayer. In fact, here’s why you should actually be optimistic about the future of AI (if you can believe it).

“Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.”

This is buried deep in the FLI’s “principles” document, and it’s sort of stunning. First, it shows a level of self-awareness that is rare in creative people of any stripe. I mean, imagine spending a significant portion of your life thinking about theoretical stuff that most people didn’t understand, which ultimately leads to the creation of a technology that is incredibly innovative and breakthrough and world-changing. All of a sudden, everything you’d been working on for years is in the news and people know who you are – and then you’re confronted with the fact that what you’d been doing as a lifelong intellectual exercise has resulted in something that could end life on earth as we know it. So you decide to try to make sure that never happens.

People just don’t do that. Zuckerberg didn’t say, “Yeah, it’s possible that this little thing I’m whipping up in my dorm room will be used by authoritarians and other power-crazed assholes to try to steal elections and spread disinformation and destroy lives, and so it’s on me to make sure that doesn’t happen”. And Tim Berners-Lee didn’t say at the dawn of the internet, “Yeah, I think this could be really great, but I’m accepting the responsibility for making sure it doesn’t fuck everything up.”

But more importantly, it’s remarkable because it offers us a new path to success. One not born from the typical Manichean yes/no, black/white, AI-is-good/AI-is-bad world that 99% of what we do lives in.

A third way, as it were, that incorporates our humanity and our technology, working together, to meet these challenges.

Which frankly is the only way success ever really happens anyway. Just ask Oppenheimer.