The robots are coming…

This is the third post in my BNR column Unsolicited Advice! You can follow all of Unsolicited Advice via Spotify; this edition can be found here:

This column on BNR (Dutch)
This column on Spotify (Dutch)

(after feedback on previous posts, the links are now at the top instead of at the bottom)

Let’s talk about the AI Act, again…

The European Commission talked about the AI Act last week, and there are some good points in it, so, so far, so good. But… the framing in their press statements, including on Twitter/X, is remarkable:

Mitigating the risk of extinction from AI should be a global priority.

Erm… why are they suddenly talking about the risk of extinction due to AI (and not about things that were previously discussed, such as deepfakes)? It is a frequently heard argument in this area: the extinction risk. Because it is mentioned so much, and because AI is so elusive for many people, I thought it would be good to delve deeper into it today.

The risk of extinction due to AI is consistently mentioned by a number of thought leaders, including Elon Musk, the scientist Geoffrey Hinton, Sam Altman of OpenAI (the maker of ChatGPT), and the less well-known but very influential Eliezer Yudkowsky.

What exactly is the AI that people are so afraid of? We are not talking about a narrow AI like Deep Blue for chess, or the software in a self-driving car, each of which can only do one task, but about general intelligence, often called AGI: Artificial General Intelligence. It is generally accepted that AGI is much more difficult to create than a dedicated single-task AI. After all, we already have vacuuming robots with a bit of AI in them!

But the people who are now calling for a ban are experts, aren’t they? Well, let’s first look at the predictions of, for example, Hinton and Musk. They have been claiming for years that even narrow AI will work great. In 2016, Hinton said that we could stop training radiologists because they would be redundant within five years; that has not happened. Musk’s repeated predictions that self-driving cars and trucks are imminent have also proven to be wrong. So we cannot call them good forecasters.

How exactly will AI cause human extinction? It is striking that a realistic scenario is completely lacking. All we get are vague, Hollywood-like scenarios in which the AI exterminates us; there is no concrete story. This is a stark contrast to, for example, climate activists, who substantiate their stories about risks (existential or otherwise) with data, or virologists, who long warned about a Covid-like pandemic arising from zoonosis. But, people say, including former MP Kees Verhoeven: “it could be possible”. Yes, anything is possible; maybe FC Volendam will become champion this year and Max Verstappen will take up figure skating tomorrow. But it is truly remarkable that such vague, unsubstantiated statements receive so much attention.

Is the call to stop developing AI already having an effect? Well, it is also “funny” (in scare quotes) to see that the makers of AI do not stop making more of it. If they were really that concerned, they would simply delete their own AIs, right? Instead, Palantir’s Alex Karp said that defense should fully focus on AI, which sounds rather contradictory. Elon Musk was explicitly asked by scientist Deborah Raji why he keeps putting AI in self-driving cars if he considers it so dangerous. The fact that he did not respond to that question says a lot. Apparently it is not that dangerous after all!

But do you think it is good to keep developing AI like this? No, and that is what makes the matter so complex. I also think that we should be careful when using AI! But many risks that scientists and journalists point out are not mentioned in the EU discussions, or, as was the case last week, in the AI Insight Forum in the American Congress. A major risk, also called the “enshittification” of the internet by journalist Cory Doctorow, is that AIs will fill the internet with nonsense and we will no longer be able to distinguish true from false. That is no direct extinction risk, but it is already happening, and it prevents people from finding reliable information. Or the risk that AIs are trained on less data from certain groups of people: not only the well-known “facial recognition performs worse on Black people”, but also new research showing that GPT works much worse in Hungarian or Vietnamese. So when Western companies roll this out worldwide, some people immediately get much poorer quality. These risks (although fortunately they are partly covered in the AI Act) are much less in the news, and that is not surprising, because they hardly affect white Silicon Valley men. In fact, the only disaster that can affect the top 1% is a disaster that affects absolutely everyone. So viewed that way, it is actually logical that extinction worries them… what else would?

And then the advice to the European Commission, but also to the Dutch government, where something similar is happening with the AI petition: inform yourself broadly about the real risks, keep asking questions, and don’t listen to people who say one thing and do another!
