Objection 1: "Our use is our consent"
ChatGPT is the fastest-growing consumer application in history: It reached an estimated 100 million monthly active users just two months after it launched. There's no disputing that lots of people genuinely found it really cool. And it spurred the release of other chatbots, like Claude, which all sorts of people are getting use out of — from journalists to coders to busy parents who want someone (or something) else to make the goddamn grocery list.
Some claim that this simple fact — we're using the AI! — proves that people consent to what the major companies are doing.
It's a common claim, but a misleading one. Our use of an AI system is not tantamount to consent. By "consent" we typically mean informed consent, not consent born of ignorance or coercion.
Much of the public is not informed about the true costs and benefits of these systems. How many people are aware, for instance, that generative AI sucks up so much energy that companies like Google and Microsoft are reneging on their climate pledges?
Plus, we all live in choice environments that coerce us into using technologies we'd rather avoid. Sometimes we "consent" to tech because we fear we'll be at a professional disadvantage if we don't use it. Think about social media. I would personally not be on X (formerly known as Twitter) if not for the fact that it's seen as important for my job as a journalist. In a recent survey, many young people said they wish social media platforms had never been invented, but given that these platforms do exist, they feel pressure to be on them.
Even if you think someone's use of a particular AI system does constitute consent, that doesn't mean they consent to the bigger project of building artificial general intelligence (AGI).
This brings us to an important distinction: There's narrow AI — a system that's purpose-built for a specific task (say, language translation) — and then there's AGI. Narrow AI can be fantastic! It's helpful that AI systems can perform a crude copy edit of your work for free or let you write computer code using just plain English. It's awesome that AI is helping scientists better understand disease.
And it's extremely awesome that AI cracked the protein-folding problem — the challenge of predicting which 3D shape a protein will fold into — a puzzle that stumped biologists for 50 years. The Nobel Committee for Chemistry clearly agrees: It just gave a Nobel Prize to AI pioneers for enabling this breakthrough, which will help with drug discovery.
But that is different from the attempt to build a general-purpose reasoning machine that outstrips humans, a "magic intelligence in the sky," as OpenAI CEO Sam Altman has put it. While plenty of people do want narrow AI, polling shows that most Americans do not want AGI.
Which brings us to …
Objection 2: "The public is too ignorant to tell innovators how to innovate"
Here's a quote commonly (though dubiously) attributed to carmaker Henry Ford: "If I had asked people what they wanted, they would have said faster horses."
The claim here is that there's a good reason why genius inventors don't ask for the public's buy-in before releasing a new invention: Society is too ignorant or unimaginative to know what good innovation looks like. From the printing press and the telegraph to electricity and the internet, many of the great technological innovations in history happened because a few individuals decided on them by fiat.
But that doesn't mean deciding by fiat is always appropriate. Society has often let inventors do that, perhaps partly because of technological solutionism, partly because of a belief in the "great man" view of history, and partly because, well, it would have been pretty hard to consult broad swaths of society in an era before mass communications — before things like the printing press or the telegraph!
And while those inventions did come with perceived risks and real harms, they didn't pose the threat of wiping out humanity altogether or making us subservient to a different species.
For the few technologies we've invented so far that meet that bar, we have at least attempted to seek democratic input and establish mechanisms for global oversight, and rightly so. It's the reason we have a Nuclear Nonproliferation Treaty and a Biological Weapons Convention — treaties that, though they're a struggle to implement effectively, matter a lot for keeping our world safe.
It's true, of course, that most people don't understand the nitty-gritty of AI. So the argument here is not that the public should be dictating the minutiae of AI policy. It's that it's wrong to ignore the public's general wishes when it comes to questions like "Should the government enforce safety standards before a catastrophe occurs or only punish companies after the fact?" and "Are there certain kinds of AI that shouldn't exist at all?"
As Daniel Colson, the executive director of the nonprofit AI Policy Institute, told me last year, "Policymakers shouldn't take the specifics of how to solve these problems from voters or the contents of polls. The place where I think voters are the right people to ask, though, is: What do you want out of policy? And what direction do you want society to go in?"
Objection 3: "It's impossible to curtail innovation anyway"
Finally, there's the technological inevitability argument, which says that you can't halt the march of technological progress — it's unstoppable!
This is a myth. In fact, there are lots of technologies that we've decided not to build, or that we've built but placed very tight restrictions on. Just think of human cloning or human germline modification. The recombinant DNA researchers behind the 1975 Asilomar Conference famously observed a voluntary moratorium on certain experiments until safety guidelines could be agreed upon. We are, notably, still not cloning humans.
Or think of the 1967 Outer Space Treaty. Adopted by the United Nations against the backdrop of the Cold War, it barred nations from doing certain things in space — like storing their nuclear weapons there. Nowadays, the treaty comes up in debates about whether we should send messages into space with the hope of reaching extraterrestrials. Some argue that's dangerous because an alien species, once aware of us, might conquer and oppress us. Others argue it'll be great — maybe the aliens will gift us their knowledge in the form of an Encyclopedia Galactica!
Either way, it's clear that the stakes are incredibly high and all of human civilization would be affected, prompting some to make the case for democratic deliberation before intentional transmissions are sent into space.
As the old Roman proverb goes: What touches all should be decided by all.
That is as true of superintelligent AI as it is of nukes, chemical weapons, or interstellar broadcasts.
—Sigal Samuel, senior reporter