A huge industry has emerged in recent years as China, the United States, the United Kingdom, and the European Union have made the safety of artificial intelligence (AI) a top priority. Obviously, any technology – from cars and pharmaceuticals to machine tools and lawnmowers – should be designed as safely as possible (one wishes that more scrutiny had been brought to bear on social media during its early days).

But simply raising safety concerns isn’t enough. In the case of AI, the debate is focused far too much on “safety against catastrophic risks due to AGI (Artificial General Intelligence),” meaning a superintelligence that can outperform all humans in most cognitive tasks. At issue is the question of “alignment”: whether AI models produce results that match their users’ and designers’ objectives and values – a topic that leads to various sci-fi scenarios in which a superintelligent AI emerges and destroys humanity.

The best-selling author Brian Christian’s The Alignment Problem is focused mostly on AGI, and the same concerns have led Anthropic, one of the main companies in the field, to build models with their own “constitutions” enshrining ethical values and principles.

But there are at least two reasons why these approaches may be misguided. First, the current safety debate not only (unhelpfully) anthropomorphises AI; it also leads us to focus on the wrong targets. Since any technology can be used for good or bad, what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subject to.

No amount of safety research would have prevented a car from being used as a weapon at the white supremacist rally in Charlottesville, Virginia, in 2017. If we accept the premise that AI systems have their own personalities, we might conclude that our only option is to ensure that they have the right values and constitutions in the abstract. But the premise is false, and the proposed solution would fall far short.

To be sure, the counterargument is that if AGI were ever achieved, it really would matter whether the system was “aligned” with human objectives, because no guardrails would be left to contain the cunning of a superintelligence. But this claim brings us to the second problem with much of the AI safety discussion. Even if we are on the path to AGI (which seems highly unlikely), the most immediate danger would still be the misuse of non-superintelligent AI by humans.

Suppose that there is some time (T) in the future (say, 2040) when AGI will be invented, and that until this time arrives, AI systems without AGI will remain non-autonomous. (If they were to become autonomous before AGI, let that day be T.) Now consider the situation one year before T. By that point, AI systems will have become highly capable (by dint of being on the cusp of superintelligence), and the question we would want to ask is: Who is in control at that point?

The answer would of course be human agents, either individually or collectively in the form of a government, a consortium, or a corporation. To simplify the discussion, let me refer to the human agents in charge of AI at this point as Corporation X. This company (it could also be more than one company, which might be even worse, as we will see) would be able to use its AI capabilities for any purpose it wants. If it wanted to destroy democracy and enslave people, it could do so. The threat that so many commentators impute to AGI would already have arrived before AGI.

In fact, the situation would probably be worse than this description, because Corporation X could bring about a similar outcome even if its intention was not to destroy democracy. If its own objectives were not fully aligned with democracy (which is inevitable), democracy could suffer as an unintended consequence (as has been the case with social media).

For example, inequality exceeding some threshold may jeopardise the proper functioning of democracy, but that fact would not stop Corporation X from doing everything it could to enrich itself or its shareholders. Any guardrails built into its AI models to prevent malicious use would not matter, because Corporation X could still use its technology however it wants.

Likewise, if there were two companies, Corporation X and Corporation Y, that controlled highly capable AI models, either one of them, or both, could still pursue aims that are damaging to social cohesion, democracy, and human freedom. (And no, the argument that they would constrain each other is not convincing. If anything, their competition could make them even more ruthless.)

Thus, even if we get what most AI safety researchers want – proper alignment and constraints on AGI – we will not be safe. The implications of this conclusion should be obvious: We need much stronger institutions for reining in the tech companies, and much stronger forms of democratic and civic action to keep governments that control AI accountable. This challenge is quite separate and distinct from addressing biases in AI models or their alignment with human objectives.

Why, then, are we so fixated on the potential behaviour of anthropomorphised AI? Some of it is hype, which helps the tech industry attract more talent and investment. The more that everyone is talking about how a superintelligent AI might act, the more the public will start to think that AGI is imminent. Retail and institutional investors will pour money into the next big thing, and tech executives who grew up on sci-fi depictions of superintelligent AI will get another free pass. We should start paying more attention to the more immediate risks. — Project Syndicate

POINTS TO PONDER

Misguided focus on AGI safety: The current AI safety debate is overly focused on the risks of Artificial General Intelligence (AGI), neglecting the immediate dangers posed by human misuse of non-superintelligent AI.

Control and regulation: The ultimate risk lies in who controls AI technology, their objectives, and the regulations governing them, rather than the intrinsic safety of the AI itself.

Need for stronger institutions: Stronger institutions and democratic action are needed to rein in tech companies and hold AI-controlling governments accountable, addressing immediate risks rather than hypothetical future AGI threats.

  • Daron Acemoglu, Institute Professor of Economics at MIT, is co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.