Opinion
Regulatory scrutiny grows as AI adds to global concerns
For years, the world’s biggest technology companies were largely able to resist government oversight; that’s changing fast
Already at work in products as diverse as toothbrushes and drones, systems based on artificial intelligence (AI) have the potential to revolutionise industries from healthcare to logistics.
The unstoppable rise of AI is bringing upheaval to the tech world and becoming a driving force across the global economy.
The AI frenzy got a fillip when OpenAI launched its hugely successful chatbot, ChatGPT, transforming how the public thinks about the technology.
But the complex, rapidly evolving field of AI raises legal, national security and civil rights concerns.
Of late, Microsoft and Apple have dropped plans to take board roles at OpenAI, a surprise decision that underscores growing regulatory scrutiny of Big Tech’s influence over AI.
Microsoft, which invested $13bn in the ChatGPT creator, will withdraw from the board, according to a Bloomberg report. Apple was due to take up a similar role, but an OpenAI spokesperson said the startup will have no board observers after Microsoft’s departure.
Regulators in Europe and the US have expressed concern about Microsoft’s sway over OpenAI, applying pressure on one of the world’s most valuable companies to show that it’s keeping the relationship at arm’s length.
Microsoft has integrated OpenAI’s services into its Windows and Copilot AI platforms and, like other big US tech companies, is banking on the new technology to help drive growth.
Microsoft isn’t being singled out. The UK is looking into Amazon.com’s $4bn collaboration with AI company Anthropic, expressing concern that large tech companies are using partnerships to “shield themselves from competition”.
The US is also probing Nvidia’s dominance over AI chips.
For years, the world’s biggest technology companies were largely able to resist government oversight. That’s changing fast, and not only where AI is concerned.
The US Department of Justice and 16 attorneys general have sued Apple, accusing the iPhone maker of violating antitrust laws by blocking rivals from accessing hardware and software features on its popular devices.
In Europe, regulators, who suspect Apple, Alphabet’s Google and Meta Platforms of failing to comply with new laws limiting their dominance of the digital economy, have begun investigations that could pave the way for hefty penalties.
Meanwhile, Meta’s Facebook, Alphabet’s YouTube and Amazon.com have been scrambling to comply with tougher European Union rules governing digital marketplaces and the policing of social media content.
The EU regulations have already changed the way consumers use their iPhones and other gadgets. A successful DoJ suit could make it easier for consumers to use rival products on Apple’s devices in its home country and largest market.
The biggest US tech companies including Microsoft as well as Nvidia, Alphabet and Amazon.com have poured tens of billions of dollars into AI businesses. While these investments and partnerships are a lifeline for the startups, regulators have expressed concern that they threaten to concentrate access to the most innovative large language models among the tech companies that already dominate other platforms.
Tech giants are also striking non-financial agreements. These include Apple’s partnership with OpenAI to bring ChatGPT to the iPhone, and Microsoft’s decision earlier this year to bring on Inflection AI’s Mustafa Suleyman and most of his staff from the OpenAI rival.
Big Tech, despite its overarching global reach, has long faced a widening trust deficit with both users and regulators.
Now, the rampant euphoria over AI has reopened the debate over whether technology does more good than harm.
Google, Microsoft, IBM and OpenAI have encouraged US lawmakers to implement federal oversight of AI, which they say is necessary to guarantee safety.
And AI is the subject of regulatory reviews worldwide, with companies and governmental bodies negotiating how to mitigate the potential harms without stifling innovation.