Fortunately, both the AI industry and US President Joe Biden’s administration seem to be taking these concerns seriously. Biden’s October 2023 executive order on AI, in particular, has been rightly praised for focusing on responsible innovation. By facilitating the development of crucial standards, tools, tests, and transparency requirements, it could help establish accountability, discourage illegal and dangerous applications, and guard against disinformation, privacy violations, and intellectual-property theft.
As a transformative general-purpose technology, AI is likely to have a far-reaching impact on society and the labour market. Consequently, the US government must study the technology’s labour implications and act rapidly and decisively to protect workers against potential disruptions.
But, in addition to protecting workers and consumers against potential risks, the government should also play a more active role in promoting the development of AI applications that serve the public good. To achieve this, the United States could leverage its significant and growing technological lead.
Globally, roughly $50bn in venture capital has been invested in the sector in the past year alone. The AI boom is projected to boost productivity by as much as 0.6% annually, enabling the global economy to keep growing even as workforce growth slows.
The US has more AI-related startups than the rest of the world combined, and the current US stock-market rally can be at least partly attributed to optimistic predictions about AI’s future. With a market capitalisation of $3.3tn, chipmaker Nvidia alone is worth more than the entire German stock market. Meanwhile, companies like Google, Microsoft, Facebook, Apple, Tesla, and IBM are spending billions of dollars to integrate AI into their existing products and services, and the Boston Consulting Group expects AI to account for 20% of its own revenues in 2024. Even the electricity sector is growing rapidly, fuelled by the enormous power needs of large language models.
AI has also made significant inroads into numerous non-technology sectors. In the health-care industry, it is used for diagnostics, personalised medicine, patient management, administrative tasks, and pharmaceutical research. The financial sector employs AI technologies for fraud detection, risk management, and personalised services. In manufacturing, these technologies enhance supply-chain optimisation, equipment maintenance, and quality control, while agriculture companies integrate AI into precision farming, crop monitoring, pest control, and predictive analytics. And law firms increasingly rely on AI for document review, research, contract analysis, and compliance management.
Driven primarily by market forces with minimal government intervention, these sectors are beginning to harness AI’s potential to improve efficiency, reduce costs, and deliver better products. But the government still has a crucial role to play in advancing the integration of AI into essential public services like health care, transportation, and education.
While some public-policy applications will resemble private-sector uses like fraud detection, procurement, risk management, logistics, traffic management, and software development, others will require careful adaptation to meet government agencies’ specific needs. These include data analytics and visualisation, customer service and support, and community input, all of which necessitate greater public-private co-operation and a massive effort to recruit and train AI talent.
As is often the case in new technological arenas, California is at the forefront of global efforts to promote AI for the common good. In May, the state hosted a joint AI conference with Stanford University and the University of California, Berkeley, and launched a six-month pilot program that aims to evaluate the suitability of generative AI tools for services like customer support, highway traffic management, and public safety.
Governments can also play an important role in facilitating research and development. The US already spends $95bn annually on defence and national-security R&D and $61bn on health-related research – two areas that will be dramatically affected by AI. Notably, the government has vast amounts of data, both public and private, that could be harnessed for open-source applications aimed at improving traffic monitoring, weather analytics, disaster prevention and response, economic statistics, and public health.
Of course, policymakers must ensure that these data are accurate, unbiased, and used responsibly. While this will not be easy, as evidenced by the bungled rollout of the new student financial-aid form (FAFSA), it is conceivable that individuals’ information will soon be pre-filled into government forms, automatically enrolling them in certain public programmes while allowing them to opt out.
Perhaps the most promising public-policy option is the integration of AI into training and education programmes. Recent innovations in generative AI have made personalised, one-on-one tutoring scalable and accessible, and studies suggest that these tools could help bridge the gap between low- and high-skilled workers. Khan Academy’s Khanmigo, a tutoring bot for schools, is a prime example of AI’s potential educational benefits.
To be sure, like most other technological advances, AI can have both positive and negative effects. Given the right policies and talent, it could fuel innovation, enhance competitiveness, boost productivity, and improve public-sector efficiency. But to unlock its full potential, the US government must ensure that AI is used to serve the common good. States like California are already leading the way. — Project Syndicate
- Lenny Mendonca, Senior Partner Emeritus at McKinsey & Company, is a former chief economic and business adviser to Governor Gavin Newsom of California and chair of the California High-Speed Rail Authority.