As artificial intelligence (AI) reshapes economies and societies, business leaders must consider how they will work with policymakers to govern the technology’s development. In the European Union, the recently adopted AI Act requires businesses to take precautionary measures depending on the risks associated with different use cases. Thus, using AI to engage in “social scoring” is deemed “unacceptable”, whereas AI-augmented e-mail filters come with “minimal risk”.

The success of this approach will depend on businesses contributing technical expertise and practical insights to strike a balance between promoting innovation and addressing societal concerns. Leaving regulation entirely to policymakers and a few powerful companies risks creating rules that serve only Big Tech’s interests, while sidelining other industry perspectives.

In the case of the EU’s AI Act, a lack of business participation in the drafting process has already left critical implementation details unresolved. For example, the law could be construed as regulating conventional statistical techniques such as linear regression, which is commonly used in the financial sector. If so, that would add an unnecessary compliance burden. Similarly, the law is ambiguous about which standard tools in drug development fall under its scope; such uncertainty could slow development and increase costs in an already heavily regulated industry.

Such issues can be avoided if CEOs from these sectors get more involved. Although the text of the AI Act is finalised, questions of interpretation, implementation, and enforcement are still evolving. The precise list of high-risk AI systems – the most important category for sectors ranging from health care to banking – may change over time, based on industry feedback.

Moreover, with rules and frameworks being formulated in the United States and other countries, as well as through international collaborations, business leaders need to broaden their scope.
They could make valuable contributions to what is quickly becoming a complex, multi-jurisdictional regulatory landscape.

Historically, public-private collaboration has been key to managing transformative technologies. During the Covid-19 pandemic, it struck a proper balance between innovation and safety while accelerating vaccine development. Similarly, the nuclear energy industry’s early engagement with regulators yielded rules for small modular reactors that reduced costs, streamlined licensing, and harmonised standards, enabling companies to expand into new markets, attract investment, and improve their competitive position – a notable departure from the sector’s traditionally burdensome regulatory landscape.

In both cases, regulatory frameworks benefited from real-world input. Yet in the case of AI, too many companies remain on the sidelines, heightening the risk of poorly designed rules that hinder progress. This absence of business engagement does not reflect a lack of opportunity. Only 7% of corporate participants invited to the EU’s drafting process for its General-Purpose AI Code of Practice turned up, leaving NGOs and academics to dominate the discussions. Meanwhile, a recent BCG survey found that 72% of executives say their organisations are not fully prepared for AI regulation.

If you are a CEO, what should you do? Since AI regulation and deployment are primarily sector-specific processes, a first step is to align with your industry so that you are all speaking in unison. That is the best way to make yourself heard alongside tech giants that are spending more than $100mn per year lobbying policymakers in Brussels (with Meta leading the pack).

But AI regulation is not only about erecting guardrails and setting limits.
In addition to building industry coalitions and agreeing on common AI standards, CEOs need to contribute to the full set of digital regulations that may affect their industries.

As part of its broader digital strategy, the European Commission has implemented four other major laws and introduced the concept of “data spaces”. These are supposed to allow data to flow securely within the EU and across sectors, while maintaining compliance with EU laws. It now falls to industry to build these channels (with public funding). CEOs who align their corporate strategies with this emerging regime will be best positioned to capitalise on sector-specific opportunities.

Executives should also identify and establish relationships with top policymakers and other influential stakeholders in their respective sectors, and at all levels of governance. These include the European Data Protection Board and national AI regulatory bodies in Europe, as well as agencies like the Federal Trade Commission and the Department of Justice in the United States. In each case, it is best to play the long game by building stable relationships based on expertise and trust, not transactional exchanges.

To support these efforts, CEOs should have a specialised team dedicated solely to regulatory engagement. Simply rejecting proposed regulations is not an option, so defining fair trade-offs is key. Corporate leaders should be prepared to respond with clear, actionable alternatives presented in policymakers’ language, not industry jargon. For example, banks could propose that assessments of creditworthiness be exempted from the AI Act’s high-risk designation, on the grounds that these assessments strike an appropriate balance between innovation and accountability, and could reduce costs and make financing more available to consumers.

AI regulation is not merely a compliance exercise. Industry leaders have an opportunity to shape rules that directly affect innovation and operations.
By remaining disengaged, businesses risk allowing regulations to evolve without their input, leading to frameworks that are disconnected from operational realities. We do not want an environment in which regulators have overreacted to theoretical risks at the expense of practical progress.

Just as the Industrial Revolution demanded new rules to govern transformative technologies, advances in AI call for guardrails. Business leaders have always had important contributions to make at such moments, and this one is no different. — Project Syndicate

• Sylvain Duranton is Global Leader of BCG X. Kirsten Rulf is a partner and associate director at Boston Consulting Group.