
Members of the European Parliament vote on artificial intelligence legislation in Strasbourg on June 14. Frederic Florin/AFP via Getty Images
Nine months ago, ChatGPT was released, capturing public attention unlike any innovation in recent memory. Excitement about AI's opportunities came with legitimate concerns about its potential negative impacts, as well as calls from industry and government for substantive, enforceable regulation. But the US's window to influence the global debate over how to regulate AI is closing fast.
While AI innovation is taking off in the US, other governments around the world are moving faster to shape future regulations. In November, the UK will host a global AI safety summit. By the end of 2023, the European Union is expected to finalize its AI Act, which would be the most comprehensive AI law to date. Japan has set out its own AI policy approach while leading G7 efforts to establish common standards for AI governance.
Enterprise software companies, which create many of these AI systems, began calling for US legislation nearly two years ago, and the need to act is growing more urgent.
There are basic goals everyone should agree on: AI, in any form, must not be used to commit illegal acts. It must not be used to compromise privacy, facilitate cyberattacks, promote discrimination, or cause physical harm. AI that is developed and deployed responsibly improves our lives and makes us safer.
Congress should take advantage of the significant work already done by government agencies, civil society advocates, and industry groups to identify the risks of using AI in various contexts and the concrete steps organizations can take to mitigate those risks. Although these proposals differ in important ways, together they form a basis for action.
There are signs that American lawmakers want to act. Several members of Congress are drafting or have already introduced AI-related legislation, and Senators Schumer, Young, Heinrich, and Rounds have launched a process aimed at producing bipartisan AI legislation "within months."
However, other leaders have suggested that Congress may never be able to pass meaningful AI legislation, given the complexity of the technology and lawmakers' limited understanding of it.
It is important to study a problem before acting, but study should not become an excuse for inaction on a major technology issue. No one fully knows all of AI's positive and negative effects, including the new implications of generative AI, but we do not need to wait to set ground rules that guard against the risks already evident today.
The law should require companies that develop or use AI systems in high-risk contexts to identify and mitigate potential harms from those systems. Specifically, it should require companies to conduct impact assessments for high-risk AI systems, so that those who develop and deploy AI can detect and address potential risks. Impact assessments are already widely used as an accountability mechanism in a range of other fields, from environmental protection to data security. The same approach can work for AI.
Setting thoughtful rules for AI is central to the vitality of our economy. Industries of every type and businesses of every size are finding ways to use AI to grow. The countries that best facilitate responsible, broad-based AI adoption will see the greatest economic and job growth in the coming years. But first, governments must establish strong laws that build trust and raise standards for how AI is used.
Passing any new law is no mean feat, and shaping a meaningful global debate is slow, difficult work. America must not squander its fast-closing opportunity to lead on AI legislation.
Victoria Espinel is the CEO of BSA | The Software Alliance, the global trade association representing the enterprise software industry.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.