Press "Enter" to skip to content

10 policy principles needed for artificial intelligence

Artificial intelligence is an area where regulation is necessary but can't be allowed to curtail innovation. Find out what the US Chamber believes needs to be done.

New policies need to be created for artificial intelligence (AI) in order to govern its use while allowing for innovation, according to the US Chamber’s Technology Engagement Center and Center for Global Regulatory Cooperation. 

“The advent of artificial intelligence will revolutionize businesses of all sizes and industries and has the potential to bring significant opportunities and challenges to the way Americans live and work,” said Tim Day, senior vice president, Chamber Technology Engagement Center, in a press release. The principles “serve as a comprehensive guide to address the policy issues pertaining to AI for federal, state, and local policymakers.”

SEE: Special report: Managing AI and ML in the enterprise (free PDF) (TechRepublic)

The Chamber also endorsed the Organization for Economic Co-operation and Development’s recommendations for AI.

“As leaders in the development and use of AI, the U.S. business community has a strong interest in supporting a global AI ecosystem,” said Sean Heather, senior vice president of International Regulatory Affairs, US Chamber of Commerce. “Foreign capitals are looking to promote trustworthy AI, without undermining innovation.”

Here are the 10 policies that the US Chamber recommends:

Recognize trustworthy AI is a partnership

Policies should foster public trust in AI technologies, advance their responsible development, deployment, and use, and encompass values such as transparency, explainability, fairness, and accountability. Governments must partner with the private sector, academia, and civil society to address public concerns associated with AI, including protection from harmful bias, human rights, and democratic values. Policy should be developed through a flexible, transparent, voluntary, and multi-stakeholder process.

Be mindful of existing rules and regulations

Activities aided by AI remain accountable under existing laws. Governments should maintain a sector-specific approach and remove or modify regulations that hinder AI development, deployment, and use. They should avoid creating a patchwork of AI policies at the subnational level and coordinate across governments to advance sound and interoperable practices.

Adopt risk-based approaches to AI governance

Governments should incorporate flexible, risk-based approaches grounded in use cases when governing the development, deployment, and use of AI technologies. A high-risk AI use case should face a greater degree of scrutiny than one where the potential harm to individuals is low. Policymakers should recognize the different roles companies play within the AI ecosystem and focus on addressing harms to individuals linked to the use of AI technologies. AI regulation should be appropriately and specifically tailored, with consideration given to any affected economic and social benefits.

Support private and public investment in AI research and development

Governments should encourage and incentivize investment in research and development (R&D), partner directly with AI-centric businesses, promote flexible governance frameworks including regulatory sandboxes and the use of testbeds, and fund R&D that spurs innovation in trustworthy AI. AI R&D advances in a global ecosystem through the collaboration of businesses, universities, and institutions across borders.

Build an AI-ready workforce

Governments should partner with businesses, universities, and other stakeholders to build a workforce suited to an AI economy, ensuring workers are prepared to use and adapt AI tools as needed. Policymakers should also take steps to attract and retain global and diverse talent.

Promote open and accessible government data

AI requires access to large and robust data sets to function. Governments should make their substantial amounts of data available and easily accessible in a structured, machine-readable format to accelerate AI development, while ensuring appropriate, risk-based cybersecurity and privacy protections. Governments should also improve the quality and usability of data through measures such as greater digitization, standardized documentation and formatting, and additional budgetary resources.

Pursue robust and flexible privacy regimes

Clear and consistent protections for personal privacy are a necessary component of AI governance. Robust but flexible data protection rules should be in place that enable the collection, retention, and processing of data for AI development, deployment, and use while ensuring that consumer privacy rights are preserved.

Advance intellectual property frameworks that protect and promote innovation

Governments must respect intellectual property (IP) protection and enforcement, and should support an innovation-oriented approach that recognizes the strengths of an open AI ecosystem. Governments must not require companies to transfer or provide access to AI-related intellectual property, such as source code, algorithms, and data sets.

Commit to cross-border data flows

Policies that restrict data flows, such as data-localization requirements, constitute market-access barriers that diminish AI-related investment and innovation and limit access to AI technologies. Governments should commit to maintaining the flow of data across international borders.

Abide by international standards

Policymakers should acknowledge and support the development of international standards. Governments should leverage industry-led standards on a voluntary basis to facilitate the use and adoption of AI technologies.


Image: AI (Artificial Intelligence) concept. metamorworks, Getty Images/iStockphoto

Source: TechRepublic