Framework for Incentivizing Responsible Artificial Intelligence Development and Use

This post is part of The Catalyst newsletter series.
Leading AI companies agree that powerful AI systems can cause harm and have acknowledged the need for federal regulation.

The Center for Humane Technology (CHT) developed the Framework for Incentivizing Responsible Artificial Intelligence Development and Use to provide a resource for policymakers tackling the challenge of regulating AI. 

The framework offers a set of guiding principles that lawmakers can apply when crafting new AI policies, certifying the responsible development and use of emerging AI systems, promoting accountability to individual and business consumers, and ensuring that the companies behind these products prioritize safety over profit.

“The harms that social media has caused on our society — including by undermining truth online and eroding children’s mental health — are well documented. We cannot let the same happen with AI, and this framework provides policymakers with the tools to prevent history from repeating itself.” – Casey Mock, CHT’s Chief of Policy and Public Affairs

Framework Overview

CHT’s liability framework combines a products liability approach with a consumer product safety approach. By drawing on both areas of law, the framework is both preventive and remedial: it pushes developers to proactively address known risks and strengthen the safety of their products, while ensuring accountability for those who develop and deploy systems unsafely.

The proposed framework, which covers the riskiest AI systems – defined both by capability and by use case – developed or deployed in the U.S., builds upon historical models of regulation and accountability by:

  1. Adopting both a products liability- and a consumer products safety-type approach for “inherently dangerous AI,” inclusive of the most capable models and those deployed in high-risk use cases
  2. Clarifying that inherently dangerous AI is, in fact, a product and that developers assume the role and responsibility of a product manufacturer, including liability for harms caused by unsafe product design or inadequate product warnings
  3. Requiring reporting by both developers and deployers, including an “AI Data Sheet” to ensure that users and the public are aware of the risks of inherently dangerous AI systems
  4. Providing for both a limited private right of action and government enforcement
  5. Providing limited protections for developers and deployers who meet their risk-management and reporting requirements, further protections for deployers who use AI products within their terms of use, and exemptions for small-business deployers. Such protections are necessary to promote the safe development of AI, realize AI’s full benefits, and ensure U.S. international competitiveness

Published on September 12, 2024