The CLeAR Documentation Framework for AI Transparency: Recommendations for Practitioners & Context for Policymakers

  • Kasia Chmielinski,
  • Sarah Newman,
  • Chris N. Kranzinger,
  • Michael Hind,
  • Margaret Mitchell,
  • Julia Stoyanovich,
  • Angelina McMillan-Major,
  • Emily McReynolds,
  • Kathleen Esfahany,
  • Audrey Chang,
  • Maui Hudson

Harvard Kennedy School Shorenstein Center discussion paper

This report introduces the CLeAR (Comparable, Legible, Actionable, and Robust) Documentation Framework, which offers guiding principles for AI documentation. The framework is designed to help practitioners and others in the AI ecosystem weigh the complexities and tradeoffs involved in developing documentation for datasets, models, and AI systems (which contain one or more models, and often other software components). Documentation of these elements is crucial and serves several purposes, including: (1) supporting responsible development and use, as well as mitigation of downstream harms, by providing transparency into the design, attributes, intended use, and shortcomings of datasets, models, and AI systems; (2) motivating dataset, model, or AI system creators and curators to reflect on the choices they make; and (3) facilitating dataset, model, and AI system evaluation and auditing. We assert that documentation should be mandatory in the creation, usage, and sale of datasets, models, and AI systems.

This framework was developed with the expertise and perspective of a team that has worked at the forefront of AI documentation in both industry and the research community. As the need for documentation in machine learning and AI becomes more apparent and its benefits more widely acknowledged, we hope the framework will serve as a guide for future AI documentation efforts and provide context and education for regulators working toward AI documentation requirements.