Overcoming Failures of Imagination in AI Infused System Development and Deployment
- Margarita Boyarskaya,
- Alexandra Olteanu,
- Kate Crawford
In the Navigating the Broader Impacts of AI Research Workshop at NeurIPS 2020
NeurIPS 2020 requested that research paper submissions include impact statements on ‘potential nefarious uses and the consequences of failure.’ When researching, designing, and implementing systems, however, a key challenge to anticipating risks is overcoming what Clarke (1962) called ‘failures of imagination.’ The growing research on bias, fairness, and transparency in computational systems aims to illuminate and mitigate harms, and could thus help inform reflection on the possible negative impacts of particular pieces of technical work. The prevalent notion of computational harms, narrowly construed as either allocational or representational harms, does not fully capture the context-dependent and unobservable nature of harms across the wide range of AI-infused systems. The current literature also tends to draw on a small set of examples of harms to motivate algorithmic fixes, overlooking the wider scope of probable harms and the ways these harms may affect different stakeholders. System affordances and possible usage scenarios may further exacerbate harms in unpredictable ways, as they determine how much control stakeholders, including non-users, have over how they interact with a system’s outputs. To effectively assist in anticipating and identifying harmful uses, we argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, uses, and outputs, as well as viable proxies for assessing harms in the widest sense.
Failures of imagination: Discovering and measuring harms in language technologies
Auditing natural language processing (NLP) systems for computational harms remains an elusive goal. Doing so, however, is critical given the proliferation of language technologies and applications enabled by increasingly powerful natural language generation and representation models. Computational harms arise not only from what content people produce, but also from how content is embedded, represented, and generated by large-scale, sophisticated language models. This webinar covers the challenges of locating and measuring potential harms that language technologies, and the data they ingest or generate, might surface, exacerbate, or cause. Such harms range from more overt issues, like surfacing offensive speech or reinforcing stereotypes, to more subtle ones, like nudging users toward undesirable patterns of behavior or triggering memories of traumatic events.
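To make concrete how harms “due to how content is embedded” can be probed, here is a minimal sketch in the spirit of embedding association tests (e.g., Caliskan et al., 2017). The vectors, vocabulary, and the `association_gap` helper are all hypothetical toy constructions for illustration; an actual audit would load embeddings from the model under test and use validated word lists.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(target, group_a, group_b, vectors):
    """Mean similarity of `target` to group A minus to group B.

    A persistent nonzero gap across many target words can signal a
    stereotypical association encoded in the representation itself.
    """
    sims_a = [cosine(vectors[target], vectors[w]) for w in group_a]
    sims_b = [cosine(vectors[target], vectors[w]) for w in group_b]
    return float(np.mean(sims_a) - np.mean(sims_b))

# Toy 3-d vectors for illustration only; a real audit would load
# embeddings from the model under test.
vectors = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.2, 0.8, 0.4]),
    "he":     np.array([1.0, 0.0, 0.2]),
    "she":    np.array([0.0, 1.0, 0.2]),
}

for occupation in ("doctor", "nurse"):
    gap = association_gap(occupation, ["he"], ["she"], vectors)
    print(f"{occupation}: association gap (he - she) = {gap:+.3f}")
```

The design choice worth noting is that this probes the representation directly, before any downstream application, which is one way harms can surface even when no single generated output looks problematic.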
Join Microsoft researchers Su Lin Blodgett and Alexandra Olteanu, from the FATE Group at Microsoft Research Montréal, as they examine pitfalls in some state-of-the-art approaches to measuring computational harms in language technologies. For such measurements of harms to be effective, it is important to clearly articulate both 1) the construct to be measured and 2) how the measurements operationalize that construct. The webinar also surveys approaches practitioners can take to proactively identify issues that might not yet be on their radar, and thus to track and measure a wider range of issues.
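As a hypothetical illustration of the construct/operationalization distinction: suppose the construct is the ‘rate of offensive generations’ and one operationalization is the fraction of outputs flagged by some detector. In the sketch below, `measure_offensive_rate`, `keyword_detector`, and `BLOCKLIST` are all invented placeholders; the point is only that the construct and the detector that operationalizes it are separate choices, each of which must be articulated and validated.

```python
from typing import Callable, Iterable

def measure_offensive_rate(
    outputs: Iterable[str],
    detector: Callable[[str], bool],
) -> float:
    """Operationalize the construct 'rate of offensive generations'
    as the fraction of outputs flagged by `detector`.

    The construct (what we intend to measure) and the detector (how
    we measure it) are deliberately kept separate: swapping detectors
    changes the operationalization, not the construct.
    """
    outputs = list(outputs)
    if not outputs:
        return 0.0
    return sum(detector(text) for text in outputs) / len(outputs)

# Hypothetical, deliberately crude detector: a keyword blocklist.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def keyword_detector(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

sample_outputs = ["a harmless reply", "contains slur1 here"]
print(measure_offensive_rate(sample_outputs, keyword_detector))  # 0.5
```

Swapping the crude keyword detector for a trained classifier changes the operationalization without changing the construct, and any mismatch between the two (e.g., a detector that misses implicit stereotyping) directly biases the measurement.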
Together, you’ll explore:
- Possible pitfalls when measuring computational harms in language technologies
- Challenges to identifying what harms we should be measuring
- Steps toward anticipating computational harms
Resource list:
- A Critical Survey of “Bias” in NLP (publication)
- When Are Search Completion Suggestions Problematic? (publication)
- Social Data (publication)
- Characterizing Problematic Email Reply Suggestions (publication)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment (publication)
- Defining Bias with Su Lin Blodgett (podcast)
- Language, Power and NLP (podcast)
- Su Lin Blodgett (researcher profile)
- Alexandra Olteanu (researcher profile)
*This on-demand webinar features a previously recorded Q&A session and open captioning.
Explore more Microsoft Research webinars: https://aka.ms/msrwebinars