
Researching the public mental model of AI-labeled technology

Documented harmful misuse of AI-labeled technology has grown in recent years. As a cognitive neuroscientist aware of how contextual information shapes the way people perceive and use objects, I wanted to understand and document how neuroscience concepts affect public attitudes and beliefs about the capabilities and limitations of AI-labeled technology. The result is an open-access publication showing how the neuroscience community influences AI technology use; as of December 2022, it has been cited multiple times and discussed at high-profile academic conferences. See more neuroscience-related work on my Neuroscience page.

Goals

Design research goals
  • Create a POV communication about the potential misuses of neuroscience terminology in Big Tech based on existing scholarly work


Attitudinal research goals
  • Understand how visual and text descriptions influence broad, public mental models of technology

Methods

  • Literature review

Crucial insights

  • The general public is unclear about what is and is not "artificial intelligence", as well as the capabilities and limitations of this technology

  • Linguistic framing sets users' and affected non-users' expectations about the appropriate use of different technologies

  • For decades, technologists have been wary of anthropomorphizing AI, concerned that it can lead to misuse

Research impact

Strategic impact
  • The work was recognized by a future employer, and led to new design-based projects for improving transparency in content design


Stakeholder impact
  • Self and business stakeholders: I added another study to my personal publication record, and the work eventually led to new design projects at a future employer

  • Societal stakeholders: The research publication has been cited 5 times as of December 2022, and highlighted in a conference talk by a high-profile scientist; users and affected non-users may benefit from the call for greater tech transparency in this project


Product impact
  • This is difficult to measure, but the published POV research is intended to influence AI-labeled products in general and to improve users’ and affected non-users’ understanding of how they work

The resulting POV of this project is meant to improve technological transparency at all levels of academia and business, and to question whether design images like the one above are really helpful for user experience. The image on the right is modified from Bender, 2022.

What I learned

  • Neuroscience metaphors are everywhere in AI design -- not only in visuals and content, but also in the way researchers communicate with each other and potential users

  • You have to find ways to work with existing mental models -- they are difficult to overhaul, even if they don't seem helpful to the user
