Miner AS, Milstein A, Hancock JT. (2017). Talking to machines about personal mental health problems. Journal of the American Medical Association. 318(13): 1217-1218. doi: 10.1001/jama.2017.14151
The authors describe the use of artificial intelligence programs that converse with people as if they were human (conversational agents) in mental health care. Recent advances have improved the quality of conversational agents and made their implementation in mental health care more feasible. Conversational agents may be well suited to mental health care because, more so than in physical health care, diagnosis and treatment often rely on conversation between patient and professional. Research has found that people responded more candidly about mental health-related symptoms when they believed a conversational agent was controlled by AI rather than by a therapist. These findings highlight a potential disadvantage of developing conversational agents that are indistinguishable from humans. There are also safety and ethical concerns with deploying conversational agents to discuss mental health care with people who cannot tell whether the agent is human or AI controlled. The authors raise additional concerns for the future development and implementation of conversational agents. First is the risk that conversational agents may respond inappropriately to sensitive topics, particularly early in implementation. Second, users may have expectations of privacy that are incongruent with the program’s use of their information (e.g., tracking and storing information, using information for machine learning). Finally, the authors emphasize the need to better understand users’ perceptions of who they are talking to when using a conversational agent (i.e., the imagined audience), as this imagined audience can influence user expectations (e.g., of privacy, confidentiality, and the reliability of information).