OpenAI CEO Sam Altman said therapy sessions with ChatGPT won’t necessarily always remain private.
He said there aren’t currently any legal grounds to protect sensitive, personal information someone might share with ChatGPT if a lawsuit requires OpenAI to share the information.
Altman made the statement during a sit-down with Theo Von for his podcast “This Past Weekend w/ Theo Von” at OpenAI’s San Francisco office. Von opened with a question about what legal systems are currently in place around AI, to which Altman responded, “we will certainly need a legal or a policy framework for AI.”
He went on to point to a specific legal gray area in AI — people using the chatbot as their therapist.
“People talk about the most personal s**t in their lives to ChatGPT,” Altman said. “People use it — young people especially use it — as a therapist, a life coach.”
“Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it, there’s doctor-patient confidentiality, there’s legal confidentiality. And we haven’t figured that out yet for when you talk to ChatGPT.”
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that,” Altman said. “And I think that’s very screwed up.”
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,” he added. “And no one had to think about that even a year ago. And now I think it’s this huge issue of like, how are we going to treat the laws around this?”
Altman said the issue needs to be addressed “with some urgency,” adding that the policymakers he’s spoken to agree.
Von responded that he doesn’t talk to ChatGPT often because of this privacy issue.
“I think it makes sense…to really want the privacy [and] clarity before you use it a lot,” Altman responded.
Legal privacy concerns aren’t the only drawback to using AI chatbots as therapists. A recent study from Stanford University found that AI therapy chatbots express stigma toward, and make inappropriate statements about, certain mental health conditions.
The researchers concluded that AI therapy chatbots in their current form shouldn’t replace human mental health providers due to their bias and “discrimination against marginalized groups,” among other reasons.
“Nuance is [the] issue — this isn’t simply ‘LLMs for therapy is bad,’ but it’s asking us to think critically about the role of LLMs in therapy,” Nick Haber, senior author of the study, told the Stanford Report. “LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be.”