Artificial intelligence

Furor over use of ChatGPT in mental healthcare

January 11, 2023


SAN FRANCISCO – A digital mental health company called Koko has created an ethical flap by using GPT-3 technology without informing users. ChatGPT is a variant of GPT-3; both are OpenAI models that generate human-like text from prompts. Rob Morris (pictured) – co-founder of Koko, a free mental health service and nonprofit that partners with online communities to find and treat at-risk individuals – wrote in a Twitter thread on Friday that his company used GPT-3 chatbots to help develop responses to 4,000 users.

Morris said in the thread that the company tested a “co-pilot approach with humans supervising the AI as needed” in messages sent via Koko peer support, a platform he described in an accompanying video as “a place where you can get help from our network or help someone else.”

“We make it very easy to help other people and with GPT-3 we’re making it even easier to be more efficient and effective as a help provider,” Morris said in the video.

Koko users were not initially informed the responses were developed by a bot, and “once people learned the messages were co-created by a machine, it didn’t work,” Morris wrote on Friday.

“Simulated empathy feels weird, empty. Machines don’t have lived, human experience so when they say, ‘that sounds hard’ or ‘I understand’, it sounds inauthentic,” Morris wrote in the thread. “A chatbot response that’s generated in 3 seconds, no matter how elegant, feels cheap somehow.”

However, on Saturday, Morris tweeted “some important clarification.”

“We were not pairing people up to chat with GPT-3, without their knowledge. (in retrospect, I could have worded my first tweet to better reflect this),” the tweet said.

“This feature was opt-in. Everyone knew about the feature when it was live for a few days.”

Morris said Friday that Koko “pulled this from our platform pretty quickly.” He noted that AI-based messages were “rated significantly higher than those written by humans on their own,” and that response times decreased by 50% thanks to the technology.

Nevertheless, the experiment led to an outcry on Twitter, with some public health and tech professionals accusing the company of violating informed consent law, a U.S. federal policy that requires human subjects to provide consent before taking part in research.

“This is profoundly unethical,” media strategist and author Eric Seufert tweeted on Saturday.

“Wow I would not admit this publicly,” Christian Hesketh, who describes himself on Twitter as a clinical scientist, tweeted Friday. “The participants should have given informed consent and this should have passed through an IRB [institutional review board].”

In a statement to Insider on Saturday, Morris said the company was “not pairing people up to chat with GPT-3” and said the option to use the technology was removed after realizing it “felt like an inauthentic experience.”

“Rather, we were offering our peer supporters the opportunity to use GPT-3 to help them compose better responses,” he said. “They were getting suggestions to help them write more supportive responses more quickly.”
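
As described, this is a human-in-the-loop or “co-pilot” pattern: the model drafts a candidate reply and a human peer supporter reviews, edits, and approves it before anything is sent. The sketch below is purely illustrative of that pattern and is not Koko’s actual implementation; the model name, prompt wording, and parameters are assumptions based on the OpenAI completions API available at the time.

    # Illustrative sketch of a GPT-3 "co-pilot" for peer supporters.
    # Not Koko's code: model, prompt, and parameters are assumed for illustration.
    import openai  # pip install openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def suggest_reply(user_message: str) -> str:
        """Ask GPT-3 for a draft supportive reply that a human will review."""
        prompt = (
            "You are helping a trained peer supporter draft a compassionate reply.\n"
            f"Message from user: {user_message}\n"
            "Suggested supportive reply:"
        )
        resp = openai.Completion.create(
            model="text-davinci-003",  # a GPT-3 model available in early 2023
            prompt=prompt,
            max_tokens=150,
            temperature=0.7,
        )
        return resp["choices"][0]["text"].strip()

    # The key design point: the model output is only a suggestion.
    draft = suggest_reply("I've been feeling really overwhelmed lately.")
    final_reply = input(f"Edit or approve this draft before sending:\n{draft}\n> ")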

Morris told Insider that Koko’s study is “exempt” from informed consent law, and cited previous published research by the company that was also exempt.

“Every individual has to provide consent to use the service,” Morris said. “If this were a university study (which it’s not, it was just a product feature explored), this would fall under an ‘exempt’ category of research.”

He continued: “This imposed no further risk to users, no deception, and we don’t collect any personally identifiable information or personal health information (no email, phone number, ip, username, etc).”

Still, the experiment is raising broader questions about the ethics and gray areas surrounding the use of AI chatbots in healthcare, after the technology had already prompted unrest in academia.

Arthur Caplan, professor of bioethics at New York University’s Grossman School of Medicine, wrote in an email to Business Insider, an online publication, that using AI technology without informing users is “grossly unethical.”

“The ChatGPT intervention is not standard of care,” Caplan told Insider. “No psychiatric or psychological group has verified its efficacy or laid out potential risks.”

He added that people with mental illness “require special sensitivity in any experiment,” including “close review by a research ethics committee or institutional review board prior to, during, and after the intervention.”

Caplan said use of GPT-3 technology in such ways could impact its future in the healthcare industry more broadly. “ChatGPT may have a future as do many AI programs such as robotic surgery,” he said. “But what happened here can only delay and complicate that future.”

Morris told Insider his intention was to “emphasize the importance of the human in the human-AI discussion.”

“I hope that doesn’t get lost here,” he said.
