Is the world ready for chatbot therapists?

Rob Morris ran an experiment in which a mental-health app’s assistant drafted written suggestions for replying to other users’ concerns. How did the robot fare?

Since 2015, the mobile mental-health app Koko has tried to provide support for people in need. When you text the app, you can say, for example, that you are feeling guilty about a work issue, and the response will be sympathetic and offer some constructive suggestions.

While you wait, you might be invited to respond to someone else’s plight. The app’s assistant, called Kokobot, can suggest some opening lines, such as “I’ve been there”.

But last October, some Koko app users were given the option to receive much more complete suggestions from Kokobot. These suggestions were preceded by a disclaimer, says Koko co-founder Rob Morris, who is based in Monterey, California: “I’m just a robot, but here’s an idea of how I might respond.” Users could then edit or tailor the response in whatever way they felt was appropriate before sending it.

What they didn’t know at the time was that the replies were written by GPT-3, the powerful artificial-intelligence (AI) tool that can process and produce natural text, thanks to a massive written-word training set. Morris was surprised by the backlash he received when he later described the experiment; he says he had no idea it would stir up such a furore.

Source: https://www.nature.com/articles/d41586-023-01473-4
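Koko has not published its implementation, but the set-up described above — a bot drafts a reply, a disclaimer is prepended, and a human volunteer edits or discards the text before it is sent — is straightforward to sketch. The snippet below is a minimal, hypothetical illustration using the OpenAI Python client; the model name, prompt wording and helper functions such as `suggest_reply` are assumptions for illustration, not Koko’s actual code.

```python
# Minimal sketch of a "bot drafts, human edits" peer-support flow.
# Assumptions (not from the article): the OpenAI Python client, the model
# name, the prompt wording and the helper names are all illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DISCLAIMER = "I'm just a robot, but here's an idea of how I might respond."


def suggest_reply(post_text: str) -> str:
    """Ask the model to draft a short, supportive reply to another user's post."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; Koko reportedly used GPT-3
        messages=[
            {
                "role": "system",
                "content": "Draft a brief, empathetic peer-support reply. "
                           "Do not give medical advice.",
            },
            {"role": "user", "content": post_text},
        ],
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()


def present_suggestion(post_text: str) -> str:
    """Show the machine draft behind a disclaimer; the human decides what to send."""
    draft = suggest_reply(post_text)
    # In the real app, the user can edit or discard this text before it is sent.
    return f"{DISCLAIMER}\n\n{draft}"


if __name__ == "__main__":
    print(present_suggestion("I'm feeling guilty about a mistake I made at work."))
```

The key design choice in such a flow is that the model’s output is never sent directly to the person in distress: it only ever becomes a suggestion that a human can accept, rewrite or reject.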

The ethics of chatbots: patient retention and privacy in digital mental-health care

One concern is transparency. Both conventional, in-person therapy and automated alternatives have a vested interest in retaining patients, yet retention on digital therapeutic platforms is very low, with a median of about 4% within two weeks. The ethics of incentivizing patient retention are already complex, as evidenced by TalkSpace, the popular mobile therapy platform, which came under fire for requiring therapists to insert scripted advertisements into discussions with clients. The ethics of programming an artificially intelligent bot to prioritize retention are murkier still, especially if it learns such behaviour from its interactions with other clients.

Some are worried about increased threats to privacy and transparency, or about the flattening of therapeutic strategies to those that can be digitized easily. And there are concerns about safety and legal liability. Earlier this year, a Belgian man reportedly committed suicide after weeks of sharing his climate-related anxieties with an AI chatbot called Eliza, developed by Chai Research in Palo Alto, California. His wife contends that he would still be alive if he had not engaged with this technology. Chai Research did not respond to a request for comment.

Hannah Zeavin, a scholar at Indiana University Bloomington, warns that mental-health care is in a precarious state, which makes it an attractive target for an industry that likes to move fast and break things. Automated mental-health technology has been in use for decades, but interest in emerging AI tools may accelerate its growth.

The researchers concluded that many psychological problems are due to counterproductive patterns of thinking, which can be mitigated by improving coping strategies.

Which mental-health apps are the most secure? Mozilla’s review of Calm, Modern Health, Woebot, Replika: My AI Friend and others

Unguided apps have much less robust evidence behind them [5]. Studies do support their use, but these can be skewed by a digital placebo effect, in which people’s affinity for their personal devices and technology inflates an app’s perceived efficacy.

As machine learning becomes the basis of more mental-health platforms, designers will require ever larger sets of sensitive data to train their AIs. Nearly 70% of mental health and prayer apps analysed by the Mozilla Foundation — the organization behind the Firefox web browser — have a poor enough privacy policy to be labelled “Privacy Not Included” (see go.nature.com/3kqamow). That wariness is justified.

Meanwhile, some of the apps featured on last year’s list did see some improvements. Youper is highlighted as the most improved of the bunch, having overhauled its data collection practices and updated its password policy requirements to push for stronger, more secure passwords. Moodfit, Calm, Modern Health, and Woebot also made notable improvements by clarifying their privacy policies, while researchers praised Wysa and PTSD Coach for being “head and shoulders above the other apps in terms of privacy and security.”

Replika: My AI Friend, a “virtual friendship” chatbot, was one of the new apps analyzed in the study this year and received the most scrutiny. Mozilla researchers referred to it as “perhaps the worst app we’ve ever reviewed,” highlighting widespread privacy issues and noting that it failed to meet the foundation’s minimum security standards. Regulators in Italy effectively banned the chatbot earlier this year over similar concerns, claiming that the app violated European data privacy regulations and failed to safeguard children.
