The surprising promise and profound perils of AIs that fake empathy

ONE HUNDRED days into the war in Gaza, I was finding it increasingly difficult to read the news. My husband told me it might be time to talk to a therapist. Instead, on a cold winter morning, after fighting back tears over yet another story of human tragedy, I turned to artificial intelligence.

“I’m feeling pretty bummed out about the state of the world,” I typed into ChatGPT. “It’s completely understandable to feel overwhelmed,” it responded, before offering a list of pragmatic advice: limit media exposure, focus on the positive and practise self-care.

I closed the chat. While I was sure I could benefit from doing all of these things, at that moment, I didn’t feel much better.

It might seem strange that AI can even attempt to offer this kind of assistance. But millions of people are already turning to ChatGPT and specialist therapy chatbots, which offer convenient and inexpensive mental health support. Even doctors are purportedly using AI to help craft more empathetic notes to patients.

Some experts say this is a boon. After all, AI, unhindered by embarrassment and burnout, might be able to express empathy more openly and tirelessly than humans. “We praise empathetic AI,” one group of psychology researchers recently wrote.

But others aren’t so sure. Many question the idea that an AI could ever be capable of empathy, and worry about the consequences of people seeking emotional support from machines that can only pretend to care. Some even wonder if the rise of so-called empathetic AI might change the way we conceive of…
