It doesn’t take much for humans to recognize human-like traits and abilities in chatbots. The German American computer scientist Joseph Weizenbaum discovered this effect in the 1960s when he built ELIZA, a program widely regarded as the first chatbot. The user typed statements and the program generated responses, emulating some forms of natural-language conversation between people and computers. ELIZA was primitive compared with today’s large language models (LLMs), operating primarily by identifying ‘keywords’ and performing ‘text manipulation’. Yet many users were captivated, perceiving human characteristics such as understanding or empathy when interacting with the chatbot. Weizenbaum was surprised and dismayed by the power of the illusion and became concerned about overreliance on artificial intelligence (AI) and its undermining of human values1,2.
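To make concrete what ‘identifying keywords’ and ‘text manipulation’ amount to, the following is a minimal illustrative sketch in Python of an ELIZA-style rule system. The patterns, responses and the respond function are invented for illustration and do not reproduce Weizenbaum’s original DOCTOR script.

```python
import re
import random

# Invented keyword rules in the spirit of ELIZA: each rule pairs a regular
# expression with reply templates that reuse the matched text.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["What makes you say you are {0}?"]),
]

FALLBACKS = ["Please go on.", "I see. Can you elaborate?"]


def respond(statement: str) -> str:
    """Scan the input for a keyword pattern and slot the matched text into a
    reply template; otherwise fall back to a neutral prompt."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I feel anxious about my work"))
    # e.g. "Why do you feel anxious about my work?" -- the crude echoing of
    # the user's own words illustrates why the apparent understanding is an
    # illusion rather than comprehension.
```

Even such shallow pattern matching was enough to elicit attributions of understanding from users, which is precisely the effect that troubled Weizenbaum.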

Several decades later, LLMs are capable of fluent human-like conversations, producing output in any desired style. With the wide availability and popularity of LLMs, and the emerging capability to personalize chatbots with user-specific data, the human tendency to project human traits onto computer programs needs further examination, given the potential widespread effects on individual users and society.

In a Correspondence in this issue, Garriy Shteynberg et al. consider a human ability and trait that is increasingly emulated by LLMs: empathy — the ability to understand and share the feelings of others. LLMs, in the form of romantic chatbots, personal assistants, mental health apps or therapists3, can give users the illusion of understanding, empathy, caring and love. However, language models cannot experience any of these psychological states, leading the authors to ask an interesting question: does it matter if empathic AI has no empathy?

When it comes to cognitive abilities such as reasoning or understanding, there appears to be a grey area in which, according to some experts, LLMs can plausibly demonstrate such abilities4. For example, one might believe that an LLM that can solve a complex problem via prompts is performing a form of reasoning, even if not exactly in the way humans would solve it. Likewise, an LLM might be said to ‘understand’ a topic if it can converse cogently and in detail about it.

But empathy is different, or so it seems. LLMs may give linguistic responses that appear empathic (such as ‘I feel sorry’), but they are not equipped with complex, machine versions of the biology and psychology of empathy, which in humans involves the integration of internal regulatory mechanisms such as homeostatic processes and the control of neural pathways5. Unlike many cognitive abilities, empathy often involves autonomic signals. Put facetiously, no LLM has shown changes in heart rate or galvanic skin response when making empathic claims. Yet, although LLMs cannot feel empathy, they can use the language of empathy, and may induce real feelings and emotions in their human users.

Shteynberg et al. call for research to probe the ethical questions and consequences of empathic AI. For example, an argument in defence of empathic AI applications is that users are informed or warned that the AI chatbot they interact with only simulates empathy, friendship or love. However, the efficacy of empathic LLMs depends on how much users believe that the chatbot they interact with truly feels empathy. A research question posed by the authors is whether long-term users of empathic LLMs can (or should) sustain the belief that AI empathy is simulated rather than real. Another question they explore is how the experience of LLM disillusionment compares with losing a human social bond. Examination of such ethical questions on users’ interactions with empathic chatbots will be important, given the increasingly wide adoption of platforms through which users can create personalized AI-based companions.

In a recent Perspective, Hannah Kirk et al.6 highlight the personalization of LLMs as a frontier development in AI. As they discuss, the potential benefits of LLMs tailored to individual preferences are extensive: information retrieval, tutoring and mentoring, and mental health support could become more efficient and targeted. However, the risks are real. Beyond privacy infringement, there is the concern that individuals using personalized LLMs may become caught in an echo chamber. A further risk is that users may develop a perceived emotional connection with the LLM, increasing the chance that they form unhealthy attachments or reveal sensitive information.

There is no doubt that the technology will develop quickly, as tech companies race to integrate LLM-based products into everyday applications. Personalized LLMs could rapidly become the norm. However, without proper ethical consideration of the effects on users, and without responsible deployment, the dangers raised by Shteynberg et al. and Kirk et al.6 may inadvertently become rooted in everyday life. Weizenbaum’s apprehensions about the human tendency to attribute human-like qualities to machines have become more urgent with the rise of sophisticated LLM-based chatbots that may seem attuned to our emotional needs. Kirk et al.6 ask a critical question: what are the appropriate bounds of personalization, and who decides?