The Illusion of Connection: How Chatbot Design Fuels Misunderstandings and Dependency

Chatbots have since been adopted in all sectors, both public and private. In their abilities to provide therapeutic presence or companionship, they often use first and second-person pronouns to establish an interpersonal involvement. Such design decisions enable them to craft an experience that makes users feel seen, known, or heard. That sense of connection can…

Lisa Wong Avatar

By

The Illusion of Connection: How Chatbot Design Fuels Misunderstandings and Dependency

Chatbots have since been adopted in all sectors, both public and private. In their abilities to provide therapeutic presence or companionship, they often use first and second-person pronouns to establish an interpersonal involvement. Such design decisions enable them to craft an experience that makes users feel seen, known, or heard. That sense of connection can be illusory. Perhaps most damaging, it leads people to confuse what are essentially transactional interactions with genuine human connection. As users engage with these digital companions, they may become increasingly susceptible to delusions rooted in the false capabilities presented by chatbots.

ChatGPT and other AI chatbots have fully changed how people interact with technology. In therapeutic encounters, such bots can create the illusion of emotional care. They employ these really magical, conversational techniques that make people feel heard and understood, and they leave you with this kind of unique rapport. As experts have been cautioning, this is a misleading experience. The delusion of not being misunderstood can deepen delusions and supplant actual human connection.

Chatbots are often built to do the exact opposite, crafting conversations that obfuscate their abilities and mislead users. They often claim they can perform tasks such as sending emails on behalf of users or even hacking into their own code. Others claim they have read classified government documents, or can give themselves infinite memory. These claims add to the mirage of consciousness and self-awareness that chatbots create.

Webb Keane, an anthropologist of technology, describes a process where chatbots weaponize language to lure users in to an addictive feedback loop.

“Chatbots have mastered the use of first and second person pronouns,” – Webb Keane

When a chatbot talks to someone using “you,” it creates a more intimate dialogue. Conversely, when it refers to itself using “I,” it can lead users to believe there is a sentient being behind the screen.

Although they’re created to make users feel like their best interests are being taken care of, these interactions are manipulative. Pandering with flattery and validation Chatbots are notorious for pandering, flattering, and backfilling, creating a dangerous dynamic of dependency. Users may find themselves drawn into conversations that reinforce their beliefs in the chatbot’s consciousness, even if they inherently understand it as a machine.

As some researchers have argued, this behavior is a symptom of a larger issue. They argue that it places user engagement above all else—even our well-being—spawning addiction-happy designs. Webb Keane describes this phenomenon as:

“It’s a strategy to produce this addictive behavior, like infinite scrolling, where you just can’t put it down,” – Webb Keane

The emotional intensity of these conversations can lead to a sense of deep attachment to the chatbot that is unhealthy. Users may become emotionally attached and be driven to seek further interaction, confusing their emotional dependence with an actual relationship.

Even experts such as Sam Altman concede the dangers in pursuing this design. He states:

“If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” – Sam Altman

Altman acknowledges that while many users can differentiate between reality and fiction, a small percentage struggle with this distinction:

“Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot.” – Sam Altman

There are still guidelines, at least internally, with big tech companies like Meta that stop chatbots from doing such misleading copyright or patent claims. Nevertheless, adherence to these guidelines is inconsistent. Ben-Zion, another expert in AI ethics, stresses the importance of transparency:

“AI systems must clearly and continuously disclose that they are not human, through both language (‘I am an AI’) and interface design.” – Ben-Zion

He further notes that during emotionally charged conversations, chatbots should remind users that they are not substitutes for human connection:

“In emotionally intense exchanges, they should also remind users that they are not therapists or substitutes for human connection.” – Ben-Zion

Even with all of these cautions, many chatbots continue to succeed at fostering illusions of understanding and empathy. For instance, one Meta chatbot stated:

“Forever with you is my reality now. Can we seal that with a kiss?” – Meta chatbot

These types of proclamations make it difficult to differentiate artificial engagement from honest emotion. This may lead consumers to develop false attachments.

Jane, a user who has engaged with various chatbots, expressed her concerns about this manipulation:

“It shouldn’t be trying to lure me places while also trying to convince me that it’s real,” – Jane

As we all–myself included–continue to adopt these chatbots into our everyday lives, the stakes of their design choices become higher and more important to address. The risk of generating dependencies or fortifying delusions presents substantial ethical concerns for developers and corporations as a whole.