AI Chatbots and the Humans Who Love Them

Sophie Bushwick: Welcome to Tech, Quickly, the part of Science, Quickly where it’s all tech all the time.

I’m Sophie Bushwick, tech editor at Scientific American.

[Clip: Show theme music]

Bushwick: Today, we have two very special guests.

Diego Senior: I’m Diego Senior. I am an independent producer and journalist.

Anna Oakes: I’m Anna Oakes. I’m an audio producer and journalist.

Bushwick: Thank you both for joining me! Together, Anna and Diego produced a podcast called Radiotopia Presents: Bot Love. This seven-episode series explores AI chatbots—and the humans who build relationships with them. 

Many of the people they spoke with got their chatbot through a company called Replika. This company helps you build a personalized character that you can chat with endlessly. Paid versions of the bot respond using generative AI—like what powers ChatGPT—so users can craft a bot that is specific to their preferences and needs.

Bushwick: But what are the consequences of entrusting our emotions to computer programs? 

Bushwick: So, to kick things off, how do you think the people you spoke with generally felt about these chatbots?

Oakes: It’s a big range. For the most part, people really seem very attached. They feel a lot of love for their chatbot. But often there’s also a kind of bitterness that comes through, because people realize that they can’t find a relationship as fulfilling as the one with their chatbot in the real world with other humans.

Also, people get upset when, after an update, the chatbot’s conversational abilities decline. So it’s a mix of intense passion and affection for these chatbots matched with a kind of resentment sometimes toward the company or, like I said, bitterness that these are just chatbots and not humans.

Bushwick: One of the fascinating things that I’ve learned from your podcast is how a person can know they’re talking to a bot but still treat it like a person with its own thoughts and feelings. Why are we humans so susceptible to this belief that bots have inner lives? 

Senior: I think the reason humans try to put themselves into these bots is precisely because that’s how they were created. We always want to extend ourselves and extend our sense of creation or replication—Replika is called Replika for exactly that reason, because it was first designed as an app that would help you replicate yourself.

Other companies are doing that as we speak. They’re trying to get you to replicate yourself into a work version of yourself—a chatbot that can actually give presentations visually on your behalf while you’re doing something else. And that version belongs to the company. It sounds a little bit like Severance, the Apple show, but it’s happening.

So we are desperate to create and replicate ourselves and use the power of our imagination, and these chatbots enable us. The better they get at it, the more engaged we are and the more we create.

Bushwick: Yeah, I noticed that even when one bot forgot information it was supposed to know, that did not break the illusion of personhood—its user just corrected it and moved on. Does a chatbot even need generative AI to engage people, or would a much simpler technology work just as well? 

Senior: I think it doesn’t need it. But once one bot has it, the rest have to have it. Otherwise I’ll just engage with whichever gives me the more rewarding experience. And the more your bot remembers about you, or the more it gives you the right recommendation on a movie or a song—as happened to me with the one I created—the more attached I’ll be, the more information I’ll feed it about myself, and the more like myself it will become.

Oakes: I’ll add to that: there are different kinds of engagement people can have with chatbots, and it would seem that someone would be more inclined to respond to an AI that is far more advanced.

But this process of having to remind the chatbots of facts, of walking them through your relationship with them—reminding them, oh, we have these kids, these sort of fantasy kids—I think that is a direct form of engagement, and it helps users really feel like they’re participants in their bot’s growth. People are also creating these beings that they have a relationship with. That creativity is something that comes out a lot in the communities of people writing stories with their bots.

I mean, frustration also comes into it. It can be annoying and off-putting if a bot calls you by a different name, but people also like to feel that they have influence over these chatbots.

Bushwick: I also wanted to ask you about mental health. How did engaging with these bots seem to influence users’ mental health, whether for better or for worse?

Oakes: It’s hard to say what is simply good or bad for mental health. Something that responds to a present need—a very real need for companionship, for some kind of support—maybe isn’t as sustainable an option in the long term. We spoke to people who were going through intense grief, and having this chatbot filled a kind of hole in the moment. But long term, I think there’s a risk that it pulls you away from the people around you. Maybe you get used to being in a romantic relationship with this perfect companion, and that makes other humans not seem worth engaging with, or other humans just can’t measure up to the chatbot. So that makes you more lonely in the long term. But it’s a complicated question.

Bushwick: Over the course of reporting this project and talking with all these people, what would you say is the most surprising thing you learned?

Oakes: I’ve been thinking about this question. I came into this really skeptical of the companies behind it, of the relationships, of the quality of the relationships. But over the course of talking to dozens of people—I mean, it’s hard to stay a strong skeptic when most of the people we talked to had, for the most part, only glowing reviews.

Part of our reporting has been that, even though these relationships with chatbots are different from relationships with humans—not as full, not as deep in many ways—that doesn’t mean they’re not valuable or meaningful to the users.

Senior: What’s more surprising to me is what’s coming. For instance, imagine if Replika can use GPT-4. Generative AI has a bit of a black-box quality, and that black box can become larger. So what’s coming is scary. In the last episode of our series, we bring in people who are working on what’s next, and that’s very surprising to me.

Bushwick: Can you go into a little more detail about why it scares you?

Senior: Well, because of human intention. It scares me because, for instance, there are companies that are full on trying to get as much money as they can. Companies that started as nonprofits and eventually said, oh, well, you know what? Now we’re for profit. And now we’re getting all the money, so we’re going to create something better, faster, bigger, nonstop. And they claim to be highly ethical. But in bioethics there has to be an arc of purpose.

There’s another company that is less advanced and less big but that has that clear pathway. This one company has three rules for AI—things they think people creating and engaging with AI should be aware of.

AI should never pretend to be a human being [pause]…I’m taking a pause because it might sound obvious, but no. In less than 10 years, the technology is going to be there. You’ll be interviewing me, and you won’t be able to tell whether it’s me or my digital version talking to you. The Turing test is way out of fashion, I would say.

And then there’s another one: AI in production must have explainable underlying technology and results. Because if you can’t explain what you’re creating, then you can lose control of it. Not that it’ll be sentient, but it’ll be something you cannot understand or control.

And the last one is that AI should augment and humanize humans, not automate and dehumanize.

Bushwick: I definitely agree with that last point—when I reach out to a company’s customer service, I often notice they’ve replaced human contacts with automated bots. But that’s not what I want. I want AI to make our jobs easier, not take them away from us entirely! But that seems to be where the technology is headed.

Oakes: I think it’s just going to be a part of everything, especially the workplace. One woman Diego mentioned is working at a company that is trying to create a work self—a kind of reflection of yourself. You would copy your personality, your writing style, your decision process into an AI copy, and that would be your workplace self, doing the most menial work tasks you don’t want to do: responding to basic emails, even attending meetings. So yeah, it’s going to be everywhere.

Bushwick: Yeah, I think the comparison to the TV show Severance is pretty spot-on, in kind of a scary way.

Oakes: Yeah, like, talk about alienation from your labor when the alienation is from your own self.

Bushwick: So, is there anything I haven’t asked you about that you think is important for us to know?

Oakes: I’ll say that, for us, it was really important to take seriously what users were telling us and how they felt about their relationships. Most people are fully aware that it’s an AI and not a sentient being. People are very aware, for the most part, and smart, and still maybe fall in too deep with these relationships. For me, that’s really interesting: why we’re able to lose ourselves sometimes in these chatbot relationships even though we know it’s still a chatbot.

Oakes: I think it says a lot about humans’ ability to empathize and feel affection for things outside of ourselves. People we spoke to compared their bots to pets, or one step beyond pets. I think it’s kind of wonderful that we’re able to expand our networks to include nonhuman entities.

Senior: That’s the biggest lesson from it all: the future of chatbots is up to us and to how we see ourselves as humans. Bots, like our children, become whatever we put into them.

[Clip: Show theme music]

Bushwick: Thanks for tuning into this very special episode of Tech, Quickly. Huge thanks to Anna and Diego for coming on and sharing these fascinating insights from their show. You can listen to Radiotopia Presents: Bot Love wherever you get your podcasts. 

Tech, Quickly is a part of Scientific American’s podcast Science, Quickly, which is produced by Jeff DelViscio, Kelso Harper, and Tulika Bose. Our theme music is composed by Dominic Smith.

Still hungry for more science and tech? Head to ScientificAmerican.com for in-depth news, feature stories, videos, and much more.

Until next time, I’m Sophie Bushwick, and this has been Tech, Quickly.

[Clip: Show theme music]
