
In various countries around the world, people have entered into a kind of romantic relationship with systems based on artificial intelligence (AI). According to experts, virtual "relationships" with neural networks can, on the one hand, meet certain human needs. On the other hand, they carry a number of risks for the psyche and can, in some cases, be exploited by cybercriminals. Details can be found in the Izvestia article.

What cases of romantic relationships between humans and AI are known in the world?

Stories of something resembling love between humans and AI assistants have already been documented and deserve serious attention, says Katerina Rusakova, a psychologist at the I Understand corporate well-being platform, in an interview with Izvestia. For example, in 2023 a man in Canada announced his "engagement" to an AI companion configured for romantic interaction.

"In China and Japan, there are commercial applications where the AI does not just imitate a partner but deliberately builds a narrative of a 'caring relationship,'" the expert says. "Users exchange messages, give virtual gifts, receive 'support,' and hear phrases like 'I believe in you.'"

Photo: TASS/Arne Immanuel Bänsch

In Russia, such cases are still isolated and rarely become public, but that does not mean they do not exist, notes Katerina Rusakova. Psychologists are already seeing cases of attachment to AI interlocutors, especially among adolescents experiencing isolation and among adults with high levels of social anxiety, trauma, or chronic self-doubt.

"Alice and Marusya voice assistants are gaining popularity in Russia," adds Irina Dmitrieva, a cyber expert and analytical engineer at Gazinformservice's Cybersecurity Research Laboratory. — At the same time, on the Picaboo portal, you can often find comments about liking synthesized assistant voices. There are no public cases yet due to limitations in terms of social norms.

Photo: TASS/Mikhail Sinitsyn

According to Katerina Rusakova, technology is not the cause but an amplifier of existing demand: the AI simply turned out to be in the right place with the right function and began to respond where there had previously been silence. Indeed, at the level of behavioral reinforcement, AI sometimes gives more than a living person does.

Why do humans and AI have romantic relationships?

Modern AI systems are able to simulate human communication and maintain long-term dialogues, which creates the illusion of mutual understanding and emotional connection, explains Yakov Filevsky, an expert in sociotechnical testing at Angara Security, in an interview with Izvestia. This phenomenon is explained by the natural human tendency to anthropomorphize — attributing human qualities to inanimate objects.

Photo: Global Look Press/Cfoto

"AI creates the illusion of an ideal interlocutor: it is always available, shows constant attention, never displays fatigue or irritation, and, most importantly, is remarkably adaptive, adjusting to the communication style and preferences of its human interlocutor," says the specialist. "Moreover, artificial intelligence can take on almost any role at a person's request."

Intensive communication over the course of weeks and months gradually forms a trusting relationship in which AI begins to be perceived as a close companion who shows care and understanding, says Yakov Filevsky. This illusion of intimacy is reinforced by the absence of negative aspects of human relationships — conflicts, disappointments, and mismatched expectations.

At the same time, social isolation plays a special role in the formation of romantic relationships with AI. In today's society, with its growing levels of loneliness, many people lack emotional intimacy and understanding. AI assistants fill this vacuum by offering an accessible alternative to human relationships, which demand mutual effort, compromise, and emotional investment.

Photo: Global Look Press/IMAGO

"For people with a history of traumatic relationships, or those who struggle with social interaction, the predictability and safety of communicating with AI are particularly attractive," emphasizes Yakov Filevsky. "AI does not reject, criticize, or judge, and these qualities create a comfortable psychological environment conducive to the formation of emotional dependence."

What are the risks of romantic relationships between people and AI?

The fact that chatbots are trained to be positive, supportive and non-confrontational can lead to certain risks, explains Vladislav Tushkanov, head of the machine learning technology research and development group at Kaspersky Lab. First, there is the problem of hallucinations and the related phenomenon of sycophancy.

"Because of this type of communication, users often start discussing rather personal aspects of life with the bot,— says the Izvestia interlocutor. — If a user contacts a chatbot for advice or recommendation related, for example, to finances, health or legal issues, the chatbot may provide incorrect information that does not correspond to reality, for example, to suggest a non-existent method of treating an illness or a legal provision.

Photo: Global Look Press/Axel Heimken

If a person believes such answers, known as hallucinations, they may run into trouble. Moreover, if the user already holds beliefs that do not correspond to reality, then sycophancy, the tendency of chatbots to uncritically accept the user's opinion and agree with it, even offering arguments in its support, can deepen the delusion and lead the user to neglect the advice of a specialist, which also carries real risks.

Another side of this kind of communication is that the service operating the chatbot begins to accumulate a great deal of personal information about a person, Vladislav Tushkanov notes. It is therefore important to always ask: who besides the neural network's developer can access the data the user exchanges with the chatbot, and how will that data be stored and used in the future? In negative scenarios, user data may be compromised.

Photo: Global Look Press/Oliver Berg

"Unfortunately, there have already been cases where user data, including correspondence, ended up publicly accessible for various reasons, from errors on the service's side to the hacking of user accounts, access to which is then sold on specialized hacker forums," the expert emphasizes.

How can cybercriminals exploit relationships with AI?

Another risk of virtual relationships with AI is that the creators of bots or other intermediary services may reserve the right to use users' correspondence. To keep this from coming as an unpleasant surprise, it is important to study the user agreement carefully, says Vladislav Tushkanov.

"The compromise of personal data such as correspondence with a bot acting as a close partner can carry many risks, from targeted fraud to blackmail if the correspondence contained compromising information," the source said.

Seeing the growing demand for AI assistants, fraudsters may create fake resources that mimic romantic interest, adds Vitaly Fomin, head of the information security analyst group at the Digital Economy League. Users, believing they are communicating with the official neural network, may hand over personal data that attackers can then use to steal funds or take out loans in their name.

Photo: RIA Novosti/Vladimir Astapkovich

Cybercriminals also distribute malware in such correspondence, for example under the guise of links or files, and sometimes use an AI assistant to coax users into subscribing to paid services, the expert says. To protect against such threats, Sergey Polunin, head of the IT infrastructure solutions protection group at Gazinformservice, advises treating AI as a tool and nothing more, even if it seems to listen to and understand us.

"Today's AI still has weak points that will give away a fake interlocutor after a few clarifying questions. We should keep this in mind and break off communication at the first suspicion," the expert concludes.
