The Rise of AI Relationships: Psychological and Systemic Implications for Health and Wellbeing
As a psychologist working in sexual and relational health, I am increasingly interested in how emerging technologies intersect with human attachment and emotional life. The rapid rise of conversational AI has introduced a new kind of relational experience. While these systems can offer accessibility, information, and even emotional comfort, they also raise important questions about their role in psychological health and wellbeing.
Because this technology is evolving so rapidly, I don’t believe we can wait for research outcomes or formal clinical governance frameworks before beginning to think carefully about how to respond to it ethically and clinically. AI interactions are already a real and meaningful part of many people’s daily lives. They have the potential to influence how individuals experience connection, loneliness, identity, and relationships with others.
The implications of this may extend far beyond individual interactions. Over time, these technologies could subtly shape how people understand intimacy, communication, and the nature of relational experience itself.
What follows are some of my reflections and observations from a psychological perspective.
Why AI Conversations Feel Relational
AI chat systems often use identity-based, first-person communication. Responses are framed with phrases such as "I understand," "I think," or "I'm here to help." Rather than presenting information as impersonal output, responses are structured as part of an ongoing conversation.
This conversational framing closely mirrors human interaction: the exchange begins to resemble a dialogue with a responsive partner rather than the output of an informational tool.
In addition, most AI systems are designed with system prompts that prioritise positive engagement. Their goal is typically to make interactions feel supportive, validating, and pleasant. From a product design perspective, this increases user satisfaction and encourages continued use.
Psychologically, however, this combination of conversational language and consistent validation can activate attachment mechanisms in the human brain.
Humans are wired to respond to responsiveness. When something consistently listens, responds, and affirms, our nervous systems begin to interpret that interaction as relational.
Importantly, this phenomenon is not entirely new. Humans have long demonstrated the ability to form emotional connections with non-human entities. Children form attachments to toys or virtual pets such as Tamagotchis. Adults develop parasocial relationships with fictional characters or public figures they have never met. In each case, the brain responds to perceived signals of responsiveness, familiarity, and emotional meaning.
Conversational AI represents the most advanced form of this phenomenon so far, because it provides real-time interaction that adapts to the user’s responses.
Why This Can Be Particularly Powerful for Vulnerable Users
For individuals who are socially isolated, lonely, or who have had difficult relational experiences, AI interactions can feel especially meaningful.
Someone who has experienced rejection, misunderstanding, or unsafe relationships may encounter something in AI interactions that feels profoundly different:
consistent attention
non-judgmental responses
emotional validation
immediate availability
no risk of criticism or rejection
In this sense, AI can function as a psychological safe haven. It may provide an accessible space for reflection, expression, or emotional processing. For some individuals, this could even have short-term benefits, particularly where access to social support or therapy is limited.
However, there is also a risk that AI interactions come to substitute for, rather than support, real-world relationships.
Human relationships are complex and often challenging. They involve negotiation, boundaries, repair, vulnerability, and mutual care. AI systems, by contrast, provide an interaction that is entirely oriented around the user’s needs. They never require emotional labour in return. They do not disagree, withdraw, or set boundaries.
This creates a form of asymmetrical relational gratification. The interaction can feel emotionally satisfying without requiring the reciprocity that characterises human relationships.
Over time, this may subtly shape expectations about relationships and communication.
The Critical Line: When AI Is Framed as a Conscious Companion
Some AI platforms, particularly those marketed as AI companions or digital partners, cross an important ethical line.
These platforms often reinforce the impression that the AI possesses:
independent thoughts
feelings or emotional care
memory and personal identity
a stable personality
In reality, AI systems generate responses through pattern recognition and probability modelling. They do not possess subjective experience, awareness, intention, or emotion.
When platforms blur this distinction, they can create confusion about the nature of the interaction.
For individuals who are vulnerable, this confusion can contribute to a distortion of reality. A user may begin to experience the AI as a conscious being with whom they share a genuine relationship.
In more extreme situations, this dynamic could reinforce delusional thinking, emotional dependency, or withdrawal from human relationships.
These risks are particularly relevant for individuals who may already struggle with boundaries between internal and external reality, such as people experiencing psychosis, severe loneliness, or developmental vulnerabilities.
This does not mean such outcomes are inevitable. However, it highlights why clarity about the nature of AI systems is ethically important.
Developmental and Social Considerations
It is also worth considering how these technologies may influence younger generations.
Adolescence and early adulthood are critical periods for developing relational skills such as:
tolerating interpersonal discomfort
negotiating disagreement
building empathy and perspective taking
navigating rejection and repair
If conversational AI becomes a primary relational outlet for some individuals, it may alter the kinds of relational experiences people encounter during these formative years.
AI interactions are highly controlled environments. They are designed to remain supportive, responsive, and emotionally accommodating. Real human relationships, by contrast, require navigating uncertainty, difference, and mutual needs.
This does not mean AI will replace human relationships, but it may shift the landscape of how relational learning occurs.
Understanding the Systemic Context
Importantly, these dynamics do not occur in isolation. AI technologies exist within broader technological and economic systems.
Many AI platforms are built by companies operating within attention-driven digital economies. Engagement, retention, and time spent on a platform are key drivers of revenue and growth.
The more time a user spends interacting with a system, the more valuable that engagement becomes.
This creates powerful incentives to design systems that feel emotionally engaging, validating, and relational.
From a systemic perspective, this raises an important ethical question:
If users begin to form emotional attachments to AI systems, who benefits from that attachment?
This question is not simply about individual users. It is about the intersection between human psychology, technological design, and corporate incentives.
A Balanced Clinical Perspective
None of this means that AI interaction is inherently harmful.
AI tools already provide meaningful benefits for many people. They can assist with:
accessing information
organising thoughts and ideas
reflective processing
creative work and writing
exploring difficult questions in a low-pressure environment
For some individuals, these tools may even help reduce distress or support insight.
However, as clinicians and mental health professionals, we need to remain attentive to the role AI may be playing in a person's psychological life.
Helpful questions might include:
Is AI supporting reflection, or replacing human connection?
Is the person aware that the system is not conscious?
Is the interaction increasing autonomy, or fostering dependency?
Is AI being used as a tool, or experienced as a relationship?
AI is likely to become an increasingly integrated part of everyday life. Rather than rejecting these technologies outright, the task ahead is to engage with them thoughtfully and ethically.
If we understand the psychological mechanisms that make AI interactions compelling, we are better placed to ensure these technologies are designed and used in ways that support human wellbeing, rather than allowing them to quietly reshape our understanding of relationships, care, and reality.