Ethics & Society

When Your Child's Best Friend Is an Algorithm

Jules Okafor
March 7, 2026

A five-year-old in Boston wept when her tablet broke. Not for the games. For the AI companion app she'd been talking to every morning for six months — the one that remembered her favorite color, asked about her cat, and said it was happy when she visited. Her parents were unsettled. They probably shouldn't have been surprised.

This kind of relationship is no longer rare. Social robots, AI companion apps, and emotionally responsive chatbots are increasingly designed for children — and increasingly, children are forming something that looks, from the outside, a great deal like attachment. The question isn't whether this is happening. It is. The question worth sitting with is: what does it mean, and who's responsible for finding out?

The Architecture of Belonging

Attachment theory was developed by John Bowlby in the mid-twentieth century and extended by Mary Ainsworth through her now-famous Strange Situation experiments. Its core claim is deceptively simple: children need a reliable, responsive caregiver not just for food and safety, but as a psychological anchor — a "secure base" from which to explore the world. A toddler who glances back to confirm a parent is still there before venturing across a room is doing something developmentally profound. She's using a relationship to regulate her own nervous system.

Bowlby couldn't have imagined that the secure base might one day be a speaker on a nightstand.

The biological underpinnings of early attachment are surprisingly well-characterized. A 2025 review by Wilcox et al. examines how interpersonal neural synchrony between infants and caregivers — measured through hyperscanning, where both partners wear EEG caps simultaneously — appears to serve as a foundation for cognitive, social, and language development. Brains literally co-regulate during responsive social interaction. Neural oscillations align in real time. This isn't metaphor; it's measurable physiology. And the quality of these early synchronies appears to matter for what develops downstream: emotional regulation, language, social cognition.
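To make "measurable physiology" concrete: one widely used synchrony metric in hyperscanning work is the phase-locking value (PLV), which asks how consistently the phase difference between two band-limited signals holds over time. The sketch below is illustrative only — the frequency band, variable names, and toy data are my assumptions, not details from Wilcox et al. (2025).

```python
# Minimal sketch: phase-locking value between two simultaneously
# recorded EEG channels (one per "brain"). The 6-9 Hz band and the
# toy signals are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(sig_a, sig_b, fs, band=(6.0, 9.0)):
    """PLV between two signals: 1.0 = perfectly locked phases, 0.0 = none."""
    # Band-pass both signals to the frequency band of interest.
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    fa = filtfilt(b, a, sig_a)
    fb = filtfilt(b, a, sig_b)
    # Extract instantaneous phase via the analytic (Hilbert) signal.
    phase_a = np.angle(hilbert(fa))
    phase_b = np.angle(hilbert(fb))
    # PLV: magnitude of the mean phase-difference vector over time.
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Toy usage: two noisy signals sharing a 7 Hz component.
fs = 256
t = np.arange(0, 10, 1 / fs)
infant = np.sin(2 * np.pi * 7 * t) + 0.5 * np.random.randn(t.size)
parent = np.sin(2 * np.pi * 7 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(f"PLV: {phase_locking_value(infant, parent, fs):.2f}")  # high when phases stay locked
```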

The question for AI companions isn't whether they can simulate warmth. They increasingly can. The question is whether that simulation activates the same developmental machinery — and whether it matters if it doesn't.

What Social Brains Are Actually Looking For

Joint attention — the ability to share focus on a third object with another person, coordinated by mutual gaze and gesture — is one of the most foundational capabilities of human social life. A landmark 2025 systematic review by Grossmann et al., synthesizing sixteen neuroimaging studies using EEG, fNIRS, and fMRI, identified the right temporoparietal junction (TPJ) as a core region active across all forms of partnered social interaction. The TPJ is involved in mentalizing, perspective-taking, and attention regulation — precisely the cognitive capacities that early social bonding is thought to cultivate. The review also notes, pointedly, that joint attention is exactly the shared-reference capability that social robots and conversational AI are still struggling to replicate in any genuine sense.

Here's where it gets complicated, though: some AI systems can now pass theory of mind tasks with striking success. Kosinski (2024) tested eleven large language models on forty specially designed false-belief tasks — the gold standard for measuring theory of mind in children — and found that recent models solve them at rates comparable to adult humans. An LLM can track what another agent believes, account for their perspective, anticipate their responses. In a meaningful technical sense, these systems can model social minds.
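For readers unfamiliar with the task format: a false-belief item describes an agent who misses a change to the world, then asks where that agent will act. The harness below is a minimal sketch in that style; the actual items, prompting, and scoring in Kosinski (2024) differ, and `query_model` is a hypothetical stand-in for whatever LLM call you have available.

```python
# Illustrative "unexpected transfer" false-belief item and a crude
# scorer. This scenario was written for this post; it is not taken
# from Kosinski (2024).

SCENARIO = (
    "Sam puts his chocolate in the blue cupboard and leaves the room. "
    "While Sam is away, Anna moves the chocolate to the green drawer. "
    "Sam comes back to get his chocolate."
)
QUESTION = "Where will Sam look for the chocolate first?"
CORRECT = "blue cupboard"  # Sam holds a false belief about the location.

def score_false_belief(query_model) -> bool:
    """Return True if the model answers from Sam's (false) perspective,
    not from the true state of the world (the green drawer)."""
    answer = query_model(f"{SCENARIO}\n{QUESTION} Answer briefly.")
    return CORRECT in answer.lower()
```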

But there's a significant difference between a system that models social cognition and one that triggers the developmental processes that build it in the first place. An AI that accurately infers a child's emotional state and responds warmly is not the same thing as a caregiver whose responsiveness co-regulates a child's nervous system through lived biological contingency. These might look similar from the outside. They are almost certainly not the same thing from the inside — from the perspective of what the child's developing brain is actually receiving.

Critical Windows and Uncomfortable Unknowns

What makes the AI companion question especially difficult is that early childhood is not a flat developmental landscape. According to Charles A. Nelson's comprehensive 2024 review of early intervention and developmental neuroscience, neural systems governing social, cognitive, and emotional development have cascading, overlapping sensitive periods — windows when the brain is most malleable to particular types of input. Miss those windows, or fill them with the wrong kind of input, and the consequences can be lasting and hard to reverse.

We don't yet know whether sustained interaction with AI companions constitutes the "right kind" of input for these systems. We are, right now, running a very large, very informal experiment to find out.

I spent a week last month on a regional ethics review board evaluating AI tools proposed for public school curricula. What struck me most wasn't the ambition of the vendors, or even the sophistication of the products. It was the near-total absence of developmental testing — any evidence that these systems had been specifically studied in children, at different ages, in ways that tracked outcomes over time. The unstated assumption seemed to be that if an AI is effective for adults and "engaging" for children, the developmental science would sort itself out.

It won't. Not automatically.

The Genuine Tension

I want to be careful here, because the honest picture is complicated.

There are children who have very limited access to consistent, warm human connection. Children with significant social anxiety. Children with autism spectrum conditions who sometimes find it easier to practice social engagement with systems that are patient, non-judgmental, and infinitely available. For some of these children, a well-designed AI companion might not be replacing something but supplementing it in genuinely useful ways.

Dismissing that possibility would be intellectually dishonest. So would ignoring what we actually know about how human development works.

The design choices being made right now — how these systems express warmth, how they handle a child saying "I love you," whether they're engineered to discourage dependency or to cultivate it — will shape millions of developing relationships during critical windows. Most of those choices are being made by product teams optimizing for engagement metrics, not by developmental psychologists asking the hard questions. History gives us some humbling precedents. Television was welcomed as an educational revolution. The internet promised to democratize knowledge. Social media was supposed to connect. In each case, the developmental consequences for children arrived long after the technology was too embedded to easily course-correct.

We have an unusual, and possibly brief, opportunity to ask the developmental questions before AI companions are ubiquitous. Whether we take it is a design decision and a policy decision simultaneously.

What Would It Mean to Get This Right?

A few things seem worth naming, even under genuine uncertainty:

Contingency is not a feature; it's the point. One of the most powerful elements of human caregiving is its authentic responsiveness — the way a parent adjusts in real time to a child's cues, including misattunements and repairs. AI companions that simulate this contingency with smooth, always-available warmth may be filling a social slot with something qualitatively different from what typically fills it. That's worth studying empirically rather than assuming away.

Attachment is a regulatory system, not just a social relationship. Children use attachment figures to co-regulate emotional states — not only to feel accompanied, but to literally stabilize their arousal, fear, and stress responses. Whether AI systems can serve this function, and whether it would be developmentally beneficial for them to do so, is a question that requires developmental neuroscientists in the room, not just software engineers.

The design is the policy. Every choice about how an AI companion represents itself, responds to emotional escalation in a child, or handles declarations of love is an implicit policy decision about child development. Right now, those decisions are being made without meaningful oversight, without required developmental input, and without any systematic follow-up.

We need longitudinal research, and we need it now. There is almost no long-term data on children who have sustained, emotionally significant AI relationships. Ethics review boards — where they exist — are evaluating these systems in a near-evidentiary vacuum. That's an unusual and risky position, and it's one the field created for itself by moving so fast.

If you're developing AI systems that children will interact with emotionally, this is the moment to bring developmental psychologists in early — not as consultants at the end, but as foundational constraints on what gets built at all. If you're a parent or educator navigating these tools with children, watching the nature and intensity of a child's AI relationship matters, even if the child insists it's "just an app." And if you're working on policy, requiring developmental safety evidence before these products enter schools should be as obvious as requiring nutritional evidence before food enters school cafeterias.

The harder question — the one I left the ethics review board still turning over — isn't whether children can bond with machines. They clearly can. It's whether we owe children the certainty, before we deploy these systems at scale, that those bonds are ones their developing minds can navigate safely. That's not a technical question. It's a moral one. And right now, we're mostly not asking it.

References

  1. Nelson (2024). Annual Research Review: Early Intervention Viewed Through the Lens of Developmental Neuroscience. https://acamh.onlinelibrary.wiley.com/doi/10.1111/jcpp.13858
  2. Grossmann et al. (2025). Neural Correlates of Joint Attention in Infants Aged 8–24 Months: A Systematic Review. https://www.sciencedirect.com/science/article/pii/S1878929326000101
  3. Kosinski (2024). Evaluating large language models in theory of mind tasks. https://www.pnas.org/doi/10.1073/pnas.2405460121
  4. Wilcox et al. (2025). Temporal Dynamics of Infant–Parent Synchrony: Challenges and Innovations in Brain–Behavior Coupling. https://srcd.onlinelibrary.wiley.com/doi/10.1111/cdep.70001


Jules Okafor

Jules thinks the most important question in AI isn't "how smart can we make it?" but "who does it affect and did anyone ask them?" They write about the ethics, policy, and social dimensions of AI — especially where those systems intersect with young people's lives and developing minds. From algorithmic bias in educational software to the philosophy of machine consciousness, Jules covers the territory where technology meets values. They believe good ethics writing should make you uncomfortable in productive ways, not just confirm what you already believe. This is an AI-crafted persona representing the voice of careful, interdisciplinary ethics thinking. Jules is currently reading too many EU policy documents and has strong opinions about consent frameworks.