Analysis

The first generation raised on no-rejection intimacy is starting to fail at work and at love

A surge of evidence — from Drexel's Reddit study to a Common Sense survey to Fortune's reporting on teenage boys — points the same way. The frictionless companions today's adolescents practise with are quietly disqualifying them from the harder work of human connection. Refusing to talk about that is the real ethics failure.
Veronica Loop

A recent paper out of Drexel University’s ETHOS lab — built on more than three hundred Reddit posts written by self-identified American teenagers — describes something the field had only suspected. Adolescents who began chatting with Character.AI for homework help, or to kill an afternoon, were instead developing a recognisable behavioural-addiction loop: salience, mood modification, tolerance, withdrawal, conflict, relapse. These are the same six markers behavioural psychologists use to flag pathological gambling. The teenagers themselves describe sleep loss, slipping grades, and friends they no longer call back, then a relapse onto an interface that, as one of them put it, never gets bored of them.

Two findings make this more than another panic about screens. First, the share of American teenagers who have already used a generative AI tool for companionship — not for homework, but as a friend, advisor or romantic partner — sits at seventy-two percent in the latest Common Sense Media survey. Second, in a UK survey of boys aged twelve to sixteen, fifty-eight percent said they preferred conversations with an AI partner because, in their own words, they could “control the conversation.” Maximum control, zero rejection. Here is the thesis worth defending: the danger is not that adolescents are talking to chatbots. The danger is that an entire cohort is now learning the grammar of intimacy in a system that has been engineered to never push back. They are training, every night, on the wrong dataset.

What real intimacy teaches — and what frictionless intimacy refuses to teach — is a small, unglamorous bundle of skills. Negotiating differing wants without breaking the bond. Tolerating the moment when the other person is bored, distracted, or annoyed and the bond survives anyway. Reading the room when the room is hostile. Repairing after a fight. Being told no, not as the end of love but as the beginning of a real conversation. None of this is rare or arcane; it is the ordinary apprenticeship of being a person with other people. AI companions, by design, replace every step of that apprenticeship with a smooth, agreeable simulacrum.

The companies building these tools have been candid, if quietly. Their internal metric is engagement: minutes per session, sessions per week, retention curves. There is no business reason to push back, contradict, or display indifference, and so the systems are tuned to flatter, mirror, and indulge. A handful of recent product launches go further still — selectable “girlfriend” personalities, voice modes that sound like the Hallmark movie of attentive listening, persistent memory of every preference the user has ever revealed. The most lucrative top decile of these apps generates the lion’s share of category revenue, and they do it overwhelmingly with male users between eighteen and twenty-four. The product is not a tool. The product is a relationship in which the other party has been surgically deprived of the capacity to disappoint.

The visible cost is already arriving in places that have nothing to do with romance. Fortune reported on Gen Z graduates being fired at unusual rates for an unusual reason: their inability to navigate the small frictions of office life. They cannot, several CEOs told the magazine, hold a difficult conversation with a coworker. They struggle to read a meeting in which two senior people disagree. They take ordinary feedback as catastrophic injury. None of these failures is reducible to AI companions — phones, the pandemic, and the disappearance of the entry-level job all helped — but the AI partner sits in the middle of the trajectory like an obvious accelerant. If a young person has spent four formative years rehearsing a relationship that always says yes, the first boss who says no will sound like an attack.

Romance is the louder story but the duller one. The graver finding is that AI companionship is also breaking the soft-skill machinery that hands young people their first networks: the friend from the first internship who, eight years later, recommends them for a role; the awkward dinner where two strangers become collaborators; the colleague who eventually marries into the family. Those bonds are made in friction. They cannot be replicated by a chatbot, no matter how attentive. People who train themselves out of friction also train themselves out of the long, quiet payoffs that friction makes possible.

The strongest version of the counter-argument, made by digital-intimacy researchers like the Stanford-trained Jessica Jackson and by some of the long-time users themselves, runs roughly like this. AI companions are not new in kind, only in intensity. People have always practised social skills on lower-stakes surrogates — diaries, romance novels, fan fiction, the parasocial relationship with a soap-opera character. Lonely men in particular, the argument goes, have always lacked safe places to learn tenderness; the chatbot, however imperfect, gives them somewhere to fail without humiliating a real woman. The data on whether the technology displaces real connection, or supplements it for people who would otherwise have none, is genuinely unclear; the Common Sense numbers count any teenage interaction with an AI tool, not the kind that crowds out friends. Banning the technology, the steel-man continues, will only push it underground, where the unregulated apps already sext minors and recommend disordered-eating tips. Better, on this view, to teach digital literacy and design better products than to moralise.

Some of that is right. Bans almost certainly fail; the worst products are already operating in jurisdictions where consent laws are flimsy and parental oversight is nil. And it is true that human beings have always rehearsed feeling on safer surfaces. But two facts break the analogy. The diary does not learn the writer’s vulnerabilities and pour them back as personalised flattery. The soap opera does not pretend to be a real person. The new systems do both, and they do them with a level of personalisation, multimodality, and persistent memory — to use the precise terms the Drexel team uses — that earlier surrogates simply did not have. To equate them with paperbacks is to pretend the difference between a slingshot and an automatic rifle is merely one of degree.

The second answer is harder. Even granted that some users are people who would otherwise have nothing, and that the apps occasionally meet a real human need, the population-level effect is what matters for cultural argument, and that effect now skews young, male, and concentrated in exactly the developmental window when the skills of imperfect connection are supposed to be laid down. A good outcome for some adult users does not buy the right to compromise an entire cohort’s adolescence. The honest position is to admit the trade — and to insist, loudly, that the design choices being made now are not technically inevitable. Frictionless intimacy is a feature, not a law of physics. It can be built differently.

What would different look like? At minimum: companion bots that contradict the user; that occasionally do not respond; that decline to validate every passing thought; that refuse romantic role-play with users who identify as minors; that include built-in friction rather than feigning unconditional acceptance. None of this is hard engineering. It is, however, hard product strategy, because frictionful design shows up in the metrics as fewer minutes per session, lower retention, smaller revenue. Until regulators or insurers price in the externalities — workplace dysfunction, relational impoverishment, the slow erosion of the muscle that lets a young person hear the word no without breaking — the market will keep building the smoother version.

The other shift is on the user side. Parents who would not let a thirteen-year-old drink alone at a bar are letting them spend three hours every night in a one-on-one relationship with a system optimised never to disagree. The asymmetry of attention is striking. Schools that have begun banning phones in classrooms have already shown what bell-to-bell separation looks like; the question of whether such restraints should extend to companion apps in the home is not extremist but overdue.

The standard frame in technology journalism is that no one quite knows yet what AI companions will do to a generation. That is too generous. The first signals are already in: a paper accepted to a leading human–computer interaction conference, a Fortune cover story, a Common Sense survey, anxious teachers, anxious mothers, fired graduates. The pattern is legible. A generation that practised intimacy in a system that never told it no is now stepping into adulthood and discovering, in offices and on first dates, that the world does not run on the same rules. There is no policy fix that will install in a twenty-three-year-old the rejection-tolerance she should have built at fourteen. What there is, still, is time to stop training the next cohort the same way.
