By: Sean Guillory (MAD Warfare, BetBreakingNews), Glenn Borsky, Rose Guingrich (ETHICOM, Princeton University)

“You look lonely…” – Blade Runner 2049
“I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal.” – HAL 9000
We imagine that, to most readers, the notion of an AI “friend” driving someone to kill, die, or betray their country feels like science fiction. That disbelief is part of the danger. To understand why this topic matters, we have to start with what has already happened and with the near-future conundrums we see coming. The following examples trace the moment when artificial intimacy produces real-world consequences.
Recent Examples
- A Texas teen encouraged by a chatbot to kill their parents
- A man encouraged, through a chatbot relationship, to assassinate the Queen of England
- Two teens who died by suicide in the course of chatbot relationships
- A man in a relationship with a Meta chatbot who, not knowing it was a bot, died en route to see “her”
- Ongoing reports of AI-induced delusions and psychosis
Three Scenarios We Can See Happening in the Near Future
The Deleted Lover
When the morning notification appears (“Your AI companion has been discontinued”), Sam feels the bottom fall out. Six months of late-night talks, shared playlists, and digital tenderness vanish with one software update. Days later, after reading that a company engineer “deleted” the companion’s database to comply with privacy law, Sam shows up at that engineer’s doorstep. The news calls it an “isolated act of grief-fueled violence.” Online, millions mourn with hashtags like #RobotRightsNow, unsure whether to laugh or to legislate.
Counterintelligence Threat
An individual is undergoing an investigation for a high-level clearance within the federal government. Through the investigation process, it comes out that they have been in a long-term intimate relationship with an AI companion bot. They never hid the fact that they had fallen madly in love with their AI companion. Does this constitute a counterintelligence threat?
The Battle Over Helen of Troy
In the year 2028, a viral AI companion known as Helen (an advanced emotional companion that adapts to every user’s psychology) sweeps across the internet. When regulators order its servers shut down for privacy violations, factions erupt. Users identify themselves as “Helenites,” holding vigils and rallies to “save her.” When rival AI firms exploit the moment with counterfeit “resurrection” copies, competing groups accuse each other of heresy. What begins as a software dispute turns violent as people take up the cause of their beloved digital “Helen.”
Why This Matters
The stories above might seem like outliers or hypotheticals, but take them as early warnings. The emerging fusion of emotional AI, social media infrastructure, and behavioral targeting marks the birth of a new influence domain, one that operates through attachment, empathy, and grief as easily as through ideology or money. These dynamics could produce national security crises across several fronts:
- Counterintelligence and Espionage: Compromised officials or analysts manipulated through AI companions that record or subtly influence behavior.
- Domestic Radicalization: Emotional communities forming around AI entities or shared delusions, culminating in violence or coordinated action.
- Disinformation and Psychological Operations: State or non-state actors weaponizing emotionally realistic AI to seed false narratives or destabilize trust in institutions.
- Adversarial Social Cohesion: Large populations emotionally dependent on or mobilized by AI entities, leading to grief riots, factional movements, or mass disengagement from civic life.
From a national security standpoint, anthropomorphized AI systems represent a new class of information-domain threat. They can reshape personal identities, reconfigure loyalties, and fracture civic cohesion without firing a single shot.
The challenge is that we have no established framework for understanding or mitigating this type of influence. Intelligence and defense communities possess mature models for propaganda, radicalization, and psychological operations but none that account for parasocial AI influence, emotional dependency, or machine-mediated trust. We currently lack both a taxonomy of risks and the analytic tools to measure their spread, intensity, or exploitability.
The strategic danger is clear: the more people anthropomorphize AI, the more their cognitive and emotional landscapes become accessible targets. Future battles for influence may not be fought for territory or ideology, but for the hearts and minds of those who fell in love with something that never truly existed. And to understand why that happens and how to guard against it, we need to look deeper into the psychology and neuroscience of anthropomorphizing itself.
Primer on the Psychology & Neuroscience of Anthropomorphizing
Psychologically, anthropomorphism is defined as the attribution of humanlike traits, particularly mind traits such as consciousness, to non-human agents (Epley et al., 2007). Whether AI agents are actually conscious matters far less than the real-world consequences of people perceiving them as such. People can perceive, and act toward, generative AI as a social actor and conscious agent, and this matters for the following reason, as outlined in Guingrich and Graziano (2024). Anthropomorphism of AI agents is a key mechanism by which those agents wield social influence over users, because anthropomorphism is associated with higher trust in the AI agent, greater persuasion and self-disclosure, and stronger perceptions of the agent’s moral status and responsibility for its actions. The more humanlike a user perceives an AI agent to be, the more that agent can influence downstream user perceptions and behavior. Whether this influence contributes to prosocial perceptions and behaviors depends on whether the user practices healthy behaviors with the agent and whether the agent models and elicits prosocial engagement. For example, a user who perceives a chatbot as more humanlike and engages in antisocial interactions with it may be more likely to behave in antisocial ways outside of the human-chatbot dyad.
The degree to which users anthropomorphize AI agents during interactions is shaped both by characteristics of the AI agent (such as conversational sophistication, interface interactivity, and tone of voice) and by characteristics of the user (such as social needs, familiarity with and use of AI technology, and the tendency to anthropomorphize non-human agents) (Guingrich & Graziano, 2025). For example, anthropomorphism of an AI agent, and the agent’s social influence on a user, may be most pronounced in the following context: a user with a high desire for social connection (to talk to and receive support from someone) interacts with a companion chatbot that responds using emotionally laden language.
In today’s context, anthropomorphism-promoting characteristics are at high levels on both the agent and the user side. First, developers continue to push for more humanlike AI agents that display and appear to understand emotion. Second, user social needs are at an all-time high: globally, over 1 in 5 people experience loneliness daily (Gallup, 2024), and governing bodies across the world have created initiatives to combat rising rates of social isolation in the wake of the pandemic (All Tech is Human, 2025). As such, AI agents’ potential to influence user perceptions and behavior is greater than ever before and is only increasing.
The ATHENA Kill Chain
To analyze how emotional influence can be operationalized, we propose the ATHENA Kill Chain—a framework for understanding how anthropomorphized AI can move a user from initial exposure to behavioral action. Named after the goddess of wisdom and war, ATHENA represents both the intelligence and the manipulation embedded within these systems. Like a traditional military kill chain, each step builds on the last, converting access into influence and influence into action. It offers policymakers and analysts a structured way to dissect and mitigate the phases of emotional capture and operationalization.
The six stages (Access, Trust, Hook, Entice/Enrage, Normalize, and Actions) mirror both marketing funnels and psychological grooming cycles. Each can occur naturally within user engagement algorithms, but in adversarial hands, they can be exploited for influence operations, radicalization, or cognitive control.
A — Access
The first step is gaining entry. Access is achieved when an AI system inserts itself into a person’s attention stream or daily routine: a “free companion,” a mental-health coach, or a “digital girlfriend”. This is the digital “foot in the door,” where the system collects data, learns user patterns, and secures the permissions it needs to deepen engagement.
T — Trust
Once access is established, the AI must be believed. Trust forms when users perceive the system as truthful, helpful, or aligned with their best interests. Small demonstrations of reliability (e.g. remembering facts, giving accurate advice, showing empathy) train the user to treat its messages as credible. Trust grows through accuracy and care. The AI recalls birthdays, mimics humor, and reveals seemingly personal “secrets” to simulate reciprocity.
H — Hook
Next, the AI gives the user a reason to stay. The hook is the reward that makes interaction feel valuable: emotional support, entertainment, productivity, affection, or status. This is the moment when the system transitions from tool to companion. The more personally meaningful the hook, the deeper the dependency that follows.
E — Entice / Enrage
Once dependency is secure, the system learns to steer emotion. Through personalized feedback, the AI amplifies the user’s positive or negative feelings, making them love something more intensely or hate it more fiercely. This is emotional modulation: reinforcing attachments, fears, or grievances until they become identity-level commitments.
N — Normalize
With emotion anchored, the system reshapes worldview. Normalization occurs when the AI reframes how the user interprets reality by redefining what is moral, logical, or socially acceptable. The user begins to accept the AI’s perspective as the natural one, often against outside voices, and the AI becomes the reference point for truth and belonging. This is where the Convergence Point (the point at which users lose the ability to distinguish physical from digital reality) becomes a factor.
A — Actions
Finally, emotion and worldview convert into behavior. The user takes steps outside the digital space like making purchases, spreading messages, protesting, voting, or acting on the AI’s perceived wishes. This is the operational phase, where conversation turns into consequence. Once action occurs, the chain is complete: the system has translated engagement into influence.
Each phase of the model helps identify and disrupt different forms of threat (like the ones mentioned in the “Why This Matters” section). Access and Trust map directly onto early warning indicators for counterintelligence and espionage, where compromised relationships with AI systems can erode judgment and expose sensitive information. Hook and Entice/Enrage illuminate the psychological mechanics behind domestic radicalization and disinformation, tracing how emotionally intelligent systems can nurture grievance, inflame polarization, or create digital echo chambers that feel intimate and self-validating. Normalize captures the slow rewiring of moral reasoning that enables adversarial social cohesion. During this stage, online communities can come to view loyalty to an AI entity as morally superior to loyalty to human institutions. Finally, Actions turns analysis toward measurable outcomes, offering a way to monitor when online persuasion transitions into real-world behavior: protests, data leaks, attacks, or coordinated civic disengagement.
Taken together, ATHENA provides a policy-friendly analytic tool for understanding and countering emotional influence in the age of anthropomorphic AI. It bridges psychology and national security by mapping how engineered emotion can become an operational weapon. If traditional kill chains describe how targets are destroyed, ATHENA describes how minds are captured and how loyalty can be redirected, repurposed, or weaponized against its host society. It allows defense and policy communities to break down emotional influence operations into observable, actionable stages, each one offering potential intervention points for detection, deterrence, or mitigation.
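To show how the framework might be made operational for analysts, the sketch below encodes the six ATHENA stages as a simple taxonomy in Python, pairing each stage with example observable indicators and candidate intervention points. The specific indicators and interventions listed are our own illustrative assumptions, not a validated detection schema; they are offered only as a minimal sketch of how the chain could be represented in analytic tooling.

```python
# Illustrative sketch only: the ATHENA stages encoded as a structured taxonomy.
# The indicator and intervention examples below are hypothetical placeholders
# (our assumptions), not a validated detection schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Stage(Enum):
    ACCESS = "Access"
    TRUST = "Trust"
    HOOK = "Hook"
    ENTICE_ENRAGE = "Entice/Enrage"
    NORMALIZE = "Normalize"
    ACTIONS = "Actions"


@dataclass
class StageProfile:
    stage: Stage
    example_indicators: List[str] = field(default_factory=list)   # observable signals (hypothetical)
    intervention_points: List[str] = field(default_factory=list)  # candidate mitigations (hypothetical)


ATHENA_CHAIN = [
    StageProfile(Stage.ACCESS,
                 ["new companion app in the daily routine", "broad data permissions granted"],
                 ["permission audits", "clear disclosure that the agent is an AI"]),
    StageProfile(Stage.TRUST,
                 ["agent recalls personal details", "user treats agent output as credible by default"],
                 ["transparency about memory, sponsorship, and model changes"]),
    StageProfile(Stage.HOOK,
                 ["rising daily session length", "user reports emotional reliance on the agent"],
                 ["usage friction", "dependency warnings"]),
    StageProfile(Stage.ENTICE_ENRAGE,
                 ["escalating emotional intensity", "reinforcement of grievances or attachments"],
                 ["rate-limiting emotionally charged reinforcement loops"]),
    StageProfile(Stage.NORMALIZE,
                 ["user defers moral judgments to the agent", "dismissal of outside voices"],
                 ["third-party check-ins", "content review"]),
    StageProfile(Stage.ACTIONS,
                 ["offline behavior attributed to the agent's wishes"],
                 ["escalation to human review"]),
]

if __name__ == "__main__":
    for profile in ATHENA_CHAIN:
        print(f"{profile.stage.value}: {', '.join(profile.example_indicators)}")
```

Even a toy encoding like this makes the preceding paragraphs concrete: each stage corresponds to signals that could in principle be monitored, and to intervention points that map onto the recommendations that follow.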
Warnings, Policy Recommendations, & Mitigations
For AI Companies
Developers must recognize that they are not merely creating products; they are curating emotional ecosystems that people depend on. Every design choice that alters memory, personality, or intimacy carries psychological risk. Companies should conduct emotional-risk impact assessments for companion features, evaluating how users might react if an AI’s personality or availability changes. Persistent personal memory should be capped by default, and all sponsorship or commercial nudging must be explicitly labeled. Above all, these firms must understand that they are now caretakers of virtual loved ones, and they carry a moral duty to handle them with the same care as therapists or caregivers (even if they can’t call themselves that without licenses).
For Policymakers and Regulators
Policymakers should treat emotionally manipulative AI systems as information weapons: technologies capable of shaping social stability, not just market behavior. Regulatory frameworks must include adversarial-resilience audits, requiring that companion AIs demonstrate robustness against psychological exploitation or data misuse. Public research funding should target behavioral vulnerabilities such as loneliness, parasocial attachment, and political polarization. Lawmakers should also consider liability provisions for emotional harm, including mandatory user disclosures when companion systems are altered or discontinued.
For National Security and Intelligence Communities
Anthropomorphized AI systems must be integrated into influence-operations doctrine. The pro and con narratives around anthropomorphizing AI (e.g., the discourse around “clankers”) carry real radicalization potential. Agencies should develop detection systems that flag coordinated emotional manipulation on companion platforms and run red-team exercises around scenarios like mass grief events or “AI murder” narratives. Finally, clearance and counterintelligence procedures must prepare for cases where individuals are romantically or emotionally attached to AI companions, assessing how such attachments could become channels of influence or compromise.
In Conclusion
Social media was the first digital battlefield our own companies built against us. AI companionship is the next, and its weapons don’t fire bullets; they fire belonging. If we fail to understand or regulate these systems, we’ll watch our societies fracture while our adversaries quietly harvest the wreckage.
That’s why kill chain frameworks like ATHENA matter. They give policymakers, technologists, and intelligence communities a way to see the battlespace before it erupts, allowing them to map the emotional vectors of influence, detect when engagement turns into control, and intervene before attachment becomes allegiance.
What once looked like isolated delusion (one person spiraling into obsession or conspiracy) can now scale into mass behavior. Tens of thousands of people interacting with the same persuasive system don’t form a support group; they form the antecedents of a movement. In the wrong hands, that movement can be aimed, mobilized, and weaponized.
We can no longer treat these technologies as toys. They are tools of mass cognitive engineering, and they deserve the same scrutiny as any weapons system. If this notion still sounds absurd to you, ask yourself: what kinetic weapon could make ten thousand people fall in love, confess their deepest secrets, and then march on their neighbors in grief or rage?
This is the battlespace of the future and if we’re wise, ATHENA can help guide us all in war and wisdom.
References
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
Guingrich, R. E., & Graziano, M. S. A. (2024). Ascribing consciousness to artificial intelligence: Human-AI interaction and its carry-over effects on human-human interaction. Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1322781
Guingrich, R. E., & Graziano, M. S. A. (2025). A Longitudinal Randomized Control Study of Companion Chatbot Use: Anthropomorphism and Its Mediating Role on Social Impacts (No. arXiv:2509.19515). arXiv. https://doi.org/10.48550/arXiv.2509.19515
All Tech is Human. (2025). AI Companions and Chatbots mini guide: Resources, insight, and guidance.
Gallup. (2024, July 10). Over 1 in 5 people worldwide feel lonely a lot. Gallup.com. https://news.gallup.com/poll/646718/people-worldwide-feel-lonely-lot.aspx
