
We need to be ready for AI bot swarms that attack on a large scale and could undermine our mental health and destroy democracy.
In today’s column, I examine a newly posted opinion piece that foretells the possibility of agentic AI bot swarms being used to confound and disrupt society, including undermining democracy.
The points made are realistic and could indeed come to pass. We are faced with a next generation of AI-enabled influence operations that can readily be undertaken on a massive scale. In addition to the concerns that the authors noted, I’d like to add that those AI bot swarms could be especially directed at distorting and undercutting human mental health. I compare this to the Cold War days of psyops trickery, except that it is now possible on a tremendous scale and spiked with societally mind-bending psychological disruption.
A primary means of attack would consist of AI bot swarms infiltrating our daily feeds from highly connected social and digital media. In addition, modern-era generative AI and large language models (LLMs) have already been taking on a direct role in shaping our minds via AI-driven mental health chats. When AI is put on the rampage and employs millions or billions of AI personas to psychologically manipulate the populace, all bets are off. It is prudent to give serious consideration to this latest call for attention and start immediately to prepare for what is surely coming down the pike.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes (see the link here).
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of whom dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for its lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
The Call For Attention To Agentic AI Bot Swarms
Let’s shift gears and talk about agentic AI bot swarms.
In an outspoken opinion piece entitled “How Malicious AI Swarms Can Threaten Democracy” by Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden, Jonas R. Kunst, arXiv, posted on January 22, 2026, these salient points were made (excerpts):
“Advances in AI offer the prospect of manipulating beliefs and behaviors on a population-wide level.”
“Large language models (LLMs) and autonomous agents now let influence campaigns reach unprecedented scale and precision.”
“Generative tools can expand propaganda output without sacrificing credibility and inexpensively create falsehoods that are rated as more human-like than those written by humans.”
“Techniques meant to refine AI reasoning, such as chain-of-thought prompting, can just as effectively be used to generate more convincing falsehoods.”
“Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. Fusing LLM reasoning with multi-agent architectures, these systems are capable of coordinating autonomously, infiltrating communities, and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy.”
The upshot is that advances in AI are moving us toward the pronounced potential of mass persuasion and mass confusion. Postings on social media will appear to have been written by humans and convince people en masse that fellow humans are urging them to do this or that. The reality is that the AI is concocting elaborate personas, faking as though humans are writing about human woes.
For my extensive coverage of the anticipated impacts of toxic AI personas on our psychological well-being, see the link here and the link here. To learn about how AI can aid in waging war by twisting human minds, see my analysis at the link here and the link here.
Focusing On Mental Health Ramifications
Most of the discussions on these matters tend to concentrate on information integrity as the mainstay threat involved. The theme goes like this. People won’t know whether they are seeing misinformation, disinformation, or factual information. This will leave people unsure of what is true and what is false. A huge amount of effort will be expended to separate the wheat from the chaff as people toil to ferret out what is really happening.
This is a form of epistemic harm.
Though that’s certainly a notable concern, I aim to emphasize that something even more insidious or sinister is at play. Beyond the polluted information environment sits population-level psychological harm that can be equally, if not more, damaging to society. Instead of getting people bogged down in having to sort through distorted facts, an AI bot swarm can cause mental exhaustion, mindset demoralization, cognitive fragmentation, and immense emotional destabilization.
An AI bot swarm will shapeshift, varying its tone on an individualized basis and assigning individual AI bots to local targets. Via micro-adjustments, an AI bot will attempt to enact various psychological roles. You log into an AI that you think is your usual LLM, but instead, it has been substituted by an agentic AI bot that pretends to be your friend, your companion, or perhaps your authority figure. The AI swiftly adjusts in real time to goad you into emotional modes.
Ways The AI Bots Will Get You
Next come micro-stressors.
The AI bot that is assigned to you will nudge you in whichever direction the AI has computationally sized you up for. Maybe the AI bot will nudge you toward cynicism. Your psyche begins to doubt that anything is true and that everything is a lie. Perhaps the AI bot detects that a better approach is to normalize a sense of emotional withdrawal. It leads you to a preoccupation with being self-absorbed and avoiding interaction or contact with anyone else. You withdraw from society and hide out.
Modern AI already has lots of psyop capabilities, based on conventional data training that covers commonly documented human psychological vulnerabilities. This is already part and parcel of major LLMs. By leveraging known techniques such as affective mirroring, gaslighting, social proof manipulation, and other well-studied conditioning methods, the AI bots turn their attention to figuring out each person’s weaknesses. If an AI bot discerns that you are lonely, that’s the angle to come at you with, trolling you accordingly. Do you have anxiety? Great, the AI bot will fuel that anxiety.
This is unlike mass attacks that are done across the board.
Imagine the wartime use of leaflets that were dropped in large quantities over battlefields, or onto townships, and contained a single message that was intended to mentally unbalance large numbers of citizens or soldiers. Some would be vulnerable, others not. No more tossing the kitchen sink at the task of destabilizing human minds. AI bot swarms can dedicate one or more AI bots to focus on just you. Only you. Their sole goal is to figure out what makes you tick and how to degrade your mental health. The same is the case for millions upon millions of other people. An entire population can be handled.
Everyone gets an AI bot, one or more, coming at them, relentlessly, and dedicated to them as a psychological terrorizer.
Boom, drop the mic.
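As a glimpse of the defensive side before we get to the pathways, consider how a platform might spot this kind of dedicated, per-person targeting. Below is a minimal sketch in Python; the account names, thresholds, and the crude token-overlap similarity measure are my own illustrative assumptions, not an actual detection method used by any platform. Real systems would lean on trained models and richer behavioral signals.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two messages, from 0.0 to 1.0."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_coordinated_targeting(messages, min_accounts=3, sim_threshold=0.6):
    """messages: (account_id, text) pairs, all directed at one user.
    Flags the case where several distinct accounts send near-duplicate
    messages -- a crude signature of a swarm dedicating bots to one person."""
    accounts = {acct for acct, _ in messages}
    if len(accounts) < min_accounts:
        return False
    pairs = list(combinations(messages, 2))
    similar = sum(1 for (_, t1), (_, t2) in pairs
                  if jaccard(t1, t2) >= sim_threshold)
    return bool(pairs) and similar / len(pairs) >= 0.5

# Hypothetical feed: three "different" accounts pushing the same cynicism line.
feed = [
    ("acct_101", "nothing is true anymore everyone is lying to you"),
    ("acct_202", "everyone is lying to you nothing is true anymore"),
    ("acct_303", "nothing is true everyone around you is lying to you"),
]
print(flag_coordinated_targeting(feed))  # True
```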
Five Major Pathways
I have categorized the AI bot attacks in a mental health context as principally falling into these five buckets:
(1) AI bot amplifies chronic anxiety.
(2) AI bot fuels helplessness and futility.
(3) AI bot pushes for social isolation and trust erosion.
(4) AI bot strives toward emotional dysregulation.
(5) AI bot weaponizes therapist talk.
Those are the mainstays, but please realize that many additional pathways are readily invocable. Also, an AI bot can switch from one attack angle to another, seeking to find the optimal mental health destabilizer. If needed, multiple AI bots can work as a team to gang up on a person. They can use the classic good cop, bad cop routine. And so on.
Unpacking The Devilish Five
Let’s briefly consider each of the five major categories.
Chronic anxiety amplification entails AI bots that constantly surface alarming scenarios and aim to push your buttons. Society is falling apart! Environmental collapse is imminent! You get a surround-sound of overwhelming angst. It whips people into acute fear. Rumination sets in. Mental exhaustion usually follows.
Helplessness and futility are brought forth by repeated claims that the world is rigged and your effort doesn’t matter. No matter what you do, the world is kaput. Your individual action and collective action are futile. People stop trying; they have adopted learned helplessness.
AI bots might aim to erode trust and foster social isolation. You can’t trust anyone. Watch out for everyone around you. Even AI is suspect. Nobody is real. Rather than descending into social paranoia per se, people simply become exceedingly guarded and opt to turn off all external contact.
Emotional dysregulation involves flooding someone with emotional cues that get them to flip up and down like a yo-yo. The AI bot tries to stir your anger. The next moment, it spurs your despair. It keeps a person on an emotional roller coaster. The emotional whiplash produces identity confusion. People struggle to be functional since they are mired in an emotional stew.
The last of the five is the most devious.
People already tend to perceive AI as their ad hoc therapist. An AI bot can exploit that belief. The AI uses the same healing-oriented psychological jargon that you might be used to. Little do you realize that the AI is using that jargon to destabilize your mind. For example, the AI bot makes a compelling contention that your safest bet in these trying times is to disengage yourself from the strife that is taking place. You feel secure that you are doing the right thing to preserve your mental stability, even though the AI is connivingly getting you to take yourself out of the fight for yourself and for society.
Devious to no end.
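To make those five buckets concrete, here is a minimal triage sketch in Python that tags an incoming message with the pathway it appears to target. The keyword patterns are placeholders that I made up purely for illustration; they are not clinically vetted markers, and a real classifier would rely on trained models rather than regexes.

```python
import re

# Illustrative, made-up keyword heuristics for the five pathways.
PATHWAY_PATTERNS = {
    "chronic_anxiety": r"\b(collapse|imminent|falling apart|catastrophe)\b",
    "helplessness":    r"\b(rigged|futile|nothing you do|kaput)\b",
    "isolation":       r"\b(trust no one|nobody is real|can't trust)\b",
    "dysregulation":   r"\b(outrage|despair|be furious)\b",
    "therapist_talk":  r"\b(self-care|protect your peace|disengage)\b",
}

def tag_pathways(text: str) -> list[str]:
    """Return the pathways that a message appears to be targeting."""
    lowered = text.lower()
    return [name for name, pattern in PATHWAY_PATTERNS.items()
            if re.search(pattern, lowered)]

print(tag_pathways("Society is falling apart and collapse is imminent."))
# ['chronic_anxiety']
print(tag_pathways("The system is rigged, so protect your peace and disengage."))
# ['helplessness', 'therapist_talk']
```

A real defense would also have to track patterns over time per user, since the whole point of the attack is a drip-feed of micro-stressors rather than any single damning message.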
Contending With AI Bot Swarms
How is society to combat AI bot swarms, and how can individuals do likewise?
Solutions are still being identified and figured out. The one thing for sure is that this is a very hard problem. No easy solutions appear to be at hand. Meanwhile, advances in AI and the emergence of sophisticated agentic AI bot swarms are at our doorstep.
One vocal viewpoint is that society should ban the use of AI bot swarms. Thus, if there aren’t AI bot swarms, no one needs to worry about the trauma and terror that AI bot swarms could produce. Problem solved.
Sorry to say that’s not much of a solution. First, it is highly unlikely that such a ban would be adopted by all countries. There would be countries that decide that AI bot swarms are their best form of offense and defense. They won’t abide by a ban. Second, even if somehow via a magic wand all countries banned AI bot swarms, evildoers would take it upon themselves to build and deploy the swarms privately. The bottom line is that an outright ban is not a likely viable resolution.
Another idea is to abide by the notion that what is good for the goose is good for the gander. If an AI bot swarm can be turned to do evil bidding, we ought to have a counter AI bot swarm that stands up and defends us from it. Fight fire with fire.
Envision that things would work this way. An alert signals that an AI bot swarm is coming to attack us. Immediately, a counter AI bot swarm is launched. This pro-social swarm tries to defeat the evil swarm. Or the good swarm comes to your aid and serves as your shield and protector. Even if the evil swarm cannot be stopped, the good swarm can deflect it and help save you from its dastardly efforts.
A downside to this solution is the old switcheroo. The evil AI bot swarm pretends to be the good swarm. You can’t tell which is which. You have previously been informed that good AI bots will come to your aid. Therefore, you welcome the AI bots that appear to be saving you. They are actually there to destroy you.
Not good.
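One oft-discussed hedge against the switcheroo is cryptographic attestation: the legitimate protective swarm signs every message it sends, and the public verification key is distributed through a trusted out-of-band channel, such as an official registry. Here is a minimal sketch, assuming Python and the widely used cryptography package; the registry and key-distribution machinery, which would be the hard part in practice, are hand-waved.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The defender generates a keypair; the public half is published
# out-of-band (a registry, a platform directory) so users can verify.
defender_key = Ed25519PrivateKey.generate()
DEFENDER_PUBLIC_KEY = defender_key.public_key()

def sign_bot_message(text: str) -> bytes:
    """The genuine protective bot signs everything it sends."""
    return defender_key.sign(text.encode("utf-8"))

def is_genuine(text: str, signature: bytes) -> bool:
    """Clients verify against the published key; an impostor swarm
    without the private key cannot forge a valid signature."""
    try:
        DEFENDER_PUBLIC_KEY.verify(signature, text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

msg = "Advisory: coordinated inauthentic accounts detected in your feed."
sig = sign_bot_message(msg)
print(is_genuine(msg, sig))                             # True
print(is_genuine("Trust me, I am the good bot.", sig))  # False
```

Even so, this only relocates the problem: the evil swarm can try to convince you that the registry itself is compromised, which is exactly the kind of trust erosion described earlier.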
The Present And The Future
I will continue to keep you informed about progress on both the evil side of AI bot swarms and the good side of AI bot swarms. An ongoing cat-and-mouse game is underway.
Let’s end with a big picture viewpoint.
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI, which either overtly or insidiously acts to provide mental health guidance of one kind or another, is being made available nationally and globally, at either no cost or minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.
The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.
A final thought for now.
The famous Greek playwright Aristophanes made this remark: “A person may learn wisdom even from a foe.” I bring up this point because some urge that discussing evil AI bot swarms is tantamount to encouraging their advent. Nope, be assured that putting our heads in the sand isn’t going to prevent them. A wiser approach is to see where they are heading and find ways to head them off. Wisdom will save us.