Is this a good news story, or apocalyptic?
For several years, surveys of young people in countries like the United States have reported growing numbers experiencing chronic loneliness. A new study from the research and advocacy group Common Sense Media, whose mandate is to protect children online, has found that a surprising number (or is that an “alarming” number?) of young people are coping by engaging with artificial intelligence (AI) companions for emotional support and friendship. Out of a sample of 1,060 young people ages 13 to 17, 52 percent used an AI companion at least a few times a week, with a third of those youth relying on these online tools for social interactions and relationships. That can mean anything from seeking advice to flirting. The danger appears to be that these tools can be sycophantic: they reflect back to users what they want to hear, with no challenging of expectations or seeking of compromise. One can only imagine what the long-term impact of that kind of interaction will be on young people’s expectations for intimacy later in life.
While it would be easy to say, “Ban access to these tools for children under the age of 18,” I’m doubtful that will work. Putting aside the dangers that big tech poses to our kids, I see in this report hints of a bigger issue. If our children have managed to find in a virtual solution a way to maintain their mental health, what are we failing to provide them elsewhere in their lives? And if we’re failing so badly at preventing epidemic levels of loneliness among teens, should we encourage them to have AI friends, even with the dangers those friends pose? Or are we simply reinforcing the very problem we should be helping to fix? If we give up and embrace AI companions, do we know what the long-term consequences will be for a generation that is losing its social skills?
These questions are, to my mind, frightening. But they also remind us that removing AI companions will simply perpetuate the problem of loneliness, continuing to leave our children vulnerable. Instead, we should be asking ourselves why our children need AI companions and what those companions are providing that our children can’t easily access elsewhere.
Turning this problem on its head, we could treat the Common Sense Media report as a catalyst for seeing just how desperate our kids are for intimate connections. If we acknowledge that, then AI companions offer possibilities for improving our children’s resilience rather than undermining it.
Could AI companions, for example, become a space for rehearsing social skills? Could algorithms be changed to encourage kids to reach out to real people, too? Or would that run counter to the profit motive of the corporations providing access to these virtual avatars, whose income depends on keeping users coming back as often as possible?
Could we see a child’s use of an AI companion as the proverbial “canary in the coal mine”? A child who is that desperate for emotional connection may be exactly the one in whom we need to invest time and energy to help build real-world connections. For these children, we should be doing just about anything that puts them in close proximity to other people with whom they can create relationships: sending them to summer camp, ensuring their phones are off during mealtimes, offering access to recreational activities, supporting schools with no-phone policies, and scheduling time to visit extended family.
The problem is not the AI companion. The problem is that we have created a world of overprotected, fragile young people and then stopped providing them with opportunities to engage socially anywhere other than online.