A cyberpsychology expert warns that as A.I. grows more advanced, people are becoming too trusting and less critical of the information it provides.

NORFOLK, Va. — Artificial intelligence (A.I.) tools are everywhere, helping us write, research, create, and even communicate. However, as A.I. becomes part of daily life, experts say our digital literacy hasn’t caught up.

Dr. Scott Debb, professor of psychology and director of the Cyberpsychology Research Lab at Norfolk State University, says the gap between what A.I. can do and what people understand about it is widening.

“If you don’t have the digital literacy that’s needed,” Debb said, “you start believing everything that you see or hear, and this false sense of trust happens.”

That misplaced trust, he says, stems from what psychologists call cognitive offloading: the tendency to let technology do our thinking for us.

“Every person with a phone or Wi-Fi connection has access to practically all the information in the world,” Debb explained. “But there’s a difference between having information and using it.”

He says when users turn to A.I. tools like ChatGPT or other generative platforms, they’re often skipping the process of evaluating information altogether.

“With A.I., you type in a question and it tells you what you want to hear,” Debb said. “But when we accept it at face value, we’re not thinking anymore, we’re just passive consumers.”

That passivity, he warns, can have ripple effects beyond misinformation. It can shape how we understand truth, expertise, and even human connection.

“There’s no consciousness, there’s no awareness,” he said. “Anything it does is just mirroring back what it’s been fed, what other people have already said online.”

Debb compares today’s A.I. boom to the early days of social media, another tool created to connect people that later revealed major flaws.

“What it was originally intended for was staying connected,” he said. “But without the guardrails, the impact on human beings only changes; it gets worse before it gets better.”

Now, he says, we’re at a similar crossroads with A.I. The technology is advancing faster than the policies meant to regulate it. And without proper safeguards, A.I. could take on roles it was never designed for.

“Generative A.I. shouldn’t be playing therapist,” Debb said. “We have this opportunity right now to put in oversight, so people don’t end up putting their faith in a program instead of a human being.”

He believes digital literacy education, which covers understanding how algorithms work, knowing where information comes from, and recognizing when to question it, should become as essential as traditional reading and writing skills.

“Just because we have digital technology, none of our circuitry has changed,” Debb said. “It’s up to us to be critical consumers, not just consumers.”

Debb says A.I. itself isn’t the enemy. Like every tool before it, the danger comes from how it’s used, and whether people are equipped to use it wisely.

“Any obstacle can become an opportunity,” he said. “If we focus on building awareness and responsibility now, A.I. can still be a tool for transformation, not dependency.”