
Survey shows growing frustration with AI content on social media

WGAL

Updated: 6:45 PM EDT May 14, 2026



A recent survey by CNET highlights growing concerns over AI-generated content on social media, with users expressing frustration and uncertainty about distinguishing real content from AI creations.

If you’ve been on social media, you have no doubt seen some AI slop: videos of talking dogs, talking babies and other things that are hard to believe because they are not real.

Katelyn Chedraoui, a reporter for the online publication CNET who covers the growth of artificial intelligence, explained the nature of such content. “Some of these videos are very obviously AI. They’re fantastical scenarios of things, you know, that just can’t happen in real life,” she said.

“The people that share this kind of AI content are really just trying to grab your attention, because once they do that, they can monetize that content and make money off of it,” Chedraoui said.

Many users want clearer labels

A survey conducted by CNET found that less than half of respondents feel confident in their ability to identify AI-generated content.

“Less than half of us are confident that we can spot it when we see it. So there’s this challenge of we know AI content is out there, but we’re not sure we can pinpoint it when we see it,” Chedraoui said.

The survey, which polled 2,400 people, revealed that 51% believe AI-generated content should be better labeled, while 21% support a total ban on such content on social media.

“People are growing really frustrated with AI content, and they want really clear ways of understanding if what they’re seeing is real or AI,” Chedraoui said.

Currently, there is no legal requirement for AI-generated content to be labeled as such.

Tools and laws still catching up

Verification tools, such as Winston AI, Hive, ZeroGPT, WasItAI, DeepAI, TruthScan, and SightEngine, are available to help users identify AI-generated videos, images, and text.

However, these tools often require a moderate level of computer knowledge and a subscription.

Last year, all 50 states introduced legislation to regulate the use of AI, according to the National Conference of State Legislatures.

However, because AI technology is evolving so rapidly, very few of those laws have been approved and taken effect.

The “8 On Your Side” team encourages users to question the authenticity of what they see or read online until they can verify it.