YouTube said it plans to offer creators the ability to opt out of the platform’s recent enhancements to some Shorts after a handful of creators expressed concerns that the subtle changes were being made without their permission or knowledge.

“Creators, we’ve heard your feedback on YouTube’s deblurring and denoising Shorts,” Rene Ritchie, YouTube’s Creator Liaison, wrote in a post on X on Tuesday. “There’s a lot of good stuff coming in that pipeline, tbh. But if it’s not for you, we’re working on an opt-out. Stay tuned!”

The announcement comes after creator Rhett Shull posted a video, titled “YouTube Is Using AI to Alter Content (and not telling us),” to his channel earlier this month. Since it was posted 11 days ago, many online have reshared clips of it and tagged YouTube to ask about the claim. One person even posted side-by-side screenshots from Rick Beato’s interview with Mike McCready to showcase the subtle changes.

Shull said he shared the video after his friend, YouTuber Rick Beato, called him with a question: Did one of his recent videos look a little off?

Beato’s video, which he had uploaded to YouTube Shorts on Aug. 5, was a clip of his interview with Pearl Jam guitarist Mike McCready. The creator, who is also a music producer and multi-instrumentalist, has built a following of 5.1 million subscribers for his guitar-related content. He’d posted the same video to his Instagram page.

Shull, a guitarist who also primarily makes videos about music, said he noticed something did stand out about the YouTube Shorts version: It looked as if it had been enhanced using generative artificial intelligence. Aspects of the background looked smudged, giving it an “oil painting” effect, he said. Other details, like Beato’s hair, appeared especially sharp.

“I’ve been making videos for a long time and I’m someone that spends a lot of time trying to get their videos to look a certain way, with the lighting and the color grade and stuff,” Shull told NBC News in a phone interview before Ritchie announced the news Tuesday. “And I know what the normal YouTube compression looks like … But what was going on here is very, very different.”

In a post to X last week, YouTube disputed allegations that it used generative AI or “upscaling” — when artificial intelligence predicts a high-resolution image from a lower-resolution one using a deep learning model — on creator videos.

“We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing,” Ritchie wrote on X in response to a question from a user who had seen the discourse around Shull’s video.

It was the first official comment from YouTube regarding the concerns, which were first raised by creators on Reddit in June. Several people had previously posted observations similar to Shull’s and Beato’s in r/Youtube.

“I was wondering if it was just me,” wrote one Reddit user.

“omg, i had this today too, i freaking hate this, i don’t want a platform altering my content,” added another.

Shull and others had said the issue isn’t just that YouTube could be using AI to alter their content. Many creators have been experimenting with various AI tools for a while, including ones rolled out by the platform last year, to help them improve their videos.

The main problem, according to some creators, is that they weren’t being given the option to opt out.

“It doesn’t really matter if you’re using ‘traditional machine learning’ or ‘GenAI,’ you’re still altering the videos without notice or consent from the content owners. In my opinion, I view this practice as both deceptive and malicious,” wrote one Reddit user, who was first to post about the topic.

Viewers could also grow distrustful of creators’ content, according to Shull, especially amid the rise of AI “fakery,” which is when AI tools are used to generate or modify content without viewers’ knowledge.

“If someone sees a piece of content that I’ve made that looks like it’s been altered with AI, the logical conclusion that that person, in my opinion, would jump to is that, ‘oh well, Rhett’s using AI to make videos or to alter videos,’” Shull said. “Or that I’m somehow using it as like a, a shortcut or a cheat code, or that it’s not real, or that it’s been deep faked. It raises a lot of questions.”

It’s not the first time a video giant has come under fire for purportedly using AI to enhance content.

In January, viewers mocked Amazon for using what appeared to be AI on a poster of the 1922 film “Nosferatu.” The company didn’t publicly comment on the backlash.

In February, Netflix also sparked controversy with its “HD remasters” of “The Cosby Show” and “A Different World,” after viewers said they noticed warped facial features on the actors and distorted backgrounds. Netflix did not issue a comment regarding whether it used AI to enhance the shows.

In his initial post, Ritchie said that “YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features.”

Later, in response to a different X user, Ritchie elaborated further.

“GenAI typically refers to technologies like transformers and large language models, which are relatively new,” he wrote.

“Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p),” the post continued. “This isn’t using GenAI or doing any upscaling. It’s using the kind of machine learning you experience with computational photography on smartphones, for example, and it’s not changing the resolution.”

But as Shull’s video picked up more traction, many people on X continued to express their concerns.

“Awesome, now let me turn it off because it’s actually making my Shorts look worse,” wrote X user CaptainAsthro in response to Ritchie’s post. He goes by the same username on YouTube, where he posts about the video game Star Citizen.

“The issue isn’t what technology is being used,” wrote Ari Cohn, a First Amendment and defamation lawyer who serves as lead counsel for tech policy at the Foundation for Individual Rights and Expression (FIRE). “It’s that you’re changing the content without the permission or even knowledge of its creator.”

“YouTube has confirmed it’s testing AI for clarity in Shorts, but the lack of choice for creators is sparking conversations about trust and authenticity in digital content,” AI strategist and former IP lawyer Wes Henderson said in a post on X. “It really makes you think about the evolving role of AI in shaping what we see online and the importance of creator awareness.”

Ritchie did not provide a timeline for when the opt-out feature would be available to creators.

NBC News has reached out to YouTube for additional comment.