The rush to integrate generative AI into workflows and automate manual work is resulting in presentations and reports that may look polished and professional but are error-strewn and lack substance.
Ask any employee who uses AI and they'll probably tell you that they've spent a significant amount of time at least once cleaning up the technology's mistakes, even if they're simple grammatical ones.
This workplace epidemic has been termed AI "workslop" by leadership coaching platform BetterUp and researchers at the Stanford Social Media Lab. Writing in the Harvard Business Review in September, they defined "workslop" as "AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task".
"Slop" has become a common term for low-quality AI output, which is plaguing security teams, gumming up submissions to open source bug bounty programs, and driving prominent figures such as Andrej Karpathy to cast doubt on the technology as a whole. Despite appeals from the likes of Satya Nadella, the term is still widely used.
According to the BetterUp and Stanford Social Media Lab research, approximately four in ten of the 1,150 US desk-based workers surveyed encounter workslop at least once a month. Each incident takes, on average, almost two hours to rectify.
The cost to business is believed to be up to $186 per employee per month. Based on workslop's estimated prevalence (41%), productivity at a company with a headcount of 10,000 would take a $9m hit over the course of a year.
Beyond the financial impact, workslop is fueling tensions among co-workers. When asked how they felt about receiving workslop, employees said they felt annoyed (53%), confused (38%) and offended (22%). Colleagues who relied on AI to generate content were deemed to be "less creative, capable and reliable". Consequently, 42% of respondents said they'd consider their co-workers "less trustworthy", while 37% would view them as "less intelligent".
The situation is threatening to create rifts within teams. A third of those surveyed said they had notified teammates or managers about a colleague's workslop, and that they would be less inclined to want to work with someone from whom they've received it.
And things could be getting worse, not better. A recent study by Zapier suggested the productivity impact of workslop could be even greater, with employees forced to spend four and a half hours each week fixing AI mistakes.
Putting a good AI content generation policy in place
"AI workslop is a real problem and it's unsurprising that it has crept into our workplace products," says Brian Shannon, CTO at SaaS-based IT management company Flexera.
While there are undoubtedly workers who rely on AI to enhance the quality of their work, others are turning to it for a quick fix and the result is content that often lacks context and can be unhelpful. Leaders "must be diligent about creating processes that validate AI output" and "establish best practices for teams to determine when generative AI helps versus when it hurts," advises Shannon.
Sharon Bernstein, chief human resources officer at SaaS provider WalkMe, acquired by SAP in 2024, agrees that leaders have to step up their monitoring of how the technology is being used by their workers.
"Workslop arises when employees are told to use AI without guidance on how or when to use it," says Bernstein. "Valuable AI output should be enhancing a task's clarity, creativity, or efficiency, not simply replicating human effort in a lower-quality way."
What's needed are clear policies on what content AI should and shouldn't be used for. These policies should also set out rules on what data is used to generate content. For example, a policy could stipulate that only first-party data can be used and that content should be easily sourced and properly referenced.
"A good policy isn't about restrictions, it's about giving teams guardrails so they can confidently use AI to add value," says Eric Ritter, CEO and founder of digital marketing agency Digital Neighbor. The risk of not having guardrails in place is that employees could end up using AI recklessly. Aside from quality issues, this could potentially have liability and legal ramifications, damaging brand integrity and client trust.
Ritter adds that Digital Neighbor has seen the benefit of a strong AI content generation policy first-hand with its clients. "When businesses define where AI fits in their workflow, be it initial research or draft creation, versus where human expertise is non-negotiable, like client-facing content, teams aren't second-guessing AI's every decision."
Workslop vs genuine AI content
Even with a robust policy in place, there's still the chance that workslop could slip through the net. Some of this can be put down to human error: employees may be under pressure to deliver a report or presentation to a tight deadline. Other instances come down to a lack of care and attention, with employees simply failing to question the quality of AI's output or to check it for context and relevance.
More often than not, the reason employees fail to spot workslop is that leaders haven't provided them with the right training. Employees have to understand that "slop happens when someone hits 'generate' and copies the output verbatim," argues Ritter. "Whereas value happens when AI is used as a 'thinking partner' that you feed specific context, review critically, and then add your expertise."
Bernstein says that employees at WalkMe are "encouraged to be critical of AI output the same way they would approach an intern's work… You wouldn't ship off an intern's work without review as it can always benefit from constructive criticism."
As for how employees should go about identifying the difference between workslop and genuine, valuable content, Bernstein advises that they look for "content that's generic and inconsistent with company tone". They also need to be wary of content that has a lot to say yet doesn't say anything memorable or useful.
Ritter echoes this and says that "if they read three paragraphs and can't recall a single specific example or actionable takeaway", then the reasonable assumption is that the content is workslop. He recommends that employees look out for stock words and phrases, such as "to dive deep", "to leverage synergies" and "to unpack insights". Another thing that would set his alarm bells ringing is repetitive structure, such as an opening sentence, followed by a few bullet points and then a summary paragraph.
"Real human writing has rhythm variation and personality. So, if the AI content reads like it could apply to any company in any industry, it's probably workslop," warns Ritter.
While AI-generated content can be useful, it's clear it can also cost businesses time and money and cause friction among teams if employees don't know when and how best to use it.