Describing work as slop and sludge is hardly ideal feedback. But the terms serve as a warning to employers about the risks and limitations of content generated by artificial intelligence.
“Work slop is a new form of automated sludge in organisations,” says André Spicer, author and dean of Bayes Business School. “While old forms of bureaucratic sludge like meetings or lengthy reports took time to produce, this new form of sludge is quick and cheap to produce in vast quantities. What is expensive is wading through it.”
Many executives are championing new AI tools that help their staff to synthesise research, articulate ideas, produce documents and save time — but at times the technology may be doing the opposite.
Deloitte this month announced it would partially refund the Australian government for a report it produced that contained mistakes made by AI, demonstrating the risks for professional services companies.
The potential harm is not only external — to corporate reputations — but also internal, as poor AI-generated content can result in bloated reports with mangled meanings and excessive verbiage, creating extra work for colleagues to decipher.
While AI significantly decreases the effort to put pitches and proposals together, it does not “equally decrease the costs of processing this information”, adds Spicer.
Michael Eiden, managing director at Alvarez & Marsal’s digital technology services, says: “The accessibility of generative AI has made it easier than ever to produce work quickly — but not necessarily to the highest standard.”
A recent report by BetterUp, the coaching platform, and Stanford Social Media Lab found that, on average, US desk-based employees estimate 15 per cent of the work they receive is AI work slop.
The emerging problem heightens the need for clear policies and increased monitoring of AI’s use, as well as staff training.
The Financial Reporting Council, the UK accountancy regulator, warned in the summer that the Big Four firms were failing to monitor how automated tools and AI affected the quality of their audits, even as firms escalate their use of the technology to perform risk assessments and obtain evidence. Last week, one of the professional accountancy bodies issued a report on AI’s ethical threats — such as fairness, bias and discrimination — to finance professionals.
Meanwhile, the UK High Court has called for the legal profession to be vigilant after two cases in which written legal arguments and witness statements, thought to have been produced by lawyers using AI, contained false information, “typically a fake citation or quotation”.
“Firms shouldn’t simply hand employees these tools without guidance,” says Eiden. “They need to clearly define what good looks like.”
A&M is developing practical examples and prompt guides to help staff use AI responsibly and effectively. “For high-stakes work”, says Eiden, “human review remains non-negotiable — the technology can assist, but it should never be the final author.”
James Osborn, group chief digital officer at KPMG UK and Switzerland, agrees, stressing the importance not just of staff verifying the accuracy of the content but also “suitable governance processes” to ensure the technology is being used appropriately.
It is not just AI’s ability to help with the substance of employees’ work that is under scrutiny, but also administrative tasks, including scheduling meetings and taking notes, according to a report by Asana. It highlighted workers’ complaints of AI agents sending false information and forcing teams to redo tasks, adding to their workload.
Where employers do not set out a clear policy on AI’s use in the workplace, staff may use it on the sly. A report by Capgemini this year found that 63 per cent of software developers were using unauthorised tools, which can have serious ethical and security implications, such as the sharing of company data.
It is not only ethics and errors that are a problem but the demands on staff to identify and fix “work slop”, a term coined this month by researchers to describe “AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task”. Resulting content can be “unhelpful, incomplete, or missing crucial context about the project at hand”, they wrote in a piece published in the Harvard Business Review. This means the receiver may have to “interpret, correct, or redo the work”.
Kate Niederhoffer, social psychologist and vice-president at BetterUp Labs, a research arm of the coaching service, and one of the report’s authors, insists employees are not creating work slop for “nefarious” reasons but typically because they “have so much work to do”. Dividing users broadly into two mindsets, she describes “pilots” as those who are curious about AI, using it to augment their capabilities rather than replace them, and “passengers” who are begrudging, burdened by work, and who use AI to buy themselves more time. “One of the reasons people are creating work slop may be the result of too few people, everything feeling urgent and important.”
Niederhoffer urges managers to give staff support, and to be clear about the likely effect of poor work on colleagues.
Clarity about the purpose and use of AI is key, says Joe Hildebrand, managing director of talent and organisation at Accenture. “When you clearly understand the tangible and specific value AI can bring to your context, you are better able to design and deploy tools that create meaningful impact, not just noise.”
Mark Hoffman, head of Asana’s Work Innovation Lab, advocates four core foundations for AI use, starting with guidelines that balance legal, IT and security concerns with practical business needs. He also recommends training that goes beyond the technical skills of prompt writing to teach softer delegation skills; accountability rules that clarify who is responsible when things go wrong; and quality control standards that prioritise accuracy and error tracking alongside efficiency. “The goal is not to just figure out what behaviours to prevent, but what behaviours to empower and enable.”
Hildebrand stresses the importance of “reversibility”. “Every AI deployment should include a human override or kill switch. Monitoring how often humans reverse AI decisions and using those insights to improve the system can enhance trust.”
As AI automates more work processes, manual input will become increasingly critical, some experts say. Spicer observes that more universities are asking students to sit a written exam or give a verbal presentation, instead of making an electronic submission. “It is likely firms will increasingly rely on analogue input and processes to make high-stakes decisions.”
Stuart Mills, assistant professor of economics at Leeds University, believes managers have become swept up by “the excitement of AI and immediateness of the results” and distracted from “asking big questions about organisations and productivity.”
The tendency is to measure knowledge work output by lines of code or numbers of reports, he says, which can create “an illusion of productivity”.
He suggests: “Managers need to ask, ‘What do we do to create value? And can we use AI in our current structure, or do we need to change our structure?’ I don’t see those questions being asked.”