Just a couple of years after the disruptive potential of generative artificial intelligence hit legal technology, a new subset is emerging: AI agents. This software can perform tasks on its own — make decisions, take action or solve problems — with less human guidance than generative AI. And it will enable further cost savings and speed up processes.
At least, that is the promise.
Companies’ in-house legal departments were keen early adopters of generative AI. They explored opportunities for further automation of standard legal tasks, from reviewing contracts to checking that policies complied with regulations.
Now agentic AI is being touted as a potential next big step for taking the human out of legal tasks, thanks to its ability to undertake multi-step processes.
In-house legal teams are overwhelmed by the tasks they must undertake, says Ryan O’Leary, a legal tech expert at research company IDC. “If you can use [agentic AI] to take away the low-hanging fruit and the administrative stuff, it’ll certainly be a major win for the organisation.” But accomplishing more, and doing it faster, will depend to some extent on legal departments “kind of leaving AI to its own devices”. That would mean less human oversight than generative AI requires, O’Leary adds.
So far, use of AI agents is extremely limited: fewer than 1 per cent of the world’s biggest 5,000 companies use them for legal work, estimates Weston Wicks, an expert in legal and compliance technology at research company Gartner.
Suppliers of legal agentic AI software so far include legal AI business Harvey and start-up Eudia, both US-based, and Sweden’s Legora.
Salesforce, the international business software company, which employs about 500 people in its global legal team, uses its own agentic AI software, Agentforce, plus agent tools from other suppliers, to handle a number of legal tasks. These include answering sales staff questions about customer agreements, negotiating contracts and working out which enquiries to prioritise before sending them to the legal team.
The company says its lawyers always check the work of agentic AI.
Nevertheless, Salesforce adds that it is on track to save about 9,500 hours of staff time annually on compliance and risk tasks by using agentic AI software. A pilot project found, for instance, that using it to negotiate non-disclosure agreements was 25 per cent quicker.
When AI agents cannot answer legal queries immediately, they route them to the right Salesforce lawyers according to priority.
“It kind of [takes] on this . . . managerial role, where it’s assigning the work to the right team,” says Salesforce chief legal officer Sabastian Niles.
Eventually, it may be possible for the Salesforce AI agent to negotiate part of a contract with the AI agent of a customer or supplier, he speculates.
How accurate is the agentic AI output used by Salesforce’s in-house lawyers? Niles does not give a figure, saying only that the company is “very focused” on Agentforce’s accuracy and on ensuring it draws on reliable sources of data.
Agentforce is used by thousands of companies, including jobs marketplace Indeed, tyre company Goodyear and drug company Pfizer.
Generally, if mistakes do happen, who is liable?
The answer may be “the customer”. As Harvey puts it: “We advise every prospect and customer of Harvey that our approach is technology-enabled but verified by humans, so any final output produced should be reviewed thoughtfully by a lawyer.”
For their part, legal departments are well aware of the risks of agentic AI hallucinations — producing incorrect information, sometimes insidiously — and the need for human oversight.
Indeed, they may be reluctant to grant full autonomy to their AI agents, whether bought in or built in-house, even for relatively basic tasks.
In July, early-stage start-up Eudia acquired Johnson Hana, which employs more than 300 people, giving it a large team of human legal experts to act as quality control for its agentic AI software.
“What we’ve found is [an] expert human with an agentic AI solution will always outperform an expert human without that solution, or that solution without the expert human,” says co-founder and chief executive Omar Haroun. Eudia’s clients include battery business Duracell and agricultural group Cargill.
Other suppliers cite examples where agentic AI technology has been more accurate when working without any human intervention.
Legora co-founder and chief executive Max Junestrand recalls how one customer — a “market-leading, global financial services company” — raised a discrepancy in results when its own lawyers and Legora AI agents analysed old merger and acquisition deals. “[It] was really, really concerned that the results were different in Legora versus what the humans had done,” says Junestrand. “It turns out that the humans were incorrect.”
The international division of Japanese advertising group Dentsu is using agentic AI supplied by Harvey.
Robert Clark, general counsel at Dentsu International, oversees a legal team of about 260 people, excluding Japan. This year, the team began using the agentic software to check that the company’s legal policies comply with legislation and regulations.
The team has yet to calculate return on investment, but Clark is cautiously optimistic.
One future use of agentic AI, he suggests, could be analysing spending on external legal advisers.
He adds: “[Agentic AI] could be transformational, but I don’t think it’s going to be transformational overnight, because, like anything, it will take time to learn how to use the technology. [And] it will take time as well for in-house legal teams to identify where the real benefits are.”