ChatGPT and the law: a useful but imperfect tool

Generative artificial intelligence tools such as ChatGPT will not replace lawyers or judges, but they are expected to be used more and more in the world of law and justice.

Credit: Getty

In 5 seconds

Generative artificial intelligence tools such as ChatGPT raise questions about how they can be used and controlled in the field of law, say two UdeM professors.

AI content generators like ChatGPT will never replace lawyers and judges, but they’re increasingly being used in the legal field.

There's already a lot of excitement about what these tools can do: interpreting the law, improving access to justice and legal education, drafting contracts and legal documents, offering legal aid, supporting decision-making, and facilitating lawyer-client communications.

But the arrival of ChatGPT and the applications built on it has raised two kinds of questions about how to use and control the technology, according to professors Nicolas Vermeys and Karim Benyekhlef of Université de Montréal’s Faculty of Law.

Dependent on jurisdiction and territory

Karim Benyekhlef and Nicolas Vermeys

Credit: Karim Benyekhlef (Christian Fleury) and Nicolas Vermeys (Centre de recherche en droit public)

The first issue, according to Vermeys, is that unlike medicine and other scientific fields, the law is bound to a jurisdiction: it applies only within a specific region or territory.

“For example, Canadian criminal law only applies within Canada, just like Quebec's civil law only applies within Quebec,” he said.

So even though AI content generators can be trained on legal texts, there’s a real risk of getting the wrong legal information from them, “especially since Quebec is under-represented in the legal datasets used to train these algorithms,” he added.

“The very design of tools like ChatGPT has created another issue,” said Vermeys, who also leads UdeM’s Centre de recherche en droit public (Research Centre in Public Law).

"ChatGPT was designed to provide the most likely answer to a question, not the best answer. That means its results could be fake. When I asked it to cite my five most important published studies, it referred me to four that didn’t exist and a fifth that wasn’t even mine."

The arrival of AI content generators raises the question of how to control the content the software provides.

“Among other things, who's responsible if ChatGPT uses content protected by copyright?” asked Vermeys. “And who's responsible if these tools provide an answer that includes someone’s personal information?”

Enhanced rather than artificial intelligence

Professor Karim Benyekhlef agrees with his colleague about the risks posed by generative AI in the field of law, but he nonetheless believes the software can be used judiciously.

“This type of tool can improve access to justice by giving people information about their rights—like the Justicebot tool that we developed at the UdeM Cyberjustice Laboratory,” said Benyekhlef, the lab’s director.

The lab also developed PARLe, the first online conflict resolution platform, which has directly resolved 65 per cent of the disputes submitted to it.

“When PARLe isn’t able to resolve a dispute on its own, a mediator steps in and is given access to a decision-making tool that flags derogatory statements made by the parties and suggests more polite alternatives and responses,” said Benyekhlef. “That’s why I think it's more appropriate to talk about enhanced intelligence rather than artificial intelligence.”

But Benyekhlef also points out that these tools and bots are best suited to consumer, labour or neighbourhood disputes—in essence, common, low-stakes issues.

Legal experts are here to stay

For complicated cases, lawyers and judges still need to be involved.

“Given the risks of fabrication posed by generative AI—which can do things like make up case law out of whole cloth, as has happened in the U.S.—input from legal experts is still needed,” he said.

Like Vermeys, Benyekhlef emphasized that the law is a product of its environment that changes over time. Both professors believe AI content generators are unable to consider the perspective and foresight involved in a line of argument.

“Many people think that AI treats people equally and therefore completely eliminates bias,” said Benyekhlef. “But treating everyone as if they were the same, as a blanket rule and without room for a nuanced interpretation, isn’t the same as treating them equally. Justice has to take the human element into account.

"As far as equal treatment is concerned, I don’t believe that an algorithm can distinguish between a single mother who steals a loaf of bread for her children and a privileged teen who does the exact same thing just to show off to their friends.”
