Alberta courts caution against using unverified citations generated by AI or large language models

They warn that a human needs to be in the loop to check that cited cases aren't fictitious hallucinations


Courts in Alberta are cautioning lawyers to ensure that any research conducted with the help of large language models (LLMs), particularly legal reference research, is verified for accuracy.

Or, as the Court of Appeal of Alberta, the Court of King’s Bench of Alberta, and the Alberta Court of Justice put it, lawyers need to have “a human in the loop.”

In a notice to the profession issued on October 6, 2023, the Alberta Courts acknowledged that there are “significant concerns surrounding the potential fabrication of legal authorities through large language models (LLMs).” Because of that, and because “reinforcing the integrity and credibility of legal proceedings is critical,” they explain that “any AI-generated submissions must be verified with meaningful human control. Verification can be achieved through cross-referencing with reliable legal databases, ensuring that the citations and their content hold up to scrutiny. This accords with the longstanding practice of legal professionals.”

And, in the opinion of James Swanson, founder of Swanson Law, a Calgary-based firm that focuses on technology, intellectual property, privacy, data protection, and cybersecurity law, the courts are taking the right approach.

“I think they’re bang on,” says Swanson. “I think the key point in the court statement – and it says all three courts, so they thought this through – is you need a human in the loop.”

While Swanson isn’t aware of any lawyer using hallucinated or fictitious references in a Canadian court proceeding, it has happened in the US, and he speculates that international examples may have played a role in the courts issuing their warning. Specifically, he mentions a widely reported New York case (Mata v. Avianca, Inc.) in which a passenger sued the airline Avianca and the plaintiff’s lawyer used ChatGPT for legal research, citing decisions he learned about through ChatGPT – decisions that did not exist and were wholly fabricated.

ChatGPT and GPT-4, the latest model underpinning it, aren’t legal research tools but predictive language models created by OpenAI that can emulate text-based conversations and answer questions. The accuracy of those answers is not guaranteed. In fact, on its webpage, OpenAI says “GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts,” with “hallucinations” meaning the application simply makes up content that sounds plausible, including case citations.

“It's essential that parties rely exclusively on authoritative sources. You have to go and check what this AI actually looked at. Is it citing a case that it didn't get from CanLII and made up? Or did it get the case from the US and decide that it worked in Canada, and maybe it does, but maybe it doesn’t? Those kinds of questions mean that you can’t just plug in a question, blindly accept the answer, and then feed it to the courts as being authoritative unless you’ve actually had somebody with a law degree review it and double-check it,” says Swanson.
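Purely by way of illustration, and not something drawn from the article or from any court guidance, part of the cross-referencing Swanson describes can be automated as a first filter. The Python sketch below checks whether a cited case actually resolves on CanLII via its public REST API; the endpoint path, database IDs, and case-ID format shown are assumptions to confirm against CanLII's API documentation, and a successful lookup is no substitute for a lawyer reading the decision itself.

```python
# A minimal sketch of automated citation triage, assuming access to CanLII's
# public REST API (which requires a free API key). The endpoint path,
# database ID, and case-ID format are assumptions to verify against CanLII's
# current documentation. A hit only shows the citation resolves - a lawyer
# still has to read the decision and confirm it says what the memo claims.
import requests

API_KEY = "your-canlii-api-key"  # hypothetical placeholder
BASE = "https://api.canlii.org/v1"

def case_exists(database_id: str, case_id: str, lang: str = "en") -> bool:
    """Return True if CanLII resolves this case ID in the given database."""
    url = f"{BASE}/caseBrowse/{lang}/{database_id}/{case_id}/"
    resp = requests.get(url, params={"api_key": API_KEY}, timeout=10)
    return resp.status_code == 200

# Example: a neutral citation such as "2023 ABKB 123" would map to the
# "abkb" database and case ID "2023abkb123" (assumed naming convention).
for db, cid in [("abkb", "2023abkb123"), ("abca", "2023abca999")]:
    flag = "resolves" if case_exists(db, cid) else "NOT FOUND - verify manually"
    print(f"{db}/{cid}: {flag}")
```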

While he falls into the “trust but verify” camp when it comes to the use of LLMs, machine learning technology and what are generally referred to as artificial intelligence or AI tools, Swanson has used LLM-based research himself and finds it has value to him and to his clients.

Specifically, he used Alexsei, from Creative Destruction Lab (CDL) and the University of Toronto’s Rotman School of Management, which he describes as a program that uses machine learning to assist with citation research – research that is then reviewed by an actual person. “The memo comes back in a format similar to what a first-year lawyer would produce, so I find it quite useful.”

The first time Swanson used the software, he says, he did his own concurrent research. The software found a couple of cases he had missed; he found one case the software didn’t. “Between the two of us, it was a really good memo. We kind of nailed it.”

Swanson says one challenge that came with using the software was figuring out how much to charge.

“Supposing it would have taken me 10 hours to do the research and write the memo, although I wouldn't bother with a memo for myself – it would be more like a bunch of notes…do I bill the client for 10 hours? That doesn’t seem right. Do I bill a client for the cost of the service? That seems awfully cheap. What I ended up doing was billing the client the amount of time it took me to double check [the research output] on my own and verify it, plus the disbursement. And it ended up being significantly less than it would have been if I had billed for a full-blown research project, particularly if I billed a first-year lawyer at their full hourly rate.”

Swanson suggests that even if lawyers are uncomfortable with the idea of using LLMs, they should get used to them, since it’s easier to stay in business on the leading edge of technology. Besides, he expects machine learning and AI will become part of the everyday practice of law, much like e-discovery.

“I think you’re going to see more widespread use of [LLMs]. The courts are already saying if you can save costs and time by using technology tools, you should be using them.”

The courts in Alberta are also encouraging “ongoing discussions and collaborations to navigate these complexities effectively.”

According to a statement emailed to Canadian Lawyer by Darryl B. Ruether, executive legal counsel and director of communications for the Court of King’s Bench of Alberta, that means “The Court is in regular communication with all sectors of the bar through its regular quarterly meetings that continued after the pandemic, and also with the Law Society of Alberta, Canadian Bar Association and Advocates Society representatives. We plan to engage in a continuing dialogue with these groups.”

He added that “On behalf of the Court of King’s Bench only, I can say that we are not aware of instances where lawyers have used LLM in submissions before the Court, verified by a human or otherwise. As the Notice indicates, there is a growing concern about the use of LLM in legal submissions which has been reflected in directives already issued by Courts in a number of other Canadian jurisdictions.”
