Cyber/data group co-head says laws are coming soon but governance should be implemented now
Artificial intelligence regulation is coming, and even before these new “AI laws” take effect, companies may face steep fines and damaging litigation if they don’t adequately address the risks, says Charles Morgan, the national co-leader of McCarthy Tétrault LLP’s cyber/data group.
Morgan says his firm recently held a client event on artificial intelligence that drew more than 1,000 clients, demonstrating massive interest in the topic.
“The theme of my opening remarks was ‘This changes everything,’” he says. “Because what we're seeing is that every single industry vertical that AI touches can be potentially transformed.”
The practical risks of AI
Morgan says that because AI innovation is moving so fast, companies must respond quickly to the misuse of these technologies. “It's going to be necessary to work even harder to make sure that the guard rails are more robust. The feedback loop is faster.”
New examples of dangerous and illegal uses of the technology are emerging regularly. He also cites copyright holders’ concerns about the impact on those producing creative content, seen in the recent writers’ strike in the US. “I would suspect it's the first time humans have gone on strike because they are concerned that AI is going to take over their jobs.”
And there are more sinister examples, such as deepfakes that replicate children’s voices to trick parents into paying a ransom.
He says AI is disrupting every sector, and even companies like Google feel the ground shifting quickly beneath their feet. “It's coming at exactly the same time that the regulatory environment for the use and deployment of artificial intelligence is also being fundamentally reshaped.”
The legal and regulatory environment
Morgan says that although legal regimes will take a few years to enact, regulators and governments are moving from policy to law relatively quickly.
In Canada, the federal government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. The bill has just passed second reading and is now before committee, and the government has released preliminary guidance on the legislation.
AIDA seeks to set out clear requirements for the responsible development, deployment, and use of AI systems by the private sector. It aims to implement a regulatory framework limiting harms such as the reproduction and amplification of bias and discrimination in decision-making.
But Morgan says the most mature regulatory regime for AI is coming out of the European Union. The EU’s proposed Artificial Intelligence Act is close to being adopted.
“Even in the United States, we're seeing the White House publishing its AI Bill of Rights,” Morgan says. “California and a bunch of states are thinking about implementing some form of state-level AI regulation.”
Morgan says Canada sits in many ways between the EU’s advanced approach and that of the US, which tends to “establish normative structures via litigation rather than regulation.”
Canada is “always trying to ensure that we find a regulatory approach that means that we can interact with our major trading partners in a reasonably harmonious way.”
Morgan says the first iteration of the EU AI Act recognized the evolving nature of the technologies by providing a definition that could evolve with them. He says Europeans have been wrestling with how to regulate generative AI and systems built on foundation models.
“They have three categories of artificial intelligence systems,” he explains: those posing unacceptable risks, high-risk systems, and limited-risk systems.
“The Canadians have, in a sense, been inspired by the European model, in the same way that the EU GDPR has been a source of inspiration on the data protection front.”
For example, Canada’s legislation refers to “high-impact AI systems,” similar to the EU’s “high-risk AI systems.”
“What is a high-impact AI system? We don't know yet,” says Morgan.
The concepts of fairness, non-discrimination, liability, security, and privacy run throughout all these laws, Morgan says. Even if the EU regulation is adopted this year, though, it likely won’t apply until 2026, and the Canadian government doesn’t expect AIDA to be in force until 2025.
“So, the industry absolutely has to step into that regulatory vacuum and implement some of the best practice norms,” says Morgan. “Companies are going to want to be very careful about managing those risks because they don't want to be subjects of lawsuits. They also don't want to be subject to regulatory action… [They] would be very wise to step up to that responsible governance paradigm.”
Current legal concerns
As these regimes move toward becoming law, Morgan says he is helping clients implement responsible AI governance and navigate vendor management. This may include advising on how to set up an AI committee or develop and implement policies and responsible AI impact assessments. He also regularly helps companies negotiate contracts with vendors proposing AI-enhanced solutions.
“There's a whole range of thorny legal issues associated with those types of contracts,” he says. Often, the service provider wants to use the client’s data to help enhance its own models, which goes beyond simply processing data on behalf of the client.
“It's using the data for their own purposes. And that raises some confidentiality issues, privacy issues, liability issues and competitive issues.”