LEAF one of many organizations asking the federal government to rethink AIDA
In explaining the proposed Artificial Intelligence and Data Act (AIDA) to the public, the federal government says AIDA will “set the foundation for the responsible design, development and deployment of AI systems that impact the lives of Canadians.”
Because of that impact on the "lives of Canadians," many human rights and privacy advocates, along with other organizations, are asking the government to rethink its entire approach to governing and regulating AI.
“It’s difficult to change laws after they are passed. This is going to be a really significant position for how Canada approaches artificial intelligence and the intersection of law with artificial intelligence,” says Kristen Thomasen, an assistant professor at the University of British Columbia’s Peter A. Allard School of Law and a member of the LEAF technology-facilitated violence project advisory committee.
LEAF is one of 19 organizations that, along with 26 individuals, signed an open letter to François-Philippe Champagne, the federal minister of innovation, science and industry, asking for AIDA to be removed from Bill C-27, stating "the bill is not adequate for committee consideration." Specifically, the signatories express concern about who has oversight of the bill and what its focus is.
“AIDA, as it stands, is an inadequate piece of legislation. [Innovation, Science and Economic Development Canada] should not be the primary or sole drafter of a bill with broad human rights, labour, and cultural impacts. The lack of any public consultation process has resulted in proposed legislation that fails to protect the rights and freedoms of people across Canada from the risks that come with burgeoning developments in AI.”
Thomasen says she understands the government's desire to establish rules regulating the business use of AI, but the scope of the technology goes much further than that.
“The bill is framed very much around trying to facilitate the flourishing of the AI industry – the commercial dimensions of the industry in Canada. I think it makes a lot of sense to see that mitigating certain harms will be helpful to the flourishing of the industry because, economically, people aren’t going to buy into AI if they perceive that it will be a harmful industry.
“That said, I think that an industry-first approach is not taking into consideration the social impacts of this technology in the way that we really ought to be doing with our laws. Human rights, equality, equity and privacy are not at the forefront of this proposal. The proposal is really focused on identifying just particular kinds of risks.”
Beyond that bigger-picture criticism, the bill's opponents point to other problems with the way AIDA was drafted. One is that it lacks definitions of key concepts, such as "high-impact AI systems," which are vital to understanding what exactly is to be regulated.
Another is that AIDA addresses only commercial regulation and does not cover the use of AI or large language model systems by government agencies, including intelligence and law-enforcement agencies. This is especially troubling to the bill's opponents because biases built into AI systems during their development could skew the results those systems deliver.
“The concern the group was flagging, and that we also flagged in our submission, is really that a lot of the harmful uses of AI that we’re seeing have come from government uses… And with that, I’m thinking in particular of the Clearview AI revelations exposed by investigative journalism that revealed that many Canadian police forces have been using or accessing Clearview AI,” a facial recognition system.
Thomasen says placing AI under the authority of an ombudsperson, rather than a ministry devoted to economic interests, would be a preferable approach. Even so, regulating AI is a tough challenge for the government because of the many different ways the technology can and will affect the experiences of Canadians.
One way she wishes the government would tackle the regulation of any new technology, including AI, is by taking a longer-term view and learning from history. She points to the example of automobile regulation and how the rules governing cars have changed over the decades.
“Not to over-romanticize our car regulations but just to point out that there has been more extensive thinking about the range of impacts that vehicles can have, and the different ways that the law is good or not so good at addressing those impacts,” she says.
“Part of the challenge right now, maybe globally but at least in Canada, the US, and the EU, is that we’re treating AI like it’s such a novel concept without realizing that, as legal systems and as societies, we’ve dealt with novel technologies many times before, and that there’s a lot of valuable insight that we can learn from slowing down and thinking about what really works to keep people safe in a very comprehensive way.”