As AI technology improves, billions of IoT devices will be in use, says Blakes lawyer Ronak Shah
Canadian law will need to catch up with the privacy and security issues related to the Internet of Things – those interconnected devices that use the web – says Blake, Cassels & Graydon LLP lawyer Ronak Shah.
“These devices that interact with humans can make our lives more efficient – like a smart doorbell that allows you to see who is coming to your home while you’re at the office – but they also end up collecting personal information, which can be a problem,” says Shah, whose practice focuses on privacy, data governance and data protection.
He adds that while many of these interconnected devices have been around for a long time, as artificial intelligence and other technologies advance, they will “play an increasing role in our lives.” As a result, he says, “it’s more important than ever to develop laws, policies and practices that will help ensure proper privacy and security.” He points to a considerable uptick in data breaches as digitization has become the norm and more activities move online.
With continued innovation in AI, the Internet of Things will yield even more sophisticated connected devices. Shah notes that some studies estimate that 25 billion IoT devices will be in use by 2025, with the potential market estimated at US$1.5 trillion. “So, it is already a huge part of society and will be even more so,” he says.
“Not only are individual consumers becoming more connected to the Internet of Things, but businesses are also adopting such devices into manufacturing processes and infrastructure,” he says. Shah points to smart sensors and devices that adjust operations based on demand, for example, or, in the case of agriculture, on climate conditions.
Shah notes that Canada does not yet have a regulatory regime that deals expressly with artificial intelligence; rather, AI systems are governed by general privacy, technology and human rights legislation. He says, however, that there is movement from both the federal and provincial governments to build more responsive frameworks for regulating AI.
For example, Shah says the federal government’s long-awaited privacy-reform legislation, Bill C-11, and Québec’s Bill 64 both address AI. Bill C-11 died on the order paper before the last federal election, he says, but is likely to be re-introduced, and Québec’s Bill 64 is expected to come into effect on September 22, 2022.
Both bills introduced the concept of “algorithmic transparency.” Bill 64 also provides individuals with rights in relation to automated decision-making and profiling; the Québec legislation changes the privacy regulatory framework and introduces significant financial penalties for non-compliance.
“The defunct Bill C-11, which will come back to life, I think, sometime this year, talks about algorithmic transparency and automated decision-making and profiling of individuals,” he says. “This concept is very important.”
However, Shah says that the European Union has proposed regulations that create a more comprehensive approach to AI and related privacy and security concerns. Last April, the European Commission published a Proposal for a Regulation on a European Approach for Artificial Intelligence, which sets out the “first ever” legal framework for addressing the risks associated with artificial intelligence.
The proposal defines AI systems to include software that is developed with one or more of the techniques listed in the regulation and that can, for a given set of human-defined objectives, generate content, predictions, recommendations or decisions.
It also suggests a tiered risk-based approach aimed at balancing regulation with innovation, looking at issues such as governance, accuracy, cybersecurity, and transparency.
Under this framework, risks and threats are regulated based on sector and specific cases. These include:
- “Unacceptable risk”: AI technologies that pose a clear threat to people’s security and fundamental rights are deemed to pose an unacceptable risk to society and would be prohibited. Unacceptable risks include AI systems that deploy subliminal techniques to materially distort a person’s behaviour in a way that may cause “physical or psychological” harm and systems that exploit the vulnerabilities of specific groups.
The proposal also prohibits “real time” remote biometric identification systems in publicly accessible spaces for law enforcement, except in limited circumstances.
- “High-risk”: In this category, systems must comply with a strict set of mandatory requirements before they can be placed on the EU market. Companies must provide regulators with proof of the AI system’s safety, including risk assessments and documentation explaining how the technology makes decisions as part of a formal registration process.
Organizations must also establish appropriate data governance and management practices and ensure the traceability, transparency and accuracy of their datasets and technology. They must also inform end-users of the characteristics, capabilities, and limitations of performance of the high-risk AI systems and guarantee human oversight in how the systems are created and used.
- “Low-risk”: AI systems designated as low risk are not subject to the same regulatory obligations, as they don’t pose the same threat to health and safety, EU values or human rights.
However, transparency obligations would apply to systems that interact with humans (such as chatbots), that are used to detect emotions or determine association with social categories based on biometric data (such as employee-monitoring technologies that use emotion-recognition capabilities to differentiate between “good” and “bad” employees), or that generate or manipulate content (“deep fakes”).
Companies that violate the proposed regulations could face fines of up to four per cent of their worldwide annual turnover or 25 million euros.
As for helping businesses prepare for advancements in the IoT and any new regulatory regime addressing privacy and security issues, Shah says his advice applies “not only [to] those developing products that involve AI” but also to “the end consumer of these products.”
For developers of IoT technology, Shah says much of his advice focuses on the security concerns of the products they are developing, what data is being collected and what controls can be built in to minimize any concerns.
For those using AI and IoT technology, Shah advises clients on the importance of being transparent with employees and customers about how the technology is being used. “For example, is there monitoring of employees? Is there adequate disclosure of what is being done? The purpose of it? And precisely what information is being used?”
Shah adds that it is essential to explain to clients where the law related to AI and the IoT is going and how to best prepare for that.
“In doing privacy impact assessments for clients, you do want to take into consideration what the potential developments in the law could be,” he says. “You want to look at other jurisdictions and also at industry standards.”