Brave new world: some legal considerations in using AI and IoT systems

Lisa R. Lifshitz

Artificial intelligence and the Internet of Things are hot subjects (and buzzwords) for lawyers in 2018. Beyond the hype, however, lies a plethora of legal and business issues. From Amazon’s Alexa-powered smart home hubs to the speech recognition capabilities of Siri, Apple’s voice assistant, to “smart” thermostats, self-driving cars and interactive sex toys, consumers’ and businesses’ dependence on these technologies shows no signs of waning.

And there is no question that Canada is actively trying to position itself as a world leader in these technologies rather than just a nation of consumers.

The year 2017 was a banner year for AI investment in Canada. In March 2017, the federal government committed $125 million to launch the Pan-Canadian Artificial Intelligence Strategy, delivered through the Canadian Institute for Advanced Research (CIFAR). Intended to promote collaboration among centres of expertise in Toronto, Waterloo, Montreal and Edmonton, the initiative actively promotes Canada as a world-leading destination for companies wishing to invest in AI and innovation. Funding will flow to the newly formed Vector Institute in Toronto, an independent research facility for AI; the Alberta Machine Intelligence Institute in Edmonton; and the Montreal Institute for Learning Algorithms, which specializes in deep learning and machine learning for AI. Major players, including Google, Microsoft, Facebook and Samsung Electronics, have invested millions of dollars in artificial intelligence labs across Montreal, helping to make the city a global leader in machine-learning development. Facebook alone invested more than US$7 million in Montreal’s AI ecosystem in 2017, including establishing the Facebook Artificial Intelligence Research lab there and launching new partnerships with the Université de Montréal, CIFAR and McGill University.

Additionally, through its innovation superclusters initiative, which also began in 2017, the federal government is investing up to $950 million over five years to support business-led innovation superclusters with the greatest potential to build innovation ecosystems and accelerate economic growth. Notably, one of the winners, the SCALE.AI Supercluster (an industry consortium incorporated as Supply Chains and Logistics Excellence.AI and based in Quebec), is focusing on making Canada a world-leading exporter by building intelligent supply chains through artificial intelligence and robotics.

However, while AI research is accelerating, the law in Canada regarding AI/IoT systems is arguably not keeping pace. Questions abound. The following is a non-exhaustive list of issues that should be considered by any developer or user of AI/IoT systems and, to the extent possible, proactively addressed in the contract that governs the relationship.

Liability

Who is responsible if something goes wrong as a result of the AI/IoT system? The manufacturer? The distributor? The original programmer or researcher? The consumer or end user? Is a provider liable under a contract of supply? What about IoT systems and devices that are not provided under contract and that are accessible by internet users generally? What is a reasonable standard of care for an IoT system? What if imperfections in the AI/IoT system are more subtle? How can we hold the developer or creator of an AI system liable if we do not understand how a black box algorithm makes decisions? Machine-learning techniques generally cannot tell us their reasoning, and even when they can, the results are often too complex for average individuals to understand.
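
To make the black box point concrete, consider a minimal sketch in Python, assuming the scikit-learn library and entirely made-up data: the model below decides accurately enough, yet the closest thing it offers to “reasoning” is a global ranking of input features, which says nothing about why any individual decision was made.

    # Hypothetical data standing in for, e.g., loan or claims records.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    applicant = X[:1]
    print("decision:", model.predict(applicant))  # e.g., [1] -- "approve"

    # The model offers no reasons for this decision. Global feature
    # importances are the nearest substitute, and they describe the model
    # as a whole, not why this particular applicant was approved or refused.
    print("global feature importances:", model.feature_importances_)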

If there are flaws built into the algorithms themselves, or if regulation fails to ensure that algorithms are high quality, then the developers of algorithms (or of technologies that rely on them) might also become liable under tort law, although courts have so far been reluctant to extend or apply product liability theories to software developers.

Other potential claims include: negligence, strict liability, breach of warranty (express or implied), fraud, product liability, false or misleading representations and deceptive marketing practices under the Competition Act, data monopoly/abuse of dominance, price collusion, price fixing or other anti-competitive behaviour, privacy breaches, personal injury and property damage.

What are the due diligence obligations of users or buyers that want to use an AI/IoT system? Is the obligation on the buyer or user to perform evaluations at the outset? If so, how often? If machines buy and sell from one another, will consumer laws apply, and in which jurisdiction? Any AI/IoT contract should carefully consider and document in detail the ownership of data, limitation of liability, governing law and jurisdiction.

Intellectual property issues

If a company invests in creating algorithms, how can it protect that investment? Can it be done through patents, copyrights or trade secret protections? Who owns the IP or data generated by AI/IoT systems? Who owns what when IoT devices interact with one another? Who decides how it can be used? Is opting out possible? How well do current Canadian and foreign IP laws protect AI/IoT products and systems? What are some of the current IP limitations?

Privacy and data considerations 

These are critical issues in any AI/IoT product or system. AI requires the gathering of immense amounts of data, as well as the sharing of data in order to oversee it. Did the AI developer have sufficient rights to collect the original data? Did the developer have the rights to use the data collected, create derivative works using the data and disclose the data? Did the data come with “strings attached” on how it could be used, e.g., patient data under Ontario’s PHIPA, the GDPR or other laws? Who owns the data generated by the device or system? How anonymized or de-identified is such data, and how easily can individuals be re-identified after its anonymization? How can one meaningfully consent to the collection, use and disclosure of data obtained through use of the AI/IoT system? What is meaningful consent in the context of a decision made by an AI? Consider the sensitivity of the users of such devices, such as patients and their medical devices or children and IoT toys, where surreptitious data collection has been used in the past in downright creepy ways. Different jurisdictions treat these matters in distinct ways, so one may also be obligated to consider the intersection of various privacy laws in Canada, the U.S., Singapore and the European Union under the GDPR. Is existing legislation sufficient to truly protect privacy rights, or is specialized legislation focused on AI/IoT required?
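
The re-identification risk is easy to underestimate. Below is a minimal sketch in Python, using entirely hypothetical records, of how a “de-identified” data set can be linked back to named individuals purely through quasi-identifiers such as postal code, birth date and sex.

    # Entirely hypothetical "de-identified" records: no names, no IDs.
    deidentified_health_records = [
        {"postal": "M5V", "dob": "1980-04-02", "sex": "F", "diagnosis": "asthma"},
        {"postal": "H2X", "dob": "1975-11-19", "sex": "M", "diagnosis": "diabetes"},
    ]

    # A public source (also hypothetical) that does carry names.
    public_voter_list = [
        {"name": "Jane Doe", "postal": "M5V", "dob": "1980-04-02", "sex": "F"},
    ]

    # Linking on quasi-identifiers alone re-identifies the record.
    for record in deidentified_health_records:
        for person in public_voter_list:
            if all(record[k] == person[k] for k in ("postal", "dob", "sex")):
                print(person["name"], "->", record["diagnosis"])  # Jane Doe -> asthma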

Additionally, IoT systems may involve increased direct collection of sensitive personal information by such devices, including precise geo-location co-ordinates, financial account numbers and health information. This can lead to a lack of anonymity and increased opportunities for businesses to monitor consumers and monetize their data. Consider the realities we now face: the recording of spending habits, behaviours and daily activities, along with audio and video recording, including voice patterns. Existing smartphone sensors can be used to infer a user’s mood (stress levels, personality types, bipolar disorders), demographics (gender, marital status, job status, age), smoking habits, overall well-being, progression of Parkinson’s disease, sleep patterns, happiness and levels and types of physical activity or movement. Such inferences can be used benevolently to provide helpful services to consumers, but they can also be misused, e.g., by companies making biased credit, insurance and employment decisions, using fitness tracker data to price health or life insurance or to infer a user’s suitability for credit or employment.

Security issues 

Of great concern to many critics and users are issues relating to security. What are the minimum security requirements for AI/IoT systems? This is a trick question, as unfortunately there are currently no minimum security standards for AI/IoT systems. How do you build privacy by design into an AI/IoT system when security is often an afterthought, when a single IoT device can contain multiple systems and when many IoT systems and devices use open-source software?

Moreover, how can one ensure that security is kept current in an IoT system, given that the low cost of many IoT devices may be a disincentive for IoT producers to issue security patches? How can a consumer get an update even if she wants one? IoT companies should continue to monitor products throughout their life cycles and, to the extent feasible, patch known vulnerabilities. Unfortunately, many IoT devices have limited life cycles, creating a risk that consumers will be left with obsolete products that are vulnerable to critical, publicly known security or privacy bugs. Companies should be forthright in their representations about providing ongoing security updates and software patches, and companies that provide ongoing support should notify consumers about security risks and updates.
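
As a miniature illustration of responsible patching, the sketch below, in Python using only the standard library and hypothetical file contents, refuses to install a firmware image unless it matches a digest the vendor published for it. (A real deployment would verify a cryptographic signature from a vendor key, not just a digest.)

    import hashlib
    import hmac

    def verify_firmware(image: bytes, published_sha256_hex: str) -> bool:
        """Install only if the downloaded image matches the published digest."""
        actual = hashlib.sha256(image).hexdigest()
        # compare_digest avoids timing side channels in the comparison itself.
        return hmac.compare_digest(actual, published_sha256_hex)

    image = b"...downloaded firmware bytes..."    # hypothetical download
    expected = hashlib.sha256(image).hexdigest()  # would come from the vendor's signed manifest
    if verify_firmware(image, expected):
        print("digest matches; safe to proceed with installation")
    else:
        print("digest mismatch; refuse to install")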

Also, how do you prevent malware from infecting, and hackers from compromising, AI/IoT systems? What happens if a company that produces AI-enabled systems goes out of business? Who bears the burden of security and safety? Will automakers be required to maintain the AI software throughout the lifetime of a car, across multiple owners?

Companies should ensure that they retain service providers capable of maintaining reasonable security and should provide adequate oversight to ensure that those service providers do so. For systems with significant risk, for example, organizations should implement a “defence-in-depth” approach in which security measures are applied at several levels. They should also consider implementing reasonable access controls to limit the ability of an unauthorized person to access a consumer’s device, data or even the consumer’s network, including employing strong authentication, restricting access privileges and so on.
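
As a rough illustration of layered controls, the following Python sketch (standard library only, with hypothetical tokens and roles) authenticates a caller before separately checking whether that caller is authorized for the requested action, so that defeating one layer does not defeat them all.

    import hmac

    API_TOKENS = {"a1b2c3d4": "homeowner", "e5f6a7b8": "installer"}  # hypothetical
    PERMISSIONS = {"homeowner": {"read", "set_temperature"}, "installer": {"read"}}

    def handle_request(token: str, action: str) -> str:
        # Layer 1: authentication, using a constant-time token comparison.
        role = next((r for t, r in API_TOKENS.items()
                     if hmac.compare_digest(t, token)), None)
        if role is None:
            return "401 unauthenticated"
        # Layer 2: authorization, applying least privilege per role.
        if action not in PERMISSIONS[role]:
            return "403 forbidden"
        return f"200 {action} ok"

    print(handle_request("e5f6a7b8", "set_temperature"))  # 403 forbidden
    print(handle_request("a1b2c3d4", "set_temperature"))  # 200 set_temperature ok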

Regulatory issues 

Areas of concern for regulators abound, including unfair or deceptive trade practices. How can regulators ensure that black box algorithms are high quality, that is, that they do what they claim to do and that they do it well and safely? How can manufacturers defend themselves against AI audits by regulators? How much must, or should, be disclosed to a regulator? And who should regulate AI/IoT systems: the AI/IoT companies themselves (e.g., through IBM’s ethical use guidelines or the Partnership on AI), federal regulators, not-for-profits such as AI Global, standards bodies such as the IEEE, standards such as the British Standard for Robots and Robotic Devices, provincial or state laws or global treaties?

Insurance issues 

Does standard insurance cover risks associated with AI/IoT systems? What exactly is being insured?

Employment issues 

Where an AI system is deployed in the performance of an HR function, is the employer sufficiently aware of issues regarding built-in bias? How much due diligence should be conducted before deploying such a system? Which human rights laws (the Canadian federal and provincial human rights acts and codes) apply when a company relies on AI systems for any level of candidate review and recruitment?

Ethical issues

Last but not least, concerns over ethics, including bias, continue to put the brakes on the adoption of AI systems by some organizations. Given the black box nature of AI, can a user ever be certain that the AI system is trained on a sufficient volume and variety of data to avoid biased results? Has the AI software developer sufficiently validated the reliability of the software? Are results consistent and correct? Can one understand the AI system sufficiently to audit it and understand how its results were achieved? Can we verify that the AI system is trustworthy? How do we concretely address concerns about bias? What steps are being taken to reduce bias? How can a developer or user guard against inappropriate conclusions if results are not validated?
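
One concrete, if modest, starting point for a bias audit is simply to compare outcome rates across groups. The Python sketch below uses entirely hypothetical hiring-screen outcomes; a large gap does not prove discrimination, but it tells a developer or employer where to look before deployment.

    # Hypothetical screening outcomes: (group, approved).
    decisions = [
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", False), ("B", False), ("B", True), ("B", False),
    ]

    def approval_rate(group: str) -> float:
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    gap = abs(approval_rate("A") - approval_rate("B"))
    print(f"rate A={approval_rate('A'):.2f}, "
          f"rate B={approval_rate('B'):.2f}, gap={gap:.2f}")
    # rate A=0.75, rate B=0.25, gap=0.50 -- flag for investigation.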

In the absence of specific and tangible black-letter law answers, some academics are taking the initiative to search for answers. Seeking to address the difficult ethical issues of AI, in November 2017, the Université de Montréal spearheaded the Forum on the Socially Responsible Development of Artificial Intelligence, which announced the Montreal Declaration for a Responsible Development of Artificial Intelligence at the forum’s conclusion. The principles and recommendations contained in the declaration are intended to form the basis of ethical guidelines for the development of AI. The declaration currently identifies seven values: well-being, autonomy, justice, personal privacy, knowledge, democracy and responsibility, each with its own principles. These include the following:

Well-being: The development of AI should ultimately promote the well-being of all sentient creatures.

Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.

Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.

Privacy: The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.

Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.

Democracy: The development of AI should promote informed participation in public life, co-operation and democratic debate.

Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.

Individuals were also invited to contribute to the drafting of the declaration by answering a questionnaire or by submitting a recommendation (a brief of up to five pages) before March 31 of this year. The final version of the declaration is expected later this year. In order to ensure that the declaration is representative, the university will also solicit input from various workshops to be held with experts and citizen groups, including the Quebec Commission on Ethics of Science and Technology, the Quebec Bar Association, the City of Montreal and others, as well as from “philosophy workshops” in Quebec primary and secondary schools and post-secondary institutions (cégeps) and “citizens’ meetings” in cafés and public spaces. Vive le Québec!



