When I left private practice and went in-house, a mentor of mine gave me valuable advice. She said that my role was not just to be an advisor to my business clients, and that it was important I not limit myself to being a “legal risk manager.” To be successful, I would have to be proactive. I would have to seize opportunities to act as a facilitator, assist my organization in navigating the rapid changes affecting the marketplace and enable my clients to achieve their business objectives.
The introduction of artificial intelligence into business processes, products and services represents such an opportunity for in-house corporate counsel. We now operate at the intersection of a dynamic legal, regulatory, social and economic environment in which emerging technologies present transformational opportunities when applied well and consequential risks when applied poorly. In such an environment, and to echo the advice I was given, corporate counsel cannot simply react and manage risk. We are well placed within our organizations to take a leadership role and help our businesses proactively build and maintain trust in these new technologies, an essential component of successful AI adoption and deployment.
The artificial intelligence opportunity
AI technologies are predicted to be one of the major economic driving forces of the 21st century, and organizations across all industries are taking notice. A recent McKinsey study found that by 2030, 70 per cent of companies will have adopted AI technology in some form, with the potential to deliver around $13 trillion in additional economic activity, or about 16-per-cent higher cumulative GDP compared with today.
A lack of trust and understanding is a significant barrier to breakthrough adoption
There is a trust and understanding gap in Canada that is preventing Canadian businesses from capturing this AI opportunity. A Deloitte report published in November 2018 found that Canadians generally do not understand AI or its implications. The report cited a survey of more than 1,000 Canadians, which found that only four per cent of respondents were confident explaining what AI is and how it works. Furthermore, the report found a general distrust of AI among Canadians, rooted in, among other things, ethical concerns about the unintended consequences of AI-generated decisions and the potential of AI to manipulate information.
Canadians believe that businesses have a responsibility to tackle the challenges AI poses to society, such as data protection, privacy, cybersecurity and ethical risks; however, they lack confidence in the ability of businesses to do so. Just one in 10 Canadians surveyed in the report believe that businesses are up to the task.
This gap in understanding and trust has serious implications for Canadian businesses. People will not adopt and certainly will not purchase a technology they don’t trust. It is, therefore, quite evident that building trust in AI is a business priority.
Corporate counsel are well-positioned to help build trust to accelerate adoption
While corporate counsel typically advise on how to minimize business risk, we are well positioned to take on a more proactive role, advising our business clients and executives on strategies to build trust with customers.
As with compliance, building trust in AI can be approached through a framework that is developed and applied across all processes and business units. Corporate counsel are already experienced in building governance frameworks for our organizations around compliance rules, data-protection practices under regimes such as the GDPR, and data-handling procedures under CASL. Similar frameworks and principles need to be developed to shape how organizations create, use and distribute AI. Implemented meaningfully and effectively, such a framework can build consumer and stakeholder confidence in an organization’s approach to AI.
Developing such a framework begins with proactively engaging with business clients and stakeholders at the top of an organization. Effective corporate counsel should already be a go-to resource for an organization’s leadership and board. Together with the executive team, corporate counsel can work to advance principles that will be at the foundation of the design, use and deployment of AI. At Microsoft, for example, legal departments worked with the business to identify and define six values — fairness, reliability and safety, privacy and security, inclusivity, transparency and accountability — to guide the cross-disciplinary development and use of AI.
Adopting a set of principles to guide AI design and use will help organizations cultivate trust as new technologies emerge and evolve. How those principles are developed and implemented to align with legal and regulatory considerations requires significant input and leadership from corporate counsel.
As we are still in the early stages of the growth of AI technologies, in-house corporate counsel have an opportunity to positively shape the future of AI for our society by proactively engaging our business clients on these topics. Given our position as trusted advisors, we are well placed to drive the discussions, decisions and actions within our organizations that will unlock the generational opportunities that come with emerging technologies such as artificial intelligence.
The views and opinions expressed in this article are personal views and do not necessarily reflect the official position of Microsoft. Jonathan Leibtag is corporate counsel, corporate, external & legal affairs at Microsoft Canada Inc.