An AI Strategy Requires More Than Just Data Governance Principles
Sponsored
Scott Smith

Senior Director, Intellectual Property & Innovation Policy, Canadian Chamber of Commerce

Artificial intelligence (AI) is now firmly on the international policy agenda. Canada had a head start in this technology race as one of the first countries in the world to unveil an AI strategy, in 2017. The strategy focused on research and innovation, and so far it has succeeded in generating significant interest in the technology and in positioning several Canadian companies on the cusp of global leadership. What the strategy lacks is a way to leverage this success and to ensure that global technical and policy standards favour the Canadian approach to AI.

AI, described simply, is an approach to automated decision-making. Automating routine decisions enables humans to focus on more complex tasks, thereby improving efficiency. As a result, AI-enabled systems are increasingly prevalent across all sectors. Their growing ubiquity implicates a range of legal and policy frameworks — including those that pertain to ethics and fairness, competition, privacy, and intellectual property. These frameworks weren’t designed with AI in mind. They are, however, increasingly being adapted and applied to this new technology.

This process of adaptation and application makes AI a tantalizing target for policymakers. The state of public understanding (and misunderstanding) of AI’s practical and social implications makes it tempting to think of its regulation as an unprecedented challenge, one that will require entirely novel approaches.

It’s not, and it won’t.

Rather, what lies ahead for Canada and other countries as we harness AI’s potential is an ordinary process of considering this innovative technology within established legal regimes and policy principles. There are, of course, significant differences between AI systems and other technologies. Some of these will require legislative and regulatory changes. Yet, these changes can and should be evolutionary — even if the technologies to which they pertain are revolutionary.

In May of 2019, the Organisation for Economic Co-operation and Development (OECD) published a list of AI principles:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards — for example, enabling human intervention where necessary — to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
  • Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning, in line with the above principles.

Canada, along with France, was instrumental in the development of these principles and is now playing an active part in the rollout of the OECD AI Policy Observatory. Yet there is another promising, often-overlooked tool of global governance: international standards-setting bodies such as the ISO, the IEC, the IEEE, and the ITU, which can complement emerging AI governance efforts with strengths of their own. Getting Canada's policy environment right means actively engaging with these standards-setting bodies in a way that creates a real opportunity for global leadership. It would be a shame to let that opportunity slip through our hands because we focused solely on the governance risks.
