Artificial intelligence—the ability of machines to perform intelligent tasks such as sorting, analyzing, predicting and learning—promises substantial benefits for Canadians. Businesses that develop and commercialize AI have the potential to grow and create jobs, while organizations that adopt AI technologies can improve operations, enhance productivity and generate health, social and economic benefits for all.
Yet some AI applications pose risks to individuals and communities:
- AI-enabled automation threatens to disrupt labour markets and employment
- predictive analytics in finance, education, policing and other sectors can reinforce racial, gender and class biases
- data used in AI development and applications are often collected in ways that violate privacy and consent (see, for example, *Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy*, *Twitter and Tear Gas: The Power and Fragility of Networked Protest*, and *Data Governance in the Digital Age*)
AI policy makers face a tension. They must establish conditions that allow AI to thrive and deliver benefits, while recognizing and responding to the harm that some AI applications can generate or reinforce. Options for addressing this tension range from a laissez-faire approach, which would allow AI to develop and diffuse without limit, to a precautionary approach, which would restrain AI development until its risks are better understood and the capacity to manage them is in place.

Given that AI is a platform technology with many possible applications, each carrying a different risk profile, it should be governed with an incremental, case- and context-sensitive risk-management approach rather than a blunt laissez-faire or precautionary one. A risk-management approach leaves space for AI technologies and applications to develop while monitoring and managing risks as they emerge in specific applications. To institutionalize such an approach to governing AI in Canada, we recommend that the Government of Canada create two new institutions:
- an AI risk governance council
- an algorithm impact assessment agency