While AI has been touted by industry as an innovative tool that will yield benefits for the public, examining the impact of AI from a substantive equality perspective reveals profound harms. As a leading national organization with a mandate to advance substantive gender equality, LEAF urges the government to centre substantive equality and human rights as the guiding principles when regulating the growing use of AI. With this goal in mind, LEAF submits that the scope of AIDA must, at a minimum, be substantially expanded in order to enable regulations that can protect against all present and emerging harms from AI.

Overview of Recommendations:
1. Government institutions must be included in the scope of AIDA (remove s. 3).
2. The statutory definitions of “harm” and “biased output” must be expanded (amend s. 5).
3. Harm mitigation measures must not be restricted to “high-impact” systems (remove s. 7 and remove “high-impact” from ss. 8, 9, 11, 12; amend s. 36(b) so that different obligations for different types of systems can be developed in regulations).
4. “Persons responsible” for AI systems must explicitly include those involved in system training and testing (amend s. 5).
5. “Persons responsible” should be required to perform an equity and privacy audit to evaluate the possibility and likelihood of harm and biased outputs in advance of using, selling, or making available an AI system. This audit must also be published and made available to the public (amend ss. 8 and 11; amend s. 36 to allow the Governor in Council to outline the requirements for an equity and privacy audit).
6. Substantive equality and public consultation must inform the development of regulations (amend preamble and s. 35(1)).
This essay explores the idea of “safety” in artificial intelligence (AI) and robot governance in Canada. Regulating robotic and AI-based systems through a lens of safety is a vital but elusive task. In Canada, much of the governance of robotic and AI systems occurs through public bodies and structures. While various laws and policies aim to ensure that AI and robotic systems are used “safely,” the meaning and scope of “safety” are seldom, if ever, explicitly considered. Safety is not a neutral concept, and determining what kinds of technologies and applications are “safe” requires normative choices that often go unexpressed in the law- and policy-making process. Broad appeals to the policy goal of “safety” can bring conduct or regulation into conflict with the actual safety of individuals and communities. Expanded thinking about “safety” and governance in relation to automated technologies is needed, along with greater precision in law and policy goals. Scholars and activists, particularly those advocating for the abolition of state policing and the prison industrial complex, have robustly critiqued and re-theorized the concept of “safety” in law and policy in ways that are attentive to equitable and collectively beneficial outcomes. To imagine a society without policing and prisons, abolitionist thinkers engage in a systemic critique of how society, communities, and the state understand and seek to attain “public safety.” This deep rethinking of the concept of “safety,” and of the methods for creating it, generates a richness that would benefit current discussions about AI and robotics governance. This essay examines some of that scholarship and relates it back to how we might understand and critique the use of “safety” in AI and robotics governance in Canada.