  • While AI has been touted by industry as an innovative tool that will yield benefits for the public, examining the impact of AI from a substantive equality perspective reveals profound harms. As a leading national organization with a mandate to advance substantive gender equality, LEAF urges the government to centre substantive equality and human rights as the guiding principles when regulating the growing use of AI. With this goal in mind, LEAF submits that the scope of AIDA must, at a minimum, be substantially expanded in order to enable regulations that can protect against all present and emerging harms from AI.

    Overview of Recommendations:
    1. Government institutions must be included in the scope of AIDA (remove s. 3)
    2. The statutory definitions of “harm” and “biased output” must be expanded (amend s. 5)
    3. Harm mitigation measures must not be restricted to “high-impact” systems (remove s. 7 and remove “high-impact” from ss. 8, 9, 11, 12; amend s. 36(b) so that different obligations for different types of systems can be developed in regulations)
    4. “Persons responsible” for AI systems must explicitly include those involved in system training and testing (amend s. 5)
    5. “Persons responsible” should be required to perform an equity and privacy audit to evaluate the possibility and likelihood of harm and biased outputs in advance of using, selling, or making available an AI system. This audit must also be published and made available to the public (amend ss. 8 and 11; amend s. 36 to allow the Governor in Council to outline the requirements for an equity and privacy audit).
    6. Substantive equality and public consultation must inform the development of regulations (amend preamble and s. 35(1)).

  • We write as a group of experts in the legal regulation of artificial intelligence (AI), technology-facilitated violence, equality, and the use of AI systems by law enforcement in Canada. We have experience working within academia and legal practice, and are affiliated with LEAF and the Citizen Lab, which support this letter. We reviewed the Toronto Police Services Board (TPSB) Use of New Artificial Intelligence Technologies Policy and provide comments and recommendations focused on the following key observations:
    1. Police use of AI technologies must not be seen as inevitable
    2. A commitment to protecting equality and human rights must be integrated more thoroughly throughout the TPSB policy and its AI analysis procedures
    3. Inequality is embedded in AI as a system in ways that cannot be mitigated through a policy only dealing with use
    4. Having more accurate AI systems does not mitigate inequality
    5. The TPS must not engage in unnecessary or disproportionate mass collection and analysis of data
    6. TPSB’s AI policy should provide concrete guidance on the proactive identification and classification of risk
    7. TPSB’s AI policy must ensure expertise in independent vetting, risk analysis, and human rights impact analysis
    8. The TPSB should be aware of assessment challenges that can arise when an AI system is developed by a private enterprise
    9. The TPSB must apply the draft policy to all existing AI technologies that are used by, or presently accessible to, the Toronto Police Service

    In light of these key observations, we have made 33 specific recommendations for amendments to the draft policy.

