  • Perpetrators of technology-facilitated gender-based violence are taking advantage of increasingly automated and sophisticated privacy-invasive tools to carry out their abuse. Whether by monitoring movements through stalkerware, using drones to non-consensually film or harass, or manipulating and distributing intimate images online, such as deepfakes and creepshots, invasions of privacy have become a significant form of gender-based violence. Accordingly, our normative and legal concepts of privacy must evolve to counter the harms arising from this misuse of new technology. The Supreme Court of Canada recently addressed technology-facilitated violations of privacy in the context of voyeurism in R v Jarvis (2019). The discussion of privacy in this decision appears to be a good first step toward a more equitable conceptualization of privacy protection. Building on existing privacy theories, this chapter examines what the reasoning in Jarvis might mean for “reasonable expectations of privacy” in other areas of law, and how this concept might be interpreted in response to technology-facilitated gender-based violence. The authors argue that courts in Canada and elsewhere must take the analysis in Jarvis further to fully realize a notion of privacy that protects the autonomy, dignity, and liberty of all.

  • We write as a group of experts in the legal regulation of artificial intelligence (AI), technology-facilitated violence, equality, and the use of AI systems by law enforcement in Canada. We have experience working within academia and legal practice, and are affiliated with LEAF and the Citizen Lab, which support this letter. We reviewed the Toronto Police Services Board (TPSB) Use of New Artificial Intelligence Technologies Policy and provide comments and recommendations focused on the following key observations:
    1. Police use of AI technologies must not be seen as inevitable.
    2. A commitment to protecting equality and human rights must be integrated more thoroughly throughout the TPSB policy and its AI analysis procedures.
    3. Inequality is embedded in AI as a system in ways that cannot be mitigated through a policy dealing only with use.
    4. Having more accurate AI systems does not mitigate inequality.
    5. The TPS must not engage in unnecessary or disproportionate mass collection and analysis of data.
    6. The TPSB’s AI policy should provide concrete guidance on the proactive identification and classification of risk.
    7. The TPSB’s AI policy must ensure expertise in independent vetting, risk analysis, and human rights impact analysis.
    8. The TPSB should be aware of assessment challenges that can arise when an AI system is developed by a private enterprise.
    9. The TPSB must apply the draft policy to all existing AI technologies that are used by, or presently accessible to, the Toronto Police Service.
    In light of these key observations, we have made 33 specific recommendations for amendments to the draft policy.

