
  • We write as a group of experts in the legal regulation of artificial intelligence (AI), technology-facilitated violence, equality, and the use of AI systems by law enforcement in Canada. We have experience working within academia and legal practice, and are affiliated with LEAF and the Citizen Lab, which support this letter. We reviewed the Toronto Police Services Board (TPSB) Use of New Artificial Intelligence Technologies Policy and provide comments and recommendations focused on the following key observations:
    1. Police use of AI technologies must not be seen as inevitable.
    2. A commitment to protecting equality and human rights must be integrated more thoroughly throughout the TPSB policy and its AI analysis procedures.
    3. Inequality is embedded in AI as a system in ways that cannot be mitigated through a policy dealing only with use.
    4. Having more accurate AI systems does not mitigate inequality.
    5. The TPS must not engage in unnecessary or disproportionate mass collection and analysis of data.
    6. The TPSB's AI policy should provide concrete guidance on the proactive identification and classification of risk.
    7. The TPSB's AI policy must ensure expertise in independent vetting, risk analysis, and human rights impact analysis.
    8. The TPSB should be aware of assessment challenges that can arise when an AI system is developed by a private enterprise.
    9. The TPSB must apply the draft policy to all existing AI technologies that are used by, or presently accessible to, the Toronto Police Service.
    In light of these key observations, we have made 33 specific recommendations for amendments to the draft policy.

  • Perpetrators of technology-facilitated gender-based violence are taking advantage of increasingly automated and sophisticated privacy-invasive tools to carry out their abuse. Whether by monitoring movements through stalkerware, using drones to non-consensually film or harass, or manipulating and distributing intimate images online, such as deepfakes and creepshots, invasions of privacy have become a significant form of gender-based violence. Accordingly, our normative and legal concepts of privacy must evolve to counter the harms arising from this misuse of new technology. Canada's Supreme Court recently addressed technology-facilitated violations of privacy in the context of voyeurism in R v Jarvis (2019). The discussion of privacy in this decision appears to be a good first step toward a more equitable conceptualization of privacy protection. Building on existing privacy theories, this chapter examines what the reasoning in Jarvis might mean for "reasonable expectations of privacy" in other areas of law, and how this concept might be interpreted in response to technology-facilitated gender-based violence. The authors argue that courts in Canada and elsewhere must take the analysis in Jarvis further to fully realize a notion of privacy that protects the autonomy, dignity, and liberty of all.

  • Tort law allows parties to seek remedies (typically in the form of monetary damages) for losses caused by a wrongdoer's intentional conduct, failure to exercise reasonable care, and/or introduction of a specific risk into society. The scope of tort law makes it especially relevant for individuals who are harmed as a result of an artificial intelligence (AI) system operated by another person, company, or government agent with whom the injured person has no pre-existing legal relationship (e.g., no contract or commercial relationship). This chapter examines the application of three primary areas of tort law to AI systems. Plaintiffs might pursue intentional tort actions when an AI system is used to intentionally carry out harmful conduct. While this is not likely to be the main source of litigation, intentional torts can provide remedies for harms that might not be available through other areas of law. Negligence and strict liability claims are likely to be more common legal mechanisms in the AI context. A plaintiff might have a more straightforward case in a strict liability claim against a wrongdoer, but these claims are available only in specific situations in Canada. A negligence claim will be the likely mechanism for most plaintiffs suffering losses from a defendant's use of an AI system. Negligence actions for AI-related injuries will present a number of complexities and challenges for plaintiffs. Even seemingly straightforward preliminary issues, such as identifying whom to name as a defendant, might raise barriers to accessing remedies through tort law. These challenges, and some potential opportunities, are outlined below.
