Results: 21 resources
-
Robots are an increasingly common feature in North American public spaces. From regulations permitting broader drone use in public airspace and autonomous vehicle testing on public roads, to delivery robots roaming sidewalks in major US cities, to the announcement of Sidewalk Toronto – a plan to convert waterfront space in one of North America's largest cities into a robotics-filled smart community – the laws regulating North American public spaces are opening up to robots. In many of these examples, the growing presence of robots in public space is associated with opportunities to improve human lives through intelligent urban design, environmental efficiency, and greater transportation accessibility. However, the introduction of robots into public space has also raised concerns about, for example, the commercialization of these spaces by the companies that deploy robots; increasing surveillance that will negatively impact physical and data privacy; or the potential marginalization or exclusion of some members of society in favour of those who can pay to access, use, or support the new technologies available in these spaces. The laws that permit, regulate, or prohibit robotic systems in public spaces will in many ways determine how this new technology impacts public space and the people who inhabit that space. This raises the questions: how should regulators approach the task of regulating robots in public spaces? And should any special considerations apply to the regulation of robots because of the public nature of the spaces they occupy? This paper argues that the laws that regulate robots deployed in public space will affect the public nature of that space, potentially to the benefit of some human inhabitants of the space over others. For these reasons, special considerations should apply to the regulation of robots that will operate in public space. In particular, the entry of a robotic system into a public space should never be prioritized over communal access to and use of that space by people. And, where a robotic system serves to make a space more accessible, lawmakers should avoid permitting differential access to that space through the regulation of that robotic system.
-
The combination of human-computer interaction ("HCI") technology with sensors that monitor human physiological responses offers state agencies purportedly improved methods for extracting truthful information from suspects during interrogations. These technologies have recently been implemented in prototypes of automated international border kiosks, in which an individual seeking to cross a border would first have to interact with an avatar interrogator. The HCI system uses a combination of visual, auditory, infrared, and other sensors to monitor an individual's eye movements, voice, and various other qualities throughout the interaction. This information is then aggregated and analyzed to determine whether the individual is being "deceptive". This paper argues that this type of application poses serious risks to individual rights such as privacy and the right to silence. Highly invasive data collection and analysis are being integrated into a technology that is designed in a way that conceals the full extent of the interaction from those engaging with it. Border avatars are being misconstrued as technological versions of a human border agent, when in fact the technology enables a substantially more invasive interaction. The paper concludes by arguing that courts, developers, and state agencies should institute strict limits on how this technology is implemented and what information it can collect from the individuals who engage with it.
-
Tort law allows parties to seek remedies (typically in the form of monetary damages) for losses caused by a wrongdoer's intentional conduct, failure to exercise reasonable care, and/or introduction of a specific risk into society. The scope of tort law makes it especially relevant for individuals who are harmed as a result of an artificial intelligence (AI) system operated by another person, company, or government agent with whom the injured person has no pre-existing legal relationship (e.g., no contract or commercial relationship). This chapter examines the application of three primary areas of tort law to AI-systems. Plaintiffs might pursue intentional tort actions when an AI-system is used to intentionally carry out harmful conduct. While this is not likely to be the main source of litigation, intentional torts can provide remedies for harms that might not be available through other areas of law. Negligence and strict liability claims are likely to be more common legal mechanisms in the AI context. A plaintiff might have a more straightforward case in a strict liability claim against a wrongdoer, but these claims are available only in specific situations in Canada. A negligence claim will be the likely mechanism for most plaintiffs suffering losses from a defendant's use of an AI-system. Negligence actions for AI-related injuries will present a number of complexities and challenges for plaintiffs. Even seemingly straightforward preliminary issues, like identifying whom to name as a defendant, might raise barriers to accessing remedies through tort law. These challenges, and some potential opportunities, are outlined below.
-
The impact of drones on women's privacy has recently garnered sensational attention in media and popular discussions. Media headlines splash stories about drones spying on sunbathing or naked women and girls, drones being used to stalk women through public spaces, and drones delivering abortion pills to women who might otherwise lack access. Yet despite this popular attention, and the immense literature that has emerged analyzing the privacy implications of drone technology, questions about how the drone might enhance or undermine women's privacy in particular have not yet been the subject of significant academic analysis. This paper contributes to the growing drone privacy literature by examining how the technology can be especially apt to impact women's privacy. In particular, various features of the technology allow it to take advantage of the ways in which privacy protection has traditionally been – and in many cases continues to be – gendered. While the analytical focus is on the gendered privacy impacts of drone technology, the article and its conclusions are about more than women's privacy. Examining some of the differential impacts of the technology, and the laws that guide its use, helps to reveal broader inequities that can go unseen when we think about technology without social context. The paper ultimately argues that drone regulators cannot continue to treat the technology as though it is value-neutral, impacting all individuals in the same manner. Going forward, the social context in which drone technology is emerging must inform both drone-specific regulations and how we approach privacy generally. This paper is framed as a starting point for further discussion about how this can be done within the Canadian context and elsewhere.
-
This essay explores the idea of "safety" in artificial intelligence (AI) and robot governance in Canada. Regulating robotic and AI-based systems through a lens of safety is a vital, but elusive, task. In Canada, much governance of robotic and AI systems occurs through public bodies and structures. While various laws and policies aim to ensure that AI and robotic systems are used "safely," the meaning and scope of "safety" are seldom, if ever, explicitly considered. Safety is not a neutral concept, and determining what kinds of technologies and applications are "safe" requires normative choices that often go unexpressed in the law- and policy-making process. Broad appeals to the policy goal of "safety" can bring conduct or regulation into conflict with the actual safety of individuals and communities. Expanded thinking about "safety" and governance in relation to automated technologies is needed, along with greater precision in law and policy goals. Scholars and activists, particularly those advocating for the abolition of state policing and the prison industrial complex, have robustly critiqued and re-theorized the concept of "safety" in law and policy, especially in ways that are cognizant of equitable and collectively beneficial outcomes. To imagine a society without policing and prisons, abolitionist thinkers engage in a systemic critique of how society, communities, and the state understand and seek to attain "public safety." This deep rethinking of the concept of "safety," and of the methods for creating safety, generates a richness that would benefit current discussions about AI and robotics governance. This paper explores some of this scholarship and relates it back to how we might understand and critique the use of "safety" in AI and robotics governance in Canada.
-
The current focus on the overhyped future existential threats of AI, for example, distracts us from the harms already being perpetuated by AI systems, like discrimination, environmental damage, loss of…
-
Perpetrators of technology-facilitated gender-based violence are taking advantage of increasingly automated and sophisticated privacy-invasive tools to carry out their abuse. Whether through monitoring movements with stalkerware, using drones to non-consensually film or harass, or manipulating and distributing intimate images online, such as deepfakes and creepshots, invasions of privacy have become a significant form of gender-based violence. Accordingly, our normative and legal concepts of privacy must evolve to counter the harms arising from this misuse of new technology. Canada's Supreme Court recently addressed technology-facilitated violations of privacy in the context of voyeurism in R v Jarvis (2019). The discussion of privacy in this decision appears to be a good first step toward a more equitable conceptualization of privacy protection. Building on existing privacy theories, this chapter examines what the reasoning in Jarvis might mean for "reasonable expectations of privacy" in other areas of law, and how this concept might be interpreted in response to technology-facilitated gender-based violence. The authors argue that the courts in Canada and elsewhere must take the analysis in Jarvis further to fully realize a notion of privacy that protects the autonomy, dignity, and liberty of all.
-
While AI has been touted by industry as an innovative tool that will yield benefits for the public, examining the impact of AI from a substantive equality perspective reveals profound harms. As a leading national organization with a mandate to advance substantive gender equality, LEAF urges the government to centre substantive equality and human rights as the guiding principles when regulating the growing use of AI. With this goal in mind, LEAF submits that the scope of AIDA must – at least – be substantially expanded in order to enable regulations that can protect against all present and emerging harms from AI. Overview of Recommendations:
1. Government institutions must be included in the scope of AIDA (remove s. 3).
2. The statutory definitions of "harm" and "biased output" must be expanded (amend s. 5).
3. Harm mitigation measures must not be restricted to "high-impact" systems (remove s. 7 and remove "high-impact" from ss. 8, 9, 11, 12; amend s. 36(b) so that different obligations for different types of systems can be developed in regulations).
4. "Persons responsible" for AI-systems must explicitly include those involved in system training and testing (amend s. 5).
5. "Persons responsible" should be required to perform an equity and privacy audit to evaluate the possibility and likelihood of harm and biased outputs in advance of using, selling, or making available an AI-system. This audit must also be published and made available to the public (amend ss. 8 and 11; amend s. 36 to allow the Governor in Council to outline the requirements for an equity and privacy audit).
6. Substantive equality and public consultation must inform the development of regulations (amend preamble and s. 35(1)).
-
We write as a group of experts in the legal regulation of artificial intelligence (AI), technology-facilitated violence, equality, and the use of AI systems by law enforcement in Canada. We have experience working within academia and legal practice, and are affiliated with LEAF and the Citizen Lab, who support this letter. We reviewed the Toronto Police Services Board Use of New Artificial Intelligence Technologies Policy and provide comments and recommendations focused on the following key observations:
1. Police use of AI technologies must not be seen as inevitable.
2. A commitment to protecting equality and human rights must be integrated more thoroughly throughout the TPSB policy and its AI analysis procedures.
3. Inequality is embedded in AI as a system in ways that cannot be mitigated through a policy dealing only with use.
4. Having more accurate AI systems does not mitigate inequality.
5. The TPS must not engage in unnecessary or disproportionate mass collection and analysis of data.
6. The TPSB's AI policy should provide concrete guidance on the proactive identification and classification of risk.
7. The TPSB's AI policy must ensure expertise in independent vetting, risk analysis, and human rights impact analysis.
8. The TPSB should be aware of assessment challenges that can arise when an AI system is developed by a private enterprise.
9. The TPSB must apply the draft policy to all existing AI technologies that are used by, or presently accessible to, the Toronto Police Service.
In light of these key observations, we have made 33 specific recommendations for amendments to the draft policy.
Explore
Author / Editor
- Beverly Jacobs (1)
- Kristen Thomasen (20)
Resource type
- Book Section (6)
- Journal Article (3)
- Newspaper Article (1)
- Preprint (9)
- Thesis (2)
Publication year
- Between 2000 and 2024 (21)
- Between 2000 and 2009 (1)
- 2008 (1)
- Between 2010 and 2019 (6)
- Between 2020 and 2024 (14)