TIME Law

Pet Owners Look to Muzzle Police Who Shoot Dogs

Brittany Preston

Bereaved owners argue that when police shoot dogs it violates their Fourth Amendment rights

Correction appended, Sept. 26

Lexie, a Labrador mix, was barking in fear when the police arrived at her owner’s suburban Detroit house early in the morning last November. The officers, responding to a call about a dog roaming the area, arrived with dog-catching gear. Yet they didn’t help the one-year-old dog, who had been left outside the house, according to a lawsuit filed in federal court: Instead, they pulled out their guns and shot Lexie eight times.

“The only thing I’m gonna do is shoot it anyway,” the lawsuit quotes an officer saying. “I do not like dogs.”

Such a response, animal advocates say, is not uncommon among law enforcement officers in America who are often ill-equipped to deal with animals in the line of duty. And now bereaved owners like Brittany Preston, Lexie’s owner, are suing cities and police departments, expressing outrage at what they see as an abuse of power by police. Animal activists, meanwhile, are turning to state legislatures to combat the problem, with demands for better police training in dealing with pets.

There are no official tallies of dog killings by police, but media reports suggest there are, at minimum, dozens every year, and possibly many more. When it comes to Preston’s dog, officials from the city of St. Clair Shores and the dog owner agree on little. City police say the dog attacked, prompting officers to open fire in self-defense. But the lawsuit filed by Preston cites police audio recordings to argue that the November 2013 shooting was premeditated, prompted by officers eager to kill a dog. Preston is suing the city for violating her Fourth Amendment right to protection from unreasonable search and seizure.

“We want whatever it takes to make sure it doesn’t happen again,” said Christopher Olson, Preston’s lawyer. “Before this case I wasn’t a dog shooting lawyer, but I am now.”

St. Clair Shores defended the officers’ actions.

“The animal was only put down after a decision was made that it was in the best interest of the residents,” said city attorney Robert Ihrie, who is defending the city in the lawsuit. “Sometimes police officers are in a position where they need to make very quick decisions for the protection of themselves and others.”

The Fourth Amendment argument gained traction in 2005, when the San Jose chapter of the Hells Angels sued the city and the police department because officers had killed dogs during a gang raid in 1998. A federal appeals court found that “the Fourth Amendment forbids the killing of a person’s dog… when that destruction is unnecessary,” and the Hells Angels ultimately won $1.8 million in damages. In addition to the St. Clair Shores lawsuit, other lawsuits stemming from police shootings of dogs are being planned or filed in Idaho, California, and Nevada.

At the same time, animal-rights activists are lobbying police departments to implement pet training for all officers. Several states, including Illinois and Colorado, have enacted measures to reduce dog shootings, and other states are considering legislation. In 2011, the Department of Justice published a report on dog-related police incidents, which included advice on how to handle dogs without killing them.

“It’s much more likely that a cop is going to encounter a dog than a terrorist, yet there’s no training,” said Ledy Van Kavage, an attorney for the advocacy group Best Friends Animal Society. “If you have a fear or hatred of dogs, then you shouldn’t be a police officer, just like if you have a hatred of different social groups.”

Brian Kilcommons, a professional dog trainer who has trained more than 40,000 dogs and published books on the subject, said some police officers antagonize dogs right from the start without realizing it. “Police officers go into a situation with full testosterone body language, trying to control the situation,” he said. “That’s exactly what will set a dog off.” Kilcommons is developing an app that could help police officers evaluate the best way to handle a dog, including tips on reading body language and non-lethal strategies for containment. “A bag of treats goes a long way,” he said.

But Jim Crosby, a retired Lieutenant with the Jacksonville Sheriff’s Office in Florida who now works in dog training, said there are sometimes cases that require police force.

“If you’re executing a high-risk, hard-going entry with an armed suspect, the officers don’t have time to play nice and throw cookies at the dog,” said Crosby, who was commenting on police handling of dogs in general and not any specific case. But he emphasized that such situations are few and far between: “Police absolutely have the right to protect themselves against a reasonable and viable threat—but the presence of a dog is not necessarily a reasonable or viable threat.”

Ronald Janota, a retired Lieutenant Colonel with the Illinois State Police who now serves as an expert witness on use of force, acknowledged that officers are often at “heightened awareness” when confronting dogs. “If you’re the first or second through the door, you don’t have time to put a collar on the dog if the dog is literally lunging at you,” he said. “If you’re entering the house legally, you have the right to protect yourself.”

Regardless of the circumstances, a dog’s death at the hands of police can be devastating to owners.

“People are getting married later, if at all, people are having children later, if at all, and pets are filling an emotional niche,” Kilcommons said. “Before, if you had a dog and it got killed, you got another one. Now dogs are in our homes and in our hearts. They’re not replaceable. So when they’re injured or killed, people are retaliating.”

In St. Clair Shores, where Lexie died, the city is fighting the lawsuit but the police department now requires its officers to undergo animal control training.

Van Kavage said that kind of training is crucial, even if just to instill a sense of trust in the police.

“If a cop shoots your pet, do you think you’re ever going to trust a cop again?” she said. “To control a dog, 99% of the time you don’t need a gun. You just need to yell ‘sit!’ ‘stay!’”

Correction: The original version of this story misidentified the person who said, “To control a dog, 99% of the time you don’t need a gun. You just need to yell ‘sit!’ ‘stay!’” It was Ledy Van Kavage.

TIME Innovation

America Needs a Federal Robotics Agency

Honda's latest version of the Asimo humanoid robot shakes hands during a presentation in Zaventem near Brussels July 16, 2014. Honda introduced in Belgium an improved version of its Asimo humanoid robot that it says has enhanced intelligence and hand dexterity, and is able to run at a speed of some 9 kilometres per hour (5.6 miles per hour). Francois Lenoir—REUTERS

Ryan Calo is an assistant professor at the University of Washington School of Law.

A host of emerging technologies require a coordinated set of laws and regulations as society adapts

This piece originally appeared on Brookings.

The U.S. Department of Transportation had a problem: Toyota customers were alleging that their vehicles had accelerated unexpectedly, causing death or injury. The National Highway Traffic Safety Administration (NHTSA) found some mechanical problems that may have accounted for the accidents—specifically, a design flaw that enabled accelerator pedals to become trapped by floor mats—but other experts suspected a software issue was to blame. Like most contemporary vehicles, Toyotas rely on computers to control many elements of the car. Congress was worried enough at the prospect of glitches in millions of vehicles that it directed the DOT to look for electronic causes.

NHTSA lacked the expertise to disentangle the complex set of interactions between software and hardware “under the hood.” The agency struggled over what to do until it hit upon an idea: let’s ask NASA. The National Aeronautics and Space Administration builds semi-autonomous systems and sends them to other planets; it has deep expertise in complex software and hardware. Indeed, NASA was able to clear Toyota’s software in a February 2011 report. “We enlisted the best and brightest engineers to study Toyota’s electronics systems,” proudly stated U.S. Transportation Secretary Ray LaHood, “and the verdict is in. There is no electronic-based cause for unintended high-speed acceleration in Toyotas.”

Under extraordinary circumstances, the best and brightest at NASA can take a break from repairing space stations or building Mars robots to take a look at the occasional Toyota. But this is not a sustainable strategy in the long run. Physical systems that sense, process, and act upon the world—robots, in other words—are increasingly commonplace. Google, Tesla, and others contemplate widespread driverless cars using software far more complex than what runs in a 2010 sedan. Amazon would like to deliver packages to our homes using autonomous drones. Bill Gates predicts a robot in every home. By many accounts, robotics and artificial intelligence are poised to become the next transformative technology of our time.

I have argued in a series of papers that robotics enables novel forms of human experience and, as such, challenges prevailing assumptions of law and policy. My focus here is on a more specific question: whether robotics, collectively as a set of technologies, will or should occasion the establishment of a new federal agency to deal with the novel experiences and harms robotics enables.

New agencies do form from time to time. Although many of the household-name federal agencies have remained the same over the previous decades, there has also been considerable change. Agencies restructure, as we saw with the formation of the Department of Homeland Security. New agencies, such as the Consumer Financial Protection Bureau, arise to address new or newly acute challenges posed by big events or changes in behavior.

Technology has repeatedly played a meaningful part in the formation of new agencies. For instance, the advent of radio made it possible to reach thousands of people at once with entertainment, news, and emergency information. The need to manage the impact of radio on society in turn led to the formation in 1927 of the Federal Radio Commission. The FRC itself morphed into the Federal Communications Commission as forms of mass media proliferated and is today charged with a variety of tasks related to communications devices and networks.

The advent of the train required massive changes to national infrastructure, physically connected disparate communities, and consistently sparked, sometimes literally, harm to people and property. We formed the Federal Railroad Administration in response. This agency now lives within the U.S. Department of Transportation, though the DOT itself grew out of the ascendance of rail and later the highway. The introduction of vaccines and the attendant need to organize massive outreach to Americans helped turn a modest U.S. Marine Hospital Service into the United States Centers for Disease Control and Prevention (CDC) and sowed the seeds for the Department of Health and Human Services. And, of course, there would be no Federal Aviation Administration without the experiences and challenges of human flight.

In this piece, I explore whether advances in robotics also call for a standalone body within the federal government. I tentatively conclude that the United States would benefit from an agency dedicated to the responsible integration of robotics technologies into American society. Robots, like radio or trains, make possible new human experiences and create distinct but related challenges that would benefit from being examined and treated together. They do require special expertise to understand and may require investment and coordination to thrive.

The institution I have in mind would not “regulate” robotics in the sense of fashioning rules regarding their use, at least not in any initial incarnation. Rather, the agency would advise on issues at all levels—state and federal, domestic and foreign, civil and criminal—that touch upon the unique aspects of robotics and artificial intelligence and the novel human experiences these technologies generate. The alternative, I fear, is that we will continue to address robotics policy questions piecemeal, perhaps indefinitely, with increasingly poor outcomes and slow accrual of knowledge. Meanwhile, other nations that are investing more heavily in robotics and, specifically, in developing a legal and policy infrastructure for emerging technology, will leapfrog the U.S. in innovation for the first time since the creation of steam power.

This piece proceeds as follows: The first section briefly describes some of the challenges robotics present, both specifically by technology, and in general, across technologies. The second describes what a federal robotics agency might look like in the near term. Section three addresses a handful of potential objections to the establishment of a federal robotics agency and a final section concludes. My hope for this piece is to give readers a sense of the challenges ahead, diagnose our potentially worrisome trajectory here in the United States, and perhaps open the door to a conversation about what to do next.

Law & Robotics

Robotics stands poised to transform our society. This set of technologies has seen massive investment by the military and industry, as well as sustained attention by the media and other social, cultural, and economic institutions. Law is already responding: several states have laws governing driverless cars. Other states have laws concerning the use of drones. In Virginia, there is a law that requires insurance to cover the costs of telerobotic care.

The federal government is also dealing with robotics. There have been repeated hearings on the Hill on drones and, recently, on high-speed trading algorithms (market robots), among other topics. Congress charged the Federal Aviation Administration with creating a plan to integrate drones into the national airspace by 2015. The Food and Drug Administration approved, and is actively monitoring, robotic surgery. And NHTSA, in addition to dealing with software glitches in manned vehicles, has looked extensively at the issue of driverless cars and even promulgated guidance.

This activity is interesting and important, but hopelessly piecemeal: agencies, states, courts, and others are not in conversation with one another. Even the same government entities fail to draw links across similar technologies; drones rarely come up in discussions of driverless cars despite presenting similar issues of safety, privacy, and psychological unease.

Much is lost in this patchwork approach. Robotics and artificial intelligence produce a distinct set of challenges with considerable overlap—an insight that gets lost when you treat each robot separately. Specifically, robotics combines, for the first time, the promiscuity of data with physical embodiment—robots are software that can touch you. For better or for worse, we have been very tolerant of the harms that come from interconnectivity and disruptive innovation—harms to privacy and security, for instance, and hate speech. We will have to strike a new balance when bones are on the line in addition to bits.

Robots increasingly display emergent behavior, meaning behavior that is useful but cannot be anticipated by operators. The value of these systems is that they accomplish a task that we did not realize was important, or they accomplish a known goal in a way that we did not realize was possible. Kiva Systems does not organize Amazon’s warehouses the way a human would, which is precisely why Amazon engaged and later purchased the company. Yet criminal, tort, and other types of law rely on human intent and foreseeability to apportion blame when things go wrong.

Take two real examples of software “bots” that exist today. The first, created by artist and programmer Darius Kazemi, buys random things on Amazon. Were Kazemi’s bot to purchase something legal in the jurisdiction of origin but unlawful where he lives, could he be prosecuted? Not under many statutes, which are formulated to require intent. Or consider the Twitter bot @RealHumanPraise. The brainchild of comedian Stephen Colbert, this account autonomously combines snippets from movie reviews from the website Rotten Tomatoes with the names of Fox News personalities. The prospect of accidental libel is hardly out of the question: The bot has, for instance, suggested that one personality got drunk on communion wine. But free speech principles require not only specific intent but “actual malice” when speaking about a public figure.

The stakes are even higher when systems not only display emergent properties but also cause physical harm. Without carefully revisiting certain doctrines, we may see increasing numbers of victims without perpetrators, i.e., people hurt by robots but with no one to take the blame. Putting on one’s law and economics hat for a moment, this could lead to suboptimal activity levels (too much or too little) for helpful but potentially dangerous activities. What I mean is that, under current law, people who use robotic systems may not be held accountable for the harm those systems do, and hence may deploy them more than they should. Or, alternatively, people might never deploy potentially helpful emergent systems for fear of uncertain and boundless legal liability.

Finally, robots have a unique social meaning to people: more than any previous technology, they feel social to us. There is an extensive literature to support the claim that people are “hardwired” to react to anthropomorphic technology such as robots as though a person were actually present. The tendency is so strong that soldiers have reportedly risked their own lives to “save” a military robot in the field. The law impliedly separates things or “res” from agents and people in a variety of contexts; law and legal institutions will have to revisit this dichotomy in light of the blurring distinction between the two in contexts as diverse as punishment, damages, and the law of agency.

The upshot for policy is twofold. First, robotics presents a distinct set of related challenges. And second, the bodies that are dealing with these challenges have little or no expertise in them, and accrue new expertise at a snail’s pace. It is time to start talking about whether a common institutional structure could help the law catch up, i.e., serve as a repository for expertise about a transformative technology of our time, helping lawmakers, jurists, the media, the public, and others prepare for the sea change that appears to be afoot.

Arguably we have already seen a need for a federal robotics agency or its equivalent based on these three properties of robotics and AI organized to act upon the world. I opened with an example of Toyota and sudden acceleration. But this is just one of the many issues that embodiment, emergence, and social meaning have already raised. Some issues, like drones and driverless cars, are all over the news. Another, high-speed trading algorithms, is the subject of a best-selling book by Michael Lewis. Still others, however, concern high-stakes technologies you may have never heard of, let alone experienced, in part because the problems they generate have yet to be resolved to stakeholder satisfaction and so they do not see the light of day.

Driverless cars

The state of Nevada passed the first driverless car law in 2011. It represented one of the first robot-specific laws in recent memory, as well as one of the first errors due to lack of expertise. Specifically, the Nevada legislature initially defined “autonomous vehicles” to refer to any substitution of artificial intelligence for a human operator. Various commentators pointed out that car functionality substitutes for people quite often, as when a crash avoidance system brakes to avoid an accident with a sudden obstacle. Nevada’s initial definition would have imposed hefty obligations on a variety of commercially available vehicles. The state had to repeal its new law and pass a new definition.

One of the most significant challenges posed by driverless cars, however defined, is human reaction to a robot being in control. Human error accounts for an enormous percentage of driving fatalities, which number in the tens of thousands. The promise of driverless cars is, in part, to vastly reduce these accidents. In a “perfect,” post-driver world, the circle of fatalities caused by vehicles would simply shrink. The resulting diagram would look something like this:

[Diagram: vehicle fatalities in a “perfect,” post-driver world]

But in reality, driverless cars are likely to create new kinds of accidents, even as they dramatically reduce accidents overall. Thus, the real diagram is more likely to look something like this:

[Diagram: vehicle fatalities in the real world, where driverless cars reduce accidents overall but add a small new area of liability]

The addition of even a tiny new area of liability could have outsized repercussions. A robot may always be better than a human driver at avoiding a shopping cart. And it may always be better at avoiding a stroller. But what happens when a robot confronts a shopping cart and a stroller at the same time? You or I would plow right into a shopping cart—or even a wall—to avoid hitting a stroller. A driverless car might not. The first headline, meanwhile, to read “Robot Car Kills Baby To Avoid Groceries” could end autonomous driving in the United States—and, ironically, drive fatalities back up. Laws, insurance, or video clips will be hard-pressed to inoculate against this possibility; what is required instead is a subtle understanding of how the public perceives autonomous technologies in its midst.

Drones

More immediate, because further along, is the case of domestic drones. Back in 2010, I predicted that drones would catalyze a national conversation around technology and privacy. Unlike the Internet and other vehicles of data collection and processing, I reasoned, it is easy for people to form a mental model of drone surveillance: there is a flying, inscrutable metal object, one you associate with the theatre of war, looking down upon you. Lawmakers and the public (but not yet the courts) have indeed reacted viscerally to the prospect of commercial and governmental use of drones domestically. Initially, however, the FAA, through its officials, attempted to distance itself from the problem. The agency observed that its main expertise is in safety, not civil liberties. It was only following tremendous outside pressure that the FAA began formally to consider the privacy impact of drones. The agency missed this issue—and continues to miss it, to a degree—because it has little to no experience with social meaning.

Law that confronts drones also tends to be underinclusive. There is little reason to target robots that can fly and take pictures over those that, say, climb the sides of buildings or can be thrown into a building or over a crime scene. Arguably there is no good reason even to exclude birds with cameras attached to them—an old technique that is seeing something of a renaissance with cheap and light digital video. And yet “drone” laws almost inevitably limit themselves to “unmanned aircraft systems” as defined by the FAA, which would leave law enforcement, private companies, and individuals quite a few options for mobile surveillance.

Finally, FAA policy toward commercial drones has been roundly criticized for being arbitrary and non-transparent, including by an administrative law judge. Here, again, the agency’s lack of experience with robotics—including what should or should not be characterized as a robot—may be playing a role. On the one hand, operators of small, low-flying drones argue that the FAA should not bother to regulate them because they raise no issues different from those raised by a remote-control airplane flown by a hobbyist. On the other, Amazon is concerned because the company would eventually like to deliver packages by drone autonomously, and the recent FAA roadmap on drone integration seems to take autonomous navigation off the table. These debates are ongoing before the agency itself and the courts.

Finance algorithms

I mentioned the prospect of emergent behavior and the challenges it might pose for law and legal institutions. Michael Lewis’ new book Flash Boys has raised awareness of one potential hazard: the central role of algorithmic trading on Wall Street. The SEC has been looking at the issue of high-speed trading, and the market volatility it can create, for years. The Commission seems no closer today to a solution than it was in the immediate wake of the 2010 “flash crash,” in which the market lost a significant percentage of its overall value in just a few minutes.

But high-speed trading could be the tip of the iceberg. Imagine, for instance, a programmer who designs software capable of predicting when a stock will make sudden gains in value—surely a gold mine for traders. This software consists of a learning algorithm capable of processing large volumes of information, current and historical, to find patterns. Were this software successful, traders would not necessarily understand how it worked. They might feed it data they felt was innocuous but that, in combination with publicly available information, gave the algorithm what would otherwise be understood as forbidden insider information under the mosaic theory of insider trading. These traders or programmers might never be prosecuted, however, again because of the nature of criminal statutes.

Legal scholars such as Tom Lin and Daria Roithmayr are looking at how law can adapt to the new reality of computer-driven investment. Their insights, and those of others in this space, will inform not just high-frequency trading but any very fast and automated activity with real-world repercussions. In the interim, the law is still unsure how to handle the prospect of emergent behavior that ranges from benign, to useful, to potentially catastrophic.

Cognitive radio

A contemporary example you may not have heard of is the “cognitive radio,” i.e., a radio capable of “choosing” the frequency or power at which it will operate. Radios in general are locked down to one specific frequency so as not to interfere with other devices or networks. (A chief reason your cell phone has an FCC emblem on it is that the FCC has certified non-interference.) But communications bandwidth is scarce, and many believe it is not being used efficiently. Cognitive radio has the capability to modulate various parameters, including frequency and power, intelligently and in real time. These systems could operate on, for instance, emergency frequencies while they are not being used, or put out power just short of interfering with the next broadcaster a few miles away.

The upside of such technology, which is already in use in limited contexts today, is large and clear: suddenly more devices can work at the same time and more efficiently. The downside is equally large. Communications companies pay enormous sums to lease spectrum to provide services to consumers. In the case of emergency frequencies used by first responders, interference could be literally life-threatening. Cognitive radios can malfunction and they can be hacked, for instance, by convincing a device it is in the mountains of Colorado instead of the city of San Francisco. Thus, as the FCC has recognized, cognitive radios must have adequate security, and there must be a mechanism by which to correct errors, i.e., instances where the radio uses a frequency or power it should not.

The FCC has been looking at cognitive radio for ten years; comments on how best to implement this technology remain open today. Current proposals include, for instance, a dual structure whereby meta-algorithms identify and punish “bad” cognitive radios. Technologists at UC Berkeley, Microsoft, and elsewhere claim these solutions are feasible. But how is the FCC to evaluate such proposals, especially where incumbent providers or institutions such as the Navy tell the FCC that the risks of interference remain too high? It would be useful, arguably at least, for a single agency with deep expertise in emergent software phenomena to help the SEC and FCC evaluate what to do about these, and many other, artificial intelligence problems.

Surgical robots

I have outlined a few instances where technology policy lags behind or delays robotics and AI. We might be tempted to draw the lesson that agencies move too slowly in general. And yet problems with robotics can just as easily come from an agency moving too quickly. For instance, consider recent lawsuits involving surgical robotics. Some think the FDA moved too quickly to approve robotic surgery by analogizing it to laparoscopic surgery. The issues that arise, at least judging by lawsuits for medical malpractice, seem to stem from the differences between robotic and laparoscopic surgery. For instance, and setting aside allegations that improperly insulated wires burned some patients, robots can glitch. Glitches have not led to harm directly, but they require the surgical team to transition from robotic to manual surgery and hence keep the patient under anesthetic longer.

FRC: A Thought Experiment

I have argued that present efforts to address robotics have been piecemeal in approach and, too often, unfortunate in outcome. Much of the problem turns on the lack of familiarity with robotics and the sorts of issues the mainstreaming of this technology occasions. The FAA does not know what to say to Amazon about delivering goods by drone, and so it says “no.” Even where one government body learns a lesson, the knowledge does not necessarily make its way to any other. Here I conduct a thought experiment: what if the United States were to address this problem, as it has addressed similar problems in the not-so-distant past, by creating a standalone entity—an agency with the purpose of fostering, learning about, and advising upon robotics and its impact on society?

Agencies take many forms; a Federal Robotics Commission would have to be configured in a manner appropriate to its task. Outside of the factory and military contexts, robotics is a fledgling industry. It should be approached as such. There are dangers, but nothing to suggest we need a series of specific rules about robots, let alone a default rule against their use in particular contexts (sometimes called the “precautionary principle”), as some commentators demand. Rather, we need a deep appreciation of the technology, of the relevant incentives of those who create and consume it, and of the unfolding and inevitable threat to privacy, labor, physical safety, and so on that robotics actually presents.

At least initially, then, a Federal Robotics Commission would be small and consist of a handful of engineers and others with backgrounds in mechanical and electrical engineering, computer science, and human-computer interaction, right alongside experts in law and policy. It would hardly be the first interdisciplinary agency: the FTC houses economists and technologists in addition to its many lawyers, for example. And, taking a page from NASA or the Defense Advanced Research Projects Agency (DARPA), the FRC should place conscious emphasis on getting the “best and brightest.” Such an emphasis, coupled with a decent salary and the undeniable cachet of robotics in tech-savvy circles, could help populate the FRC with top talent otherwise likely to remain in industry or academia.

What would an FRC do then? Here are some tasks for a properly envisioned Commission:

  • Channel federal dollars into basic robotics research in an attempt to solve the still considerable technical challenges this technology presents.
  • Attract highly skilled technologists who might be reticent to work for the government otherwise.
  • Advise other federal agencies on matters having to do with robotics, including the DOT on driverless cars, the SEC on high-speed trading, the FDA on robotic medical devices, the FCC on cognitive radios, the FAA on drones and, eventually, the Federal Trade Commission on increasingly sophisticated consumer products.
  • Advise federal, state, and local lawmakers on robotics law and policy.
  • Convene domestic and international stakeholders from industry, government, academia, and NGOs to discuss the impact of robotics and artificial intelligence on society.
  • File amicus curiae (friend of the court) briefs, as many federal agencies do today, in matters involving highly complex interactions between software and hardware.

There is much room for disagreement over this list. One could question the organizational structure. The thought experiment is just that: an attempt to envision how the United States can be most competitive with respect to an emerging transformative technology. I address some deeper forms of skepticism in the next section.

Objections

Today many people appreciate that robotics is a serious and meaningful technology. But suggesting that we need an entirely new agency to deal with it may strike even the robot enthusiast as overmuch. This section addresses some of the pushback—perhaps correct, and regardless healthy—that a radical thought experiment like an FRC might occasion.

Do we really need another agency?

When I have outlined these ideas in public, reactions have varied, but criticism has tended to take the following form: We need another federal agency? Really?

Agencies have their problems, of course. They can be inefficient and are subject to capture by those they regulate or other special interests. I have in this very piece criticized three agencies for their respective approaches or actions toward robotics. This question—whether agencies represent a good way to govern and, if so, what is the best design—is a worthwhile one. It is the subject of a robust and long-standing debate in administrative law that cannot be reproduced here. But it has little to do with robotics. As discussed, we have agencies devoted to technologies already and it would be odd and anomalous to think we are done creating them.

A more specific version of the “really?” question asks whether we really want to “regulate” robotics at this early stage. I am very sympathetic to this point and have myself argued that we ought to remove roadblocks to innovation in robotics. I went so far as to argue that manufacturers of open robotics systems ought to be immunized for what users do with these platforms, product liability being a kind of “regulation” of business activities that emanates from the courts.

Let me clarify again that I am using the term “regulate” rather broadly. And further, that there is nothing intrinsically anathematic between regulation and innovation. Copyright is regulation meant to promote creativity (arguably). Net neutrality is regulation meant to remove barriers to competition. Google—a poster child for innovation in business and, last I checked, a for-profit company—actively lobbied Nevada and other states to regulate driverless cars. One assumes they did this to avoid uncertainty around the legality of their technology and with the hopes that other legislatures would instruct their state’s Department of Motor Vehicles to pass rules as well.

Note also that agencies vary tremendously in their structure and duties; the FTC, FDA, and SEC are enforcement agencies, for instance. Contrast them to, say, the Department of Commerce, DARPA, the Office of Management and Budget, or NASA itself. My claim is not that we need an enforcement agency for robotics—indeed, I believe it would be highly undesirable to subject robotics and artificial intelligence to a general enforcement regime at such an early point in its life cycle. My claim is that we need a repository of expertise so that other agencies, as well as lawmakers and courts, do not make avoidable errors in their characterization and regulation of this technology.

A possible further response is that we have bodies capable of providing input already—for instance, the National Institute of Standards and Technology, the White House Office of Science and Technology Policy, or the Congressional Research Service. I would concede that these and other institutions could serve as repositories for knowledge about complex software and hardware. OSTP had a very serious roboticist—Vijay Kumar of the University of Pennsylvania—serve as its “assistant director of robotics and cyber physical systems” for a time, and the Office’s mandate overlaps here and there with the possible FRC tasks I outline in the previous section.

Yet the diffusion of expertise across multiple existing agencies would make less and less sense over time. If robotics takes on the importance of, for instance, cars, weather prediction, broadcast communications, or rail travel, we would want in place the kernel of an agency that could eventually coordinate and regulate the technology in earnest. Moreover, even in the short run, there would be oddness and discomfort in an institution that generally advises on a particular issue (e.g., standards), or to a particular constituency (e.g., Congress), suddenly acting as a general convener and broad advisor to all manner of institutions that have to grapple with robotics. Although I could see how existing institutions could manage in theory, in practice I believe we would be better off starting from scratch with a new mandate.

How are robots different from computers?

I will address one last critique briefly, inspired by the response science fiction writer Cory Doctorow had to my recent law review article on robotics. Writing for The Guardian, Doctorow expresses skepticism that there is any meaningful distinction at law or otherwise between robots and computers. As such, Doctorow does not see how the law could “regulate” robotics specifically, as opposed to computers and the networks that connect them. “For the life of me,” writes Doctorow, “I can’t figure out a legal principle that would apply to the robot that wouldn’t be useful for the computer (and vice versa).”

In my view, the difference between a computer and a robot has largely to do with the latter’s embodiment. Robots do not just sense, process, and relay data. Robots are organized to act upon the world physically, or at least directly. This turns out to have strong repercussions at law, and to pose unique challenges to law and to legal institutions that computers and the Internet did not.

Consider, for example, how tort law handles glitches in personal computers or how law in general handles unlawful posts on a social network. If Word freezes and eats your important white paper, you cannot successfully sue Microsoft or Dell. This is due to a very specific set of legal principles such as the economic loss doctrine. But the economic loss doctrine, by its terms, is not available where a glitch causes physical harm. Similarly, courts have limited liability for insurers for computer or software glitches on the basis that information is not a “tangible” good covered by a general policy. A federal law, meanwhile, immunizes platforms such as Facebook for much of what users do there. It does so rather specifically by disallowing any legal actor from characterizing Facebook as the “publisher” of “content” that a user posts. This includes apps Facebook might run or sell. The same result would not likely obtain were someone to be hurt by a drone app purchased from a robot app store.

In any event, Doctorow’s thesis does not necessarily cut against the idea of a Federal Robotics Commission. We might say that robots are just computers, but that computers today are more powerful and complex, and increasingly organized to act upon the world in a physical or direct manner without even the prospect of human intervention. Few in government, especially on the civilian side, understand this technology well. Accordingly, the latent need for a neutral government body with deep expertise on how to deal with cyber physical systems has become quite acute.

Conclusion

I was recently at a robotics conference at the University of California, Berkeley, when a journalist, a long-time student of robotics and one of its most assiduous chroniclers, made a remark to a colleague that struck me. He said that in recent years robotics has felt like a tidal wave, looming somewhere in the distance. But in recent months, that wave seems to have touched down upon land; keeping up with developments in robotics today is a frantic exercise in treading water.

Our government has a responsibility to be prepared for the changes robotics is already beginning to bring. Being prepared means, at this stage, understanding the technology and the unique experiences robots allow. It means helping other institutions make sense of the problems the technology already creates. And it means removing hurdles to the development of robotics that, if left in place, could seriously compromise America’s relevance in robotics and the development of its technology sector.

There are a number of ways our government could go about achieving these goals. I have explored one: the establishment of a federal robotics agency. We have in the past formed formal institutions around specific technologies, for the obvious reason that understanding a technology or set of technologies requires a dedicated staff, and because it can be more efficient to coordinate oversight of a technology centrally. I do not argue we should go so far as to put into place, today, a full-fledged enforcement body capable of regulating anything that touches robotics. That would be deeply inadvisable. Rather, I believe on balance that we should consider creating an institutional repository of expertise around robotics as well as a formal mechanism to promote robotics and artificial intelligence as a research agenda and industry.

The time to think through the best legal and policy infrastructure for robotics is right now. Early decisions in the lifecycle of the Internet, such as the decision to apply the First Amendment there and to immunize platforms for what users do, allowed that technology to thrive. We were also able to be “hands off” about the Internet to a degree that will not be possible with robotics and systems like it that are organized not merely to relay information but to affect the world physically or directly. Decisions we make today about robotics and artificial intelligence will affect the trajectory of this technology and of our society. Please think of this piece, if you would, as a call to be thoughtful, knowledgeable, and deliberate in our dealings with this emerging technology.

This piece originally appeared on Brookings, as part of a Brookings series called The Robots Are Coming: The Project on Civilian Robotics.

Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at The Center for Internet and Society. A nationally recognized expert in law and emerging technology, Ryan’s work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME Law

The Last of the Witch-Trial Hangings

An illustration showing a woman executed by hanging for the practice of witchcraft, 1692. Published in 'A Pictorial History of the United States', 1845. Interim Archives / Getty Images

Sept. 22, 1692: Seven women and one man are hanged as witches in Salem

It was easy to be declared a witch in Salem in 1692: All you had to do was deny that witches existed.

After a number of the town’s teenagers began to hallucinate and convulse, bark like dogs and run around on all fours, two magistrates were tasked with rooting out the evildoers behind the bizarre afflictions. The men invented their own methods for detecting witchcraft, according to TIME’s 1949 review of the book The Devil in Massachusetts:

After due deliberation the magistrates declared that a devil’s “teat” or “devil’s mark” on the body of the accused was proof of guilt, that mischief following anger between neighbors was ground for suspicion, and, most important of all, that “the devil could not assume the shape of an innocent person.” This last meant that hallucinations would be accepted not as evidence of the wrought-up condition of the accuser but as proof of the guilt of the accused.

In practice, suspicion fell even more broadly. In the absence of a devil’s mark or neighborly mischief, anyone who stood up to authorities and publicly questioned their actions was likely to be Salem’s next top suspect. Such was the case with Martha Cory, “a hearty matron who had rashly asserted she didn’t believe in witches,” and who appeared soon afterward in a stricken girl’s vision. Since the girl was in church at the time, half the town heard her scream, “Look! There sits Goody Cory on the beam, suckling a yellow bird betwixt her fingers!”

Cory (also spelled Corey) was among the seven women and one man hanged as witches on this day, Sept. 22, in 1692. It was the last round of executions before the tide of public opinion turned and the trials began to subside. They had claimed 20 lives.

Although Cory had urged her examiners not to believe “all that these distracted children say,” according to Charles Upham’s book Salem Witchcraft, the jury was less moved by her words than by the convulsions of her accusers while she spoke. In the midst of the proceedings, one woman threw her shoe at Cory, hitting her “square on the head.”

Cory’s husband Giles, who defended his wife and was therefore labeled a “dreadful wizard,” had been executed three days earlier — pressed to death under a pile of stones. In the backwards justice of the trials, those who confessed were spared, while those who protested their innocence were often killed. Giles Cory, who refused to plead guilty or even to stand trial before a corrupt court, endured the torture of a slow death instead.

Asked for his confession whenever a new layer of stones was added to the pile, he is reported to have answered only, “More weight!”

Read a 1999 story about the rise of Wiccanism in the armed forces: I Saluted a Witch

TIME Law

Court Rules Against Gay Marriage Bans in 2 States

Robyn Beck—AFP/Getty Images

(CHICAGO) — A U.S. appeals court in Chicago has ruled that gay marriage bans in Wisconsin and Indiana are unconstitutional.

Thursday’s decision by a three-judge panel at the U.S. 7th Circuit Court of Appeals bumps the number of states where gay marriage will be legal from 19 to 21. The decision was unanimous.

The Wisconsin and Indiana cases shifted to Chicago after their attorneys general appealed separate lower court rulings in June tossing the bans. The 7th Circuit stayed those rulings pending its own decision.

During oral arguments in August, one judge appointed by a Republican likened same-sex marriage bans to laws once barring interracial marriage. Judge Richard Posner said they derived from “hate … and savage discrimination” of gays.

The states argued the prohibitions helped foster a centuries-old tradition.

TIME Guns

NRA Ad Campaign Has Michael Bloomberg in Its Crosshairs

Multimillion-dollar campaign tells billionaire gun control advocate to go back to New York

In an effort to combat former New York City Mayor Michael Bloomberg’s high-profile campaign for stronger gun control, the National Rifle Association is launching a national ad campaign to malign the business magnate and strike back at one of the gun advocacy group’s biggest detractors.

The campaign, which kicks off Wednesday, is part of a multimillion-dollar effort to stir up negative perceptions of the New York billionaire, USA Today reports. Chris Cox, the executive director of the NRA’s Institute for Legislative Action, called Bloomberg “an arrogant hypocrite who thinks he knows best how people should live their lives.”

Bloomberg plans to spend $50 million to build a grassroots network of gun control advocates in order to counter the NRA’s lobbying prowess, and has spent money across the country to defeat candidates who are staunch gun rights advocates.

Americans remain divided on gun control law, according to an Oct. 2013 Gallup poll. Public support for stricter laws covering the sale of firearms fell to 49% after reaching 58% in the wake of the Sandy Hook shootings in which 20 children were killed. A total of 37% say laws should be kept as they are, with 13% wanting more lax regulations.

Gun control advocates say Congress’s failure to pass a measure to restrict the sale of assault weapons and high capacity magazines after the Sandy Hook shootings reflects the NRA’s pervasive influence on Capitol Hill.

An NRA ad will run nationally on cable television and in digital ads in states with key Senate races including Colorado, Iowa, Louisiana, North Carolina and Georgia. “Hey, Bloomberg: Keep your politics in New York. And keep your hands off our guns and our freedom,” says the commercial.

[USA Today]

TIME Crime

Echoes of History Resound in Ferguson, Mo. Unrest

Frustration in a St. Louis suburb after the shooting of a black teen has led to violence. But the roots of the anger are embedded in the past

The death of black teenager Michael Brown in Ferguson, Mo. has once again raised questions about the use of excessive force by law enforcement.

But on a broader scale, it’s brought back to America’s attention the often fraught relationship between law enforcement and the communities they police – especially when those communities are largely impoverished, and mostly African-American.

A look at the history of race-related unrest in the U.S. — from the 1919 race riots in Chicago to Los Angeles’ infamous Rodney King riots in 1992 — shows that what is playing out on the streets of Ferguson is just the latest chapter in a long and troubled story.

TIME justice

FBI to Probe Police Shooting of Unarmed Missouri Teen

Demonstrators took to the streets of a St. Louis suburb Monday to protest the fatal shooting Saturday of an unarmed black teenager by police, as officials appealed for calm after a night of riots and the FBI said it would investigate the incident.

“Ferguson police show us no respect,” Chanel Ruffin, 25, said during the protest. “They harass black people all the time.”

“This is a terrible tragedy,” Ferguson, Mo., police chief Tom Jackson said Monday on CNN as protesters marched. “Nobody wanted this to happen but what we want to do is we want to heal. We want to build trust with the community and part of that is to have a transparent, open investigation, conducted by [an] outside party.”

The protests Monday remained mostly peaceful, in contrast to the looting that took place Sunday night. A spokesman for the family of the teenager, Michael Brown, told media outlets his family wants “justice.” A list of demands being circulated among protesters Monday was much broader, calling for, among other things, a more diverse police force and demanding that the officer who shot Brown be identified, fired and charged with murder. In a sign of the growing national attention focused on the small town, the hacker collective Anonymous hacked Ferguson’s website on Sunday night.

Jackson said he understood the concerns of demonstrators and that a full, independent probe would help the community move forward. The FBI informed Jackson on Monday that it will investigate, the Associated Press reports. Missouri Gov. Jay Nixon, among other officials, had called for such an investigation.

“It is vital that the facts about this case are gathered in a thorough, transparent and impartial manner, in which the public has complete confidence,” Nixon said.

Attorney General Eric Holder also released a statement saying the FBI and the Justice Department’s Civil Rights Division will work in conjunction with local officers to conduct a thorough investigation.

“The federal investigation will supplement, rather than supplant, the inquiry by local authorities,” Holder said in a statement. “At every step, we will work with the local investigators, who should be prepared to complete a thorough, fair investigation in their own right.”

It remains unclear—and a matter of hot dispute—what led to the shooting Saturday of Brown, 18. Police say the teenager had assaulted an officer and reached for his gun. Many in the community are skeptical of that account.

In historic downtown Ferguson on Monday morning, hundreds of protesters gathered around the Ferguson Police Department demanding justice for Brown, whom they called a “gentle giant.” With their hands raised in the air, people shouted “Don’t shoot me.”

Some protesters were peaceful. Others got in police officers’ faces, screaming obscenities and crossing police lines. Officers shouted “cuff him” while arresting at least a half-dozen protesters. Most protesters dispersed after a couple hours.

As news of the protests spread via social media, people of all races came from across the region—one from more than 60 miles away—for a second, impromptu anti-police protest, this time in front of a burnt and looted convenience store. Protesters said they heard about the earlier demonstrations and drove to Ferguson to show support for Brown’s family and the community.

Police in riot gear stayed mostly quiet as protesters shouted “no justice, no peace” and drivers honked in support. Police were not addressing the crowds.

The shooting has come at a time of heightened scrutiny across the country over police tactics and racial disparities in the criminal justice system. But Jackson resisted comparisons to other cases.

“This case, I know it seems like it’s similar to others, but what I would really hope is that we could allow the investigation to play out,” said Jackson, who, in an indicator of how hot tensions are running, reported being shot at himself Sunday night in a local Walmart parking lot.

The riots Sunday night in Ferguson left a trail of destruction, with cars vandalized, stores looted and walls spray-painted. More than 30 rioters were arrested, and police said two of their officers were injured.

“People were acting out of emotions,” Ruffin said. “There are a lot of people hurting. I’m not saying what they did was right. … We want justice.”

Police were on hand Monday to keep order.

“We can’t have another night like last night,” Jackson said on CNN. “So we hope it doesn’t happen, but we’re prepared for the worst.”

About 130 police officers in riot gear formed a line as protesters chanted “we’re not stupid. We’re not stupid.” Protesters formed their own line with their hands up, crying out things like “don’t shoot me” and “you got to stop killing our people.” Brice Johnson, 27, held up a sign that read: “Please do not shoot. My hands are up. RIP Mike Brown.”

Police arrested at least half-a-dozen people who did not disperse.

-Kristina Sauerwein reported from Ferguson, Mo.

 

TIME Law

Somebody Crashed a Drone Into Yellowstone’s Biggest Hot Spring

Grand Prismatic Spring. Quan Yuan—Getty Images/Flickr RM

Drones aren't allowed in national parks, but one hit Grand Prismatic Spring on Saturday and sank, potentially damaging the largest hot spring in the park and the third largest in the world, a park official said

An influx of camera-equipped drones illegally flying over national parks reached a new level this week when one crashed into—and possibly damaged—a famed hot spring at Wyoming’s Yellowstone National Park.

A park official told Reuters on Wednesday that the unmanned aerial vehicle crashed into the Grand Prismatic Spring on Saturday and sank, potentially damaging the geothermal feature, the largest hot spring in the park and third largest in the world.

The National Park Service banned drone flights in June as parks reported a spike in drones, which were annoying some visitors and, at times, crashing. Earlier this summer, according to Reuters, a drone crashed into a marina at Yellowstone Lake.

Yellowstone National Park now has to decide whether–and how–to extract the doomed drone from the hot spring, which is more than 121 feet deep.

“What we have to determine is whether the presence of this radio-controlled recreational aircraft poses a threat to that unique resource,” park spokesperson Al Nash told Reuters.

[Reuters]

 

TIME Booze

New Hampshire Law May Deter D.C. Visitors From Buying Booze

Live free or die...sober?

New Hampshire’s alcohol law might at first look just like those around the country, in that one must be 21 to purchase booze. It differs, however, in how it treats out-of-town buyers.

Here’s the hitch: Because the law contemplates only other states and countries, it leaves out Washington, D.C., and U.S. territories. Which means that anyone from the nation’s capital may run into some problems when dropping in to one of the Granite State’s fine package stores.

The Associated Press reports that the issue arose earlier this month, when a clerk refused to sell alcohol to a 25-year-old resident of the nation’s capital. After the incident was reported by the Concord Monitor, the New Hampshire Liquor Commission “told retailers they should accept Washington, D.C., driver’s licenses when determining a buyer’s age, even though state law does not explicitly include them,” the AP said.

Executive Councilor Colin Van Ostern, whose role includes oversight of the Liquor Commission, weighed in on the problem.

Tourism is New Hampshire’s second-largest industry, and the state rakes in money from out-of-staters lured by its tax-free booze. It also prides itself on having the nation’s largest state Legislature and its first-in-the-nation presidential primary, which gives lesser-known candidates a fair shot and attracts political visitors from around the country.

Van Ostern said he believes new legislation will likely be needed to permanently fix the problem. As it stands, the commission’s clarification doesn’t take into account residents of U.S. territories, he noted.

“I have no doubt this was an oversight, and I do think a fair reading of legislative intent would be to allow all those IDs, but I don’t think we should be putting it on individual store clerks to be trying to decide what legislators meant 20 years ago when they passed a law,” he said.

As one might guess, the law on New Hampshire’s books regarding tobacco products contains the same wording as the alcohol law.

TIME Immigration

Migrant Girls Share Haunting Stories About Why They Fled

Central American immigrants await transportation to a U.S. Border Patrol processing center on July 24, 2014 near Mission, Texas. John Moore—Getty Images

A recent UN report gives haunting accounts from some of the girls who fled

The number of young girls apprehended at the U.S.-Mexico border has increased by 77% this year, according to a Pew Research Center analysis released Friday.

The number of girls under the age of 18 apprehended at the border this fiscal year was 13,008 compared to last year’s 7,339, according to Pew. The number of boys under 18 apprehended is still much higher at 33,924, but that represents only an 8% increase from 2013.

The United Nations High Commissioner for Refugees released a report earlier this year that included haunting accounts from some of the young girls apprehended, in an analysis of 404 children from Mexico and Central America who had been detained at the border.

“The head of the gang that controlled her neighborhood wanted Josefina to be his girlfriend and threatened to kidnap her or to kill one of her family members if she didn’t comply,” the report says of one 16-year-old from El Salvador. “Josefina knew another girl from her community who had become the girlfriend of a gang member and had been forced to have sex with all the gang members.”

Two-thirds of the children from El Salvador, both male and female, reported threats of violence from organized crime as one reason for fleeing. “One of [the gang members] ‘liked’ me. Another gang member told my uncle that he should get me out of there because the guy who liked me was going to do me harm,” said 15-year-old Maritza. “In El Salvador they take young girls, rape them and throw them in plastic bags. My uncle told me it wasn’t safe for me to stay there.”

Other girls reported domestic violence as a reason for leaving. Lucia, a 16-year-old from Guatemala, escaped her abusive grandmother’s home only to move in with an abusive boyfriend. “He beat me almost every day,” Lucia said. “I stayed with him for four months. I left because he tried to kill me by strangling me. I left that same day.”

The increasing number of children from Mexico and Central America seeking refuge in the United States has prompted a legislative battle in Washington. It remains unresolved.
