TIME Guns

Nebraska School OKs ‘Tasteful’ Senior Portraits With Guns

The school board unanimously passed the rule

A rural Nebraska school district decided Monday to allow graduating high school seniors to pose with guns in their senior portraits, the Omaha World-Herald reports.

Broken Bow school board members voted 6-0 to approve the rule, which permits only the “tasteful and appropriate” display of firearms, and prohibits pointing the weapons at the camera or displaying a hunted animal in distress, according to the policy.

“The board, I believe, felt they wanted to give students who are involved in those kinds of things the opportunity to take a senior picture with their hobby, with their sport, just like anybody with any other hobby or sport,” superintendent Mark Sievering told the World-Herald.

Nebraska has no age minimum for hunting, although hunters below 12 must be supervised by a licensed hunter, according to state law. It is illegal under Nebraska law to possess a firearm on school grounds, unless the holder is in an exempt category, such as the police force.

The issue of having guns in or around schools has been especially salient since the December 2012 Sandy Hook Elementary School shooting, an event that prompted policymakers to question whether adequate gun safety laws were in place. Since that shooting, several organizations have argued that significant gaps in gun laws remain, even as many states have tightened background checks for firearm purchases. Nebraska’s overall gun policies still lag behind those of other states, according to the Brady Center to Prevent Gun Violence, whose 2013 Gun Laws Scorecard gave the state a D.

[Omaha World-Herald]

TIME Law

Supreme Court to Decide Whether Felons Can Sell Their Guns

Case will determine how much "ownership" felons have over their guns

The Supreme Court said Monday that it would decide the case of a convicted felon who sought to sell his guns or transfer ownership of them to his wife after he was forced to relinquish them under federal law.

Tony Henderson is a former U.S. Border Patrol Agent who was convicted of felony drug offenses and served six months in prison in 2007. When he was arrested, he gave the FBI his 19 firearms, because felons are not allowed to own weapons. He later attempted to transfer ownership of the guns to his wife or sell them to a third party, which prompted a legal debate as to whether convicted felons relinquish all ownership rights when they turn over their weapons.

A federal judge refused Henderson’s request to transfer ownership, as did an appeals court, which led him to take the case to the Supreme Court.

Henderson’s attorney told Reuters that if the appeals court’s ruling is not overturned, it would “effectively strip gun owners of their entire ownership interest in significant, lawful household assets following a conviction for an unrelated offense.”

U.S. Solicitor General Donald Verrilli, who will argue the government’s case in front of the Supreme Court, says that allowing a felon to sell or transfer ownership of their guns presents a “significant risk” that he or she could still have access to them.

[Reuters]

TIME Laws

The CDC Has Less Power Than You Think, and Likes It That Way

Ebola isn’t likely to lead to a widespread quarantine but it certainly raises interesting constitutional questions

Director of the Centers for Disease Control and Prevention Tom Frieden has come under fire in recent days for what some charge is the agency’s stumbling response to the appearance of Ebola in America. This week, reporters and lawmakers alike grilled Frieden over how two nurses in Texas contracted the virus and how one of them was able to board an airplane even after she reported a raised temperature.

Breakdowns in good practice notwithstanding, it’s important to remember that Ebola in the U.S. is largely contained and very unlikely to lead to any kind of significant outbreak. Still, the charges leveled against Frieden raise a question that leads to a surprisingly complicated answer: just what, exactly, can—and should—the CDC do?

Since time immemorial, public health officials’ main weapon against the outbreak of a disease has been to restrict the ability of people to interact with one another, also known as a quarantine. The term comes from the Latin “quadraginta,” meaning 40, and is derived from the 40-day period ships traveling from plague-stricken regions were kept at bay before being allowed to dock in medieval European ports.

Imposing a quarantine—effectively stripping innocent people of the most basic right to move freely in the world—is one of the most serious actions a government can take against its own citizenry. Partly for this reason, in the American federal system (designed from the outset to check the power of the national government) the power to quarantine resides largely with state and local authorities. Should Texas, or any other state, someday face the threat of a true epidemic, the states have broad authority to restrict the movement of people within their own borders. Public health codes granting the state power to impose quarantine orders vary from state to state, of course. Violating a quarantine order in Louisiana is punishable by a fine of up to $100 and up to a year in prison; in Mississippi the same infraction could cost a violator up to $5,000 and up to five years in prison.

The federal government does have its own powers. The CDC, as the U.S.’s primary agency for taking action to stop the spread of disease, has broad authority under the Commerce Clause of the Constitution to restrict travel into the country and between states of an infected person or a person who has come in contact with an infected person, according to Laura Donohue, director of the Center on National Security and the Law at Georgetown Law School. Federal quarantine can be imposed, too, on federal property, like a military base or National Forest land. And as the preeminent employer of experts on public health crises, the CDC is always likely to get involved within any affected state in the event of a looming pandemic.

But its power to act is extremely restricted. The agency traditionally acts in an advisory role and can only take control from local authorities under two circumstances: if local authorities invite it to do so, or under the authority outlined in the Insurrection Act in the event of a total breakdown of law and order.

And here the picture becomes murkier yet because authority does not always beget power.

“It’s not a massive regulatory agency,” said Wendy Parmet, a professor of public health law at Northeastern University in Boston. “They don’t have ground troops. They don’t have tons of regulators. They’re scientists. Even if the states asked them to do it, it’s not clear how they would do it.”

Even in the highly unlikely event that the CDC were called to respond to a—let’s reiterate: extremely-unlikely-to-occur—pandemic, quarantine and isolation would be imposed not by bespectacled CDC scientists but by local or federal law enforcement or troops. Most importantly, the CDC is extremely reluctant to be seen as a coercive government agency because it depends as much as any agency on the good will and acquiescence of citizens in order to respond effectively to a public health emergency. When the bright lights of the Ebola crisis are not on it, the CDC will still need people to get vaccinated, to go to the doctor when they get sick, and to call the authorities if they see trouble.

“Our public health system is built on voluntary compliance,” Donohue tells TIME. “If the CDC starts to become the enemy holding a gun to [someone’s] head and keeping them in their house, they lose insight.”

TIME LGBT

Federal Judge Strikes Down Arizona’s Same-Sex Marriage Ban

Arizona is the latest state where gay marriage is legal following an earlier Supreme Court move

Arizona is now the latest state with legalized same-sex marriage after a federal judge on Friday struck down the state’s ban on the practice and ordered that his decision take effect immediately.

In a concise four-page decision, U.S. District Court Judge John Sedwick cited rulings from higher courts to dismiss Arizona’s ban as unconstitutional.

“It is clear that an appeal to the Ninth Circuit would not succeed,” Sedwick wrote, referring to the higher court that has jurisdiction over a potential appeal in the case. The judge added that the United States Supreme Court has suggested that it would not hear an appeal in the Arizona case.

Arizona is the latest in a slew of states where same-sex marriage was effectively legalized after the Supreme Court earlier this month declined to hear cases addressing the issue. The court’s move effectively brought the total number of states with same-sex marriage to 30, while paving the way for legalization in other states as well.

TIME politics

We Should Treat Gun Violence the Way We Do Cancer and Heart Disease


Khawar Siddique, MD, MBA, is a spine surgeon and neurosurgeon.

The so-called “gun violence restraining order” recently passed in California is a commonsense policy that protects public health without treading on the rights of gun owners

As a neurosurgeon and spine surgeon, I have had a front row seat to gun violence. Whether shot by someone else or self-inflicted, a bullet traveling through the brain and spine can cause extraordinary damage. You can’t put a brain back together. If we do succeed at preserving a life, the victim of gun violence is often left with tragic neurologic deficits such as paralysis, speech problems and cognitive issues.

Prevention is the key to improving health outcomes for gun injury victims. We’ve made great strides in diseases such as cancer, heart disease and diabetes by investing in preventative medicine. Our health care system and our citizens have benefited as a result. Gun violence is no different. We need tools to help prevent gun violence and gun suicide.

Last month, California Governor Jerry Brown signed into law a bill that will do just that. The so-called “gun violence restraining order” is a commonsense policy that balances the need for public health and safeguards the rights of gun owners. It answers a basic question we ask ourselves after horrific instances of mass violence or suicide: What could we have done to stop this?

AB-1014, now law, is modeled on the concept of domestic violence protection orders. Just as a woman can seek protection from her abuser in the courts, the law allows family members and key members of the community, like law enforcement, to petition a judge to provide temporary firearms prohibitions for those deemed to be in crisis. Whether protecting against suicide or mass violence, it will save lives.

This problem is not theoretical. There are far too many instances in which individuals show signs of impending violence but there is no legal means to deny them access to firearms. The recent shooting in Isla Vista is an example of a case where limiting access to guns could have saved lives. Now, with the signing of this law, family members and police officers have a legal means to prevent a potentially dangerous person from accessing firearms or ammunition.

Gun violence restraining order laws are wholly consistent with the Second Amendment. As a gun owner myself, I know the worst thing that can happen to our rights is to see irresponsible individuals use them to commit crimes. In the case of these new laws, there must be sufficient evidence for a judge to believe that an individual poses a danger to others or oneself before the gun violence restraining order can be issued. In California, the law would penalize anyone who files a petition with false information or uses a gun violence restraining order to harass another person.

You can pass this commonsense law in your state, too. Americans for Responsible Solutions, the gun violence prevention group headed by former Congresswoman Gabrielle Giffords and her husband, retired astronaut Mark Kelly, has toolkits for state legislators and advocates telling you exactly how to get it done. Gun violence restraining orders will help keep guns out of the hands of individuals proven to be a threat to the community or themselves while ensuring due process for all involved. As a physician, a citizen and a gun-owner, I encourage the legislatures and governors from across this great nation to act boldly and with common sense.

Khawar Siddique, MD, MBA, is a spine surgeon and neurosurgeon who resides in Los Angeles.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME Crime

Judge Says Women Aren’t Entitled to Privacy in Public Places

The case of an alleged upskirt photographer was at issue

Correction appended, Oct. 15

Prosecutors have dropped a case against a man accused of taking photographs up women’s skirts at the Lincoln Memorial, after a local judge ruled the photographs inadmissible and said women in public places shouldn’t have an expectation of privacy.

Christopher Cleveland was arrested in June 2013 for allegedly taking photographs of the crotches and butts of women sitting on the steps to the national monument. On Aug. 28, D.C. Superior Court Judge Juliet McKenna ruled that the photographs would be inadmissible, leading the U.S. Attorney’s Office to drop the case against Cleveland late last month.

Prosecutors filed a motion to keep the photographs admissible, writing that women are entitled to a “reasonable expectation of privacy” while sitting on the steps of the Lincoln Memorial. In her ruling to suppress evidence, Judge McKenna wrote, “Some women are seated in such a way that their private areas, including the upper inches of their buttocks, are clearly visible. … This court finds that no individual clothed and positioned in such a manner in a public area in broad daylight in the presence of countless other individuals could have a reasonable expectation of privacy.”

Which means Christopher Cleveland, and other photographers like him, are free to snap away as they please.

Correction: The original version of this story misstated the date of Judge Juliet McKenna’s ruling.

TIME Crime

Report: Number of NYPD Chokehold Complaints Highest in Decade

People and relatives attend the funeral ceremony of Eric Garner, who died after NYPD officers put him in a banned chokehold. Anadolu Agency/Getty Images

Report says the illegal restraining method isn't always properly investigated

The number of complaints against the New York Police Department’s use of illegal chokeholds in the past year is the highest in a decade, according to a civilian-run agency tasked with investigating complaints against the NYPD.

The Civilian Complaint Review Board released a report Tuesday stating that between July 2013 and June 2014 it received 219 chokehold complaints, a number not seen since 2010. From 2006 to 2010, the agency received over 200 chokehold complaints per year.

“These findings demonstrate that, at least from the point of view of the particular experience of the complainants, police officers continue to use choke holds and the persistence of this practice puts civilians at physical risk,” the CCRB said in a statement.

The board added that these complaints were often not investigated, and that police were often not disciplined for using the restraint, which has been prohibited by the NYPD Patrol Guide for over 20 years.

According to NYPD policy, a chokehold includes but is not limited to “any pressure to the throat or windpipe, which may prevent or hinder breathing or reduce intake of air.”

The definition has led some police officers to believe that a restraint is a chokehold only if breathing is restricted, which the CCRB maintains is too limited an interpretation. As a result, the number of chokehold incidents is likely underreported or misclassified, allowing police officers to bypass discipline and investigation, said the report, which recommended the creation of a group involving both CCRB members and police officers to ensure that the prohibition on chokeholds is enforced.

The NYPD’s use of force has drawn criticism in recent months after a bystander filmed a video of officers restraining Eric Garner in July as he was arrested for selling untaxed cigarettes. Garner died while he was pinned down, a death the medical examiner’s office ruled a homicide. A grand jury will be convened to decide whether or not to charge the officers.

TIME Law

Report: 168 Million Children Worldwide Were Laborers in 2013

85 million were working hazardous jobs

The U.S. Labor Department reported Tuesday that 168 million children between the ages of 5 and 17 were laborers worldwide last year, with many performing hazardous jobs.

A whopping 85 million of the 168 million child laborers in 2013 were performing hazardous labor in which their health or safety was compromised, reports the 13th edition of the annual Findings on the Worst Forms of Child Labor. Hazardous labor is one of four categories in the report classified as the “worst forms” of child labor, a designation that also includes slavery, sexual exploitation and other illegal or illicit activities.

“Behind these numbers are children wielding machetes to crack open cocoa pods; girls working as domestic servants in third-party homes doing laundry from dawn until dusk; and boys wading in puddles of mercury sifting for gold,” the report said.

The report also estimated that within this 168 million, there are at least 6 million children in forced labor, including those chained to looms to weave carpets, sold to brothels or mandated to leave school and pick cotton.

The 958-page document is mandated by the Trade and Development Act of 2000, a law that regulates how the Labor Department determines which countries are eligible for trade benefits from the U.S. The benefits are denied to countries with the “worst forms of child labor.” Child labor worldwide fell about 30% between 2000 and 2012, but these forms of labor still leave about 10% of the world’s children laboring instead of attending school, the report said.

“This report shines a light on the estimated 168 million children around the world who toil in the shadows — crawling underground in mine shafts, sewing in textile factories or serving in households as domestic workers,” said U.S. Secretary of Labor Thomas E. Perez in a press release. “We are seeing more countries take action to address the issue, but the world can and must do more to accelerate these efforts. When children are learning rather than working, families flourish, economies grow and nations prosper.”

TIME Law

Pet Owners Look to Muzzle Police Who Shoot Dogs


Bereaved owners argue that when police shoot dogs it violates their Fourth Amendment rights

Correction appended, Sept. 26

Lexie, a Labrador mix, was barking in fear when the police arrived at her owner’s suburban Detroit house early in the morning last November. The officers, responding to a call about a dog roaming the area, arrived with dog-catching gear. Yet they didn’t help the one-year-old dog, who had been left outside the house, according to a lawsuit filed in federal court: Instead, they pulled out their guns and shot Lexie eight times.

“The only thing I’m gonna do is shoot it anyway,” the lawsuit quotes an officer saying. “I do not like dogs.”

Such a response, animal advocates say, is not uncommon among law enforcement officers in America who are often ill-equipped to deal with animals in the line of duty. And now bereaved owners like Brittany Preston, Lexie’s owner, are suing cities and police departments, expressing outrage at what they see as an abuse of power by police. Animal activists, meanwhile, are turning to state legislatures to combat the problem, with demands for better police training in dealing with pets.

There are no official tallies of dog killings by police, but media reports suggest there are, at minimum, dozens every year, and possibly many more. When it comes to Preston’s dog, officials from the city of St. Clair Shores and the dog owner agree on little. City police say the dog attacked, prompting officers to open fire in self-defense. But the lawsuit filed by Preston cites police audio recordings to argue that the November 2013 shooting was premeditated, prompted by officers eager to kill a dog. Preston is suing the city for violating her Fourth Amendment right to protection from unreasonable search and seizure.

“We want whatever it takes to make sure it doesn’t happen again,” said Christopher Olson, Preston’s lawyer. “Before this case I wasn’t a dog shooting lawyer, but I am now.”

St. Clair Shores defended the officers’ actions.

“The animal was only put down after a decision was made that it was in the best interest of the residents,” said city attorney Robert Ihrie, who is defending the city in the lawsuit. “Sometimes police officers are in a position where they need to make very quick decisions for the protection of themselves and others.”

The Fourth Amendment argument gained traction in 2005, when the San Jose chapter of the Hells Angels sued the city and the police department because officers had killed dogs during a gang raid in 1998. A federal appeals judge found that “the Fourth Amendment forbids the killing of a person’s dog… when that destruction is unnecessary,” and the Hells Angels ultimately won $1.8 million in damages. In addition to the St. Clair lawsuit, other lawsuits stemming from police shootings of dogs are being planned or filed in Idaho, California, and Nevada.

At the same time, animal-rights activists are lobbying police departments to implement pet training for all officers. Several states, including Illinois and Colorado, have enacted measures to reduce dog shootings, and other states are considering legislation. In 2011, the Department of Justice published a report on dog-related police incidents, which included advice on how to handle dogs without killing them.

“It’s much more likely that a cop is going to encounter a dog than a terrorist, yet there’s no training,” said Ledy Van Kavage, an attorney for the advocacy group Best Friends Animal Society. “If you have a fear or hatred of dogs, then you shouldn’t be a police officer, just like if you have a hatred of different social groups.”

Brian Kilcommons, a professional dog-trainer who has trained more than 40,000 dogs and published books on the subject, said some police officers accidentally antagonize dogs right from the start, without even trying. “Police officers go into a situation with full testosterone body language, trying to control the situation,” he said. “That’s exactly what will set a dog off.” Kilcommons is developing an app that could help police officers evaluate the best way to handle a dog, including tips on reading body language and non-lethal strategies for containing them. “A bag of treats goes a long way,” he said.

But Jim Crosby, a retired Lieutenant with the Jacksonville Sheriff’s Office in Florida who now works in dog training, said there are sometimes cases that require police force.

“If you’re executing a high-risk, hard-going entry with an armed suspect, the officers don’t have time to play nice and throw cookies at the dog,” said Crosby, who was commenting on police handling of dogs in general and not any specific case. But he emphasized that such situations are few and far between: “Police absolutely have the right to protect themselves against a reasonable and viable threat—but the presence of a dog is not necessarily a reasonable or viable threat.”

Ronald Janota, a retired Lieutenant Colonel with the Illinois State Police who now serves as an expert witness on use of force, acknowledged that officers are often at “heightened awareness” when confronting dogs. “If you’re the first or second through the door, you don’t have time to put a collar on the dog if the dog is literally lunging at you,” he said. “If you’re entering the house legally, you have the right to protect yourself.”

Regardless of the circumstances, a dog’s death at the hands of police can be devastating to owners.

“People are getting married later, if at all, people are having children later, if at all, and pets are filling an emotional niche,” Kilcommons said. “Before, if you had a dog and it got killed, you got another one. Now dogs are in our homes and in our hearts. They’re not replaceable. So when they’re injured or killed, people are retaliating.”

In St. Clair Shores, where Lexie died, the city is fighting the lawsuit but the police department now requires its officers to undergo animal control training.

Van Kavage said that kind of training is crucial, even if just to instill a sense of trust in the police.

“If a cop shoots your pet, do you think you’re ever going to trust a cop again?” she said. “To control a dog, 99% of the time you don’t need a gun. You just need to yell ‘sit!’ ‘stay!’”

Correction: The original version of this story misidentified the person who said, “To control a dog, 99% of the time you don’t need a gun. You just need to yell ‘sit!’ ‘stay!’” It was Ledy Van Kavage.

TIME Innovation

America Needs a Federal Robotics Agency

Honda's latest version of the Asimo humanoid robot shakes hands during a presentation in Zaventem, near Brussels, July 16, 2014. Honda introduced in Belgium an improved version of its Asimo humanoid robot that it says has enhanced intelligence and hand dexterity, and is able to run at a speed of some 9 kilometres per hour (5.6 miles per hour). Francois Lenoir—REUTERS

Ryan Calo is an assistant professor at the University of Washington School of Law.

A host of emerging technologies require a coordinated set of laws and regulations as society adapts

This piece originally appeared on Brookings.

The U.S. Department of Transportation had a problem: Toyota customers were alleging that their vehicle had accelerated unexpectedly, causing death or injury. The National Highway Traffic Safety Administration (NHTSA) found some mechanical problems that may have accounted for the accidents—specifically, a design flaw that enabled accelerator pedals to become trapped by floor mats—but other experts suspected a software issue was to blame. Like most contemporary vehicles, Toyotas rely on computers to control many elements of the car. Congress was worried enough at the prospect of glitches in millions of vehicles that it directed the DOT to look for electronic causes.

NHTSA lacked the expertise to disentangle the complex set of interactions between software and hardware “under the hood.” The agency struggled over what to do until it hit upon an idea: let’s ask NASA. The National Aeronautics and Space Administration builds semi-autonomous systems and sends them to other planets; it has deep expertise in complex software and hardware. Indeed, NASA was able to clear Toyota’s software in a February 2011 report. “We enlisted the best and brightest engineers to study Toyota’s electronics systems,” proudly stated U.S. Transportation Secretary Ray LaHood, “and the verdict is in. There is no electronic-based cause for unintended high-speed acceleration in Toyotas.”

Under extraordinary circumstances, the best and brightest at NASA can take a break from repairing space stations or building Mars robots to take a look at the occasional Toyota. But this is not a sustainable strategy in the long run. Physical systems that sense, process, and act upon the world—robots, in other words—are increasingly commonplace. Google, Tesla, and others contemplate widespread driverless cars using software far more complex than what runs in a 2010 sedan. Amazon would like to deliver packages to our homes using autonomous drones. Bill Gates predicts a robot in every home. By many accounts, robotics and artificial intelligence are poised to become the next transformative technology of our time.

I have argued in a series of papers that robotics enables novel forms of human experience and, as such, challenges prevailing assumptions of law and policy. My focus here is on a more specific question: whether robotics, collectively as a set of technologies, will or should occasion the establishment of a new federal agency to deal with the novel experiences and harms robotics enables.

New agencies do form from time to time. Although many of the household-name federal agencies have remained the same over the previous decades, there has also been considerable change. Agencies restructure, as we saw with the formation of the Department of Homeland Security. New agencies, such as the Consumer Financial Protection Bureau, arise to address new or newly acute challenges posed by big events or changes in behavior.

Technology has repeatedly played a meaningful part in the formation of new agencies. For instance, the advent of radio made it possible to reach thousands of people at once with entertainment, news, and emergency information. The need to manage the impact of radio on society in turn led to the formation in 1926 of the Federal Radio Commission. The FRC itself morphed into the Federal Communications Commission as forms of mass media proliferated and is today charged with a variety of tasks related to communications devices and networks.

The advent of the train required massive changes to national infrastructure, physically connected disparate communities, and consistently sparked, sometimes literally, harm to people and property. We formed the Federal Railroad Administration in response. This agency now lives within the U.S. Department of Transportation, though the DOT itself grew out of the ascendance of rail and later the highway. The introduction of the vaccine and the attendant need to organize massive outreach to Americans helped turn a modest U.S. Marine Hospital Service into the United States Centers for Disease Control and Prevention (CDC) and sowed the seeds for the Department of Health and Human Services. And, of course, there would be no Federal Aviation Administration without the experiences and challenges of human flight.

In this piece, I explore whether advances in robotics also call for a standalone body within the federal government. I tentatively conclude that the United States would benefit from an agency dedicated to the responsible integration of robotics technologies into American society. Robots, like radio or trains, make possible new human experiences and create distinct but related challenges that would benefit from being examined and treated together. They do require special expertise to understand and may require investment and coordination to thrive.

The institution I have in mind would not “regulate” robotics in the sense of fashioning rules regarding their use, at least not in any initial incarnation. Rather, the agency would advise on issues at all levels—state and federal, domestic and foreign, civil and criminal—that touch upon the unique aspects of robotics and artificial intelligence and the novel human experiences these technologies generate. The alternative, I fear, is that we will continue to address robotics policy questions piecemeal, perhaps indefinitely, with increasingly poor outcomes and slow accrual of knowledge. Meanwhile, other nations that are investing more heavily in robotics and, specifically, in developing a legal and policy infrastructure for emerging technology, will leapfrog the U.S. in innovation for the first time since the creation of steam power.

This piece proceeds as follows: The first section briefly describes some of the challenges robotics present, both specifically by technology, and in general, across technologies. The second describes what a federal robotics agency might look like in the near term. Section three addresses a handful of potential objections to the establishment of a federal robotics agency and a final section concludes. My hope for this piece is to give readers a sense of the challenges ahead, diagnose our potentially worrisome trajectory here in the United States, and perhaps open the door to a conversation about what to do next.

Law & Robotics

Robotics stands poised to transform our society. This set of technologies has seen massive investment by the military and industry, as well as sustained attention by the media and other social, cultural, and economic institutions. Law is already responding: several states have laws governing driverless cars. Other states have laws concerning the use of drones. In Virginia, there is a law that requires insurance to cover the costs of telerobotic care.

The federal government is also dealing with robotics. There have been repeated hearings on drones and, recently, on high speed trading algorithms (market robots) and other topics on the Hill. Congress charged the Federal Aviation Administration with creating a plan to integrate drones into the national airspace by 2015. The Food and Drug Administration approved, and is actively monitoring, robotic surgery. And the NHTSA, in addition to dealing with software glitches in manned vehicles, has looked extensively at the issue of driverless cars and even promulgated guidance.

This activity is interesting and important, but hopelessly piecemeal: agencies, states, courts, and others are not in conversation with one another. Even the same government entities fail to draw links across similar technologies; drones rarely come up in discussions of driverless cars, despite presenting similar issues of safety, privacy, and psychological unease.

Much is lost in this patchwork approach. Robotics and artificial intelligence produce a distinct set of challenges with considerable overlap—an insight that gets lost when you treat each robot separately. Specifically, robotics combines, for the first time, the promiscuity of data with physical embodiment—robots are software that can touch you. For better or for worse, we have been very tolerant of the harms that come from interconnectivity and disruptive innovation—including privacy, security, and hate speech. We will have to strike a new balance when bones are on the line in addition to bits.

Robotic systems increasingly display emergent behavior, meaning behavior that is useful but cannot be anticipated in advance by operators. The value of these systems is that they accomplish a task that we did not realize was important, or they accomplish a known goal in a way that we did not realize was possible. Kiva Systems does not organize Amazon’s warehouses the way a human would, which is precisely why Amazon engaged and later purchased the company. Yet criminal, tort, and other types of law rely on human intent and foreseeability to apportion blame when things go wrong.

Take two real examples of software “bots” that exist today. The first, created by artist and programmer Darius Kazemi, buys random things on Amazon. Were Kazemi’s bot to purchase something legal in the jurisdiction of origin but unlawful where he lives, could he be prosecuted? Not under many statutes, which are formulated to require intent. Or consider the Twitter bot @RealHumanPraise. The brainchild of comedian Stephen Colbert, this account autonomously combines snippets from movie reviews from the website Rotten Tomatoes with the names of Fox News personalities. The prospect of accidental libel is hardly out of the question: The bot has, for instance, suggested that one personality got drunk on communion wine. But free speech principles require not only specific intent but “actual malice” when speaking about a public figure.
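
To make the emergence point concrete, here is a minimal sketch of how a snippet-combining bot in the vein of @RealHumanPraise might work. The snippets and names below are invented, and this illustrates the general pattern, not the actual bot’s code.

```python
import random

# Invented corpus; a real bot of this kind would pull snippets from
# published movie reviews instead.
review_snippets = [
    "gives a performance of staggering emotional depth",
    "stumbles through the second act, visibly impaired",
    "is the most convincing villain of the decade",
]
personalities = ["Anchor A", "Anchor B", "Anchor C"]  # hypothetical names

def generate_praise() -> str:
    """Autonomously pair a name with a snippet; no human reviews the output."""
    return f"{random.choice(personalities)} {random.choice(review_snippets)}."

if __name__ == "__main__":
    # Any given pairing is emergent: neither the programmer nor the review
    # authors intended the specific claim the bot ends up publishing.
    print(generate_praise())
```

Nothing in such code targets anyone; a defamatory pairing, if one appears, is a product of chance rather than intent—which is precisely why doctrines built on intent struggle here.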

The stakes are even higher when systems not only display emergent properties but also cause physical harm. Without carefully revisiting certain doctrines, we may see increasing numbers of victims without perpetrators, i.e., people hurt by robots but with no one to take the blame. Putting on one’s law and economics hat for a moment, this could lead to suboptimal activity levels (too much or too little) for helpful but potentially dangerous activities. What I mean is that, under current law, people who use robotic systems may not be held accountable for the harm those systems do, and hence may deploy them more than they should. Or, alternatively, people might never deploy potentially helpful emergent systems for fear of uncertain and boundless legal liability.

Finally, robots have a unique social meaning to people: more than any previous technology, they feel social to us. There is an extensive literature to support the claim that people are “hardwired” to react to anthropomorphic technology such as robots as though a person were actually present. The tendency is so strong that soldiers have reportedly risked their own lives to “save” a military robot in the field. The law impliedly separates things or “res” from agents and people in a variety of contexts; law and legal institutions will have to revisit this dichotomy in light of the blurring distinction between the two in contexts as diverse as punishment, damages, and the law of agency.

The upshot for policy is twofold. First, robotics presents a distinct set of related challenges. And second, the bodies that are dealing with these challenges have little or no expertise in them, and accrue new expertise at a snail’s pace. It is time to start talking about whether a common institutional structure could help the law catch up, i.e., serve as a repository for expertise about a transformative technology of our time, helping lawmakers, jurists, the media, the public, and others prepare for the sea change that appears to be afoot.

Arguably we have already seen a need for a federal robotics agency or its equivalent based on these three properties of robotics and AI organized to act upon the world. I opened with an example of Toyota and sudden acceleration. But this is just one of the many issues that embodiment, emergence, and social meaning have already raised. Some issues, like drones and driverless cars, are all over the news. Another, high-speed trading algorithms, is the subject of a best-selling book by Michael Lewis. Still others, however, concern high-stakes technologies you may never have heard of, let alone experienced, in part because the problems they generate have yet to be resolved to stakeholder satisfaction and so they do not see the light of day.

Driverless cars

The state of Nevada passed the first driverless car law in 2011. It represented one of the first robot-specific laws in recent memory, as well as one of the first errors due to lack of expertise. Specifically, the Nevada legislature initially defined “autonomous vehicles” to refer to any substitution of artificial intelligence for a human operator. Various commentators pointed out that car functionality substitutes for people quite often, as when a crash avoidance system brakes to avoid an accident with a sudden obstacle. Nevada’s initial definition would have imposed hefty obligations on a variety of commercially available vehicles. The state had to repeal its new law and pass a new definition.

One of the most significant challenges posed by driverless cars, however defined, is human reaction to a robot being in control. Human error accounts for an enormous percentage of driving fatalities, which number in the tens of thousands. The promise of driverless cars is, in part, to vastly reduce these accidents. In a “perfect,” post-driver world, the circle of fatalities caused by vehicles would simply shrink. The resulting diagram would look something like this:

[Diagram: “perfect world”—the circle of vehicle fatalities simply shrinks]

But in reality, driverless cars are likely to create new kinds of accidents, even as they dramatically reduce accidents overall. Thus, the real diagram is more likely to look something like this:

[Diagram: “real world”—a smaller circle of fatalities, plus a new sliver of accidents unique to driverless cars]

The addition of even a tiny new area of liability could have outsized repercussions. A robot may always be better than a human driver at avoiding a shopping cart. And it may always be better at avoiding a stroller. But what happens when a robot confronts a shopping cart and a stroller at the same time? You or I would plow right into a shopping cart—or even a wall—to avoid hitting a stroller. A driverless car might not. The first headline, meanwhile, to read “Robot Car Kills Baby To Avoid Groceries” could end autonomous driving in the United States—and, ironically, drive fatalities back up. This possibility will be hard for laws, insurance, or video clips to inoculate against, requiring instead a subtle understanding of how the public perceives autonomous technologies in their midst.
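
The cart-versus-stroller problem can be stated in a few lines of code. Below is a deliberately naive sketch, assuming a planner that scores candidate paths against a fixed table of obstacle costs; the obstacles, weights, and names are all hypothetical.

```python
# Hypothetical obstacle weights: the "right" choice exists only because
# someone thought to encode a stroller as costlier than a wall.
OBSTACLE_COST = {"shopping_cart": 1.0, "wall": 5.0, "stroller": 10.0}

def best_path(options: dict[str, list[str]]) -> str:
    """Pick the candidate path whose obstacles sum to the lowest cost."""
    return min(options, key=lambda p: sum(OBSTACLE_COST[o] for o in options[p]))

# Faced with only two bad options, this planner hits the cart, as a human
# would. A planner that instead minimized damage to the vehicle might not.
paths = {"swerve_left": ["shopping_cart"], "swerve_right": ["stroller"]}
print(best_path(paths))  # -> swerve_left, given these weights
```

The hard policy question is not writing such a table but deciding who writes it, and what happens when the chosen weights produce a headline.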

Drones

More immediate, because further along, is the case of domestic drones. Back in 2010, I predicted that drones would catalyze a national conversation around technology and privacy. Unlike the Internet and other vehicles of data collection and processing, I reasoned, it is easy for people to form a mental model of drone surveillance: there is a flying, inscrutable metal object, one you associate with the theatre of war, looking down upon you. Lawmakers and the public (but not yet the courts) have indeed reacted viscerally to the prospect of commercial and governmental use of drones domestically. Initially, however, the FAA through its officials attempted to distance itself from the problem. The agency observed that its main expertise is in safety, not civil liberties. It was only following tremendous outside pressure that the FAA began formally to consider the privacy impact of drones. The agency missed this issue—and continues to miss it, to a degree—because it has little to no experience with social meaning.

Law that confronts drones also tends to be underinclusive. There is little reason to target robots that can fly and take pictures over those that, say, climb the side of buildings or can be thrown into a building or over a crime scene. Arguably there is no good reason even to exclude birds with cameras attached to them—an old technique that is seeing something of a renaissance with cheap and light digital video. And yet “drone” laws almost inevitably limit themselves to “unmanned aircraft systems” as defined by the FAA, which would leave law enforcement and private companies and individuals quite a few options for mobile surveillance.

Finally, FAA policy toward commercial drones has been roundly criticized for being arbitrary and non-transparent, including by an administrative law judge. Here, again, the agency’s lack of experience with robotics—including what should or should not be characterized as a robot—may be playing a role. On the one hand, operators of small, low-flying drones argue that the FAA should not bother to regulate them because they do not raise issues any different than a remote control airplane flown by a hobbyist. On the other, Amazon is concerned because the company would eventually like to deliver packages by drone autonomously, and the recent FAA roadmap on drone integration seems to take autonomous navigation off the table. These debates are ongoing before the agency itself and the courts.

Finance algorithms

I mentioned the prospect of emergent behavior and the challenges it might pose for law and legal institutions. Michael Lewis’ new book Flash Boys has raised awareness of one potential hazard, algorithmic trading, which now plays a central role on Wall Street. The SEC has been looking at the issue of high-speed trading, and the market volatility it can create, for years. The Commission seems no closer today to a solution than it was in the immediate wake of the 2010 “flash crash,” in which the market lost a significant percentage of its overall value in just a few minutes.

But high-speed trading could be the tip of the iceberg. Imagine, for instance, a programmer who designs software capable of predicting when a stock will make sudden gains in value—surely a gold mine for traders. The software consists of a learning algorithm capable of processing large volumes of information, current and historical, to find patterns. Were this software successful, traders would not necessarily understand how it worked. They might feed it data they felt was innocuous but that, in combination with publicly available information, gave the algorithm what would otherwise be understood as forbidden insider information under the mosaic theory of insider trading. These traders or programmers might never be prosecuted, however, again because of the nature of criminal statutes.
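
A toy illustration of that mosaic worry, with invented signals and an invented scoring rule: each input is innocuous on its own, but a naive combining step pushes the aggregate past a threshold that no single signal approaches.

```python
# Dataset, features, weights, and threshold are all invented for
# illustration; no real trading system is depicted here.
from itertools import combinations

signals = {
    "exec_jet_at_rival_hq": 0.4,      # public flight-tracking data
    "unusual_catering_orders": 0.3,   # public, trivial on its own
    "hiring_freeze_chatter": 0.5,     # scraped job-board data
}

def mosaic_score(evidence: dict[str, float]) -> float:
    """Naive combination: pairs of weak signals reinforce one another."""
    score = sum(evidence.values())
    score += 0.2 * len(list(combinations(evidence, 2)))  # interaction bonus
    return score

# No single signal crosses the 1.0 line, yet the combination might amount
# to what the mosaic theory would treat as inside information.
print(mosaic_score(signals) > 1.0)  # -> True
```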

Legal scholars such as Tom Lin and Daria Roithmayr are looking at how law can adapt to the new reality of computer-driven investment. Their insights and others in this space will inform not just high-frequency trading, but any very fast and automated activity with real-world repercussions. In the interim, the law is still unsure how to handle the prospect of emergent behavior that ranges from benign, to useful, to potentially catastrophic.

Cognitive radio

A contemporary example you may not have heard of is the “cognitive radio,” i.e., a radio capable of “choosing” the frequency or power at which it will operate. Radios in general are locked down to one specific frequency so as not to interfere with other devices or networks. (A chief reason your cell phone has an FCC emblem on it is that the FCC certifies non-interference.) But communications bandwidth is scarce, and many believe it is not being used efficiently. Cognitive radio has the capability to modulate various parameters, including frequency and power, intelligently and in real time. These systems could operate on, for instance, emergency frequencies while they are not being used, or put out power just short of interfering with the next broadcaster a few miles away.
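
For the flavor of the idea, here is a minimal sketch of a sense-then-transmit loop. The frequencies and functions are hypothetical stand-ins; a real cognitive radio would involve actual spectrum-sensing hardware, regulatory databases, and security measures.

```python
import random

CANDIDATE_FREQS_MHZ = [462.5625, 467.5625, 472.0]  # hypothetical channels

def channel_busy(freq_mhz: float) -> bool:
    """Stand-in for spectrum sensing; a real radio would measure energy here."""
    return random.random() < 0.5

def pick_channel() -> float | None:
    """Scan candidates and return the first idle frequency, if any."""
    for freq in CANDIDATE_FREQS_MHZ:
        if not channel_busy(freq):
            return freq
    return None  # defer transmission rather than interfere

freq = pick_channel()
if freq is not None:
    power_dbm = 20  # back off below an assumed interference threshold
    print(f"transmit on {freq} MHz at {power_dbm} dBm")
else:
    print("all channels busy; waiting")
```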

The upside of such technology, which is already in use in limited contexts today, is large and clear: suddenly more devices can work at the same time and more efficiently. The downside is equally large. Communications companies pay enormous sums to lease spectrum to provide services to consumers. In the case of emergency frequencies used by first responders, interference could be literally life-threatening. Cognitive radios can malfunction, and they can be hacked—for instance, by convincing a device it is in the mountains of Colorado instead of the city of San Francisco. Thus, as the FCC has recognized, cognitive radios must have adequate security, and there must be a mechanism by which to correct errors, i.e., cases where the radio uses a frequency or power it should not.

The FCC has been looking at cognitive radio for ten years; comments on how best to implement this technology remain open today. Current proposals include, for instance, a dual structure whereby meta-algorithms identify and punish “bad” cognitive radios. Technologists at UC Berkeley, Microsoft, and elsewhere claim these solutions are feasible. But how is the FCC to evaluate their potential, especially where incumbent providers or institutions such as the Navy tell the FCC that the risks of interference remain too high? It would be useful, arguably at least, for a single agency with deep expertise in emergent software phenomena to help the SEC and FCC evaluate what to do about these, and many other, artificial intelligence problems.

Surgical robots

I have outlined a few instances where technology policy lags behind or delays robotics and AI. We might be tempted to draw the lesson that agencies move too slowly in general. And yet problems with robotics can just as easily come from an agency moving too quickly. Consider, for instance, recent lawsuits involving surgical robotics. Some think the FDA moved too quickly to approve robotic surgery by analogizing it to laparoscopic surgery. The issues that arise, at least judging by lawsuits for medical malpractice, seem to stem from the differences between robotic and laparoscopic surgery. For instance, and setting aside allegations that improperly insulated wires burned some patients, robots can glitch. Glitches have not led to harm directly, but they do require the surgical team to transition from robotic to manual surgery and hence keep the patient under anesthetic longer.

FRC: A Thought Experiment

I have argued that present efforts to address robotics have been piecemeal in approach and, too often, unfortunate in outcome. Much of the problem turns on the lack of familiarity with robotics and the sorts of issues the mainstreaming of this technology occasions. The FAA does not know what to say to Amazon about delivering goods by drone, and so it says “no.” Even where one government body learns a lesson, the knowledge does not necessarily make its way to any other. Here I conduct a thought experiment: what if the United States were to address this problem, as it has addressed similar problems in the not-so-distant past, by creating a standalone entity—an agency with the purpose of fostering, learning about, and advising upon robotics and its impact on society?

Agencies take many forms; a Federal Robotics Commission would have to be configured in a manner appropriate to its task. Outside of the factory and military contexts, robotics is a fledgling industry. It should be approached as such. There are dangers, but nothing to suggest we need a series of specific rules about robots, let alone a default rule against their use in particular contexts (sometimes called the “precautionary principle”) as some commentators demand. Rather, we need a deep appreciation of the technology, of the relevant incentives of those who create and consume it, and of the unfolding and inevitable threat to privacy, labor, physical safety, and so on which robotics actually presents.

At least initially, then, a Federal Robotics Commission would be small and consist of a handful of engineers and others with backgrounds in mechanical and electrical engineering, computer science, and human-computer interaction, right alongside experts in law and policy. It would hardly be the first interdisciplinary agency: the FTC houses economists and technologists in addition to its many lawyers, for example. And, taking a page from NASA or the Defense Advanced Research Projects Agency (DARPA), the FRC should place conscious emphasis on getting the “best and brightest.” Such an emphasis, coupled with a decent salary and the undeniable cachet of robotics in tech-savvy circles, could help populate the FRC with top talent otherwise likely to remain in industry or academia.

What would an FRC do then? Here are some tasks for a properly envisioned Commission:

  • Channel federal dollars into basic robotics research in an attempt to solve the still considerable technical challenges this technology presents.
  • Attract highly skilled technologists who might otherwise be reluctant to work for the government.
  • Advise other federal agencies on matters having to do with robotics, including the DOT on driverless cars, the SEC on high speed trading, the FDA on robotic medical devices, the FCC on cognitive radios, the FAA on drones and, eventually, the Federal Trade Commission on increasingly sophisticated consumer products.
  • Advise federal, state, and local lawmakers on robotics law and policy.
  • Convene domestic and international stakeholders from industry, government, academia, and NGOs to discuss the impact of robotics and artificial intelligence on society.
  • File amicus curiae (friend of the court) briefs, as many federal agencies do today, in matters involving highly complex interactions between software and hardware.

There is much room for disagreement over this list. One could question the organizational structure. The thought experiment is just that: an attempt to envision how the United States can be most competitive with respect to an emerging transformative technology. I address some deeper forms of skepticism in the next section.

Objections

Today many people appreciate that robotics is a serious and meaningful technology. But suggesting that we need an entirely new agency to deal with it may strike even the robot enthusiast as overmuch. This section addresses some of the pushback—perhaps correct, and in any event healthy—that a radical thought experiment like an FRC might occasion.

Do we really need another agency?

When I have outlined these ideas in public, reactions have varied, but criticism has tended to take the following form: We need another federal agency? Really?

Agencies have their problems, of course. They can be inefficient and are subject to capture by those they regulate or other special interests. I have in this very piece criticized three agencies for their respective approaches or actions toward robotics. This question—whether agencies represent a good way to govern and, if so, what is the best design—is a worthwhile one. It is the subject of a robust and long-standing debate in administrative law that cannot be reproduced here. But it has little to do with robotics. As discussed, we have agencies devoted to technologies already and it would be odd and anomalous to think we are done creating them.

A more specific version of the “really?” question asks whether we really want to “regulate” robotics at this early stage. I am very sympathetic to this point and have myself argued that we ought to remove roadblocks to innovation in robotics. I went so far as to argue that manufacturers of open robotics systems ought to be immunized for what users do with these platforms, product liability being a kind of “regulation” of business activities that emanates from the courts.

Let me clarify again that I am using the term “regulate” rather broadly. And further, that there is nothing intrinsically anathematic between regulation and innovation. Copyright is regulation meant to promote creativity (arguably). Net neutrality is regulation meant to remove barriers to competition. Google—a poster child for innovation in business and, last I checked, a for-profit company—actively lobbied Nevada and other states to regulate driverless cars. One assumes they did this to avoid uncertainty around the legality of their technology and with the hopes that other legislatures would instruct their state’s Department of Motor Vehicles to pass rules as well.

Note also that agencies vary tremendously in their structure and duties; the FTC, FDA, and SEC are enforcement agencies, for instance. Contrast them to, say, the Department of Commerce, DARPA, the Office of Management and Budget, or NASA itself. My claim is not that we need an enforcement agency for robotics—indeed, I believe it would be highly undesirable to subject robotics and artificial intelligence to a general enforcement regime at such an early place in its life cycle. My claim is that we need a repository of expertise so that other agencies, as well as lawmakers and courts, do not make avoidable errors in their characterization and regulation of this technology.

A possible further response is that we have bodies capable of providing input already—for instance, the National Institute of Standards and Technology, the White House Office of Science and Technology Policy, or the Congressional Research Service. I would concede that these and other institutions could serve as repositories for knowledge about complex software and hardware. OSTP had a very serious roboticist—Vijay Kumar of the University of Pennsylvania—serve as its “assistant director of robotics and cyber physical systems” for a time, and the Office’s mandate overlaps here and there with the possible FRC tasks I outline in the previous section.

Yet the diffusion of expertise across multiple existing agencies would make less and less sense over time. If robotics takes on the importance of, for instance, cars, weather prediction, broadcast communications, or rail travel, we would want in place the kernel of an agency that could eventually coordinate and regulate the technology in earnest. Moreover, even in the short run, there would be oddness and discomfort in an institution that generally advises on a particular issue (e.g., standards), or to a particular constituency (e.g., Congress), suddenly acting as a general convener and broad advisor to all manner of institutions that have to grapple with robotics. Although I could see how existing institutions could manage in theory, in practice I believe we would be better off starting from scratch with a new mandate.

How are robots different from computers?

I will address one last critique briefly, inspired by the response science fiction writer Cory Doctorow had to my recent law review article on robotics. Writing for The Guardian, Doctorow expresses skepticism that there is any meaningful distinction at law or otherwise between robots and computers. As such, Doctorow does not see how the law could “regulate” robotics specifically, as opposed to computers and the networks that connect them. “For the life of me,” writes Doctorow, “I can’t figure out a legal principle that would apply to the robot that wouldn’t be useful for the computer (and vice versa).”

In my view, the difference between a computer and a robot has largely to do with the latter’s embodiment. Robots do not just sense, process, and relay data. Robots are organized to act upon the world physically, or at least directly. This turns out to have strong repercussions at law, and to pose unique challenges to law and to legal institutions that computers and the Internet did not.

Consider, for example, how tort law handles glitches in personal computers or how law in general handles unlawful posts on a social network. If Word freezes and eats your important white paper, you cannot successfully sue Microsoft or Dell. This is due to a very specific set of legal principles such as the economic loss doctrine. But the economic loss doctrine, by its terms, is not available where a glitch causes physical harm. Similarly, courts have limited the liability of insurers for computer or software glitches on the basis that information is not a “tangible” good covered by a general policy. A federal law, meanwhile, immunizes platforms such as Facebook for much of what users do there. It does so rather specifically by disallowing any legal actor from characterizing Facebook as the “publisher” of “content” that a user posts. This includes apps Facebook might run or sell. The same result would not likely obtain were someone to be hurt by a drone app purchased from a robot app store.

In any event, Doctorow’s thesis does not necessarily cut against the idea of a Federal Robotics Commission. We might say that robots are just computers, but that computers today are more powerful and complex, and increasingly organized to act upon the world in a physical or direct manner without even the prospect of human intervention. Few in government, especially on the civilian side, understand this technology well. Accordingly, the latent need for a neutral government body with deep expertise on how to deal with cyber physical systems has become quite acute.

Conclusion

I was recently at a robotics conference at the University of California, Berkeley, and a journalist, who is a long-time student of robotics and one of its most assiduous chroniclers, made a remark to a colleague that struck me. He said that in recent years robotics has felt like a tidal wave, looming somewhere in the distance. But in recent months, that wave seems to have touched down upon land; keeping up with developments in robotics today is a frantic exercise in treading water.

Our government has a responsibility to be prepared for the changes robotics is already beginning to bring. Being prepared means, at this stage, understanding the technology and the unique experiences robots allow. It means helping other institutions make sense of the problems the technology already creates. And it means removing hurdles to the development of robotics which, if not addressed, could seriously compromise America’s relevance in robotics and the development of its technology sector.

There are a number of ways our government could go about achieving these goals. I have explored one: the establishment of a federal robotics agency. We have in the past created formal institutions around specific technologies, for the obvious reason that understanding a technology or set of technologies requires a dedicated staff, and because it can be more efficient to coordinate oversight of a technology centrally. I do not argue we should go so far as to put into place, today, a full-fledged enforcement body capable of regulating anything that touches robotics. That would be deeply inadvisable. Rather, I believe on balance that we should consider creating an institutional repository of expertise around robotics as well as a formal mechanism to promote robotics and artificial intelligence as a research agenda and industry.

The time to think through the best legal and policy infrastructure for robotics is right now. Early decisions in the lifecycle of the Internet, such as the decision to apply the First Amendment there and to immunize platforms for what users do, allowed that technology to thrive. We were also able to be “hands off” about the Internet to a degree that will not be possible with robotics and systems like it that are organized not merely to relay information but to affect the world physically or directly. Decisions we make today about robotics and artificial intelligence will affect the trajectory of this technology and of our society. Please think of this piece, if you do, as a call to be thoughtful, knowledgeable, and deliberate in our dealings with this emerging technology.

This piece originally appeared on Brookings, as part of a Brookings series called The Robots Are Coming: The Project on Civilian Robotics.

Ryan Calo is an assistant professor at the University of Washington School of Law and a former research director at The Center for Internet and Society. A nationally recognized expert in law and emerging technology, Ryan’s work has appeared in the New York Times, the Wall Street Journal, NPR, Wired Magazine, and other news outlets.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.
