Excerpts from literature on robotic/self-driving/autonomous cars with a focus on legal issues, lengthy, often tedious; some more SI work. See also Notes on Psychopathy.
Having read through all this material, my general feeling is: the near-term future (1 decade) for autonomous cars is not that great. What's been accomplished, legally speaking, is great but more limited than most people appreciate. And there are many serious problems with penetrating the elaborate ingrown rent-seeking tangle of law & politics & insurance. I expect the mid-future (+2 decades) to look more like autonomous cars completely taking over many odd niches and applications where the user can afford to ignore those issues (eg. on private land or in warehouses or factories), with highways and regular roads continuing to see many human drivers with some level of automated assistance. However, none of these problems seem fatal and all of them seem amenable to gradual accommodation and pressure, so I am now more confident that in the long run we will see autonomous cars become the norm and human driving ever more niche (and possibly lower-class). On none of these am I sure how to formulate a precise prediction, though, since I expect lots of boundary-crossing and tertium quids. We'll see.
0.1 Self-driving cars
The first success inaugurating the modern era can be considered the 2005 DARPA Grand Challenge, in which multiple vehicles completed the course. The first legislation of any kind addressing autonomous cars was Nevada’s 2011 approval. Five states have passed legislation dealing with autonomous cars.
However, these laws are highly preliminary and all the analyses I can find agree that they punt on the real legal issues of liability; they permit relatively little.
0.1.1 Lobbying, Liability, and Insurance
(Warning: legal analysis quoted at length in some excerpts.)
“Toward Robotic Cars”, Thrun 2010 (pre-Google):
Junior’s behavior is governed by a finite state machine, which provides for the possibility that common traffic rules may leave a robot without a legal option as to how to proceed. When that happens, the robot will eventually invoke its general-purpose path planner to find a solution, regardless of traffic rules. [Raising serious issues of liability related to potentially making people worse-off]
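Thrun’s description amounts to a rule-bound state machine with a general-purpose planner as a rules-be-damned fallback; a minimal sketch of that control pattern (all names and states here are hypothetical, not Junior’s actual code) might look like:

```python
# Sketch of the fallback pattern Thrun describes: a rule-bound state
# machine that, when the traffic code leaves no legal move, defers to a
# general-purpose path planner. All names and states are illustrative.

def legal_moves(state):
    """Moves permitted by the traffic code in this state (stub)."""
    rules = {
        "four_way_stop": ["wait_for_turn", "proceed_in_turn"],
        "blocked_lane": [],  # e.g. stalled car ahead, double-yellow line
    }
    return rules.get(state, [])

def plan_around_obstacle(state):
    """General-purpose planner, invoked regardless of traffic rules (stub)."""
    return f"planner_route({state})"

def next_action(state):
    moves = legal_moves(state)
    if moves:
        return moves[0]                 # normal rule-following behavior
    return plan_around_obstacle(state)  # no legal option: fall back
```

The liability worry in the bracketed note is precisely the fallback branch: the planner’s output is, by construction, not sanctioned by the traffic code.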
“Google Cars Drive Themselves, in Traffic” (PDF), NYT 2010:
But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would? And in the event of an accident, who would be liable - the person behind the wheel or the maker of the software?
“The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.” The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.
“Calif. Greenlights Self-Driving Cars, But Legal Kinks Linger”:
For instance, if a self-driving car runs a red light and gets caught, who gets the ticket? “I don’t know - whoever owns the car, I would think. But we will work that out,” Gov. Brown said at the signing event for California’s bill to legalize and regulate the robotic cars. “That will be the easiest thing to work out.” Google co-founder Sergey Brin, who was also at the ceremony, jokingly said “self-driving cars don’t run red lights.” That may be true, but Bryant Walker Smith, who teaches a class at Stanford Law School this fall on the law supporting self-driving cars, says eventually one of these vehicles will get into an accident. When it does, he says, it’s not clear who will pay.
…Or is it the company that wrote the software? Or the automaker that built the car? When it came to assigning responsibility, California decided that a self-driving car would always have a human operator. Even if that operator wasn’t actually in the car, that person would be legally responsible. It sounds straightforward, but it’s not. Let’s say the operator of a self-driving car is inebriated; he or she is still legally the operator, but the car is driving itself. “That was a decision that department made - that the operator would be subject to the laws, including laws against driving while intoxicated, even if the operator wasn’t there,” Walker Smith says…Still, issues surrounding liability and who is ultimately responsible when robots take the wheel are likely to remain contentious. Already trial lawyers, insurers, automakers and software engineers are queuing up to lobby rule-makers in California’s capital.
“Google’s Driverless Car Draws Political Power: Internet Giant Hones Its Lobbying Skills in State Capitols; Giving Test Drives to Lawmakers”, WSJ, 12 October 2012:
Overall, Google spent nearly $9 million in the first half of 2012 lobbying in Washington for a wide variety of issues, including speaking to U.S. Department of Transportation officials and lawmakers about autonomous vehicle technology, according to federal records, nearing the $9.68 million it spent on lobbying in all of 2011. It is unclear how much Google has spent in total on lobbying state officials; the company doesn’t disclose such data.
…In most states, autonomous vehicles are neither prohibited nor permitted-a key reason why Google’s fleet of autonomous cars secretly drove more than 100,000 miles on the road before the company announced the initiative in fall 2010. Last month, Mr. Brin said he expects self-driving cars to be publicly available within five years.
In January 2011, Mr. Goldwater approached Ms. Dondero Loop and the Nevada assembly transportation committee about proposing a bill to direct the state’s department of motor vehicles to draft regulations around the self-driving vehicles. “We’re not saying, ‘Put this on the road,’” he said he told the lawmakers. “We’re saying, ‘This is legitimate technology,’ and we’re letting the DMV test it and certify it.” Following the Nevada bill’s passage, legislators from other states began showing interest in similar legislation. So Google repeated its original recipe and added an extra ingredient: giving lawmakers the chance to ride in one of its about a dozen self-driving cars…In California, an autonomous-vehicle bill became law last month despite opposition from the Alliance of Automobile Manufacturers, which includes 12 top auto makers such as GM, BMW and Toyota. The group had approved of the Florida bill. Dan Gage, a spokesman for the group, said the California legislation would allow companies and individuals to modify existing vehicles with self-driving technology that could be faulty, and that auto makers wouldn’t be legally protected from resulting lawsuits. “They’re not all Google, and they could convert our vehicles in a manner not intended,” Mr. Gage said. But Google helped push the bill through after spending about $140,000 over the past year to lobby legislators and California agencies, according to public records
As with California’s recently enacted law, Cheh’s [Washington D.C.] bill requires that a licensed driver be present in the driver’s seat of these vehicles. While seemingly inconsequential, this effectively outlaws one of the more promising functions of autonomous vehicle technology: allowing disabled people to enjoy the personal mobility that most people take for granted. Google highlighted this benefit when one of its driverless cars drove a legally blind man to a Taco Bell. Bizarrely, Cheh’s bill also requires that autonomous vehicles operate only on alternative fuels. While the Google Self-Driving Car may manifest itself as an eco-conscious Prius, self-driving vehicle technology has nothing to do with hybrids, plug-in electrics or vehicles fueled with natural gas. The technology does not depend on vehicle make or model, but Cheh is seeking to mandate as much. That could delay the technology’s widespread adoption for no good reason…Another flaw in Cheh’s bill is that it would impose a special tax on drivers of autonomous vehicles. Instead of paying fuel taxes, “Owners of autonomous vehicles shall pay a vehicle-miles travelled (VMT) fee of 1.875 cents per mile.” Administrative details aside, a VMT tax would require drivers to install a recording device to be periodically audited by the government. There may be good reasons to replace fuel taxes with VMT fees, but greatly restricting the use of a potentially revolutionary new technology by singling it out for a new tax system would be a mistake.
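For scale, a back-of-the-envelope comparison of the bill’s 1.875¢/mile VMT fee against the fuel taxes it would replace (every figure other than the VMT rate is an illustrative assumption, not from the bill):

```python
# Rough comparison of the proposed 1.875-cent/mile VMT fee with the
# fuel tax it would replace. The fuel-tax rate, annual mileage, and
# fuel economy below are illustrative assumptions.

VMT_RATE = 0.01875          # dollars per mile (from Cheh's bill)
ANNUAL_MILES = 12_000       # assumed typical annual mileage
FUEL_TAX = 0.235            # assumed combined fuel tax, dollars/gallon
MPG = 50                    # assumed fuel economy (e.g. a Prius)

vmt_fee = VMT_RATE * ANNUAL_MILES              # annual VMT fee
fuel_tax_paid = FUEL_TAX * ANNUAL_MILES / MPG  # annual fuel tax

print(f"VMT fee: ${vmt_fee:.2f}/yr vs fuel tax: ${fuel_tax_paid:.2f}/yr")
```

Under these assumptions the VMT fee runs several times the fuel tax for an efficient vehicle, which is the sense in which the bill singles out autonomous vehicles for a heavier tax system.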
“Driverless cars are on the way. Here’s how not to regulate them.”
“How autonomous vehicle policy in California and Nevada addresses technological and non-technological liabilities”, Pinto 2012:
The State of Nevada has adopted one policy approach to dealing with these technical and policy issues. At the urging of Google, a new Nevada law directs the Nevada Department of Motor Vehicles (NDMV) to issue regulations for the testing and possible licensing of autonomous vehicles and for licensing the owners/drivers of these vehicles. There is also a similar law being proposed in California with details not covered by Nevada AB 511. This paper evaluates the strengths and weaknesses of the Nevada and California approaches
Another problem posed by the non-computer world is that human drivers frequently bend the rules by rolling through stop signs and driving above speed limits. How does a polite and law-abiding robot vehicle act in these situations? To solve this problem, the Google Car can be programmed for different driving personalities, mirroring the current conditions. On one end, it would be cautious, being more likely to yield to another car and strictly following the laws on the road. At the other end of the spectrum, the robocar would be aggressive, where it is more likely to go first at the stop sign. When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to show to the other drivers its intention.
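The “personality” dial described above is essentially one tunable parameter governing how long the car yields before asserting itself; a toy sketch of the four-way-stop behavior (all thresholds hypothetical, not Google’s actual logic):

```python
# Sketch of a cautious-to-aggressive "driving personality" dial: the
# same four-way-stop logic, parameterized by how long the car waits
# before creeping forward to signal intent. All thresholds hypothetical.

def stop_sign_behavior(aggressiveness, seconds_waited, others_not_reciprocating):
    """Decide what to do at a four-way stop.

    aggressiveness: 0.0 (cautious) .. 1.0 (aggressive)
    seconds_waited: time already spent yielding per the road rules
    others_not_reciprocating: True if other drivers ignore the rules
    """
    # An aggressive car tolerates less waiting before asserting itself:
    # 5 s of patience when fully cautious, 1 s when fully aggressive.
    patience = 4.0 * (1.0 - aggressiveness) + 1.0
    if seconds_waited < patience:
        return "yield"
    if others_not_reciprocating:
        return "creep_forward"   # advance a bit to show intention
    return "proceed"
```

The creep-forward case is the interesting one: the car communicates intent through motion, exactly the informal negotiation human drivers perform.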
However, there is a time period between a problem being diagnosed and the car being fixed. In theory, one would disable the vehicle remotely and only start it back up when the problem is fixed. However in reality, this would be extremely disruptive to a person’s life as they would have to tow their vehicle to the nearest mechanic or autonomous vehicle equivalent to solve the issue. Google has not developed the technology to approach this problem, instead relying on the human driver to take control of the vehicle if there is ever a problem in their test vehicles.
[previous Lu quote about human-centric laws] …this can create particularly tricky situations, such as deciding whether the police should have the right to pull over autonomous vehicles, a question yet to be answered. Even the chief counsel of the National Highway Traffic Safety Administration admits that the federal government does not have enough information to determine how to regulate driverless technologies. This will become particularly thorny at the first accident between an autonomous vehicle and a human-driven one, when liability must be assigned.
This question of liability arose during an [unpublished 11 Feb 2012] interview on the future of autonomous vehicles with Roger Noll. Although Professor Noll hasn’t read the current literature on this issue, he voiced concern over what the verdict of the first trial over an accident between an autonomous vehicle and a normal car will be. He believes that the jury will almost certainly side with the human driver regardless of the details of the case; as he eloquently put it in his husky Utah accent, with a laugh, “how are we going to defend the autonomous vehicle; can we ask it to testify for itself?” To answer Roger Noll’s question, Brad Templeton’s blog elaborates on why he believes liability is a largely unimportant question, for two reasons. First, with any new technology, there is no question that any lawsuit over any incident involving the cars will include the vendor as a defendant, so potential vendors must plan for liability. Second, Templeton makes an economic argument that the cost of accidents is ultimately borne by car buyers through higher insurance premiums: if accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, paid for through the vehicle maker’s insurance or self-insurance. Instead, Templeton believes the big question is whether the liability assigned in any lawsuit will be significantly greater than in ordinary collisions, because of punitive damages. In theory, robocars should drive costs down through reductions in collisions, which means savings for the car buyer and for society, and thus cheaper auto insurance. However, if the cost per collision is much higher even as the number of collisions drops, it is uncertain whether autonomous vehicles will save money for both parties.
California’s Proposition 103 dictates that any insurance policy’s price must be based on weighted factors, and the top three weighted factors must be (1) driving record, (2) number of miles driven, and (3) number of years of experience. Other factors, like the type of car someone has (e.g. an autonomous vehicle), must be weighted lower. Consequently, this law makes it very hard to get cheap insurance for a robocar.
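To see why this factor-ordering rule blocks cheap robocar insurance, consider a toy weighted-rating model (all weights and figures invented for illustration; actual rate-making is set by regulation):

```python
# Toy model of Proposition 103's mandatory factor ordering: driving
# record, miles driven, and years of experience must carry the top
# three weights, so "car is autonomous" can only get a smaller weight.
# All numbers are invented for illustration.

def premium(base, record, miles, experience, autonomous):
    """Risk scores in [0, 1], 0 = best risk; returns annual premium."""
    # Weights ordered so the three mandatory factors dominate any
    # optional one, per the Prop 103 constraint.
    weights = {"record": 0.40, "miles": 0.30, "experience": 0.20,
               "autonomous": 0.10}   # capped below the mandatory three
    score = (weights["record"] * record
             + weights["miles"] * miles
             + weights["experience"] * experience
             + weights["autonomous"] * (0.0 if autonomous else 1.0))
    return base * (1.0 + score)

# Even a robocar that zeroes out its own risk factor keeps most of the
# premium, which is driven by the mandatory human-centric factors.
human = premium(1000, record=0.5, miles=0.5, experience=0.5, autonomous=False)
robo = premium(1000, record=0.5, miles=0.5, experience=0.5, autonomous=True)
```

In this sketch the robocar discount is capped at the small weight the autonomy factor is allowed to carry, no matter how much safer the car actually is.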
Nevada Policy: AB 511, Section 8. This short piece of legislation accomplishes the goal of setting good standards for the DMV to follow. By setting general standards (part a), insurance requirements (part b), and safety standards (part c), it sets a precedent for these areas without being too detailed, leaving the details to be decided by the DMV instead of the politicians. …part b only discusses insurance briefly, saying the state must “Set forth requirements for the insurance that is required to test or operate an autonomous vehicle on a highway within this State.” The definitions set in the second part of Section 8 are not specific enough. Following the open-ended standards set in the earlier part of Section 8 is good for continuity, but does not technically address the problem. According to Ryan Calo, Director of Privacy and Robotics for Stanford Law School’s Center for Internet and Society (CIS), the bill’s definition of “autonomous vehicles” is unclear and circular. The legislation treats autonomous driving as binary, when in reality it falls along a spectrum.
Overall, AB 511 did not address the technological liabilities and barely mentioned the non-technological liabilities that must be overcome for the future success of autonomous vehicles. Since it was the first legislation ever to approach the issue of autonomous vehicles, it is understandable that the policymakers did not want to go into specifics, instead relying on future regulation to determine the details.
California Policy: SB 1298…would require the adoption of safety standards and performance requirements to ensure the safe operation and testing of “autonomous vehicles” on California public roads. The bill would allow autonomous vehicles to be operated or tested on the public roads on the condition they meet safety standards and performance requirements of the bill. SB 1298’s 66 lines of text is also considerably longer than AB 511’s 12 lines of relevant text (the entirety of AB 511 is much longer but consists of irrelevant information for the purposes of autonomous cars).
SB 1298 clearly intends to accommodate company-developed vehicles, saying in Section 2, Part B that “autonomous vehicles have been operated safely on public roads in the state in recent years by companies developing and testing this technology”, and noting that these companies have set the standard for the safety requirements that will be necessary for future testing by others. This part of the legislation implicitly supports Google’s autonomous vehicle program, because Google has the most extensively tested fleet of vehicles of all the companies, and nearly all of that testing has been done in California. This bill improves on AB 511 by putting more control in Google’s hands to focus on developing the technology, a signal by policymakers of a climate favorable to Google’s innovation within the constraints of keeping society safe.
To avoid setting a dangerous precedent for liability in accidents, policymakers can consider protecting the car companies from frivolous and malicious lawsuits. Without such legislation, future plaintiffs will be free to sue Google and seek to place full liability on it. There is also a potential moral hazard in putting the blame on the company that makes the technology rather than the company that manufactures the vehicle: since we are assuming that autonomous vehicle technology will all come from a single source, Google, any accident that occurs will pin the blame primarily on Google, the common denominator, rather than on the car manufacturer…Policy that keeps the cost per accident close to today’s cost will save money for both the insurer and the customer. This could mean capping awards to plaintiffs or penalties to the company, to limit shocks to the industry. Overall, a policymaker could phase in limits on the liability placed on the vendor as certain technology or scale milestones are met without accidents.
SB 1298 manages to cover some of the shortcomings of AB 511, such as how to improve upon the definition of an autonomous vehicle, as well as looking more towards the future by giving Google more responsibility and alleviating some of the non-technical liability by considering their product “under development”. However, both pieces of legislation fail to address the specific technical liabilities such as bugs in the code base or computer attacks, and non-technical liabilities such as insurance or accident liability.
“Can I See Your License, Registration and C.P.U.?”, Tyler Cowen; see also his “What do the laws against driverless cars look like?”:
The driverless car is illegal in all 50 states. Google, which has been at the forefront of this particular technology, is asking the Nevada legislature to relax restrictions on the cars so it can test some of them on roads there. Unfortunately, the very necessity for this lobbying is a sign of our ambivalence toward change. Ideally, politicians should be calling for accelerated safety trials and promising to pass liability caps if the cars meet acceptable standards, whether that be sooner or later. Yet no major public figure has taken up this cause.
Enabling the development of driverless cars will require squadrons of lawyers because a variety of state, local and federal laws presume that a human being is operating the automobiles on our roads. No state has anything close to a functioning system to inspect whether the computers in driverless cars are in good working order, much as we routinely test emissions and brake lights. Ordinary laws change only if legislators make those revisions a priority. Yet the mundane political issues of the day often appear quite pressing, not to mention politically safer than enabling a new product that is likely to engender controversy.
Politics, of course, is often geared toward preserving the status quo, which is highly visible, familiar in its risks, and lucrative for companies already making a profit from it. Some parts of government do foster innovation, such as Darpa, the Defense Advanced Research Projects Agency, which is part of the Defense Department. Darpa helped create the Internet and is supporting the development of the driverless car. It operates largely outside the public eye; the real problems come when its innovations start to enter everyday life and meet political resistance and disturbing press reports.
…In the meantime, transportation is one area where progress has been slow for decades. We’re still flying 747s, a plane designed in the 1960s. Many rail and bus networks have contracted. And traffic congestion is worse than ever. As I argued in a previous column, this is probably part of a broader slowdown of technological advances.
But it’s clear that in the early part of the 20th century, the original advent of the motor car was not impeded by anything like the current mélange of regulations, laws and lawsuits. Potentially major innovations need a path forward through the current thicket of restrictions. That the debate on this issue is so quiet shows the urgency of doing something now.
Ryan Calo of the CIS argues essentially that no specific law bans autonomous cars and the threat of the human-centric laws & regulations is overblown. (See the later Russian incident.)
“SCU conference on legal issues of robocars”, Brad Templeton:
Liability: After a technology introduction where Sven Bieker of Stanford outlined the challenges he saw which put fully autonomous robocars 2 decades away, the first session was on civil liability. The short message was that based on a number of related cases from the past, it will be hard for manufacturers to avoid liability for any safety problems with their robocars, even when the systems were built to provide the highest statistical safety result if it traded off one type of safety for another. In general when robocars come up as a subject of discussion in web threads, I frequently see “Who will be liable in a crash” as the first question. I think it’s a largely unimportant question for two reasons. First of all, when the technology is new, there is no question that any lawsuit over any incident involving the cars will include the vendor as the defendant, in many cases with justifiable reasons, but even if there is no easily seen reason why. So potential vendors can’t expect to not plan for liability. But most of all, the reality is that in the end, the cost of accidents is borne by car buyers. Normally, they do it by buying insurance. But if the accidents are deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker’s insurance or self-insurance. It’s just a question of figuring out how the vehicle buyer will pay, and the market should be capable of that (though see below.) No, the big question in my mind is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions where human error is at fault, because of punitive damages…Unfortunately, some liability history points to the latter scenario, though it is possible for statutes to modify this.
Insurance: …Because Prop 103 [specifying insurance by weighted factors, see previous] is a ballot proposition, it can’t easily be superseded by the legislature. It takes a 2/3 vote and a court agreeing the change matches the intent of the original ballot proposition. One would hope the courts would agree that cheaper insurance to encourage safer cars would match the voter intent, but this is a challenge.
Local and criminal laws: The session on criminal laws centered more on the traffic code (which isn’t really criminal law) and the fact that it varies a lot from state to state. Indeed, any robocar that wants to operate in multiple states will have to deal with this, though fortunately there is a federal standard on traffic controls (signs and lights) to rely on. Some global standards are a concern - the Geneva convention on traffic laws requires that every car have a driver who is in control of the vehicle. However, I think that governments will be able to quickly see - if they want to - that these are laws in need of updating. Some precedents in drunk-driving law could create problems - people have been convicted of DUI for being in their car, drunk, with the keys in their pocket, because they had clear intent to drive drunk. However, one would hope the possession of a robocar (of the sort that does not need human manual driving) would express an entirely different intent to the law.
“Definition of necessary vehicle and infrastructure systems for Automated Driving”, European Commission report 29 June 2011:
Yet another paramount aspect tightly related to automated driving at present and in the near future, and certainly related to autonomous driving in the long run, is the interpretation of the Vienna Convention. It will be shown in the report how this European legislation is commonly interpreted, how it creates the framework necessary to deploy on a large scale automated and cooperative driving systems, and what legal limitations are foreseen in making the new step toward autonomous driving. The report analyses in the same context other conventions and legislative acts, searches for gaps in the current legislation and makes an interesting link with the aviation industry where several lessons can be learnt from.
It seems appropriate to end this summary with a few remarks not directly related to the subject of this report, but worthwhile when thinking about automated driving, cooperative driving, and autonomous driving. Progress in human history has systematically taken the path of least resistance, often bypassing governmental rules, business models, and the obvious thinking. At the end of the 1990s nobody anticipated the prominent role the smartphone would have in 10 years, but scientists were busy planning journeys to Mars within the same timeframe. The latter has not happened and will probably not happen soon… One lesson humanity has learned is that historical changes following the path of minimum resistance trigger, at a later stage, fundamental changes in society. “A car is a car,” as David Strickland, administrator of the National Highway Traffic Safety Administration (NHTSA) in the U.S., said in his speech at the Telematics Update conference in Detroit, June 2011, but it may soon drive its progress along a historical path of minimum resistance.
An automated driving system needs to meet the Vienna Convention (see Section 3, aspect 2). The private sector, especially those who are ultimately responsible for the performance of the vehicle, should be involved in the discussion.
The Vienna Convention on Road Traffic is an international treaty designed to facilitate international road traffic and to increase road safety by standardizing uniform traffic rules among the contracting parties. The convention was agreed upon at the United Nations Economic and Social Council’s Conference on Road Traffic (October 7 - November 8, 1968) and came into force on May 21, 1977. Not all EU countries have ratified the treaty, see Figure 13 (e.g. Ireland, Spain and the UK did not). It should be noted that in 1968, animals were still used for traction of vehicles and the concept of autonomous driving was considered science fiction. This matters when interpreting the text of the treaty: does one interpret strictly by the letter of the text, or by what was meant at the time?
The common opinion of the expert panel is that the Vienna Convention will have only a limited effect on the successful deployment of automated driving systems due to several reasons:
- OEMs already deal with the situation that some of today’s Advanced Driver Assistance Systems touch on the Vienna Convention. For example, they provide an on/off switch for ADAS or allow the driver to override the functions. They develop their ADAS in line with the RESPONSE Code of Practice (2009), following the principle that the driver is in control and remains responsible. In addition, the OEMs have a careful marketing strategy: they do not exaggerate, and do not claim that an ADAS works in all driving situations or that there is a solution to “all” safety problems.
- Automation is not black and white, automated or not automated, but much more complex, involving many design dimensions. A helpful model of automation is to consider different levels of assistance and automation, organized e.g. on a one-dimensional scale. Several levels could be within the Vienna Convention, while extreme levels are outside of today’s version of the Vienna Convention. For example, one partitioning could be to have levels of automation Manual, Assisted, Semi-Automated, Highly Automated, and Fully Automated driving, see Figure 14. In highly automated driving, the automation has the technical capability to drive almost autonomously, but the driver is still in the loop and able to take over control when necessary. Fully automated driving like PRT, where the driver is not required to monitor the automation and does not have the ability to take over control, seems not to be covered by the Vienna Convention.
Criteria for deciding if the automation is still in line with the Vienna Convention could be:
- the involvement of the driver in the driving task (vehicle control),
- the involvement of the driver in monitoring the automation and the traffic environment,
- the ability to take over control or to override the automation.
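The report’s automation spectrum (Figure 14) and the three criteria above can be condensed into a small lookup. This is a sketch paraphrasing the report’s discussion, not a legal analysis:

```python
# Sketch of the report's automation spectrum and its three Vienna
# Convention criteria: driver involvement in the driving task, driver
# monitoring of the automation, and ability to take over/override.
# The per-level flags paraphrase the report's Figure 14 discussion
# and are illustrative, not a legal determination.

LEVELS = {
    #  level            (drives, monitors, can_override)
    "manual":           (True,  True,  True),
    "assisted":         (True,  True,  True),
    "semi_automated":   (False, True,  True),
    "highly_automated": (False, True,  True),   # driver still in the loop
    "fully_automated":  (False, False, False),  # e.g. PRT: no driver role
}

def within_vienna_convention(level):
    """A level seems covered as long as the driver retains some role:
    driving, monitoring, or the ability to override."""
    drives, monitors, can_override = LEVELS[level]
    return drives or monitors or can_override
```

On this reading only the fully automated end of the spectrum falls outside the Convention, matching the report’s conclusion that PRT-style operation is the problematic case.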
- The Vienna Convention already contains openings, or is variable, or can be changed.
It contains a certain variability regarding the autonomy in the means of transportation, e.g. “to control the vehicle or guide the animals”. It is obvious that some of the current technological developments were not foreseen by the authors of the Vienna Convention. Issues like platooning are not addressed. The Vienna Convention already contains in Annex 5 (chapter 4, exemptions) an opening to be investigated with appropriate legal expertise:
“For domestic purposes, Contracting Parties may grant exemptions from the provisions of this Annex in respect of: (c) Vehicles used for experiments whose purpose is to keep up with technical progress and improve road safety; (d) Vehicles of a special form or type, or which are used for particular purposes under special conditions”. - In addition, the Vienna Convention can be changed. The last change was made in 2006. A new paragraph (paragraph 6) was added to Article 8 stating that the driver should minimize any activity other than driving.
…different understandings of the term “to control”, with no clear consensus: (1) control in the sense of influencing, e.g. the driver controls the vehicle movements, can override the automation, and/or can switch the automation off; (2) control in the sense of monitoring, e.g. the driver monitors the actions of the automation. Both interpretations allow the use of some form of automation in a vehicle, as can be seen in today’s cars where e.g. ACC or emergency brake assistance systems are available.
The first interpretation allows automation that can be overridden by the driver, or that reacts in emergency situations only when the driver can no longer cope with the situation. Forms of automation that cannot be overridden seem not to be in line with the first interpretation [45, p. 818]. The second interpretation is more flexible, and would also allow forms of automation that cannot be overridden to be within the Vienna Convention, as long as the driver monitors the automation. …In the literature, some other assistance and automation functions have been appraised by juridical experts. For example, one appraisal postulates that automatic emergency braking systems are in line with the Vienna Convention as long as they react only when a crash is unavoidable (collision mitigation); otherwise a conflict between the driver’s intention (here, steering) and the reaction of the automation (here, braking) cannot be excluded. Albrecht concludes that an Intelligent Speed Adaptation (ISA) system which cannot be overridden by the driver is not in line with the Vienna Convention, because it is not consistent with Articles 8 and 13.
…As soon as data from the vehicle is used for V2X-communication or is stored in the vehicle itself, data protection and privacy issues become relevant. Directives and documents that need to be checked include:
- Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data;
- Directive 2010/40/EU on the framework for the deployment of Intelligent Transport Systems in the field of road transport and for interfaces with other modes of transport;
- WP 29 Working document on data protection and privacy implications in the eCall initiative and the European Data Protection Supervisor (EDPS) opinion on ITS Action Plan and Directive.
The bottleneck is that, at the current stage of development, the risk-related costs and benefits of viable deployment paths are unknown, and the deployment paths themselves are wide open because the possible deployment scenarios have not been assessed and debated in a political environment. There is currently no consensus amongst stakeholders on which of the proposed deployment scenarios will eventually prevail…Any change in EU legislation would shift the position of the players and increase their risk, and uncertainty about which direction such a change would take adds to that risk. This keeps players from taking an outspoken position on the issue. If an update of existing legislation is considered, it should be European legislation, not national legislation; better still would be world-wide harmonized legislation, if it is decided to take that path.
A useful case study for understanding the issues associated with automated driving can be found in SAFESPOT, which can be viewed as a parallel to automated driving functions (for more details, see Appendix I, related to aspect 3). SAFESPOT provided an in-depth analysis of the legal aspects of a service named ‘Speed Warning’, in two configurations, V2I and V2V, performed against two fundamentally different law schemes, namely Dutch and English law. This analysis concluded that the concept of co-operative systems raises questions and might complicate legal disputes, for several reasons:
- There are more parties involved, all with their own responsibilities for the proper functioning of elements of a cooperative system.
- Growing technical interdependencies between vehicles, and between vehicles and the infrastructure, may also lead to system failure, including scenarios that may be characterised as an unlucky combination of events (“a freak accident”) or as a failure for which the exact cause simply cannot be traced back (because of the technical complexity).
- Risks that cannot be influenced by the people who suffer the consequences tend to be judged less acceptable by society and, likewise, from a legal point of view.
- The in-depth analysis of SAFESPOT concluded that (potential) participants such as system producers and road managers may well be exposed to liability risks. Even if the driver of the probe vehicle could not successfully claim a defense (towards other road users), based on a failure of a system, system providers and road managers may still remain (partially) responsible through the mechanism of subrogation and right of recourse.
- Current law states that the driver must be in control of his vehicle at all times. In general, EU drivers are prohibited from exhibiting dangerous behaviour while driving. The police have prosecuted drivers in the UK for drinking and/or eating, i.e. having only one hand on the steering wheel. The use of a mobile phone while driving is prohibited in many European countries; only phones equipped for hands-free operation are permitted. Liability still rests firmly with the driver for the safe operation of the vehicle.
New legislation may be required for automated driving. It is highly unlikely that any OEM or supplier will risk introducing an automatic driving vehicle (where responsibility for safe driving is removed from the driver) without there being a framework of new legislation which clearly sets out where their responsibility and liability begins and ends. In some ways it could be seen as similar to warranty liability, the OEM warrants certain quality and performance levels, backed by reciprocal agreements within the supply chain. Civil (and possibly criminal) liability in the case of accidents involving automated driving vehicles is a major issue that can truly delay the introduction of these technologies…Since there are no statistical records of the effects of automated driving systems, the entrepreneurship of insurers should compensate for the issue of unknown risks…The following factors are regarded as hindering an optimal role to be played by the insurance industry in promoting new safety systems through their insurance policies:
- Premium-setting is based on statistical principles, resulting in a time-lag problem;
- Competition/sensitive relationships with clients;
- Investment costs (e.g. aftermarket installations);
- Administrative costs;
- Market regulation.
No precedent-setting liability lawsuits involving automated systems have occurred to date; the 2010 Toyota brake-by-wire malfunctions did not end in a lawsuit. A system like parking assist is not technically redundant: what would happen if the driver claimed he/she could not override the brakes? For (premium) insurance a critical mass is required, so initially all stakeholders, including governments, should potentially play a role.
“Automotive Autonomy: Self-driving cars are inching closer to the assembly line, thanks to promising new projects from Google and the European Union”, Wright 2011:
The Google project has made important advances over its predecessor, consolidating down to one laser rangefinder from five and incorporating data from a broader range of sources to help the car make more informed decisions about how to respond to its external environment. “The threshold for error is minuscule,” says Thrun, who points out that regulators will likely set a much higher bar for safety with a self-driving car than for one driven by notoriously error-prone humans.
“The future of driving, Part III: hack my ride”, Lee 2008:
Of course, one reason that private investors might not want to invest in automotive technologies is the risk of excessive liability in the case of crashes. The tort system serves a valuable function by giving manufacturers a strong incentive to make safe, reliable products. But too much tort liability can have the perverse consequence of discouraging the introduction of even relatively safe products into the marketplace. Templeton tells Ars that the aviation industry once faced that problem. At one point, “all of the general aviation manufacturers stopped making planes because they couldn’t handle the liability. They were being found slightly liable in every plane crash, and it started to cost them more than the cost of manufacturing the plane.” Airplane manufacturers eventually convinced Congress to place limits on their liability. At the moment, crashes tend to lead to lawsuits against human drivers, who rarely have deep pockets. Unless there is evidence that a mechanical defect caused the crash, car manufacturers tend not to be the target of most accident-related lawsuits. That would change if cars were driven by software. And because car manufacturers have much deeper pockets than individual drivers do, plaintiffs are likely to seek much larger damages than they would against human drivers. That could lead to the perverse result that even safer self-driving cars would be more expensive to insure than human drivers. Since car manufacturers, rather than drivers, would be the first ones sued in the event of an accident, car companies are likely to protect themselves by buying their own insurance. And if insurance premiums get too high, they may take the route the aviation industry did and seek limits on liability. An added benefit for consumers is that most would never have to worry about auto insurance. 
Cars would come preinsured for the life of the vehicle (or at least the life of the warranty)…Self-driving vehicles will sit at the intersection of two industries that are currently subject to very different regulatory regimes: the automobile industry is heavily regulated, while the software industry is hardly regulated at all. The most fundamental decision regulators will need to make is whether one of these existing regulatory regimes will be suitable for self-driving technologies, or whether an entirely new regulatory framework will be needed to accommodate them.
It’s inevitable that at some point, a self-driving vehicle will be involved in a fatal crash which generates worldwide publicity. Unfortunately, even if self-driving vehicles have amassed an overall safety record that’s superior to that of human drivers, the first crash is likely to prompt calls for drastic restrictions on the use of self-driving technologies. It will therefore be important for business leaders and elected officials to lay the groundwork by both educating the public about the benefits of self-driving technologies and managing expectations so that the public isn’t too surprised when crashes happen. Of course, if the first self-driving cars turn out to be significantly less safe than the average human driver, then they should be pulled off the streets and re-tooled. But this seems unlikely to happen. A company that introduced self-driving technology into the marketplace before it was ready would not only have trouble convincing regulators that its cars are safe, but it would be risking ruinous lawsuits, as well. The far greater danger is that the combination of liability fears and red tape will cause the United States to lose the initiative in self-driving technologies. Countries such as China, India, and Singapore that have more autocratic regimes or less-developed economies may seize the initiative and introduce self-driving cars while American policymakers are still debating how to regulate them. Eventually, the specter of other countries using technologies that aren’t available in the United States will spur American politicians into action, but only after several thousand Americans lose their lives unnecessarily at the hands of human drivers.
…One likely area of dispute is whether people will be allowed to modify the software on their own cars. The United States has a long tradition of people tinkering with both their cars and their computers. No doubt, there will be many people who are interested in modifying the software on their self-driving cars. But there is likely to be significant pressure for legislation criminalizing unauthorized tinkering with self-driving car software. Both car manufacturers and (as we’ll see shortly) the law enforcement community are likely to be in favor of criminalizing the modification of car software. And they’ll have a plausible safety argument: buggy car software would be dangerous not only to the car owner but to others on the road. The obvious analogy is to the DMCA, which criminalized unauthorized tinkering with copy protection schemes. But there are also important differences. One is that car manufacturers will be much more motivated to prevent tinkering than Apple or Microsoft are. If manufacturers are liable for the damage done by their vehicles, then tinkering not only endangers lives, but their bottom lines as well. It’s unlikely that Apple would ever sue people caught jailbreaking their iPhones. But car manufacturers probably will contractually prohibit tinkering and then sue those caught doing it for breach of contract.
The more stalwart advocate of locked-down cars is likely to be the government, because self-driving car software promises to be a fantastic tool for social control. Consider, for example, how useful locked-down cars could be to law enforcement. Rather than physically driving to a suspect’s house, knocking on his door (or not), and forcibly restraining, handcuffing, and escorting a suspect to the station, police will be able to simply seize a suspect’s self-driving car remotely and order it to drive to the nearest police station. And that’s just the beginning. Locked-down car software could be used to enforce traffic laws, to track and log peoples’ movements for later review by law enforcement, to enforce curfews, to clear the way for emergency vehicles, and dozens of other purposes. Some of these functions are innocuous. Others will be very controversial. But all of them depend on restricting user control over their own vehicles. If users were free to swap in custom software, they might disable the government’s “back door” and re-program it to ignore government requirements. So the government is likely to push hard for laws mandating that only government-approved software run self-driving cars.
…It’s too early to say exactly what the car-related civil liberties fights will be about, or how they will be resolved. But one thing we can say for certain is that the technical decisions made by today’s computer scientists will be important for setting the stage for those battles. Advocates for online free speech and anonymity have been helped tremendously by the fact that the Internet was designed with an open, decentralized architecture. The self-driving cars of the future are likely to be built on top of software tools that are being developed in today’s academic labs. By thinking carefully about the ways these systems are designed, today’s computer scientists can give tomorrow’s civil liberties their best shot at preserving automotive freedom.
In our interview with him, Congressman Adam Schiff described the public’s perception of autonomous driving technologies as a reflection of his own reaction to the idea: a mixture of fascination and skepticism. Schiff explained that the public’s fascination comes from amazement at how advanced this technology has already become, and that Google’s sponsorship and endorsement make it even more alluring.
Skepticism of autonomous vehicle technologies comes from a missing element of trust. According to Clifford Nass, a professor of communications and sociology at Stanford University, this trust is an aspect of public opinion that must be earned through demonstration more so than through use. When people see a technology in action, they will begin to trust it. Professor Nass specializes in studying the way in which human beings relate to technology, and he has published several books on the topic including The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships. In our interview with him, Professor Nass explained that societal comfort with technology is gained through experience, and acceptance occurs when people have seen a technology work enough times collectively. He also pointed out that it took a long time for people to develop trust in air transportation, something that we almost take for granted now. It is certainly not the case that autonomous cars need to be equivalent in safety to plane flight before the public would adopt them. However, as Noel du Toit pointed out, we have a higher expectation for autonomous cars than we do for ourselves. Simply put, if we are willing to relinquish the “control” over our vehicles to an autonomous power, it will likely have to be under the condition that the technology drives more adeptly than we ever possibly could. Otherwise, there will simply be no trusting it. Interestingly, du Toit brought up a recent botched safety demonstration by Volvo in May of 2010. In the demonstration, Volvo showcased to the press how its emergency braking system works as part of an “adaptive cruise control” system. These systems allow a driver to set both a top speed and a following distance, which the vehicle then automatically maintains. As a consequence, if the preceding vehicle stops short, the system acts as the foundation for an emergency-braking maneuver. 
However, in Volvo’s demonstration the car smashed directly into a trailer. Even though the system worked fine in several cases during the day’s worth of demonstrations, video of that one mishap went viral and did little to help the public gain trust in the technology.
Calo pointed out that future issues related to autonomous vehicles would be approached from a standpoint of “negative liabilities”, meaning that we can assume something is legal unless there exist explicit laws against it. This discussion also led to the question of what a driverless car would look like to bystanders, and the kind of panic that might garner. A real-life example occurred in Moscow during the VisLab van trek to Shanghai: an autonomous electric van was stopped by Russian authorities due to its apparent lack of a driver behind the wheel. Thankfully, engineers present were able to convince the Russian officer who stopped the vehicle not to issue a ticket. The above [Nevadan] legislation fits in well with the information that we collected from Congressman Schiff about potential federal involvement in autonomous vehicle technology. Basically, Schiff relayed the idea that the strong governmental role expected for this technology would come in the form of regulating safety. Furthermore, he called attention to the hefty governmental requirements for crash testing that every new vehicle must meet before it is allowed on the road.
In autonomous driving, liability concerns can be inferred through a couple of examples. In one example, Noel du Toit described DARPA’s use of hired stunt drivers to share the testing grounds with driverless vehicle entries in the 2007 Urban Challenge. This behavior clearly illustrates the level of precaution that DARPA officials felt it necessary to take. In another example, Dmitri Dolgov expounded on how Google’s cars are never driving by themselves: whenever they are operated on public roads, there are at least two well-trained operators in the car. Dolgov went on to say that these operators “are in control at all times”, which helps illustrate Google’s position: they are not taking any chances when it comes to liability. Kent Kresa, former CEO of Northrop Grumman and interim chairman of GM in 2009, was also concerned about the liability issues presented by autonomous vehicles. Kresa felt that a future with driverless cars piloting the streets was somewhat unimaginable at present, especially when one considers the possibility of a pedestrian being hit. In the case of such a collision, it is still very unclear who would be at fault; whether the company that made the vehicle would be responsible is at present unknown.
A conversation we had with Bruce Gillman, the public information officer for the Los Angeles Department of Transportation (DOT), revealed that the department is busy putting out many other fires. Gillman noted that DOT is focused on getting people out of their cars and onto bikes or into buses; thus, autonomous vehicles are not on its radar. Moreover, Gillman was adamant that DOT would wait until autonomous vehicles were being manufactured commercially before addressing any issues concerning them. His viewpoint certainly reinforces the idea that supportive infrastructure updates coming from the city-government level would be unlikely. No matter what adoption pathway is used, federal financial support could come in the form of incentives and subsidies like those seen during the initial rollout of hybrid vehicles. However, Brian Thomas explained that this would only be possible if the federal government were willing to do a cost-benefit valuation for the mainstream introduction of autonomous vehicles.
http://www.pickar.caltech.edu/e103/Final%20Exams/Autonomous%20Vehicles%20for%20Personal%20Transport.pdf [shades of Amara’s law: we always overestimate in the short run & underestimate in the long run]
Car manufacturers might be held liable for a larger share of accidents, a responsibility they are certain to resist. (A legal analysis by Nidhi Kalra and her colleagues at the RAND Corporation suggests this problem is not insuperable.) –“Leave the Driving to It”, Brian Hayes, American Scientist, 2011
The RAND report: “Liability and Regulation of Autonomous Vehicle Technologies”, Kalra et al 2009:
In this work, we first evaluate how the existing liability regime would likely assign responsibility in crashes involving autonomous vehicle technologies. We identify the controlling legal principles for crashes involving these technologies and examine the implications for their further development and adoption. We anticipate that consumer education will play an important role in reducing consumer overreliance on nascent autonomous vehicle technologies and minimizing liability risk. We also discuss the possibility that the existing liability regime will slow the adoption of these socially desirable technologies because they are likely to increase liability for manufacturers while reducing liability for drivers. Finally, we discuss the possibility of federal preemption of state tort suits if the U.S. Department of Transportation (US DOT) promulgates regulations and some of the implications of eliminating state tort liability. Second, we review the existing literature on the regulatory environment for autonomous vehicle technologies. To date, there are no government regulations for these technologies, but work is being done to develop initial industry standards.
…Additionally, for some systems, the driver is expected to intervene when the system cannot control the vehicle completely. For example, if a very rapid stop is required, ACC may depend on the driver to provide braking beyond its own capabilities. ACC also does not respond to driving hazards, such as debris on the road or potholes; the driver is expected to intervene. Simultaneously, research suggests that drivers using these conveniences often become complacent and slow to intervene when necessary; this behavioral adaptation means drivers are less responsive and responsible than if they were fully in control (Rudin-Brown and Parker, 2004). Does such evidence suggest that manufacturers may be responsible for monitoring driver behavior as well as vehicle behavior? Some manufacturers have already taken a step toward ensuring that the driver assumes responsibility and is attentive, by requiring the driver to periodically depress a button or by monitoring the driver’s eye movements and grip on the steering wheel. As discussed later, litigation may occur around the issue of driver monitoring and the danger of the driver relying on the technology for something that it is not designed to accomplish.
…Ayers (1994) surveyed a range of emerging autonomous vehicle technologies and automated highways, evaluated the likelihood of a shift in liability occurring, discussed the appropriateness of government intervention, and highlighted the most-promising interventions for different technologies. Ayers found that collision-warning and collision-avoidance systems “are likely to generate a host of negligence suits against auto manufacturers” and that liability disclaimers and federal regulations may be the most effective methods of dealing with the liability concerns (p. 21). The report was written before many of these technologies appeared on the market, and Ayers further speculated that “the liability for almost all accidents in cars equipped with collision-avoidance systems would conceivably fall on the manufacturer” (p. 22), which could “delay or even prevent the deployment of collision warning systems that are cost-effective in terms of accident reduction” (p. 25). Syverud (1992) examines the legal cases stemming from the introduction of air bags, antilock brakes, cruise control, and cellular telephones to provide some general lessons about the liability concerns for autonomous vehicle technologies. In another report, Syverud (1993) examines the legal barriers to a wide range of IVHSs and finds that liability poses a significant barrier particularly to autonomous vehicle technologies that take control of the vehicle. In this work, Syverud’s interviews with manufacturers reveal that liability concerns had already adversely affected research and development in these technologies at several companies. One interviewee is quoted as saying that “IVHS will essentially remain ‘information technology and a few pie-in-the-sky pork barrel control technology demonstrations, at least in this country, until you lawyers do something about products liability law’” (1993, p. 25).
…While the victims in these circumstances could presumably sue the vehicle manufacturer, products liability lawsuits are more expensive to bring and take more time to resolve than run-of-the-mill automobile-crash litigation. This shift in responsibility from the driver to the manufacturer may make no-fault automobile-insurance regimes more attractive. They are designed to provide compensation to victims relatively quickly, and they do not depend upon the identification of an “at-fault” party
…Suppose that autonomous vehicle technologies are remarkably effective at virtually eliminating minor crashes caused by human error. But it may be that the comparatively few crashes that do occur usually result in very serious injuries or fatalities (e.g., because autonomous vehicles are operating at much higher speeds or densities). This change in the distribution of crashes may affect the economics of insuring against them. Actuarially, it is much easier for an insurance company to calculate the expected costs of somewhat common small crashes than of rarer, much larger events. This may limit the downward trend in automobile-insurance costs that we would otherwise expect.
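The actuarial point above can be made concrete with a toy simulation (all parameters invented for illustration, not drawn from the report): two insurance portfolios with the same expected annual loss, one of frequent small crashes and one of rare severe ones. The rare-event portfolio's year-to-year total loss is far more volatile, which is what makes its premiums harder to set:

```python
import random

random.seed(0)

def annual_loss(p_crash, cost, n_policies):
    """Total loss across a portfolio: each policy crashes independently
    with probability p_crash, costing `cost` dollars per crash."""
    return sum(cost for _ in range(n_policies) if random.random() < p_crash)

def simulate(p_crash, cost, n_policies=10_000, years=200):
    """Return (mean, standard deviation) of the portfolio's annual loss."""
    totals = [annual_loss(p_crash, cost, n_policies) for _ in range(years)]
    mean = sum(totals) / years
    var = sum((t - mean) ** 2 for t in totals) / years
    return mean, var ** 0.5

# Both portfolios have the same expected loss per policy ($50/year),
# but very different loss distributions:
common_small = simulate(p_crash=0.05, cost=1_000)      # frequent fender-benders
rare_large = simulate(p_crash=0.0005, cost=100_000)    # rare severe crashes
```

Under these made-up parameters, the rare-severe portfolio's standard deviation comes out roughly an order of magnitude larger than the frequent-small one's, despite essentially identical expected losses; an insurer pricing the former needs a much larger risk margin.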
…Suppose that most cars brake automatically when they sense a pedestrian in their path. As more cars with this feature come to be on the road, pedestrians may expect that cars will stop, in the same way that people stick their limbs in elevator doors confident that the door will automatically reopen. The general level of pedestrian care may decline as people become accustomed to this common safety feature. But if there were a few models of cars that did not stop in the same way, a new category of crashes could emerge. In this case, should pedestrians who wrongly assume that a car would automatically stop and are then injured be able to recover? To allow recovery in this instance would seem to undermine incentives for pedestrians to take efficient care. On the other hand, allowing the injured pedestrian to recover may encourage the universal adoption of this safety feature. Since negligence is defined by unreasonableness, the evolving set of shared assumptions about the operation of the roadways (what counts as “reasonable”) will determine liability. Fourth, we think it is unlikely that operators of partially or fully autonomous vehicles will be found strictly liable on the theory that driving such vehicles is an ultrahazardous activity. As explained earlier, these technologies will be introduced incrementally and will initially serve merely to aid the driver rather than take full control of the vehicle. This will give the public and courts time to become familiar with the capabilities and limits of the technology. As a result, it seems unlikely that courts will consider its gradual introduction and use to be ultrahazardous. On the other hand, this would not be true if a person attempted to operate a car fully autonomously before the technology adequately matured. Suppose, for example, that a home hobbyist put together his own autonomous vehicle and attempted to operate it on public roads.
Victims of any crashes that resulted may well be successful in convincing a court to find the operator strictly liable on the grounds that such activity was ultrahazardous.
…Product-liability law can be divided into theories of liability and kinds of defect. Theories of liability include negligence, misrepresentation, warranty, and strict liability. Types of defect include manufacturing defects, design defects, and warning defects. A product-liability lawsuit will involve one or more theories of manufacturer liability attached to a specific allegation of a type of defect. In practice, the legal tests for the theories of liability often overlap and, depending on the jurisdiction, may be identical. … While it is difficult to generalize, automobile (and subsystem) manufacturers may fare well under a negligence standard that uses a cost-benefit analysis that includes crashes avoided from the use of autonomous vehicle technologies. Automakers can argue that the overall benefits from the use of a particular technology outweigh the risks. The number of crashes avoided by the use of these technologies is probably large. …Unfortunately, the socially optimal liability rule is unclear. Permitting the defendant to include the long-run benefits in the cost-benefit analysis may encourage the adoption of technology that can indeed save many lives. On the other hand, it may shield the manufacturer from liability for shorter-run decisions that were inefficiently dangerous. Suppose, for example, that a crash-prevention system operates successfully 70% of the time but that, with additional time and work, it could have been designed to operate successfully 90% of the time. Then suppose that a victim is injured in a crash that would have been prevented had the system worked 90% of the time. Assume that the adoption of the 70-percent technology is socially desirable but the adoption of the 90-percent technology would be even more socially desirable. How should the cost-benefit analysis be conducted? Is the manufacturer permitted to cite the 70% of crashes that were prevented in arguing for the benefits of the technology?
Or should the cost-benefit analysis focus on the manufacturer’s failure to design the product to function at 90-percent effectiveness? If the latter, the manufacturer might not employ the technology, thereby leading to many preventable crashes. In calculating the marginal cost of the 90-percent technology, should the manufacturer be able to count the lives lost in the delay in implementation as compared to possible release of the 70-percent technology? …Tortious misrepresentation may play a role in litigation involving crashes that result from autonomous vehicle technologies. If advertising overpromises the benefits of these technologies, consumers may misuse them. Consider the following hypothetical scenario. Suppose that an automaker touts the “autopilot-like” features of its ACC and lane-keeping function. In fact, the technologies are intended to be used by an alert driver supervising their operation. After activating the ACC and lane-keeping function, a consumer assumes that the car is in control and falls asleep. Due to road resurfacing, the lane-keeping function fails, and the automobile leaves the roadway and crashes into a tree. The consumer then sues the automaker for tortious misrepresentation based on the advertising that suggested that the car was able to control itself.
…Finally, it is also possible that auto manufacturers will be sued for failing to incorporate autonomous vehicle technologies in their vehicles. While the absence of available safety technology is a common basis for design-defect lawsuits (e.g., Camacho v. Honda Motor Co., 741 P.2d 1240, 1987, overturning summary dismissal of a suit alleging that Honda could easily have added crash bars to its motorcycles, which would have prevented the plaintiff’s leg injuries), this theory has met with little success in the automotive field because manufacturers have successfully argued that state tort remedies were preempted by federal regulation (Geier v. American Honda Motor Co., 529 U.S. 861, 2000, finding that the plaintiff’s claim that the manufacturer was negligent for failing to include air bags was implicitly preempted by the National Traffic and Motor Vehicle Safety Act). We discuss preemption and the relationship between regulation and tort in Section 4.3.
…Preemption has arisen in the automotive context in litigation over a manufacturer’s failure to install air bags. In Geier v. American Honda Motor Co. (2000), the U.S. Supreme Court found that state tort litigation over a manufacturer’s failure to install air bags was preempted by the National Traffic and Motor Vehicle Safety Act (Pub. L. No. 89-563). More specifically, the Court found that the Federal Motor Vehicle Safety Standard (FMVSS) 208, promulgated by the US DOT, required manufacturers to equip some but not all of their 1987 vehicle-year vehicles with passive restraints. Because the plaintiffs’ theory that the defendants were negligent under state tort law for failing to include air bags was inconsistent with the objectives of this regulation (FMVSS 208), the Court held that the state lawsuits were preempted. Presently, there has been very little regulation promulgated by the US DOT with respect to autonomous vehicle technologies. Should the US DOT promulgate such regulation, it is likely that state tort law claims that were found to be inconsistent with the objective of the regulation would be held to be preempted under the analysis used in Geier. Substantial litigation might be expected as to whether particular state-law claims are, in fact, inconsistent with the objectives of the regulation. Resolution of those claims will depend on the specific state tort law claims, the specific regulation, and the court’s analysis of whether they are “inconsistent.” …Our analysis necessarily raises a more general question: Why should we be concerned about liability issues raised by a new technology? The answer is the same as for why we care about tort law at all: that a tort regime must balance economic incentives, victim compensation, and corrective justice. Any new technology has the potential to change the sets of risks, benefits, and expectations that tort law must reconcile. 
…Congress could consider creating a comprehensive regulatory regime to govern the use of these technologies. If it does so, it should also consider preempting inconsistent state-court tort remedies. This may minimize the number of inconsistent legal regimes that manufacturers face and simplify and speed the introduction of this technology. While federal preemption has important disadvantages, it might speed the development and utilization of this technology and should be considered, if accompanied by a comprehensive federal regulatory regime.
…This tension produced “a standoff between airbag proponents and the automakers that resulted in contentious debates, several court cases, and very few airbags” (Wetmore, 2004, p. 391). In 1984, the US DOT passed a ruling requiring vehicles manufactured after 1990 to be equipped with some type of passive restraint system (e.g., air bags or automatic seat belts) (Wetmore, 2004); in 1991, this regulation was amended to require air bags in particular in all automobiles by 1999 (Pub. L. No. 102-240). The mandatory performance standards in the FMVSS further required air bags to protect an unbelted adult male passenger in a head-on, 30 mph crash. Additionally, by 1990, the situation had changed dramatically, and air bags were being installed in millions of cars. Wetmore attributes this development to three factors: First, technology had advanced to enable air-bag deployment with high reliability; second, public attitude shifted, and safety features became important factors for consumers; and, third, air bags were no longer being promoted as replacements but as supplements to seat belts, which resulted in a sharing of responsibility between manufacturers and passengers and lessened manufacturers’ potential liability (Wetmore, 2004). While air bags have certainly saved many lives, they have not lived up to original expectations: In 1977, NHTSA estimated that air bags would save on the order of 9,000 lives per year and based its regulations on these expectations (Thompson, Segui-Gomez, and Graham, 2002). Today, by contrast, NHTSA calculates that air bags saved 8,369 lives in the 14 years between 1987 and 2001 (Glassbrenner, undated). Simultaneously, however, it has become evident that air bags pose a risk to many passengers, particularly smaller passengers, such as women of small stature, the elderly, and children. 
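The gap between NHTSA's 1977 forecast and the realized air-bag benefit quoted above is stark once both are put on a per-year basis; this snippet only restates the excerpt's own figures (9,000 lives/year forecast; 8,369 lives saved over the 14 years 1987-2001):

```python
# Comparing NHTSA's 1977 forecast with realized air-bag benefits,
# using the figures quoted above (Thompson et al. 2002; Glassbrenner).

forecast_per_year = 9_000      # 1977 NHTSA estimate, lives saved per year
lives_saved_total = 8_369      # NHTSA count for 1987-2001
years = 14

realized_per_year = lives_saved_total / years   # ~598 lives/year
share_of_forecast = realized_per_year / forecast_per_year

print(f"Realized ~{realized_per_year:.0f} lives/year, "
      f"about {share_of_forecast:.0%} of the 1977 forecast")
```

That is, the technology delivered on the order of 7% of the originally forecast annual benefit, which underlines the "modesty and flexibility" lesson the RAND authors draw for regulating autonomous vehicle technologies.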
NHTSA (2008a) determined that 291 deaths were caused by air bags between 1990 and July 2008, primarily due to the extreme force that is necessary to meet the performance standard of protecting the unbelted adult male passenger. Houston and Richardson (2000) describe the strong reaction to these losses and a backlash against air bags, despite their benefits. The unintended consequences of air bags have led to technology developments and changes to standards and regulations. Between 1997 and 2000, NHTSA developed a number of interim solutions designed to reduce the risks of air bags, including on-off switches and deployment with less force (Ho, 2006). Simultaneously, safer air bags, called advanced air bags, were developed that deploy with a force tailored to the occupant by taking into account the seat position, belt usage, occupant weight, and other factors. In 2000, NHTSA mandated that the introduction of these advanced air bags begin in 2003 and that, by 2006, every new passenger vehicle would include these safety measures (NHTSA, 2000). What lessons does this experience offer for regulation of autonomous vehicle technologies? We suggest that modesty and flexibility are necessary. The early air-bag regulators envisioned air bags as being a substitute for seat belts because the rates of seat-belt usage were so low and appeared intractable. Few anticipated that seat-belt usage would rise as much over time as it has and that air bags would eventually be used primarily as a supplement rather than a substitute for seat belts. Similarly unexpected developments are likely to arise in the context of autonomous vehicle technologies. In 2006, for example, Honda introduced its Accord model in the UK with a combined lane-keeping and ACC system that allows the vehicle to drive itself under the driver’s watch; this combination of features has yet to be introduced in the United States (Miller, 2006). Ho (2006, p. 27) observes a general trend that “the U.S. market trails Europe, and the European market trails Japan by 2 to 3 years.” What is the extent of these differences? What aspects of the liability and regulatory rules in those countries have enabled accelerated deployment? What other factors are at play (e.g., differences in consumers’ sensitivity to price)?
“New Technology - Old Law: Autonomous Vehicles and California’s Insurance Framework”, Peterson 2012:
This Article will address this issue and propose ways in which auto insurance might change to accommodate the use of AVs. Part I briefly reviews the background of insurance regulation nationally and in California. Part II discusses general insurance and liability issues related to AVs. Part III discusses some challenges that insurers and regulators may face when setting rates for AVs, both generally and under California’s more idiosyncratic regulatory structure. Part IV discusses challenges faced by California insurers who may want to reduce rates in a timely way when technological improvements rapidly reduce risk.
…When working within the context of a file-and-use or use-and-file environment, AVs will present only modest challenges to an insurer that wants to write these policies. The main challenge will arise from the fact that the policy must be rated for a new technology that may have an inadequate base of experience for an actuary to estimate future losses.21 “Prior approval” states, like California, require that automobile rates be approved prior to their use in the marketplace.22 These states rely more on regulation than on competition to modulate insurance rates.23 In California, automobile insurance rates are approved in a two-step process. The first step is the creation of a “rate plan.”24 The rate plan considers the insurer’s entire book of business in the relative line of insurance and asks the question: How much total premium must the insurer collect in order to cover the projected risks, overhead and permitted profit for that line?25 The insurer then creates a “class plan.” The class plan asks the question: How should different policyholders’ premiums be adjusted up or down based on the risks presented by different groups or classes of policyholders?26 Among other factors, the Department of Insurance requires that the rating factors comply with California law and be justified by the loss experience for the group.27 Rating a new technology with an unproven track record may include a considerable amount of guesswork. …California is the largest insurance market in the United States, and it is the sixth largest among the countries of the world.28 Cars are culture in this most populous state. There are far more insured automobiles in California than any other state.29
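As a rough illustration of the two-step rate-plan/class-plan process Peterson describes, here is a minimal sketch: the rate plan fixes the total premium the insurer must collect for the line, and the class plan spreads it across policyholder classes via risk relativities. The class names, relativities, and premium target are all hypothetical:

```python
# Hypothetical sketch of California's two-step automobile rating process:
# step 1 (rate plan) sets the total premium target for the line;
# step 2 (class plan) adjusts individual premiums by class relativities.
# All numbers are invented.

required_premium = 120_000_000   # rate plan: total premium needed for the line

# class plan: (number of insureds, risk relativity) per hypothetical class
classes = {
    "low_risk":  (400_000, 0.8),
    "average":   (500_000, 1.0),
    "high_risk": (100_000, 1.9),
}

# base rate chosen so relativity-weighted exposures sum to the target
weighted_exposure = sum(n * r for n, r in classes.values())
base_rate = required_premium / weighted_exposure

for name, (n, r) in classes.items():
    print(f"{name}: ${base_rate * r:,.2f} per policy")

# sanity check: premiums collected equal the rate-plan target
collected = sum(n * r * base_rate for n, r in classes.values())
assert abs(collected - required_premium) < 1e-6
```

The actuarial difficulty the article points to is that for a new technology like AVs, the relativities (and even the overall loss projection behind `required_premium`) have no credible loss experience behind them, so both steps involve guesswork.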
…Although adopted by the barest majority, [California’s] Proposition 103 [see previous discussion of its 3-part requirement for rating insurance premiums] may be amended by the legislature only by a two-thirds vote, and then only if the legislation “further[s] [the] purposes” of Proposition 103.68 Thus, Proposition 103 and the regulations adopted by the Department of Insurance are the matrix in which most (but not all) insurance is sold and regulated in California.69 …The most sensible approach to this dilemma, at least with respect to AVs, would be to abolish or substantially re-order the three mandatory rating factors. However, this is more easily said than done. As noted above, amending Proposition 103 requires a two-thirds vote of the legislature.160 Moreover, section 8(b) of the Proposition provides: “The provisions of this act shall not be amended by the Legislature except to further its purposes.”161 Both of these requirements can be formidable hurdles. Persistency discounts serve as an example. Most are aware that their insurer discounts their rates if they have been with the insurer for a period of time.162 This is called the “persistency discount.” The discount is usually justified on the basis that persistency saves the insurer the producing expenses associated with finding a new insured. If one wants to change insurers, Proposition 103 does not permit the subsequent insurer to match the persistency discount offered by the insured’s current insurer.163 Thus, the second insurer could not compete by offering the same discount. Changing insurers, then, was somewhat like a taxable event. The “tax” is the loss of the persistency discount when purchasing the new policy. The California legislature concluded that this both undermined competition and drove up the cost of insurance by discouraging the ability to shop for lower rates. 
…Despite these legislative findings, the Court of Appeal held the amendment invalid because, in the Court’s view, it did not further the purposes of Proposition 103.165 The Court also held that Proposition 103 vests only the Insurance Commissioner with the power to set optional rating factors.166 Thus, the legislature, even by a super majority, may not be authorized to adopt rating factors for auto insurance. Following this defeat in the courts, promoters of “portable persistency” qualified a ballot initiative to amend this aspect of Proposition 103. With a vote of 51.9% to 48.1%, the initiative failed in the June 8, 2010 election.167
…The State of Nevada recently adopted regulations for licensing the testing of AVs in the state. The regulations would require insurance in the minimum amounts required for other cars “for the payment of tort liabilities arising from the maintenance or use of the motor vehicle.”73 The regulation, however, does not suggest how the tort liability may arise. If there is no fault on the part of the operator or owner, then liability may arise, if at all, only for the manufacturer or supplier. Manufacturers and suppliers are not “insureds” under the standard automobile policy - at least so far. Thus, for the reasons stated above, owners, manufacturers and suppliers may fall outside the coverage of the policy.
…One possible approach would be to invoke the various doctrines of products liability law. This would attach the major liability to sellers and manufacturers of the vehicle. However, it is doubtful that this is an acceptable approach for several reasons. For example, while some accidents are catastrophic, fortunately most accidents cause only modest damages. By contrast, products liability lawsuits tend to be complex and expensive. Indeed, they may require the translation of hundreds or thousands of engineering documents - perhaps written in Japanese, Chinese or Korean… See In re Puerto Rico Electric Power Authority, 687 F.2d 501, 505 (1st Cir. 1982) (stating each party to bear translation costs of documents requested by it but cost possibly taxable to prevailing party). Translation costs of the Japanese documents were in the range of $250,000, and translation costs of additional Spanish documents may exceed that amount.
…Commercial insurers of manufacturers and suppliers are not encumbered with Proposition 103’s unique automobile provisions,197 therefore they need not offer a GDD, nor need they conform to the ranking of the mandatory rating factors. To the extent that the risks of AVs are transferred to them, the insurance burden passed to consumers in the price of the car can reflect the actual, and presumably lower, risk presented by AVs. As noted above, however, for practical reasons some rating factors, such as annual miles driven and territory, cannot properly be reflected in the automobile price. Moving from the awkward and arbitrary results mandated by Proposition 103’s rating factors to a commercial insurance setting that cannot properly reflect some other rating factors is also an awkward trade-off. At best, it may be a choice of the least worst. Another viable solution might be to amend California Insurance Code section 660(a) to exclude from the definition of “policy” those policies covering liability for AVs (at least when operated in autonomous mode). Since Proposition 103 incorporates section 660(a), this would likely require a two-thirds vote of the legislature and the amendment would have to “further the purposes” of Proposition 103. Assuming a two-thirds vote could be mustered, the issue would then be whether the amendment furthers the purposes of the Proposition. To the extent that liability moves from fault-based driving to defect-based products liability, the purposes underlying the mandatory rating factors and the GDD simply cannot be accomplished. Manufacturers will pass these costs through to automobile buyers free of the Proposition’s restraints. Since the purposes of the Proposition, at least with respect to liability coverage,199 simply cannot be accomplished when dealing with self-driving cars, amending section 660(a) would not frustrate the purposes of Proposition 103.
…Filing a “complete rate application with the commissioner” is a substantial impediment to reducing rates. A complete rate application is an expensive, ponderous and time-consuming process. A typical filing may take three to five months before approval. Some applications have even been delayed for a year.205 In 2009, when insurers filed many new rate plans in order to comply with the new territorial rating regulations, delays among the top twenty private passenger auto insurers ranged from a low of 54 days (Viking) to a high of 558 days (USAA and USAA Casualty). Many took over 300 days (e.g., State Farm Mutual, Farmers Insurance Exchange, Progressive Choice).206 …In addition, once an application to lower rates is filed, the Commissioner, consumer groups, and others can intervene and ask that the rates be lowered even further.207 Thus, an application to lower a rate by 6% may invite pressure to lower it even further.208 If they “substantially contributed, as a whole” to the decision, a consumer group can also bill the insurance company for its legal, advocacy, and witness fees.209
…Unless ways can be found to conform Proposition 103 to this new reality, insurance for AVs is likely to migrate to a statutory and regulatory environment untrammeled by Proposition 103: commercial policies carried by manufacturers and suppliers. This migration presents its own set of problems. While the safety of AVs could be more fairly rated, other important rating factors, such as annual miles driven and territory, must be compromised. Whether this migration occurs will also depend on how liability rules do or do not adjust to a world in which people will nevertheless suffer injuries from AVs, but in which it is unlikely our present fault rules will adequately address compensation. If concepts of non-delegable duty, agency, or strict liability attach initial liability to owners of faulty cars with faultless drivers, the insurance burden will first be filtered through automobile insurance governed by Proposition 103. These insurers will then pass the losses up the distribution line to the insurers of suppliers and manufacturers that are not governed by Proposition 103. Manufacturers and suppliers will then pass the insurance cost back to AV owners in the cost of the vehicle. The insurance load reflected in the price of the car will pass through to automobile owners free of any of the restrictions imposed by Proposition 103. There will be no GDD, such as it is, no mandatory rating factors, and, depending on where the suppliers’ or manufacturers’ insurers are located, more flexible rating. One may ask: What is gained by this merry-go-round?
“‘Look Ma, No Hands!’: Wrinkles and Wrecks in the Age of Autonomous Vehicles”, Garza 2012:
The benefits of these systems cannot be overestimated given that one-third of drivers admit to having fallen asleep at the wheel within the previous thirty days.31 …If the driver fails to react in time, it applies 40% of the full braking power to reduce the severity of the collision.39 In the most advanced version, the CMBS performs all of the functions described above, and it will also stop the car automatically to avoid a collision when traveling under ten miles-per-hour.40 Car companies are hesitant to push the automatic braking threshold too far out of fear that “fully ‘automatic’ braking systems will shift the responsibility of avoiding an accident from the vehicle’s driver to the vehicle’s manufacturer.”41 …See Larry Carley, Active Safety Technology: Adaptive Cruise Control, Lane Departure Warning & Collision Mitigation Braking, IMPORT CAR (June 16, 2009), http://www.import-car.com/Article/58867/active_safety_technology_adaptive_cruise_control_lane_departure_warning__collision_mitigation_braking.aspx
…Automobile products liability cases are typically divided into two categories: “(1) accidents caused by automotive defects, and (2) aggravated injuries caused by a vehicle’s failure to be sufficiently ‘crashworthy’ to protect its occupants in an accident.”79 …For example, a car suffers from a design defect when a malfunction in the steering wheel causes a crash.81 Additionally, plaintiffs have alleged and prevailed on manufacturing-defect claims in cases where “unintended, sudden and uncontrollable acceleration” causes an accident.82 In such cases, plaintiffs have been able to recover under a “malfunction theory.”83 Under a malfunction theory, plaintiffs use a “res ipsa loquitur like inference to infer defectiveness in strict liability where there was no independent proof of a defect in the product.”84 Plaintiffs have also prevailed where design defects cause injury.85 For example, there was a proliferation of litigation in the 1970s and 1980s as a result of vehicles that were designed with a high center of gravity, which increased their propensity to roll over.86 Additionally, many design-defect cases arose in response to faulty transmissions that could inadvertently slip into gear, causing crashes and occupants to be run over in some cases.87
The two primary tests that courts use to assess the defectiveness of a product’s design are the consumer-expectations test and the risk-utility test.88 The consumer-expectations test focuses on whether “the danger posed by the design is greater than an ordinary consumer would expect when using the product in an intended or reasonably foreseeable manner.”89 …Thus, while an ordinary consumer can have expectations that a car will not explode at a stoplight or catch fire in a two-mile-per-hour collision, they may not be able to have expectations about how a truck should handle after striking a five- or six-inch rock at thirty-five miles-per-hour.92 Perhaps because the consumer-expectations test is difficult to apply to complex products, and we live in a world where technological growth increases complexity, the risk-utility test has become the dominant test in design-defect cases.93 …Litigation can also arise where a plaintiff alleges that a vehicle is not sufficiently “crashworthy.”104 Crashworthiness claims are a type of design-defect claim.105
…Since their advent and incorporation, seat belts have resulted in litigation - much of which has involved crashworthiness claims.136 In Jackson v. General Motors Corp., for example, the plaintiff alleged that as a result of a defectively designed seat belt, his injuries were enhanced.137 The defendant manufacturer argued that the complexity of seat belts foreclosed any consumer expectation,138 but the Tennessee Supreme Court noted that seat belts are “familiar products for which consumers’ expectations of safety have had an opportunity to develop,” and permitted the plaintiff to recover under the consumer-expectations test.139 Although manufacturers have been sued where seat belts render a car insufficiently crashworthy - as in cases where they fail to perform as intended or enhance injury - the incorporation of seat belts has reduced liability as well.140 This reduction comes in the form of the “seat belt defense.”141 The “seat belt defense” allows a defendant to present evidence about an occupant’s nonuse of a seat belt to mitigate damages or to defend against an enhanced-injury claim.142 Because seat belts are capable of reducing the number of lives lost and the overall severity of injuries sustained in crashes, it is argued that nonuse should protect a manufacturer from some claims.143 Although the majority rule is to prevent the admission of such evidence in enhanced-injury litigation, there is a growing trend toward admission.144
…Since their incorporation, consumers have sued manufacturers for defective cruise control systems that lead to injury.171 Because of the complexity of cruise control technology, courts may not allow a plaintiff to use the consumer-expectations test.172 Despite the complexity of the technology, other courts allow plaintiffs to establish a defect using either the risk-utility test or the consumer-expectations test.173
…Under the consumer-expectations test, manufacturers will likely argue - as they historically have - that OAV technology is too complicated for the average consumer to have appropriate expectations about its capabilities.182 Commentators have stated that “consumers may have unrealistic expectations about the capabilities of these technologies . . . . Technologies that are engineered to assist the driver may be overly relied on to replace the need for independent vigilance on the part of the vehicle operator.”183 Plaintiffs will argue that, while the workings of the technology are concededly complex, the overall concept of autonomous driving is not.184 Like the car exploding at a stoplight or the car that catches fire in a two-mile-per-hour collision, the average consumer would expect autonomous vehicles to drive themselves without incident.185 This means that components that are meant to keep the car within a lane will do just that, and others will stop the vehicle at traffic lights.186 Where incidents occur, OAVs will not have performed as the average consumer would expect.187 …plaintiffs who purchase OAVs at the cusp of availability, and attempt to prove defect under the consumer-expectations test, are likely to face an uphill battle.194 But the unavailability of the consumer-expectations test will not be a significant detriment as plaintiffs can fall back on the risk-utility test.195 And as OAVs are increasingly incorporated, and users become more familiar with their capabilities, the consumer-expectations test will become more accessible to plaintiffs.196 Given the modern trend, plaintiffs are likely to face the risk-utility test.197
…Additionally, the extent to which injuries are “enhanced” by OAVs will be debated.228 Because the majority of drivers fail to fully apply their brakes prior to a collision,229 where an OAV only partially applies brakes, or fails to apply brakes at all, manufacturers and plaintiffs will disagree about the extent of enhancement.230 Manufacturers will argue that, absent the OAV, the result would have been the same or worse - thus, the extent to which the injuries of the plaintiff are “enhanced” is minimal.231 Plaintiffs will argue that, just like the presentation of crash statistics in a risk-utility analysis, this is a false choice.232 Like no-fire air bag claims, plaintiffs will contend that but for the malfunction of the OAV, their injuries would have been greatly reduced or nonexistent.233 As a result, any injuries sustained above that threshold should serve as a basis for recovery.234
…In products liability cases the “use of expert witnesses has grown in both importance and expense.”301 Because of the extraordinary cost of experts in products liability litigation, many plaintiffs are turned away because, even if they were to recover, the prospective award would not cover the expense of litigating the claim.302
…Although complex, OAVs function much like the cruise control that exists in modern cars. As we have seen with seat belts, air bags, and cruise control, manufacturers have always been hesitant to adopt safety technologies. Despite concerns, products liability law is capable of handling OAVs just as it has these past technologies. While the novelty and complexity of OAVs are likely to preclude plaintiffs from proving defect under the consumer-expectation test, as implementation increases this likelihood may decrease. Under a risk-utility analysis, manufacturers will stress the extraordinary safety benefits of OAVs, while consumers will allege that designs can be improved. In the end, OAV adoption will benefit manufacturers. Although liability will fall on manufacturers when vehicles fail, decreased incidences and severity of crashes will result in a net decrease in liability. Further, the combination of LDWS cameras and EDRs will drastically reduce the cost of litigation. By reducing reliance on experts for complex causation determinations, both manufacturers and plaintiffs will benefit. In the end, obstacles to OAV implementation are more likely to be psychological than legal, and the sooner that courts, manufacturers, and the motoring public prepare to confront these issues, the sooner lives can be saved.
“Self-driving cars can navigate the road, but can they navigate the law? Google’s lobbying hard for its self-driving technology, but some features may never be legal”, The Verge 14 December 2012
Google says that on a given day, they have a dozen autonomous cars on the road. This August, they passed 300,000 driver-miles. In Spain this summer, Volvo drove a convoy of three cars through 200 kilometers of desert highway with just one driver and a police escort.
…Bryant Walker Smith teaches a class on autonomous vehicles at Stanford Law School. At a workshop this summer, he put forward this thought experiment: the year is 2020, and a number of companies offer “advanced driver assistance systems” with their high-end model. Over 100,000 units have been sold. The owner’s manual states that the driver must remain alert at all times, but one night a driver - we’ll call him “Paul” - falls asleep while driving over a foggy bridge. The car tries to rouse him with alarms and vibrations but he’s a deep sleeper, so the car turns on the hazard lights and pulls over to the side of the road where another driver (let’s say Julie) rear-ends him. He’s injured, angry, and prone to litigation. So is Julie. That would be tricky enough by itself, but then Smith starts layering on complications. Another model of auto-driver would have driven to the end of the bridge before pulling over. If Paul had updated his software, it would have braced his seatbelt for the crash, mitigating his injuries, but he didn’t. The company could have pushed the update automatically, but management chose not to. Now, Smith asks the workshop, who gets sued? Or for a shorter list, who doesn’t?
…The financial stakes are high. According to the Insurance Research Council, auto liability claims paid out roughly $215 per insured car per year, between bodily injury and property damage claims. With 250 million cars on the road, that’s $54 billion a year in liability. If even a tiny portion of those lawsuits is directed towards technologists, the business would become unprofitable fast.
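The Verge's liability total is simple multiplication of the two figures it quotes (the article rounds $53.75 billion up to $54 billion):

```python
# Back-of-the-envelope check of the liability figures quoted above.

claims_per_insured_car = 215     # dollars per insured car per year (IRC figure)
insured_cars = 250_000_000       # cars on the road, per the article

total_liability = claims_per_insured_car * insured_cars
print(f"~${total_liability / 1e9:.2f} billion/year in auto liability claims")
```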
…Changing the laws in Europe would take a replay of the internationally ratified Vienna Convention (passed in 1968) as well as pushing through a hodgepodge of national and regional laws. As Google proved, it’s not impossible, but it leaves SARTRE facing an unusually tricky adoption problem. Lawmakers won’t care about the project unless they think consumers really want it, but it’s hard to get consumers excited about a product that doesn’t exist yet. Projects like this usually rely on a core of early adopters to demonstrate their usefulness - a hard enough task, as most startups can tell you - but in this case, SARTRE has to bring auto regulators along for the ride. Optimistically, Volvo told us they expect the technology to be ready “towards the end of this decade,” but that may depend entirely on how quickly the law moves. The less optimistic prediction is that it never arrives at all. Steve Shladover is the program manager of mobility at California’s PATH program, where they’ve been trying to make convoy technology happen for 25 years, lured by the prospect of fitting three times as many cars on the freeway. They were showing off a working version as early as 1997 (powered by a single Pentium processor), before falling into the same gap between prototype and final product. “It’s a solvable problem once people can see the benefits,” he told The Verge, “but I think a lot of the current activity is wildly optimistic in terms of what can be achieved.” When I asked him when we’d see a self-driving car, Shladover told me what he says at the many auto conferences he’s been to: “I don’t expect to see the fully-automated, autonomous vehicle out on the road in the lifetime of anyone in this room.”
…Many of Google’s planned features may simply never be legal. One difficult feature is the “come pick me up” button that Larry Page has pushed as a solution to parking congestion. Instead of wasting energy and space on urban parking lots, why not have cars drop us off and then drive themselves to park somewhere more remote, like an automated valet? It’s a genuinely good idea, and one Google seems passionate about, but it’s extremely difficult to square with most vehicle codes. The Geneva Convention on Road Traffic (1949) requires that drivers “shall at all times be able to control their vehicles,” and provisions against reckless driving usually require “the conscious and intentional operation of a motor vehicle.” Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured - a difficult task if there’s no one in the car. As a result, most experts predict drivers will be legally required to have a person in the car at all times, ready to take over if the automatic system fails. If they’re right, the self-parking car may never be legal.
“Automated Vehicles are Probably Legal in the United States”, Bryant Walker Smith 2012:
The short answer is that the computer direction of a motor vehicle’s steering, braking, and accelerating without real-time human input is probably legal…. The paper’s largely descriptive analysis, which begins with the principle that everything is permitted unless prohibited, covers three key legal regimes: the 1949 Geneva Convention on Road Traffic, regulations enacted by the National Highway Traffic Safety Administration (NHTSA), and the vehicle codes of all fifty US states.
The Geneva Convention, to which the United States is a party, probably does not prohibit automated driving. The treaty promotes road safety by establishing uniform rules, one of which requires every vehicle or combination thereof to have a driver who is “at all times … able to control” it. However, this requirement is likely satisfied if a human is able to intervene in the automated vehicle’s operation.
NHTSA’s regulations, which include the Federal Motor Vehicle Safety Standards to which new vehicles must be certified, do not generally prohibit or uniquely burden automated vehicles, with the possible exception of one rule regarding emergency flashers. State vehicle codes probably do not prohibit - but may complicate - automated driving. These codes assume the presence of licensed human drivers who are able to exercise human judgment, and particular rules may functionally require that presence. New York somewhat uniquely directs a driver to keep one hand on the wheel at all times. In addition, far more common rules mandating reasonable, prudent, practicable, and safe driving have uncertain application to automated vehicles and their users. Following distance requirements may also restrict the lawful operation of tightly spaced vehicle platoons. Many of these issues arise even in the three states that expressly regulate automated vehicles.
…This paper does not consider how the rules of tort could or should apply to automated vehicles - that is, the extent to which tort liability might shift upstream to companies responsible for the design, manufacture, sale, operation, or provision of data or other services to an automated vehicle. 6
…Because of the broad way in which the term and others like it are defined, an automated vehicle probably has a human “driver.” 295 Obligations imposed on that person may limit the independence with which the vehicle may lawfully operate. 296 In addition, the automated vehicle itself must meet numerous requirements, some of which may also complicate its operation. 297 Although three states have expressly established the legality of automated vehicles under certain conditions, their respective laws do not resolve many of the questions raised in this section. 298
…A brief but important aside: To varying degrees, states impose criminal or quasi-criminal liability on owners who permit others to drive their vehicles. 359 In Washington, “[b]oth a person operating a vehicle with the express or implied permission of the owner and the owner of the vehicle are responsible for any act or omission that is declared unlawful in this chapter. The primary responsibility is the owner’s.” 360 Some states permit an inference that the owner of a vehicle was its operator for certain offenses; 361 Wisconsin provides what is by far the most detailed statutory set of rebuttable presumptions. 362 Many others punish owners who knowingly permit their vehicles to be driven unlawfully. 363 Although these owners are not drivers, they are assumed to exercise some judgment or control with respect to those drivers - an instance of vicarious liability that suggests an owner of an automated vehicle might be liable for merely permitting its automated operation. 364
…On the human side, physical presence would likely continue to provide a proxy for or presumption of driving. 366 In other words, an individual who is physically positioned to provide real-time input to a motor vehicle may well be treated as its driver. This is particularly likely at levels of automation that involve human input for certain portions of a trip. In addition, an individual who starts or dispatches an automated vehicle, who initiates the automated operation of that vehicle, or who specifies certain parameters of operation probably qualifies as a driver under existing law. That individual may use some device - anything from a physical key to the click of a mouse to the sound of her voice - to activate the vehicle by herself. She may likewise deliberately request that the vehicle assume the active driving task. And she may set the vehicle’s maximum speed or level of assertiveness. This working definition is unclear in the same ways that existing law is likely to be unclear. Relevant acts might occur at any level of the primary driving task, from a decision to take a particular trip to a decision to exceed any speed limit by ten miles per hour. 367 A tactical decision like speeding is closely connected with the consequences - whether a moving violation or an injury - that may result. But treating an individual who dispatches her fully automated vehicle as the driver for the entirety of the trip could attenuate the relationship between legal responsibility and legal fault. 368 Nonetheless, strict liability of this sort is accepted within tort law 369 and present, however controversially, in US criminal law. 370
On the corporate side, a firm that designs or supplies a vehicle’s automated functionality or that provides data or other digital services might qualify as a driver under existing law. The key element, as provided in the working definition, may be the lack of a human intermediary: A human who provides some input may still seem a better fit for a human-centered vehicle code than a company with other relevant legal exposure. However, as noted above, public outrage is another element that may motivate new uses of existing laws. 377
…The mechanism by which someone other than a human would obtain a driving license is unclear. For example, some companies may possess great vision, but “a test of the applicant’s eyesight” may nonetheless be difficult. 395 And while General Motors may (or may not) 396 meet a state’s minimum age requirement, Google would not. [See Google, Google’s mission is to organize the world’s information and make it universally accessible and useful, www.google.com/intl/en/about/company/. In some states, Google might be allowed to drive itself to school. See, e.g., Nev. Rev. Stat. § 483.270; Nev. Admin. Code § 483.200.]
And people say lawyers have no sense of humor.
One thing that hasn't been mentioned is what kind of security the car's operating system has. Imagine what will happen after the first major autonomous-car virus, especially if the virus is malicious rather than merely incidentally introducing bugs. Keep in mind it's not too hard for a virus to be very malicious, since autonomous cars need to be able to recognize pedestrians in order to avoid hitting them.
About the red light, I'm not even sure if it's necessary for anyone to pay. If you think about it, fines are there to discourage human drivers from breaking the rules. But in a robotic car, running a red light is due to faulty programming or bugs. Robotic cars will try not to run red lights even if there is no fine - they will not be allowed on the road unless they already obey the rules.
If some company happens to produce robotic cars that run red lights a lot, then of course it would be necessary to, for example, place a temporary ban on those car models ...
"Exclusive: In boost to self-driving cars, U.S. tells Google computers can qualify as drivers":...
Keep in mind that these developments will not be occurring in a vacuum, but in the context of other types of autonomous drones being developed.
I don't really understand the legal problem.
Why can't the law just be: if you're behind the wheel of an autonomous car with an immediate override available, then you're exactly as liable as a normal driver?
Now, in practice, if something goes wrong you're going to be in a terrible position to stop it, because you're going to not be paying attention. But the law can just be "Well, you have to be paying attention or you're liable!" - even if that really just amounts to the fact that you're taking on different risks driving this autonomous car.
The...
How do you get from the second sentence to the first sentence?
Isn't it premature to make predictions about car use? Shouldn't you start with predictions about further legal change? (of course there is positive feedback, so they aren't completely independent)
Or maybe the legal barrier isn't the first one to look at. When you predict that even niches that can ignore public road law will no...
Are you confident in your short-term pessimism, gwern? It seems like there are many ways for the technology to potentially quickly gain acceptance. Mainly, it's extremely convenient that states are able to regulate driving, as that provides 50 avenues for starting rolling out cars (in addition to the 190 other international opportunities). Once one place adopts, my mental model of how things will probably happen says that many will follow within a year, and most (70%) will follow within seven years (and probably within three years if the early adopter has ...
The world will see autonomous passenger trains and autonomous commercial planes before it has to get used to autonomous cars.
And once autonomous cars routinely win races against human drivers, the laws will change quickly enough.
New paper: https://www.enotrans.org/wp-content/uploads/wpsc/downloadables/AV-paper.pdf
"Of Frightened Horses and Autonomous Vehicles: Tort Law and its Assimilation of Innovations", Graham 2012 http://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=1170&context=facpubs
http://www.volokh.com/2013/05/05/self-driving-vehicles-how-soon-and-who-will-bear-the-liability-costs/ with a potential pointer to more detailed legal work:
A legal parallel illustrating my concerns about the burden of insurance: the decidedly non-robotic ride-sharing sector. The Economist, "All eyes on the sharing economy - Collaborative consumption: Technology makes it easier for people to rent items to each other. But as it grows, the “sharing economy” is hitting roadblocks":...
If I were Google, I would start by going to small nations like Singapore. Get the necessary laws passed to operate the technology in Singapore; Singapore has no problem making laws that settle issues like that simply.
Afterwards, have your lobbyists go to other countries and propose that they adopt the same laws.
It'll be interesting to see how lawsuits over medical robots (protected by the FDA) turn out: http://climateerinvest.blogspot.se/2013/01/first-they-came-for-robot-surgeons.html
Hello, I'm looking for the comment section and got lost, is this it?
The legal status quo is secondary to public perception, which - other than some technophile aficionados - is quite reserved. There's too much male identity attached to driving: not only are cars used to show off status, but so is the driving style you use them with. As is often the case, people confuse an "autonomous cars are not for me" with "autonomous cars - what nonsense, should not be allowed!", in part because they feel threatened their identity-generating toy coul...