The world might not see true artificial intelligence for quite some time, but that doesn’t mean developers aren’t taking baby steps in the right direction. A current example is Google and its automated vehicles. If Google succeeds in building a self-driving car that is fully aware of its surroundings, we might see these vehicles on the roads as early as 2020.
The National Highway Traffic Safety Administration has declared that the software piloting Google’s self-driving vehicles can be considered a driver. As reported by ZDNet, Google sought to clarify what it would take to make its driverless cars highway-safe. For Google’s cars to be deemed compliant with the Federal Motor Vehicle Safety Standards, all the company had to do was change the position of the brake pedal and sensors, after which the vehicles were declared “safe enough.”
[Video: how one of Google’s automated cars views its surroundings]
While this might seem of little consequence, it’s actually a huge step forward for the development of artificial intelligence. Of course, a step this big brings complications; in this case, they come in the form of accident liability.
If a driverless vehicle were involved in an accident, who would be to blame? You can’t sue a vehicle, though you could go after the manufacturer and claim it produced a faulty product. Unfortunately, the manufacturer could simply blame the owner for failing to set the vehicle up properly. And would insurance companies need to offer new coverage to accommodate the presence of automated vehicles on the roads?
These questions aren’t easily answered, so liability will likely remain a major issue for autonomous vehicles. Since a driverless car has no one behind the wheel, it’s difficult to pin the blame on any one party. As the feds put it in their letter to Google, “If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving.” So, in the case of an autonomous vehicle, determining who (or what) is at fault is challenging at best.
Another critical issue is how well Google’s autonomous cars fit into the current Federal Motor Vehicle Safety Standards. Will regulations written around human anatomy have to be changed to accommodate driverless vehicles? As reported by WIRED:
The rule regarding the car’s braking system, for example, says it “shall be activated by means of a foot control.” The rules around headlights and turn signals refer to hands. NHTSA can easily change how it interprets those rules, but there’s no reasonable way to define Google’s software—capable as it is—as having body parts. All of which means, the feds “would need to commence a rulemaking to consider how FMVSS No. 135 [the rule governing braking] might be amended in response to ‘changed circumstances,’” the letter says. Getting an exemption to one of these rules is a long and difficult process, Walker Smith says. But “the regular rulemaking process is even more onerous.”
Even if liability remains a problem for autonomous cars, the fact that authorities can refer to a computer as a “driver” means software can now fill a role once reserved for humans, at least behind the wheel of a car. Developers of artificial intelligence will likely face similar regulatory hurdles, but this ruling gives them hope that their efforts will not be in vain. Though Google has slated its automated cars to be available to the public by 2020, we might have to wait a little longer, even for this most basic form of artificial intelligence.
What are your thoughts on AI and autonomous cars? Would you feel comfortable sharing the road with them, let alone riding in one? Let us know in the comments.