Google’s Driver-less Car and Morality : The New Yorker

Google’s driver-less cars are already street-legal in three states (California, Florida, and Nevada), and someday similar devices may be not just possible but mandatory. Eventually (though not yet) automated vehicles will be able to drive better, and more safely, than you can: no drinking, no distraction, better reflexes, and better awareness (via networking) of other vehicles. Within two or three decades the difference between automated driving and human driving will be so great that you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of your hurting yourself or another person will be far greater than if you allowed a machine to do the work.

That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

via Google’s Driver-less Car and Morality : The New Yorker.
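To make the dilemma concrete as a programming problem, here is a minimal sketch of how a planner might frame the bridge scenario as an expected-harm comparison. This is purely illustrative, not anything Google has described; all of the names and probabilities are hypothetical.

    # Hypothetical sketch: pick the maneuver with the lowest expected harm.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        p_harm: float        # estimated probability this maneuver causes harm
        people_at_risk: int  # how many people that harm would affect

    def least_harm(options: list[Maneuver]) -> Maneuver:
        # Expected number of people harmed = probability * people at risk.
        return min(options, key=lambda m: m.p_harm * m.people_at_risk)

    options = [
        Maneuver("swerve", p_harm=0.30, people_at_risk=1),       # risks the owner
        Maneuver("keep going", p_harm=0.10, people_at_risk=40),  # risks the bus
    ]
    print(least_harm(options).name)  # -> "swerve" under these made-up numbers

Even this toy version shows where the hard part lives: the ethics are not in the arithmetic but in the harm estimates and in the choice to weigh all lives equally, and a real system would have to commit to those choices in advance.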

14 thoughts on “Google’s Driver-less Car and Morality : The New Yorker”

  1. Throughout reading this article, I thought more and more about movies where robots play a significant part, and it freaked me out. First, I thought of Wall-E: cars that are able to drive themselves, relieving humans of more responsibility. I couldn’t help but think of the humans in that movie having zero responsibility and basically being controlled by technology. Then the article moved on to talk about putting ethics in robots: save humans, then save yourself. In Will Smith’s movie I, Robot, a robot that starts developing feelings creates havoc for humans. If the computer can decide between saving itself or the human, what if it develops the “thought” of saving itself rather than the human? Finally, and perhaps what scared me most, was the movie Terminator. What if these machines and robots are able to think for themselves and start a war against mankind? After this thought, I suggest leaving ethics out of the robots we program.

  2. This article brings up a good point about morals in technology, but I feel that the overall question is difficult to answer with some of these systems being so far away. If you look at the issue of morals in systems when they are actually implemented, rather than right now, it would seem that the level of ethics put into a system would vary depending on the system’s goal. If the self-driving cars all drive with the protection of the driver in mind, and they can communicate with other vehicles, the odds of fatalities in any accident would decrease. This system wouldn’t need as high a level of ethics as, say, a military robot tasked with going into battle. That robot must decide who lives and who doesn’t, and what the right course of action is for the success of its mission. This would require more of a thinking system than a car that just drives itself. Overall, I don’t see fully automated robots being used in wars, but rather in everyday activities.

  3. This article brings up the fact that we are constantly using technology to solve problems and correct human error. But are we right about it this time? I think it would be great to have a transportation system that has zero errors and is absolutely perfect, but is this possible even if all vehicles were automated? The last part of the excerpt, about the bus full of children, provides a great discussion point. What would happen in this situation, and how would the computer decide what to do? Is it pre-programmed, can the setting be changed, and so on? If lives were lost, who would be at fault: the computer itself, the manufacturer, the programmer? There is a lot to consider with this article and with the ethical issues surrounding this situation. How would we react if the computer made the error versus a human making the same error? We could go so far as to question how the computer could even make such an error if all vehicles were automated and working together to reduce situations like these. Another thing to consider is whether people would feel comfortable with their lives being in the hands of a computer. Granted, cars, trains, planes, and so on all have some sort of programming in them that could endanger the lives of those aboard, but the human driver is still in control. Many issues could arise from the implementation of such technology, and there is no doubt that ethics will play a huge role.

  4. I think this article focuses too much on the ethics of robots. If we keep this simple, then we can just focus on the benefits of Google’s driver-less car, not the “what-ifs”. Google’s driver-less cars have been involved in only two accidents, both caused by human error. Driver-less cars can prevent unnecessary injuries from, for example, drunk driving. Say you go to a bar with friends and are too impaired to drive: instead of risking your life and others on the road, or paying for a cab, the car can safely take you home. It is still important to have the discussion of ethics regarding robots, but at this very early stage it should not inhibit the evolution of robotics. Google’s driver-less cars are something of the future; imagine no more speeding tickets or red-light cameras, because you cannot speed or run a red light. Computer-controlled cars are a way off, and entrusting sensors with your life is a big risk, but I believe the benefits certainly outweigh any negative issues that could (however unlikely) arise.

  5. It is difficult to imagine the autonomous vehicle replacing human drivers completely. While it is interesting to consider the ethical implications of continuing to drive in a driverless society, what are the implications of an automated driving machine? Who would be responsible for damages and loss of human life if an autonomous vehicle kills its passengers or those of another car? Obviously there would have to be an extremely high rate of success for autonomous vehicles to even be let on the roads, but nothing can be perfect. Who will accept responsibility for a machine’s actions? The engineers? The automaker? The owner? Fully automated vehicles pose a great amount of danger, as well as a great potential for ethical arguments.

  6. I was thinking about the ethical questions the car would be forced to answer, such as the one about swerving to save the driver or to save the children. If a car is forced to make this decision, and someone is injured or killed, who would be to blame? For people in accidents, or those who know someone in an accident, it usually brings some closure to know whose fault the accident was and to have someone to blame for their loss or injury. Would the responsibility fall on the car manufacturer, who would end up paying for the damage done in the accident? And if so, what if the person in the car claims they would have reacted differently? Then how can you even begin to determine how much should be paid for the accident?

  7. The notion of driver-less cars has been around for a long time, illustrated in movies, science-fiction novels, and so on. However, now that the idea is becoming a reality, issues that could be ignored in fiction are becoming unavoidable problems. Besides the above comments on the moral issues of the car answering ethical questions, or its potential for errors, the thing that stands out for me in this article is the ethical dilemma that arises from the idea that these cars could become mandatory. If we were to make driver-less cars public, I believe they would indeed have to be mandatory, because once you have driver-less cars on the road, every other vehicle would have to be one too for the system to be successful; otherwise the networking and automated responses that prevent accidents wouldn’t work. But this brings up the issue of how we would make it possible for everyone to obtain a driver-less car. Will the government, trillions of dollars in debt and growing, fund a program to replace the millions of cars people already own, or will people be required to somehow purchase what I imagine will be a highly expensive vehicle? That raises even more ethical issues, such as the potential for discrimination against low-income families. In addition, what about the thousands of people who work at car companies like Toyota and Honda who could lose their jobs because technology companies like Google are taking over the industry? Or will those companies, to stay in business, eventually have to stop producing normal cars and learn how to make driver-less ones? If the latter is the case, how will car companies get the money to fund this change in production? And as more companies than just Google produce driver-less cars, won’t this open up a higher risk of flaws due to differences in manufacturing and safety practices? I personally think that if we can’t even formulate a realistic method to implement driver-less cars in widespread society, we shouldn’t toss around notions of them someday becoming the future when, as of now, there is no room for them in it.

  8. The self-driving Google car has piqued my interest for a long time now. I feel that while it does raise some ethical issues, most can be avoided by simply adding a manual-override system to all automated cars (a rough sketch of the idea follows the comments below), so that if the computer has to make the you-versus-forty-children call, you can take control and make the ethical decision yourself. As TV and movies have so elegantly taught me, the moment we start teaching ethics to robots, they will turn around and terminate us. Then become the governor of California.

    Also, on a related note, one of the Google cars has already been involved in an accident, if anyone is interested:

    http://jalopnik.com/5828101/this-is-googles-first-self+driving-car-crash

  9. I think the idea of a driver-less car is very cool and very intriguing. Like other people have said, though, I think we are still some years away from having the technology and the systems to successfully implement it. Once it has been successfully developed and implemented, I think we will have no choice but to transition to this autopilot-like system, as it could prevent so many of the problems that exist in driving today. It would be very cool to see or ride in one of the Google cars. I would mention, though, that the article felt slightly unfocused, starting with the driver-less cars and ending up mostly about the ethics of robots; I would have liked more information on the self-driving Google car itself. Interesting read, though.

  10. This article made me realize something I had never considered before: a driverless car having to make ethical decisions. When the idea first came up, I had ignorantly assumed that everything would work together and there would basically be no accidents; I had never really put much thought into the topic. The thought of these machines having to make ethical decisions is very unnerving, because we, as humans, are still trying to figure out what is “ethical” in certain situations. Handing this issue over to a robot or machine just does not seem wise. While there might be major benefits to implementing automatic machines such as this, there are still disadvantages and kinks that need to be worked out. Another issue at stake is whose fault an accident would be if two machines are involved. We can normally talk through an accident with the other person involved, but robots may not have the ability to do that.

  11. I have been keeping up very closely with updates and news on Google’s attempt to create robot-driven cars. Like other readers, I had never even considered the possibility that this could raise ethical issues in certain situations. It reminded me of the movie I, Robot, where a robot chooses to rescue Will Smith instead of the girl in the other car. I do believe this could be an issue, but part of me wants to think it may not even come up. With advanced computers in complete control of driving, would they ever even run into an ethical situation like this? Ideally, the situation would be avoided entirely before a human even noticed it. However, if such a situation does arise, I would argue that the robot would most likely make the more logical decision. Although that may not be ethically correct once in a while, the safety these cars create would be very much worth it.

  12. I actually heard about this somewhere else, I think on the radio, and it kind of pissed me off. Yes, this could help save lives, but I’m not a bad driver. I have never been in an accident, and I actually really enjoy driving. This helps with the accident problems caused by bad drivers, but it won’t help with deer or freak accidents. Wouldn’t an easier solution be to make getting a driver’s license harder? I mean, are sixteen-year-olds really responsible enough to drive a car and not kill anyone? I would say no, not really. I feel like we are copping out of a problem by making these cars instead of dealing with the problem behind the wheel. While I appreciate technology in all its forms, I don’t think we should rely on it solely to solve all of our problems. Why can’t we better ourselves to fix the problems? This article just made me think that we are ignoring the underlying problems and using technology as a way to address them indirectly.

  13. I think we need to put a lot more money and research into considering the ethics of robots, and maybe distance ourselves a little further from Asimov’s rules, which seem akin to slavery to me. If machines can be fully autonomous, self-aware, and moral, they won’t just be robots anymore; they’ll be beings. Different from ourselves, of course, but once they have true sentience the game will definitely change.

    I am definitely a proponent of the self-driving Google car taking over our driving responsibilities. There will be ethical dilemmas, yes, but humans face them every day and don’t always act as one would hope. The overall decline in traffic accidents would make the smaller number of “ethical dilemma” accidents bearable.

  14. I for one welcome our new Google overlords. I think driver-less cars are a really good idea. Sure, computers make mistakes, but the number of lives saved just from there being no drunk drivers on the road is worth it alone. Also, I don’t know about anyone else, but ever since I got my license, I have literally been afraid of my parents’ driving. From the backseat it was all fine and dandy, but once you see all the little mistakes compounding as they age, it’s unsettling. I already feel that people above the age of sixty should have to retake their driving test every year. Plus, think of all the traffic problems we wouldn’t have.
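On the manual-override idea raised in comment 8: here is a toy sketch of a control loop that offers the wheel back to the human when the planner detects an unavoidable-harm dilemma. All names and thresholds are hypothetical; no real vehicle exposes an interface like this.

    # Hypothetical sketch of a manual-override handoff; not any real car's API.
    import time

    OVERRIDE_WINDOW_S = 2.0  # how long the human has to take the wheel

    def driver_responded() -> bool:
        # Stand-in for checking steering or brake input; always False here.
        return False

    def control_step(planner_verdict: str) -> str:
        # Returns who controls the car this step: "computer" or "human".
        if planner_verdict != "unavoidable_harm":
            return "computer"
        deadline = time.monotonic() + OVERRIDE_WINDOW_S
        while time.monotonic() < deadline:
            if driver_responded():
                return "human"    # the human takes the ethical call
            time.sleep(0.05)
        return "computer"         # no response in time: the computer decides

    print(control_step("clear_road"))  # -> computer

The catch, as the article itself notes, is timing: if the call has to be made in milliseconds, even a two-second handoff window is too slow, so an override only helps when the dilemma is visible well in advance.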
