Respond in no more than 200 words, within 3 hours.

The central human factors and ethical issue with autonomous cars is who is at fault when an autonomous car kills someone: the owner (who may not be driving) or the manufacturer? Here is a case commonly used in this discussion. It is likely to be rare, but it is nonetheless a possibility. Imagine a driverless car with the owner in the front seat, not driving. The car's sensors detect a possible hazard ahead, and it calculates the probabilities of different outcomes. One option is to crash head-on, killing the passenger (the owner), but only the passenger. The other option is to swerve and avoid the hazard directly ahead; this means skidding off the road, which will kill five pedestrians standing at the side of the road but will save the passenger.

What should the car be programmed to do? Should it be programmed to save the greatest number of lives, or to save the passenger regardless of the numbers? If it is programmed to kill the fewest people, would you buy a car that, in this situation, is essentially programmed to kill you? If it is programmed to kill as many people as needed to save the passenger, could the manufacturer be exposed to a manslaughter lawsuit from the families of the people at the side of the road?
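To make the contrast between the two programming choices concrete, here is a minimal, purely hypothetical Python sketch. Everything in it (the Outcome record, the minimize_deaths and protect_passenger functions, and the fatality numbers) is invented for illustration and does not describe any real vehicle's software; it only shows how the two policies diverge on this exact scenario.

```python
# Hypothetical sketch only: contrasting the two candidate policies from the scenario.
# All names and numbers are invented for illustration, not taken from any real system.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str              # e.g. "stay on course" or "swerve off road"
    expected_deaths: int     # predicted fatalities if this action is taken
    passenger_survives: bool # whether the owner/passenger survives this action

def minimize_deaths(outcomes):
    # Utilitarian policy: choose the action with the fewest predicted fatalities,
    # even when that action kills the passenger.
    return min(outcomes, key=lambda o: o.expected_deaths)

def protect_passenger(outcomes):
    # Passenger-first policy: among actions where the passenger survives,
    # choose the one with the fewest fatalities; fall back to minimizing
    # deaths overall only if no action saves the passenger.
    survivable = [o for o in outcomes if o.passenger_survives]
    return min(survivable or outcomes, key=lambda o: o.expected_deaths)

scenario = [
    Outcome("stay on course", expected_deaths=1, passenger_survives=False),  # head-on crash
    Outcome("swerve off road", expected_deaths=5, passenger_survives=True),  # hits 5 pedestrians
]

print(minimize_deaths(scenario).action)    # "stay on course" -- the car kills its own passenger
print(protect_passenger(scenario).action)  # "swerve off road" -- the car kills 5 pedestrians
```

The single line that selects the policy is, in effect, where the ethical and legal responsibility discussed above gets encoded.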