Societal agreements in traffic (fatalities, autonomous cars, connected cars, etc.)

Continuing the discussion from Getting, and using, a Fairphone 2 as a non European.

Well, road accidents are societally accepted; otherwise a society would ban road traffic, or impose speed limits at walking pace.

What societies have not yet thoroughly discussed are autonomous cars. Do we want autonomous cars? There should be open expert discussions about this topic rather than decisions driven by profit maximisation.

Also, I’d probably have a bad conscience if a car that I programmed had killed someone.


I absolutely agree; I just have the impression that it’s not a continuation of the discussion but a totally different topic.
Tesla is not primarily about autonomous driving. So far they have taken the driving assistance systems offered by almost all car manufacturers - sold as safety enhancements - just one small step further.
BMW, for example, calls it “Driving Assistant Plus” (in the brochure for the 5 series).

I haven’t seen any discussion of that kind of assistance systems, like distance, speed and lane-departure control.

And when it comes to accidents, it remains to be seen whether autonomous or human-driven cars cause more accidents. I have the feeling the human behind the wheel might not be superior in driving performance to the one programming a car.
With planes, trains and ships, autonomous driving/flying is quite common, as far as I am aware.
It always depends on the situation, of course. As no plane so far takes off or lands on autopilot alone, autonomous cars will surely have limits in crowded areas, cities etc., at least for the time being.

It’s not that I am a geek really into connected or autonomous driving. It’s just that it seems to be the way we are headed, and that humans have really proved their limits for decades when it comes to driving cars. :wink:

I am not really a car guy either, so my perception of Tesla etc. might be totally wrong and I happily will stand corrected.


I’m a geek, totally.
But I’m not quite sure what to think of autonomous driving. On the one hand, I’m pretty much convinced that accidents are less likely to happen. On the other hand, there is a trend to see only this advantage and to ignore what autonomous cars could be used for, for the worse. The horror scenario is someone hacking your car - which seems not too far off, because autonomous cars are designed to communicate a lot with the outside. Feeding them false information seems a pretty easy way to make them behave inappropriately. And, of course, the manufacturers are not very eager to open the sources of their programs, which in my opinion makes hacking easier (there’s not much effort from honest people to detect and close exploits).

From a technical perspective I’d love to try and experiment with an autonomous car - and its source code. And I’m also kind of a lazy driver - whenever I don’t need to drive I don’t. But I’m probably too afraid of what an autonomous car can do without my knowledge and consent to ever ride one.

I have to admit, though, that there are far too many accidents on the road. We’re just so accustomed to it that we don’t think about it too much - “accidents happen” and “to err is human”. In this sense I think it’s good to have whole companies standing out and pointing to that problem. (That’s also why I’ve been a Fairphoner since the early beginnings :smile:)


I fully agree.

As far as I know, it does not even take autonomous cars for that, as all those connected drives etc. in combination with the assistance systems are vulnerable enough… And it has been demonstrated with a Jeep already ;):

And another video: “Hacking cars & traffic lights at Def Con - BBC Click”:


That’s the default text when you start a new topic from an existing post: :see_no_evil:

@BertG Thanks for posting those videos, I didn’t know the second one! :slight_smile: :+1:



See, you totally got the point. They are moving in the direction of autonomous driving, but so far they have only built in an advanced driving/steering assistant. Most other people simply miss or misunderstand this tiny but essential difference. And that is all Tesla is actually promoting: an assistance system. The driving assistant reminds the driver to stay alert at all times, ready to take over in case of an (unforeseen) situation it may not be able to handle.

A few Tesla drivers in the past either misunderstood the system or missed or even ignored the specific warning shown in the display every time you activate the steering assistant. There is also a clear acoustic notification. On my test ride in a Model S P90 it did not take over on the first try, but the second attempt was successful. Anyway, I always kept my hands close to the steering wheel and of course watched what the car was actually doing. For a few other drivers it is too late to learn this lesson.

One could just as well argue that mobile (smart)phones are dangerous, as many accidents are caused by texting or other operation without proper accessories in the car, or even as a pedestrian. Learning from history, I tend to believe there are “highly educated” individuals out there who will get the idea to sue phone or car manufacturers for their personal damages in such cases.

I believe there aren’t many things out there which are idiot-proof. Even hot coffee to go and microwaves turned out to be quite challenging to use safely.


Sorry, I wasn’t aware of that.
My bad.

Agreed 100%. (I just did not happen to have a Tesla test drive so far. :wink: )

PS (edit): The possible dangers of hacking the device are more severe with cars than with smartphones, but less severe than with nuclear power plants.

AFAIK, wherever autonomous cars are allowed to drive, it is as part of a pilot programme.

Waag Society (Fairphone’s incubator) also covers the topic, with many links. I actually volunteered for one of their demos during a demo day, the autonomous car one. We covered decision making, including ethical decisions, and let visitors race a manually driven toy car against an autonomous one (the latter used a light sensor to drive straight).

The series The Good Place also covers the subject in one of the episodes of season 2, where the choice has to be made between killing five strangers or one friend. The example involves a railroad switch where a tram driver has to pick one track or the other.

Another ethical question to consider: who is responsible? The driver? The writer of the AI? The vendor? The manufacturer of the sensors? The government that allowed this technology in public space? Is just one of these 100% responsible? And how does this work with cruise control?

I do get that people are worried. Look at the computer industry: companies leak private data due to security breaches, yet they are not held accountable. We need to establish a debate, and ultimately a consensus, on the accountability question before we even start with the pilots; not after.

So I went to look into this because I felt like the skeptical African kid meme, and I found the following:

“In 1947 a US Air Force C-54 made a transatlantic flight, including takeoff and landing, completely under the control of an autopilot.” (Source)

For more details about current autolanding implementations, see also:

And at:

Check CAT IIIa (b and c include a): “This category permits pilots to land with a decision height as low as 50 ft (15 m) and a RVR of 200 m. It needs a fail-passive autopilot. There must be only a 10⁻⁶ probability of landing outside the prescribed area.”

Which explains why heavy fog hampers autopilots. However, in such severe conditions it is going to be very difficult for a human pilot as well.
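The quoted minima can be turned into a tiny sanity check. This is just an illustrative sketch: the two threshold constants are the figures quoted above (50 ft decision height, 200 m RVR), while the function name and structure are invented for the example; real operational minima also depend on aircraft, crew and airport certification.

```python
# Thresholds taken from the quoted CAT IIIa description; everything else
# in this snippet is made up for illustration.
CAT_IIIA_MIN_DECISION_HEIGHT_FT = 50  # quoted: "as low as 50 ft"
CAT_IIIA_MIN_RVR_M = 200              # quoted: "RVR of 200 m"

def within_cat_iiia_minima(decision_height_ft: float, rvr_m: float) -> bool:
    """True if the planned approach stays at or above the quoted CAT IIIa floor."""
    return (decision_height_ft >= CAT_IIIA_MIN_DECISION_HEIGHT_FT
            and rvr_m >= CAT_IIIA_MIN_RVR_M)

print(within_cat_iiia_minima(50, 200))   # exactly at the floor -> True
print(within_cat_iiia_minima(30, 150))   # fog too thick for CAT IIIa -> False
```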

That’s all about landing. Take-off is a different story. See this discussion on Stackexchange:

Do not forget about UAVs. Some are remote-controlled, but some can likely fly autonomously after initialisation.


This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.