Self-Driving Cars And The Hindenburg Phenomenon

In Tempe, Arizona, a homeless woman pushing a bicycle laden with plastic shopping bags walked from a center median into a lane of traffic. She was immediately struck by a self-driving Uber car operating in autonomous mode.

The car was traveling 38 mph in a 35 mph zone, and never braked. Police say the tragedy wasn’t the car’s fault, but it doesn’t matter. Uber has suspended use of its self-driving cars, and history tells us that the vehicles may be on a road to oblivion due to an unavoidable collapse of public trust.

I’ve been expecting this. To be precise, I’ve been expecting the first fatality inside a self-driving car, and that will happen soon enough. When it does, I think it is a close call whether self-driving cars ever recover, especially if the fatal accident is especially gory, or involves children.

All it took, remember, to end airship travel forever was one spectacular accident, when the Hindenburg burst into flames and was captured in photographs and newsreels. Before that, airships had a good safety record. Another vivid example was the 1933 Dymaxion, a streamlined car on three wheels created by the visionary Buckminster Fuller. All three wheels turned, giving the Dymaxion the ability to pull into parking spaces in one move. But the design was unstable. Three were built, hailed by investors, the media and celebrities as a breakthrough, and then one crashed, killing the driver. And that was the end of the Dymaxion. It sure was cool, though…

I suspect that self-driving cars will be the Dymaxion all over again, worse, in fact, because of the 24-hour news cycle. The vast, vast majority of Americans have never used a self-driving car, and are naturally suspicious and dubious of them, as we are when anything is new. If the first time they notice the technology is when it kills someone, it won’t matter that the accident was an anomaly, or not the technology’s fault, or that the new cars have great potential and the bugs will be worked out. We already have cars that work just fine, thanks.

This is risk-reward thinking, and in this case, I don’t think it’s necessarily irresponsible or irrational. When something we view as essential kills people, we accept the risk. When something we see as a luxury, or something we have no use for personally, is involved in a tragedy, it is easy to say, “The hell with that!” This is why it is so easy for the response to a mass shooting to turn into “Who needs guns?” With guns, however, there is a large portion of the public that doesn’t think of guns as strange, that uses and trusts them, and that does not see any acceptable substitute. (There is also no constitutional right to self-driving cars.) There was no such segment of the public to fight for zeppelins, or silver, zeppelin-shaped three-wheeled cars, or, I fear, self-driving cars.

 


Filed under Business & Commercial, History, Science & Technology, U.S. Society

49 responses to “Self-Driving Cars And The Hindenburg Phenomenon”

  1. charlesgreen

    I think this is a very right framing of the issue. However, I do believe there’s a much more substantial economic constituency for self-driving cars – the trucking industry, the car rental and driver industries, and a lot of folks who want them too.

    I’d bet this one ends up in the “acceptable risk” category.

    • The thing is, it depends on chance. We’ll see. I tend to think that resistance will make it financially unpalatable to investors.

    • Alexander Cheezem

      There’s also a comparative safety issue: It doesn’t really matter whether self-driving cars are safe in the absolute sense — what matters is whether they’re safer than human-driven ones.

      • Phlinn

        I’m afraid that’s wishful thinking on your part. Bastiat wrote “That Which Is Seen, and That Which Is Not Seen” in 1850, and people still fail to properly account for things which don’t happen but would have. They will see the deaths caused and not acknowledge any hypothetical deaths prevented. In some cases, their prior beliefs will prevent them from even considering the possibility that it’s causing a net decline. My belief in the general public’s ability to evaluate risk or compare costs to benefits is low.

        You’re right about what should matter.
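The seen-versus-unseen point above lends itself to a quick back-of-the-envelope calculation. The sketch below uses a human fatality rate approximated from commonly cited U.S. figures; the autonomous rate and the "twice as safe" premise are outright assumptions for illustration, not data:

```python
# Back-of-the-envelope comparison of fatality rates.
# The human-driver figure is roughly the oft-cited U.S. rate per
# 100 million vehicle-miles; the autonomous figure is hypothetical.

human_fatalities_per_100m_miles = 1.16   # approximate U.S. figure
av_fatalities_per_100m_miles = 0.50      # assumed, for illustration only

miles_driven = 3.2e12  # roughly annual U.S. vehicle-miles traveled

human_deaths = human_fatalities_per_100m_miles * miles_driven / 1e8
av_deaths = av_fatalities_per_100m_miles * miles_driven / 1e8

print(f"Expected deaths, human drivers: {human_deaths:,.0f}")
print(f"Expected deaths, autonomous:    {av_deaths:,.0f}")
print(f"Hypothetical lives saved:       {human_deaths - av_deaths:,.0f}")
```

The tens of thousands of "unseen" deaths avoided would never make the news; the handful of "seen" deaths caused would. That asymmetry is exactly Bastiat's point.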

      • Agreed. Can self-driving cars compensate or account for the driving habits and mistakes of human-driven cars?

        jvb

    • Junkmailfolder

      No question. I always drive on long family trips, and the idea of sitting comfortably and reading, or sleeping, or whatever, instead of focusing on driving for 10+ hours straight…

    • joed68

      I agree. There’s a great deal of momentum behind this, especially since it promises to both alleviate traffic and make automobile travel safer by several orders of magnitude.

      • I’ll believe it when I see it. The problems seem too deep to me. Wait till the first rider is killed by a hacker… and a lot of people have monetary reasons to see Uber et al. fail.

        • joed68

          Now THAT would be bad!

          • dragin_dragon

            But here’s the real problem, Joe. The technology has not yet been perfected. The idea of self-driving cars has been around since the ’40s. Ford designed a concept car that responded to cables buried under the road. Worked like a charm. It was even suggested that Eisenhower’s Interstate Highway web might contain those cables. May be that they do (tongue-in-cheek). Science fiction writers have talked about them since the ’20s. So right now, we are using radar and, possibly, lidar, to run the things, and there are some things neither will pick up. This is an excellent example of putting something on the market whose “time has not yet come”. If Uber will stop being so greedy and back off for a bit, I’m virtually certain self-drivers will eventually work out. If they do not, who knows… sci-fi fans will always love ’em though.

      • It promises to alleviate traffic, but you have to wonder. People could decide to use their cars more often or over longer distances if relieved of most of the stress of driving.

  2. Eventually, this will become routine, with self-driving cars being the mode of transportation. The idea will probably relieve congestion, be cheaper with ride-sharing, and be acceptable to the masses.

    I can just picture Will Smith in “I, Robot” and his tunnel driving. I own a Corvette as a “toy,” so my sympathies lie with personal control (and irresponsibility).

    Imagine “driving” and getting a call from “Heather” at Card Holders Services. “I’ll drive you into a bridge abutment unless you (fill in the blanks).” My Corvette and my control over machine will be an anachronism in a few generations.

  3. Alexander Cheezem

    Okay, this is a blatant case of a lie by omission:

    “… But the design was unstable. Three were built, hailed by investors, the media and celebrities as a break-through, and then one crashed, killing the driver. And that was the end of the Dymaxion. It sure was cool, though…”

    What you’re neglecting to mention here is that it was a two-car accident. A Chicago South Park commissioner who wanted a closer look at the prototype got, shall we say, a bit too close — in the sense that he hit it with enough force to roll/flip it over and kill the driver.

    The politician’s car was quickly — and illegally — removed from the scene, and the press made no mention of it (a lie, by any journalistic ethics standard) in the immediate coverage.

    The subsequent inquest found that the Dymaxion’s design was not a factor in the accident.

    Or, in other words, the Dymaxion car was a scapegoat rather than a straightforward example of the phenomenon you described. This doesn’t affect your central thesis that much, mind, but a stronger argument would have at least taken the above into account.

    • Wait, who was lying? I never heard that part of the story, and I didn’t run across it in any research. And that part of the event bolsters my argument regarding the Uber car. It doesn’t matter if the car was at fault: it’s new and someone died.

      • Alexander Cheezem

        I didn’t claim that you were lying, but rather that the statement was a lie of omission. It ultimately doesn’t matter whether you were lying or simply repeating a lie that you heard: it’s still a lie.

        The matter is also well-known enough to be in the Dymaxion car’s Wikipedia article (with four or five citations, as I recall), so it took a trivial amount of research to find.

        As for it bolstering your argument, it could, yes, and I acknowledged that — hence me referring to an argument which took that into account as “stronger”.

        • I don’t want to bicker about this, but it’s not a lie of omission. The Times story is about the fact of the car and that it crashed and failed. How and why it crashed doesn’t add anything to the basic story. There was no intent to deceive, certainly not by me. Nor was the information germane to the post, which was about fatal accidents killing technology in transportation. There was a car, it was new, it crashed, and that was the ballgame.

          I try not to use Wikipedia unless I have time to check its sources—I’ve found too many errors.

          “Or, in other words, the Dymaxion car was a scapegoat rather than a straightforward example of the phenomenon you described. This doesn’t effect your central thesis that much, mind, but a stronger argument would have at least taken the above into account” has numerous problems. Since the public wasn’t aware that it was a scapegoat, it IS a straightforward example, since public perception, not liability, is the topic. Nor would going into the minutiae of the Fuller car’s demise have added to the parallel I was making. It wouldn’t make the argument “stronger”, since the core fact isn’t changed. It doesn’t undermine the post, and is consistent with it. It can’t strengthen the argument, because the argument couldn’t be stated any more directly.

  4. Arthur in Maine

    The Dymaxion was competing against a reasonably proven technology and didn’t represent a remarkable improvement over it. Remember, the first fatal automobile accidents likely created similar concerns, but the car proved a success anyway, because it was a significant improvement over the horse and buggy in so many ways.

    It has long been noted that the single most dangerous component in an automobile is the nut behind the wheel. This is an emerging technology, and problems and accidents are going to occur. Long-term, however, these vehicles are highly likely to prove safer and much more efficient with regard to traffic and speed of travel than what we have today.

    Yes, there will be short term caution, and there should be. But I think your obituary for autonomous vehicles isn’t just premature – it’s an obituary that will never be written.

    • Now, now, it is just a speculative obituary.

    • Rich in CT

      I would tend to agree with the relative-improvement concept.

      The Hindenburg had several things going against it:
      * It was a terrible design (Oily canvas and Hydrogen)
      * Nazis
      * Airplanes were becoming reliable and were significantly faster

      Airplane crashes are arguably more terrible, and have been caught on film repeatedly, but airplanes still offer significant benefits over both airships and ships at sea.

      With self-driving cars, it is really the same idea. People already accept self-driving trains, so the mere autonomy is not terrifying (granted, trains can have limited places to go).

      Early deaths should slow progress. There are only a handful of self-driving cars; deaths per total vehicles will be a very high statistic for some time, and the technology should NOT be expanded until the reliability is very high. I am not convinced the technology is stable yet for everyday use, and mass deployment would be premature.
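The small-fleet point is worth making concrete: with only a handful of vehicles on the road, a single fatality produces a per-vehicle death rate that looks catastrophic next to the national fleet's. A quick sketch, with the test-fleet numbers hypothetical and the national figures only rough:

```python
# With a tiny fleet, a single fatality produces an alarming-looking rate.
# Test-fleet numbers are hypothetical; national figures are rough.

av_fleet = 200            # hypothetical number of test vehicles on the road
av_deaths = 1             # one fatality

us_fleet = 270_000_000    # rough count of registered U.S. vehicles
us_deaths = 37_000        # roughly annual U.S. road deaths

av_rate = av_deaths / av_fleet
us_rate = us_deaths / us_fleet

print(f"Deaths per vehicle, AV test fleet: {av_rate:.4f}")   # 0.0050
print(f"Deaths per vehicle, overall fleet: {us_rate:.6f}")   # 0.000137
print(f"Naive ratio: {av_rate / us_rate:.0f}x")              # 36x
```

A headline writer sees "36 times deadlier"; a statistician sees a sample far too small to support any rate at all. Both framings will shape public perception.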

      I think it is inevitable that computer technology will make autonomous vehicles a viable technology. Any bias against the technology from early failures would not necessarily be permanent. The technology will certainly serve certain niche markets such as trucking and perhaps taxi service; whether it becomes adopted widely remains to be seen.

  5. Alex

    This was unfortunate, but bound to happen.
    The passenger death will happen. And it will be terrible.
    The media will make it look worse than it is; it’s already happening.
    But the self-driving car companies are also doing it wrong.
    I was recently offered a job with an Uber competitor building their own self-driving cars. A five-minute chat on the phone with one of their lead engineers demonstrated very little experience with safety-critical coding. I work in aerospace, where there are development processes that focus on creating safe software. They had no idea that they existed, what they tried to achieve or why they were needed. You could not have paid me enough money to join a project that was going to crash and burn spectacularly, and literally. My free consulting session during that call did not seem to make a difference.
    Google’s cars are no better. They are using Machine Learning, which is shorthand for “throw lots of data at the problem and hope it sticks”. You cannot do proper safety or error analysis of that kind of code.
    You want to build self-driving cars? You better have a ton of money and be willing to poach engineers from Boeing and Airbus. Or build it in Florida and get a bunch of NASA contractors on board. Take the long-term view. A truly safe self-driving car is a decade away and it will work very differently from the current ones. Because no self-respecting insurer is going to take the current state of affairs as anything reliable.
    The problem is that Uber and Google and others will rush to do the thing. Move fast, break things, etc. Self-driving cars will paint themselves into a regulatory corner and the thing will take even longer to actually gain traction. My suspicion is that Elon Musk’s Boring Company, with its underground rails and platforms, is setting itself up to save us from the burning dumpster that self-driving cars will be.

    • Sue Dunim

      ” A five minute chat on the phone with one their lead engineers demonstrated very little experience with safety-critical coding. I work in aerospace, there are development processes that focus on creating safe software. They had no idea that they existed, what they tried to achieve or why they were needed. You could not have paid me enough money to join a project that was going to crash and burn spectacularly, and literally. ”

      And such things as formal proofs of correctness, the Ravenscar profile, DO-178B, etc. are well above their heads.

      However… when it comes to machine learning, there is no formal method of proving correctness, only empirical testing. The basics, the ‘reflexes’ if you like (press button A and B happens), that you can prove. Deciding when to press button A, that’s a different matter.

      Use of genetic algorithms to optimise rule-based systems is good enough under some circumstances. So are neural networks. The former can be formally proved, but only after development, not during, as you don’t know a priori what the requirements are other than at the very top level.

      That was the approach used in some anti missile defence systems as long as 20 years ago. Systems that survived various attack scenarios in modelling got to breed, until some arbitrary degree of success was achieved, whereupon the best rules based system modelled was reified. I don’t know if this work has been declassified and put in the public domain though.
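As a rough illustration of the approach described above (a toy sketch, not any real or classified system): evolve a rule-based controller with a single parameter, a braking threshold on time-to-collision, against simulated encounters, breeding the survivors with mutation each generation. All scenario numbers are invented:

```python
import random

random.seed(0)

# Each "rule system" here is one parameter: the time-to-collision (TTC)
# threshold, in seconds, below which the controller brakes. Fitness is
# scored against simulated encounters; missing a needed brake ("crash")
# is penalized heavily, needless braking mildly.

def fitness(threshold, scenarios):
    score = 0.0
    for ttc, needs_brake in scenarios:
        brakes = ttc < threshold
        if needs_brake and brakes:
            score += 1.0        # correct evasive action
        elif needs_brake and not brakes:
            score -= 10.0       # "crash": heavy penalty
        elif brakes:
            score -= 0.5        # unnecessary braking: mild penalty
    return score

# Simulated encounters: (TTC, whether braking was truly needed).
# Ground truth in this toy world: braking is needed below 2.5 seconds.
ttcs = [random.uniform(0.5, 8.0) for _ in range(200)]
scenarios = [(ttc, ttc < 2.5) for ttc in ttcs]

population = [random.uniform(0.0, 8.0) for _ in range(30)]

for generation in range(40):
    ranked = sorted(population, key=lambda t: fitness(t, scenarios),
                    reverse=True)
    survivors = ranked[:10]                         # selection (elitist)
    children = [t + random.gauss(0, 0.3)            # mutated offspring
                for t in survivors for _ in range(2)]
    population = survivors + children

best = max(population, key=lambda t: fitness(t, scenarios))
print(f"Evolved braking threshold: {best:.2f} s")
```

The evolved rule converges near the 2.5-second ground truth, and, as noted above, the winning rule set can then be inspected and formally checked after the fact, once it is reified.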

  6. While I believe this and related incidents will slow adoption, I don’t think they will prevent it in the long run. A car with a good autonomous mode combines the sit-back-and-ride ease of trains with the independent schedule and destination of cars. But AI just is not there yet; the roads are far too chaotic, and autonomy around pedestrians is way too soon. I could see it being okayed sooner for some roads, basically interstates and other separated roads where only motor vehicles travel.

    (As a side note, can some education or enforcement work on pedestrians crossing divided highways? Those dividers are supposed to be a big clue about speeds; use a crosswalk. That kind of crossing is playing Russian roulette, and it’s not the gun’s fault.)

    So many want a car like KITT on Knight Rider or the autopilot of the Jetsons for the idea to easily die. But really sophisticated AI that can handle random events better than a person is still on the horizon. Roombas should not be navigating city traffic or trying to understand a scooter or a horse and buggy. Nowhere with pedestrians.

  7. I think these cars will eventually become an accepted part of the mix, since, as others have noted, they could be a boon in many ways. They would also enable the infirm and those past their driving “prime” to extend their mobility and independence (maybe even allow younger drivers?). However, I don’t think this will occur as soon as some are predicting, as there are just too many minor details inherent in everyday driving that need to be sorted out, some of which might rely on acquired knowledge as well as sensory and mechanical input/output. Do you need to slow down because your neighbor’s cat tends to sit in the road just around this corner…?

    Future ethical questions will probably arise when there comes a point when there is a push to force people to give up their old cars and transition to self-driving vehicles. Another would be whether to limit their use, out of environmental concerns, if people opt to use their self-driving cars more often for commuting or long-distance travel instead of planes or trains.

  8. Mrs. Q

    The nice thing about autonomous cars is drive-by gang shootings won’t have that pesky problem of the driver having to focus on not crashing while firing off shots at the same time. #progress

    • “Warning. This vehicle’s Intelligent Agent ™ has detected a violent breach of Federal, State, or local statute. As required by the Making Road Safe Again Act of 2024, this vehicle is proceeding to the nearest police and/or detention center for procedural adjudication. Please do not attempt to leave the vehicle. As Microsoft is concerned for your safety, all exits are locked, the central traffic computer has been alerted to clear a path, and an oneirogenic general anaesthetic has been introduced to vehicle ventilation. Enjoy your Trip, and please consider using an Intelligent Agent ™ driven conveyance in the future!”

      This is not the snark it appears to be.

  9. Other Bill

    Weren’t large airships doomed by their inability to deal with violent weather while airborne? Hadn’t the U.S. Navy lost two rigid airships in storms? Didn’t the Hindenburg go down in flames because the Germans were unable to make helium in sufficient quantities and were using volatile hydrogen instead? I don’t know what will happen with self-driving cars (I suspect they will prevail) but I’m just not sure the Hindenburg resulted in a phenomenon. Other than people liking to say, for fun, “Oh, the humanity!” when something oversized goes wrong.

    • John Billingsley

      The Germans actually wanted to use helium but at the time the United States was the only source of any significant amount. The US refused to sell any to Germany.

  10. Sarah B.

    Personally, I hate the idea of self-driving cars. I live 100 miles from nowhere, and that’s not an exaggeration. Going to see family or specialists or even a shopping mall is a massive endeavor. So you would think that I’d be for these cars, but I have never seen a GPS be able to find my mom’s (or grandparents’) house. Going to see my dad involves either knowing where you are going, or having the GPS send you on dirt roads where it doesn’t have full data, or two-tracks, where it gets very lost. Another trip had a GPS tell me to turn left, which would have sent me straight to the lake bottom, when it really meant “turn right.” Last time I went to the big city for a specialist visit, the GPS tried to send me through a dangerous part of town, and only my dad’s advice kept me on a less dangerous route. If a GPS can’t find a safe route, then why would I want a self-driving car, as I assume you would need a GPS to have a functional self-driving car? Now, if self-driving cars were optional, and mainly marketed to city drivers, maybe that would be a viable option, but my friends in related industries seem to think that if you have both human-driven and self-driven cars, there will be no significant safety increases. I do not wish to have required autonomous cars, especially since there has been no change in the GPS (that I have seen) in nine years.

  11. One point I think the discussion is missing: the development of smart road technology is proceeding rapidly.

    AT&T, Verizon, and all of the major telecom carriers are spending trillions (with a ‘T’) on developing 5G wireless technology, and it is not intended to allow Junior to stream YouTube. They are acquiring tower site rights along major trucking arteries, to enable remote-driver trucks, and eventually driverless trucks.

    Imagine, if you will, being able to drive a truck from a VR station, where driver breaks do not stop the truck (drivers just hand over control, as long-flight drone operations do now), allowing a semi to stop only for fuel cross-country. No need for federal driver logs, no more mandatory rest stops, no need for a driver’s seat.

    AT&T is not only imagining it, they are betting on it.

    Once that is accepted, driverless cars are not far behind. The monetary benefit is astounding.

  12. What is needed is a good person with a car.

  13. Jeff

    One element that I think will have a big impact on adoption of autonomous cars is liability, and how the courts decide the first few wrongful death lawsuits that these cars generate. If the “driver” of the car isn’t in control of the vehicle, who is liable for damages caused when the software fails? If courts decide that Toyota or Ford or Google are responsible for damages caused by their cars driving into things they shouldn’t, that will be a major shift from the current model, where once the car leaves the dealership (barring known and unrecalled defects), the driver of the car is held responsible for its safe operation.

    I also imagine that insurance companies will be a bit leery of them for the first few years of adoption, as they will have no real-world data on how safe they really are.

    A couple of years ago, Google was testing their prototype self-driving cars in my neighborhood. They seem to have had a fixed route that they would take the cars through multiple times per day, where presumably the software was learning and adapting. I live on a somewhat winding street with wide bike lanes on either side and was once treated to the sight of the Google car slamming on its brakes as it came around a curve and saw an oncoming bicyclist across the road (in his bike lane). I guess the geometry of the encounter was such that the car concluded that the bike was going to collide with the car, which no human driver would have predicted. I know they’re going to get much better, and much smarter, but that little incident was a reminder that driving a car is an incredibly complex task, with massive amounts of data being processed by the brain, much of it subconsciously. It’s going to take a lot of work to replicate that in software. Certainly not impossible, but not as easy as the software guys might think it will be.
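The bicycle anecdote above comes down to closest-approach prediction. A planner that extrapolates both parties along straight lines at constant velocity can forecast a near-collision with a cyclist who is actually following a curved lane. A minimal sketch, with all positions and speeds invented for illustration:

```python
import math

# Closest-approach prediction under straight-line extrapolation.
# A naive planner that assumes constant velocity can predict a "collision"
# with an oncoming cyclist who is really tracking a curved bike lane.
# Positions in meters, velocities in m/s; all numbers are made up.

def closest_approach(p_car, v_car, p_bike, v_bike):
    """Return (time, distance) of closest approach for constant velocities."""
    rx, ry = p_bike[0] - p_car[0], p_bike[1] - p_car[1]
    vx, vy = v_bike[0] - v_car[0], v_bike[1] - v_car[1]
    vv = vx * vx + vy * vy
    if vv == 0:
        return 0.0, math.hypot(rx, ry)      # no relative motion
    t = max(0.0, -(rx * vx + ry * vy) / vv)  # time minimizing separation
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)

# Car heading +x at 13 m/s; cyclist 40 m ahead and 2 m to the side,
# momentarily heading straight at the car (mid-curve instantaneous velocity).
t, d = closest_approach((0, 0), (13, 0), (40, 2), (-6, 0))
print(f"closest approach: {d:.1f} m at t = {t:.1f} s")  # 2.0 m at 2.1 s
```

The straight-line forecast predicts a 2-meter miss in about two seconds, which would trip a conservative brake threshold even though the cyclist's curved lane keeps the true separation much larger. The hard part, as the comment says, is the context a human brain supplies for free.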
