I have a confession to make: when I was in San Francisco in 2024 for the BART legacy fleet retirement, one of the things I did while I was there was ride in a Waymo. I had no reason to do so other than novelty: I wanted to experience the technology. The journey was more expensive than if I’d just ridden Muni, and the Muni trip would have been only a little longer than the Waymo ride.

I will also admit that, as a non-driver, the Waymo ride was sheer fun. I sat in the front passenger seat, the only time I’ve ever been the only person in a moving vehicle, and I probably spent most of the trip grinning like a maniac until the car stopped at a crosswalk and I realized the pedestrians crossing in front of the car could see me and I probably looked like a massive dork.

Purely from an urban planning perspective, autonomous vehicles don’t belong in cities any more than any other cars do. We need fewer vehicles on the roads, and huge fleets of AVs roaming around don’t help.

I genuinely do believe there are use cases where autonomous vehicles are a good fit. A few years ago, I went on an excursion up to Woodcliff Lake on New Jersey Transit, and then I had to walk about three miles to my destination, and the same three miles back. (At least I got a good meal out of it.) It was strenuous but not all that bad of a walk, and fortunately I’d picked a nice day for it. But what if it had been too hot, or raining, or if I’d had parcels with me? If there had been a fleet of autonomous shuttle pods waiting around the station, I’d have been happy to jump into one. Of course, if I’d been able to ride a bike, this would also have been an ideal trip for a folding bike.

I was in San Francisco again recently, and I saw hordes and hordes of Waymos on the street. I even witnessed one whip around a corner into an occupied crosswalk, coming within a foot or two of pedestrians who had the right-of-way.

And so, as I’ve thought about this more, I’ve realized that there is a strong parallel between autonomous vehicles and nuclear power. By way of background, I’ve long been an ardent supporter of nuclear power. Two decades ago, in a first-year undergraduate course, I proclaimed that I considered CANDU reactors to be so safe that I would happily camp outside the containment building of one—all my interlocutor had to do was bring the sleeping bag.

I am immensely excited by the concepts behind modern fourth-generation reactor designs. We have to get off of fossil fuels, and renewables alone just don’t pencil out. Degrowth is not a practical option, and so that leaves nuclear. It is, on paper, a great solution.

But even though fourth-generation reactors may be designed to be “intrinsically safe”, that doesn’t mean that they are completely safe. They still have failure modes—some more complex than their earlier-generation predecessors—and we are going to have to learn to operate these complex systems safely. Unfortunately, the nuclear industry hasn’t been the most confidence-inspiring in this regard.

To be clear, I’m not trying to slam all nuclear operators. Many reactors around the world have excellent safety records at this very moment. But we know that some operators have succumbed to the perils of the profit motive, and that gives me pause when I want to say “well, we should just deploy fourth-generation reactors everywhere and our energy problems will be solved”.

The same risk exists with autonomous vehicles. “Our passengers want the car to be more aggressive, to get them where they’re going faster.” “Our competitors are doing this, and we should too.” It’s not hard to imagine how these conversations would play out among AV operators.

It is entirely possible to make an autonomous vehicle the safest vehicle on the road. It really should be possible for AVs to be safer than human drivers. I work in rail automation; I am well familiar with techniques for the design and implementation of safety-critical systems and software. But one thing I know quite clearly (and have seen the effect of) is that those systems are only as good as the human inputs provided to them. “Garbage in, garbage out” applies even to safety-critical systems.

So, if you have a tunable parameter that sets, for example, how aggressive the car will be when entering crosswalks, then there will always be that temptation to crank it up just a bit more.

I don’t think we can practically say that there should never be any autonomous vehicles on the road. And I think it would be an enormous loss for humanity if we said “well, after Three Mile Island and Chernobyl and Fukushima Dai-ichi, we just can’t take the risk”. The German retreat from nuclear power, for example, is a loss for all of us.

We need to be able to have transparent conversations about these things: about where, and when, and how AVs belong on our roads, just as we need them about the future of energy production and what it would look like to live in a world where fourth-generation reactors have become ubiquitous. We need to talk about who operates these potentially dangerous technologies, and whether it is dangerous to have a profit motive in play as decisions are being made. At the same time, public ownership is not, by itself, a panacea. We have also seen that political pressure can be just as dangerous.

I don’t claim to have all the answers. The emergence of these technologies raises thorny social, political, and commercial questions which we haven’t always navigated well. I do think that greater transparency is an inherent part of the solution, along with some form of public ownership or operation, but even then we can’t allow political pressure to cloud operators’ judgement. Perhaps, in the end, this is nothing more than a story about human fallibility. Perhaps, in the end, the only answer is that dangerous things will always be dangerous, and while we can take steps to mitigate that danger, we cannot eliminate it, and so we must be ready to have transparent conversations about what risks we are willing to accept (e.g. whether AVs should operate in cities, and how aggressively they should do so) and who will bear those risks.

I find the technology behind autonomous vehicles fascinating. I remember following the DARPA Grand Challenge back in the day; I remember being excited to see Stanley at the Smithsonian. And when Waymo announced their existence back in 2016, it seemed like a net positive.

Public transportation can’t and won’t go everywhere, and not everyone can ride a bike, and not everyone can drive. The idea of someone who can’t drive riding along in a cute little autonomous Firefly vehicle is a captivating one, and I got sucked in.

Unfortunately, what we got…wasn’t that. What we got was hordes of Waymos driving around in circles and honking at each other, menacing pedestrians and blocking traffic when they couldn’t figure out which way to go. What we got was a Waymo being pulled over at a DUI checkpoint after making an illegal U-turn. And that’s just for starters.

You might point out that at least the Waymo vehicles are EVs; okay, fine, so they’re not burning fossil fuels, but they still take up road space, and EVs, being heavier, pose very real collision risks and cause more road wear than lighter vehicles.

In much the same way, I worry that saying today “well, small modular reactors will solve everything, let’s put them everywhere” may leave us with a long-term burden of contaminated sites and nuclear waste (and perhaps even accident cleanup) when we realize that the future isn’t quite as rosy as we’d thought. Complex systems fail in complex and hard-to-anticipate ways.

SMRs may be part of the energy mix of the future just as much as AVs may be part of the transportation mix of the future, but we can’t fall prey to the techno-futuristic solutionism trap of “oh, this new thing solves all the problems of the old thing and has no new risks of its own!”. It never actually works out that way.