Wrong numbers and why they survive, with Aaron Brown
Patrick McKenzie (patio11) is joined by Aaron Brown, author of Wrong Number, to examine why institutions that produce bad statistics face so few consequences for doing so.
Presenting Sponsors: Mercury & Granola
Complex Systems is presented by Mercury—radically better banking for founders. Mercury offers the best wire experience anywhere: fast, reliable, and free for domestic U.S. wires, so you can stay focused on growing your business. Apply online in minutes at mercury.com.
If meetings consistently leave you with hazy action items and lost context, Granola handles the transcription so you can actually participate and gives you searchable notes afterward. Try it free at granola.ai/complexsystems with code COMPLEXSYSTEMS.
Timestamps:
(00:00) Intro
(01:12) The agricultural demand curve discrepancy
(04:06) Why experts prioritize teaching over learning
(05:17) Institutional indifference to error
(06:26) The brand halo of high-status institutions
(08:34) Lessons from COVID-era decision making
(10:19) Financial statements versus scientific rigor
(14:53) Sponsors: Mercury | Granola
(18:19) The difficulty of auditing and replicating research
(22:12) The CDC eviction moratorium and its justification
(23:34) The NTSB curbside carrier safety study
(26:41) Conspiracy versus incompetence in data manipulation
(30:05) Error correction in financial markets
(32:52) The culture of the advantage gambler versus the academic
(35:28) Betting as a tax on bullshit
(38:44) Using market pricing to evaluate risks
(41:04) The track record of scary predictions
(43:34) Environmental success stories and technological optimism
(48:21) Energy efficiency and the path to global wealth
(54:10) Wrap and where to find Wrong Number
Transcript
Patrick: Welcome to Complex Systems, where we discuss the technical, organizational, and human factors underpinning why the world works the way it does. Hi everybody, my name is Patrick McKenzie, better known as patio11 on the Internet, and I'm here with Aaron Brown, who is the author of Wrong Number and the adjacent series of videos on Reason about various anecdotes of misuse of statistics in publishing and other places. Aaron, thanks very much for coming on the program today.
Aaron: Thank you for having me, Patrick.
Patrick: So I usually don't take pitches for CS; I find my own guests out in the universe. But you had an extremely interesting motivational anecdote in my inbox about how two organizations that you would expect to understand the demand curve for diesel fuel both equally confidently advanced two very, very different hypotheses of what that demand curve looked like. Can you let the rest of the world know about that? Because it's one of those things where it's like, clearly one of the two can't be right.
The agricultural demand curve discrepancy
Aaron: Yeah, and I use this in the book because it was sort of my foundational moment — it set me on the path so that fifty years later, I ended up writing this book. I'm a freshman and I get introduced to Fischer Black. Most of your listeners probably know of him as the famous finance professor of the Black-Scholes model. I had no idea at that time who he was, and he wasn't known for anything famous then. I was the closest thing he ever had to a student. He was not a nurturing, mentoring kind of guy. I ended up having a long career working with him.
[Patrick notes: The Black-Scholes model is the standard way to value an option. If you’re not in finance, a few things to note: one, this was for many years a way to make money simply by having better mathematics, because prior to Black-Scholes and for many years afterwards many counterparties valued options by gut feelings or far less useful models. Two, Black-Scholes is reflexive, in that after almost everyone in the market understands it to be the baseline way to value an option, values of options tend to very quickly converge on the Black-Scholes model valuations in most circumstances. I feel like these are both useful observations to make in an episode which repeatedly describes bad math as having impacts on material reality. Sometimes good math has impacts on material reality, too. This should be unsurprising, but people often treat it as surprising, or assume that no good math was invented in living memory. Black-Scholes is only about as old as Apple.]
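[Patrick notes: For the curious, the closed-form call price is compact enough to sketch in a few lines of Python. The parameter values below are illustrative, not from the episode.]

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function (no SciPy needed)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call value under Black-Scholes.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call on a $100 stock, 5% rate, 20% vol
price = black_scholes_call(100, 100, 1.0, 0.05, 0.20)  # ~10.45
```

Before this was common knowledge, the counterparty quoting you a gut-feel price for that option was giving away money to anyone who could run these few lines.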
Anyway, I'm a freshman and I need a summer job, and he recommended me for the National Standby Gas Rationing Project. This is 1975, and it's a real possibility that we're going to have to ration gasoline, so the federal government wanted a standby plan they could put in place. Fischer was totally skeptical that this plan would ever do anything good. Him recommending me for it probably said more about what he thought about the plan than what he thought about me, but I did get the job. It was actually supposed to go to a senior graduate student, but he figured, "Why waste somebody like that on it?"
I get on this project and I am assigned to worry about fuel for agriculture. If we're going to ration gasoline, we want to make sure that the crops get harvested and delivered. And I'm a city boy. I have no idea whether tractors run on gasoline or diesel fuel. This is the first thing I have to figure out. I call the Department of Agriculture and get passed around. Eventually, I find a guy who just finished a big model about it, but he's out of the office. So we leave a message for him to call me back.
I call Ford because Ford makes most of the tractors. I get passed around again and again, and I get to a guy who says, "Yeah, I just finished a comprehensive economic model and 75% of the tractors in the country are diesel." Great. Thank you. That's what I need. I actually talked to him a bit and got a lot of detail: how do you know this, where are they, what's the life cycle, and so on.
Then I get a call back from the Ag guy that afternoon. I want to be polite; I don't want to just tell him I already got it. He says to me, "Yeah, I just finished my model, and 75% of the tractors in the country are gasoline." I said, "Oh, you mean diesel, don't you?" He said, "No, no, no, gasoline." I said, "Well, the Ford guy told me diesel." "Oh," he said, "that—he's talking about new tractors being produced. He's not talking about the existing stock." And that's just not true, by the way. I went with the Ford guy. The Ford guy later said, "Oh, the Agriculture Department, they're talking about tractors rusting in barns or shipped to Mexico decades ago." Also not true. Both of these people had data.
Why experts prioritize teaching over learning
Aaron: What hit me about it wasn't just that they disagreed about such a basic thing, but that both of them wanted to teach. Neither one of them was interested in learning. They didn't want to reconcile their numbers. They didn't want to subject it to skepticism. They just wanted to be an expert. But that's not the end of the story. Believe it or not, that's not the worst part.
The worst part is I told this story to the senior economists, and universally everyone had the same reaction. They laughed, and they topped it with an even more outrageous story. So I said, "Well, how do we have any idea that our project is going to be successful? If we can't establish whether tractors are diesel or gasoline, how are we going to design a plan for the whole country?" And none of them... I don't know. It just didn't get through to them. The question just sort of whooshed by.
I said, "We should talk to some of these Soviet Union Gosplan people because they've got fifty years' experience actually doing this. Maybe they should tell us some things." And they said no, communists are idiots, they don't understand economics, we can't learn from them. I said, "Okay, well, let's talk to the people who did rationing in World War II." Oh, no, they weren't economists; they were just amateurs. They don't know anything that could help us.
[Patrick notes: Gosplan was the central planning arm of the Soviet Union. I would not default to assuming a libertarian-leaning individual invoking Gosplan is doing so seriously, but I was also nineteen once, and so will leave each listener to their own understanding of the anecdote.
WWII was, of course, the high-water mark for U.S. state capacity. The war rationing people were tasked with civilization-scale central management of complex supply chains and largely succeeded in this task. For much, much more detail, see any history of the War Production Board and similar. Freedom’s Forge is another good entry point if you want to read true stories of terrifying competence with relatively few academic economists involved.]
Institutional indifference to error
Aaron: So the plan got written, and I am very confident the plan would have been a total disaster had it ever been implemented. I want to focus on the thing about the book: what I'm really interested in is not just why people make mistakes—how did the Ford guy get 75 and how did the Ag guy get his number—it's that there's an entire institutional structure that ignores this problem. I understand why a researcher makes a mistake and gets wedded to a false number. What I don't understand is why nobody cares.
I've got 31 chapters in the book that go through case studies of this sort to explain how people got the wrong number. But my main point is: why doesn't somebody stop this? Why doesn't somebody filter this out? Why don't journalists ask skeptical questions? I do hope I came up with some useful advice for people, but I don't really have a clear answer of why we don't do a better job of evaluating these things, because it's not that hard.
The brand halo of high-status institutions
Patrick: So I'm only a few chapters into the book at the moment, but one of the things that is popping out at me via both the book and synthesis with other experiences is that Ford and the Department of Agriculture are both high-status institutions. These are not jokers in their pajamas on the Internet. You would assume that they have some amount of expertise, and that assumption of expertise has been earned in part by doing many decades of actual work that causes a result in the physical universe. Ford really does make tractors. The Department of Agriculture really does, in some ways, mitigate crop failures.
That kind of brand halo attaches to all the artifacts of the institution and is sometimes maybe over-relied upon. It cannot be the case that we exist in a physical universe where simultaneously 75% of the tractors run on diesel and 75% run on gasoline. Minimally, one of the two is wrong. And we do not have institutional design which even contemplates that. We contemplate the notion of a peer review process—you will send out your paper and they will poke at methodological holes—but after the point that the paper is published, absent gross malfeasance, we do not contemplate that an institution could say something which just fails to cohere with reality.
We in fact have many cases of institutions very confidently advancing claims that fail to cohere with reality. I think many of us have had in the last couple of years this experience where, due to the rise of social media, you can see in real time the claim being made, the obvious responses to the claim being made, and then there's deafening silence. Sometimes there is a walk-back of the original claim with no attempt at admitting they were wrong.
Lessons from COVID-era decision making
Patrick: Not to pick on people for COVID-era decisions, but to pick on people for COVID-era decisions: the whole "we have no evidence that this is airborne" when anyone who looked at the available evidence—even non-specialists in, for example, software marketing departments—could look for two hours and conclude, confidently: "Yep, this is airborne. What are we pretending at here?"
[Patrick notes: Two hours would have given you enough time to learn what Diamond Princess was and review one office plan in South Korea, after which you would have none of the institutional gravitas of WHO, but all the facts needed. We pretended this was harder than it was. Mostly to save face, as far as I can tell.]
And then later public health authorities said, "Well, the scientific consensus is now it was airborne, but that consensus did not exist a few months ago." What do you think your job is in this complex system? Either you were right, or you expressed no opinion, or you were wrong. You were not right, and you did not express no opinion.
Sorry, I feel a little emotionally invested in that one.
Aaron: We know that Ford makes spare parts, and the guys who are making spare parts for Ford know how many tractors are diesel and how many are gasoline because they know what orders they're getting. And the Department of Agriculture has people who go out to farms and see the tractors. Lots of people in these organizations know that at least one and probably both numbers are wrong, but it doesn't seem to go anywhere. On COVID, by the way, I would recommend to everyone David Zweig's book, An Abundance of Caution. It focuses very strongly on COVID and specifically on school closures. Again, it's exactly the same kind of things I show in my book. COVID absolutely put this problem in the limelight.
Financial statements versus scientific rigor
Patrick: There's another book that listeners might not be familiar with, Financial Shenanigans, which I think is broadly of a similar genre. The universe of companies that have defrauded investors based on partially falsified financial statements is quite large. Knowing that fact pattern exists is somewhat difficult to operationalize. Someone could be lying to me on this thing that looks persuasive, but how would I know?
One of the ways that you develop a feel for this is through prior art—which numbers have very intelligent, determined fraudsters tried to poke on before because they are difficult to maneuver, and which numbers are very soft? On a financial statement, you learn which line items are where the bodies are buried. One of the things I like about your book is in showing various ways to bury the body using a combination of very standard statistical techniques and also the "new hotness" in certain fields. You develop a feel for it: if the headline result doesn't sound credible, where in the methodology is "hinkiness" most likely to develop?
In chapter two, you have one example which is just brutal about a particular study addressing the efficacy of USAID. You mention that the study, for no reason that the rest of us can determine, looks at a 21-year period of history and then decides to exclude five of the years for reasons that the margins of the study were too small to contain. You mention quite persuasively in a hypothetical conversation between someone selling an investment and a sophisticated consumer that if you say, "Yes, I have back-tested my investment over the last 20 years, I've just excluded five of them," and when asked why, you say "no reason," then that is the end of that conversation. There are at least five things in that chapter that are at least that anomalous.
Aaron: Yeah. One of my messages is: apply the same skepticism to a scientific result you see reported in a paper as you would to somebody trying to sell you a financial product. If you do that, you're a long way home. I also want to hedge back your point about financial statements. Corporate financial statements are kept in controlled systems with a "maker" and "checker." Somebody makes it and somebody checks it. There is an audit trail, so you know exactly who put in the number, when, and why. There are rigorous internal and external audits. There are severe penalties if you get something wrong or if you lie.
A scientific paper has none of those controls. The data can be kept on somebody's laptop that anybody has access to. It can be changed by anybody for any reason. There's no audit trail and no penalty if it turns out you're wrong. We have lots of accounting fraud, so everybody knows that despite those controls, you still get error. Why should we expect scientific papers to be any better? There's at least as much incentive for a researcher to fudge a result—which might result in a promotion, a grant, or career success—as there is for a CEO to inflate earnings by a couple pennies.
Patrick: I think there are some epistemic issues there. Honestly, having seen the sausage made at various firms with regards to financial statements, I will say that all of that is aspirationally true with respect to the rigor of those systems.
The difficulty of auditing and replicating research
Patrick: It is useful to be able to say there is traceability in financial numbers. All of the numbers that sum to a line item are signed off by an accountant who has actually done a selection of invoices to check if the material actually left the factory. You mentioned routinely doing attempts at reconstruction of various papers and it being difficult, verging on impossible, to reconstruct the analytic approach taken. Both because papers don't make their data accessible, and because there is a series of adjustments made between raw data and the analyzed data that are very difficult to audit. In some cases, this is hand-waved away with "we applied adjustments for A, B, C, and D" where the nature of the adjustment is underspecified.
Aaron: Yeah, because I always try to reconstruct a paper. I try to redo the work. I have never successfully gotten exactly the same result as the authors. When I work with authors, I find out that a good portion of the time it's my error, or there's just an ambiguity. Most authors won't work with me, and many are actively hostile.
There's even a case in the book about authors who claimed to get data that didn't exist, and Duke University and the National Bureau of Economic Research backed them up. All the institutions closed ranks, even though this was a major paper that supported eviction moratoriums and was cited in Congress and courts. Even the best researchers typically do not write papers so that somebody can replicate it just from the paper, which is the scientific ideal. In the opening anecdote to the book, the National Transportation Safety Board found certain bus companies were seven times as dangerous as others. It was secret which ones those companies were. That's just baffling to me. Why isn't this a scandal?
The CDC eviction moratorium and its justification
Patrick: Responding to two things you just said. You phrased it in a way which is direct and true: you said that the CDC (Centers for Disease Control and Prevention) ordered a moratorium on evictions during COVID. When you say it out loud, many might think, "I didn't realize the CDC could do that." But the CDC did in fact do that, and it represented a multi-billion dollar taking from a distributed class of property owners across the United States. It was largely justified on the basis of a relatively small number of artifacts. We had a paper which didn't say "I have multi-billion dollar consequences," but which did have multi-billion dollar consequences attached to it.
The NTSB curbside carrier safety study
Patrick: On the bus thing, listeners might have ridden on "Chinatown buses" in the Northeast corridor. They came out of alternative intercity transport arrangements made by a class of entrepreneurs that was heavily immigrant in nature. The main thing that distinguished them was that rather than departing from fixed depots, they would pick up from the curbside at high-demand locations. These became popular with students and immigrants.
Then there was the NTSB study used to advance a political conclusion that this class of neo-transit operator is more dangerous and we should clamp down on it. Someone needs to have a spreadsheet at some point to say if an operator is "neo" or traditional. Incredibly, firms like Greyhound—the gold standard for traditional buses—got somehow classified as being a neo-transit operator. Meanwhile, casino shuttle buses, which are definitely not the area under discussion, get lumped in with the neo-transit operators. It looks like incompetence or repeated errors on both sides favoring a specific conclusion.
Conspiracy versus incompetence in data manipulation
Aaron: Well, that is actually the overarching arc of the book. I spend a lot of my time deconstructing incompetence and deconstructing conspiracy or fraud, but neither one is an adequate explanation. You have to go beyond the two.
Here's the conspiracy story: these upstart businesses run by immigrants are challenging the established carriers. The established carriers go to their "tame" congresspeople who had been trying to shut these down for years. There's a horrific accident by a casino bus—not a Chinatown bus—and they use that as an excuse to demand an NTSB study. The NTSB comes up with the conclusion they want by stuffing the ballot box. They had thirty-seven fatal accidents; thirty were by Greyhound and Peter Pan (traditional carriers) and only seven were by curbside carriers. But if you stick those thirty in with the others, the curbside carriers look more dangerous. Then the NTSB goes down and closes 26 of them, including Fung Wah, the original one, which had a perfect safety record.
But if it were a conspiracy, they would only have needed to make one error. Once they had stuffed the ballot box, they didn't have to do all this other stuff. They made seven or eight other major statistical errors. So you say it can't be a conspiracy, it's just too incompetent. But then you ask: is it incompetence? All eight of these errors went in the direction of making the curbside carriers look more dangerous. So it can't be pure incompetence.
Most importantly, this was widely reported in the press. All of these places reported the NTSB study without any skepticism. When it was debunked, none of them went back. The only outlet that even ran a story about it was Bloomberg, and they did a "he said, she said" squabbling statisticians story. They didn't say the study was complete junk. We know these reporters aren't in on the conspiracy—they want to report the truth—but they're not mad that somebody lied to them.
Error correction in financial markets
Patrick: I think this is partly about the epistemology of our institutions where the consequences for saying "we were had" and having egg on our faces for multiple weeks are just unthinkable. Of course, the comms department says they stand by the original work because if they didn't, a lot of heads would roll. Retractions are not survivable.
One thing we do let financial institutions do: you can easily, and without major risk, own up to mistakes in a trade. You just enter the offsetting trade, the asset price moves, and life moves on. That keeps financial markets more error-correcting and truth-seeking without constantly causing career risk. That doesn't seem to happen in research labs, government, or newspapers.
The culture of the advantage gambler versus the academic
Aaron: Yeah, and there's another episode in the book. Back in the 1970s, Bill Eadington started a conference on gambling and risk-taking. It was unusual because it attracted casino executives, advantage gamblers, mathematicians, and regulators. One of the chronic debates was between professors who had systems for gambling and people who were actually making their living making bets. The classic summary of the debate was one of the advantage gamblers—and I was clearly in that camp—would stand up and say, "Well then, why are you poor?" and the academic rejoinder was, "Well then, why aren't you rich?"
To the academic, asking why he didn't bet on his system was irrelevant and rude. To the advantage gambler, it was: "If you believe what you're saying, why haven't you taken advantage of it?" There's a real disconnect between fields where "wanna bet?" is an honest question—financial traders, advantage gamblers, infantry commanders—and fields where it's a rude question that nobody answers. I have difficulty trusting any answer that people aren't willing to put their own money on.
Betting as a tax on bullshit
Patrick: I think Nate Silver calls this the "River" versus the "Village."
Aaron, agreeing: As somebody said about Nate Silver, betting is a tax on bullshit. [Patrick notes: I associate that line with Marginal Revolution. The post coining it was, fittingly, about Nate Silver.]
Patrick continuing: In some fields, it seems viscerally distasteful that someone could be keeping a record of someone being wrong. That person is a threat to social harmony. When folks from the advantage gambler camp say, "You've expressed 99% credence that X is true; would you bet $50,000 at even odds?" it functions as a tax on bullshit. Some people find it extremely negative to be seen publicly responding to that in a repeated fashion.
Aaron: My friend Philip Tetlock did a book, Expert Political Judgment, which showed that experts in a field make predictions no better than chance, and often worse. The more prominent the expert, the worse the performance.
Here's a good trader question. When somebody says, "Gold is overpriced, it's going to fall to a thousand dollars," you ask them: "How much would the price of gold have to go up before you admitted you were wrong?" For most people, it's a blank look. They haven't thought about it. A trader will tell you, "I think it's going to a thousand, but my stop is six thousand. If it hits six, I'm getting out; I was wrong." If you haven't thought about that, you haven't taken the first step toward forming a bet. If no evidence will convince you, then it's an article of faith.
Using market pricing to evaluate risks
Patrick: I think it is underused by people in other truth-seeking institutions that financial markets price a variety of different risks. You can give a sideways glance at the markets to see if they agree with you. One example that sticks in my mind: the NYT once identified three experts who thought there was a twenty-five percent chance of war with North Korea. Well, there are ten thousand artillery emplacements around Seoul. There are Seoul REITs (real estate investment trusts) which are publicly traded with options chains. If there is truly a twenty-five percent chance of war, those options are very mispriced. Either the financial side of the world is completely ignorant of what happens to commercial property when hit by an artillery shell, or the unnamed sources are overconfident in their estimation. [Patrick notes: I had misremembered. They are named in the piece, which is at least a gesture in the direction that there might be reputational consequences for being that wrong about a war between two nuclear powers.]
(Indeed, looking back, war with North Korea didn't happen.)
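[Patrick notes: The back-of-the-envelope here is simple enough to write down. In a crude two-state sketch, with every number below hypothetical rather than from the episode, a deep out-of-the-money put on a Seoul REIT has an easily computed expected-payoff floor:]

```python
def min_put_value(p_war, spot, strike, crash_price):
    # Two-state sketch: with probability p_war the REIT collapses to
    # crash_price; otherwise it stays at spot (ignoring discounting).
    # A put struck at `strike` is worth at least its expected payoff.
    payoff_war = max(strike - crash_price, 0)
    payoff_peace = max(strike - spot, 0)
    return p_war * payoff_war + (1 - p_war) * payoff_peace

# Hypothetical numbers: REIT trading at 100, put struck at 80,
# collapse to 10 in the war state. At a 25% war probability the
# put is worth at least 0.25 * 70 = 17.5, i.e. 17.5% of spot.
floor = min_put_value(0.25, 100, 80, 10)
```

Far out-of-the-money puts typically trade for a small fraction of that. So either the market was asleep, or the 25% figure was not a number anyone would bet on.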
The track record of scary predictions
Aaron: If you broaden your sample and look at every scary prediction in the New York Times, what percentage actually came to pass? Certainly, there were papers on pandemics prior to 2020 and that came true. But for the most part, the bad things that do happen tend to be unpredicted, and the things that get predicted much more often than not don't come true. If you grab a newspaper from thirty years ago and see what people were worried about—garbage, pit bulls, Satanic cults—most of them just evaporated.
Patrick: I don't think people who weren't here for the 1980s can appreciate how much "we are running out of space for garbage" was a live issue. As a child, I felt: it just doesn't seem credible that there are vastly more non-recyclable plastic cups than there is land in the United States. And indeed I was not wrong. [Patrick notes: A great thing about Fermi estimates is that seven year olds can do them and they don’t even need to know who Fermi was to try.]
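[Patrick notes: Here is roughly the estimate a seven year old, or anyone else, can run. Every input below is an order-of-magnitude guess, which is the point of a Fermi estimate:]

```python
# Rough Fermi estimate: how much land would a full century of U.S.
# garbage actually need? All inputs are order-of-magnitude guesses.
waste_tons_per_year = 300e6      # ~U.S. municipal solid waste per year
years = 100
tons_per_cubic_meter = 1.0       # compacted landfill density, roughly
landfill_depth_m = 50            # a deep but ordinary landfill

volume_m3 = waste_tons_per_year * years / tons_per_cubic_meter
area_km2 = volume_m3 / landfill_depth_m / 1e6   # ~600 km^2
us_area_km2 = 9.8e6
fraction = area_km2 / us_area_km2               # well under 0.01%
```

A century of garbage fits in a square roughly 25 kilometers on a side: a real siting and logistics problem, but nothing like "running out of land."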
One thing I remember where people claimed it was a huge issue and pouring resources in led to improvement was the hole in the ozone layer. We did a CFC ban that successfully reduced the primary drivers. There has indeed been some measurable recovery.
Environmental success stories and technological optimism
Aaron: It is important to keep in mind that sometimes these predictions do come true. But I'm not sure the science behind CFCs was any better than the science behind lots of other stuff that didn't work. You should treat everything with skepticism. We didn't do any of the really draconian things people were saying we'd have to do. We made some relatively moderate, albeit expensive, changes. The problem went from growing to shrinking, which is a win.
[Patrick notes: Wanted to mention during the recording but couldn’t so I’ll mention here: I think we are abundantly convinced that CFCs do catalytically destroy ozone, because you can do the equation after a high school chemistry class and then run it in a lab to check your work. The uncertainty is largely around the planet-scale geoengineering and to what extent that has the impact you want. Oh, and in some circles, calling this “geoengineering” is considered a low blow, because planet-scale attempts to alter atmospheric chemistry are only geoengineering when unserious people propose them. When serious people propose them, they’re multinational protocols.]
Patrick: There was no worldwide epidemic of skin cancer, which was one of the forecast outcomes. Things are actually within our capability to improve on multi-generational timescales.
Aaron: I've been interested in climate since the 1970s. Back then I wrote it was a hundred-year problem and that we should focus on total energy use as the determinant of human environmental footprint. Since then, we've reduced the amount of energy needed to produce a dollar of real GDP by sixty percent. That is why I'm optimistic. If that continues, we'll have solved the problem because less energy means less CO2 and less of all kinds of other stuff.
None of that was regulation. The climate activists have been soaking up government money and flying private jets; they haven't done one thing to actually solve the problem. Meanwhile, bottom-up engineers and inventors have been devoting their lives to this. People investing their own money have been quietly solving the problem. The activists are still heroes for scolding people, while the people actually solving the problem are derided as deniers because they don't want a big, expensive subsidy.
Patrick: In partial defense of the activists, they did attempt to get solar kickstarted through subsidies. When one draws out the curve for solar prices per megawatt over the last twenty years, it looks like falling off a cliff after cliff. [Patrick notes: See generally the episode with Casey Handmer about solar economics.]
Aaron: I have solar panels, but I put them up to save money, not to save the environment. Subsidizing solar and wind is counterproductive to the extent it means people use more energy. All else equal, I'd rather have energy from solar or nuclear than fossil fuel, but more important is to reduce total energy use.
Energy efficiency and the path to global wealth
Patrick: Pro-growth people would say total energy use is an excellent proxy for how much total value humans are creating. We should want future values to be much higher—more megawatts, please—rather than simply focusing on efficiency margins. There is no achievable path by which most of humanity gets raised up to the standards of the US in the 1910s without greatly increasing energy consumption.
Aaron: You left out one option. Right now, every kilowatt hour of energy generates two and a half times as much wealth as it did 40 years ago. If we continue that trend until each kilowatt hour generates 20 times as much wealth, we can reduce energy use while increasing living standards. Any reasonable solution has to say the Earth has to get much richer and everybody has to be brought out of poverty. To me, that is the easy step. We have all sorts of plans—simple things like more efficient logistical arrangements so trucks drive less empty. A hundred things like that make a measurable difference, and they are much easier than inventing some magic technology that generates energy for free.
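[Patrick notes: The compounding arithmetic behind this claim is short enough to check. Taking the stated 2.5x improvement over 40 years as given:]

```python
from math import log

# Wealth generated per kWh rose ~2.5x over 40 years (as stated).
# What annual improvement rate is that, and how long until the
# same trend compounds to 20x?
growth, years = 2.5, 40
annual_rate = growth ** (1 / years) - 1          # ~2.3% per year
years_to_20x = log(20) / log(1 + annual_rate)    # ~130 years
```

So "20x" is not a forecast for next decade; at the historical rate it is roughly a 130-year project, which is consistent with Aaron's framing of climate as a hundred-year problem.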
Patrick: I think one of the things least appreciated by people outside of "big tech" since 2008 is how much of the impact of the mobile era is in boring plumbing, like better routing of trucks. Amazon and Walmart are operating on a different plane of existence versus the typical Fortune 500 in the 1970s. That has made us wealthier and decreased aggregate emissions attributable to an individual's basket of goods, even where parts of that basket are now made in China.
Aaron: If we can reduce the number of trucks on the road by 30% without reducing GDP, we save fuel, carbon emissions, traffic accidents, and highway maintenance. That's why I want to focus on total energy. I think people who actually work in this field are optimistic because we see the progress. People who just complain are pessimistic.
Patrick: I don't know anyone in any energy subfield who shares the degree of pessimism that many politicians feel. Everyone in nuclear, geothermal, and solar is saying, "We're having the time of our lives right now. The future has never looked brighter."
Patrick: Aaron, you have a book out called Wrong Number. Where would you suggest people get it?
Aaron: You can order it from Amazon. Go to your local bookstore if you still have one—I love bookstores, but you may not be able to find it there. Get it on Kindle. It will come out on audio eventually. Steal it if you like; I didn't write it for the royalties. Borrow it from your library. Whatever works for you.
Patrick: I will put a link to Amazon and your recent video series in the show notes.
Aaron: The videos have very high production values. I'm very grateful to the Reason team. The videos are a lot more fun, but the book goes into considerably more depth.
Patrick: Thanks very much for your time today, Aaron. And for all of you listening, thanks for tuning in to Complex Systems. See you next week.