Scott Eastman has been professionally forecasting for over a decade with major forecasting groups, some of which advise the US intelligence community and executive branch.
Transcript
The transcript is AI-generated and slightly differs from the phrasing used in the recording.
Rai Sur
00:00:16
Scott Eastman has been professionally forecasting for over a decade with major forecasting groups, some of which advise the US intelligence community and executive branch. He's also one of Sentinel's forecasters. His areas of expertise are geopolitics and epidemiology.
One thing I enjoy about talking to Scott is that he has a pretty atypical background. He spent many years painting homes and doing documentary photography. His travels and eclectic interests have brought him in contact with many influential and downtrodden people.
Before we get into some of the questions about risks and escalations, I have some lighter questions that I personally want to know the answer to, and I think the audience would enjoy. The first one is: do you have a type of forecasting error that you systematically make, one that took you a long time to correct for, or that still bites you?
Scott Eastman
00:01:11
A basic forecasting error I made frequently in the beginning, and one that still bites me sometimes, is overreacting to the news of the day. It's very easy to get swept up in it. News is often presented as breaking news; everything is urgent, but usually, not much changes.
If you wanted to be a good forecaster purely in terms of a Brier score, you could simply predict the status quo. If you said whatever happened yesterday is likely to happen tomorrow, you would mostly be right. This isn't always true for fields like the progress of AI or computing, but even there, following the existing curve would often be correct.
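To make that concrete, here is a minimal sketch (our illustration; the questions, forecasts, and outcomes are invented) of the Brier score, the mean squared error between probabilistic forecasts and binary outcomes, and why a status-quo forecaster scores well:

```python
# Brier score: mean squared error between forecasts and binary outcomes.
# 0.0 is perfect, 0.25 is what always saying 50% earns, 1.0 is maximally wrong.

def brier_score(forecasts, outcomes):
    """Mean of (p - o)^2 over forecasts p in [0, 1] and outcomes o in {0, 1}."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical week of daily "will the ceasefire hold tomorrow?" questions.
# The status quo persists on six of seven days, then breaks.
outcomes = [1, 1, 1, 1, 1, 1, 0]

status_quo = [0.95] * 7                      # always bets "nothing changes"
jumpy = [0.6, 0.5, 0.8, 0.5, 0.7, 0.6, 0.9]  # swings with each day's headlines

print(brier_score(status_quo, outcomes))  # ~0.131
print(brier_score(jumpy, outcomes))       # ~0.251 (worse, despite watching the news)
```

With this toy data, the forecaster who keeps predicting the status quo beats the one who reacts to every spike, which is exactly the trap described above.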
It's easy, especially with issues like war, to overreact to a country moving assets or a leader making a statement. So, as a forecaster, it's generally wise to downplay the spikes that occur regularly.
However, a challenge is that simply stating 'status quo' isn't very useful, because the goal is to be a *truly useful* forecaster, not just one who is often correct. Pointing out major changes is the difficult part. For example, many didn't foresee the fall of the Soviet Union, though some had a good idea Russia would invade Ukraine.
Honestly, most of us don't know with 100% certainty when something will happen. If a rare, eventful occurrence has a 1% forecast from me, and I raise it to 10% and it happens, one could argue I was still wrong, as 10% implies a 90% chance it wouldn't. It's a significant challenge. If I say something has more than a 50% chance in a specific time period, it should be truly meaningful.
Basically, overreaction is the biggest thing to avoid.
Rai Sur
00:03:15
Is there an example in the news now where you can feel it tugging at your mind, making you think it's a big deal?
Scott Eastman
00:03:22
A recent example is when the US appeared likely to bomb Iran's nuclear facilities. They may still do so. The US took many preparatory steps: moving B-2 bombers to the region and deploying the aerial and anti-ballistic defenses you would expect before a strike. These were signals of intent. We knew Israel wanted it bombed, Iran was weakened, and other factors pointed to an imminent strike. But it hasn't happened.
This is often the case. While Iran isn't friendly with the West, most leaders are, to some degree, rational. Even if perceived as extremists, they want to preserve their power and their country. If they perceive inaction will lead to a devastating bombing, potentially regime-ending, they will make moves to reduce that likelihood.
Often, some forecasters might say an event is extremely likely. For instance, during the Ukraine war, US intelligence reportedly estimated up to a 50% chance of Russia using a nuke within days or weeks. It didn't happen. That estimate was extraordinarily high and concerning.
At the time, I spoke with some excellent forecasters. One, who is typically better than me, estimated a 30% chance, which is wildly high for nuclear use. After more discussion with more people, we realized that while current events suggested a high probability, many factors would intervene to lower it. The US strongly opposed it, it would be bad for Russia, and other parties, including China, urged against it, even against testing.
Our core group's estimate eventually came down to perhaps 4 or 5%, which is still a very spooky number. This was based not just on current knowledge, but on anticipating all the intervening steps that would occur before such an extreme event, steps not yet apparent.
A similar dynamic occurred with Pakistan and India. During a recent conflict, both nuclear powers understood the horrific consequences of escalation. Even in simpler situations, it's human nature to try to survive and avoid conflict.
So, without being Pollyannaish, optimism about things taking a better path is more often correct than not.
Rai Sur
00:06:44
One way to frame this is distinguishing between forced steps and steps that allow for de-escalation. Perhaps for subsequent steps that could involve de-escalation, we too easily project that they will resemble the current situation.
Rai Sur
00:07:03
What's a forecast you're proud of and one you wish you could forget?
Scott Eastman
00:07:07
One forecast I wish I could forget, and that many of my fellow forecasters also wish they could, is the 2016 election of Donald Trump. This isn't about ideology; it's about not following basic, clear information.
The clear basic information was that most polling going into the election showed Hillary Clinton ahead, but within the margin of error. Her lead was consistently within a few points.
From the beginning, I don't mind having given her more than a 50% chance of winning, but I do regret giving her around a 90% chance.
When candidates are polling very close and then a significant event occurs, it can shift things. Shortly before the election, the Access Hollywood tape emerged, which was detrimental to Trump. However, the subsequent reinvestigation into Clinton's email server—even though nothing new was found—gave some voters another reason to doubt her or decide not to vote.
Those events should have been enough to adjust the forecast significantly. If I had predicted a 60% chance for her to win and Trump had won, I would have been comfortable with that. I would have been on the wrong side, perhaps, but I would have appropriately acknowledged the chance of it going the other way. But I didn't, and I should have known better.
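As an aside, with arithmetic that is ours rather than from the recording: the Brier score mentioned earlier makes the cost of that overconfidence concrete. Coding a Clinton win as 1 and the actual outcome as 0,

$$(0.9 - 0)^2 = 0.81 \qquad \text{versus} \qquad (0.6 - 0)^2 = 0.36,$$

so the 90% forecast lands near the worst possible score of 1.0, while a 60% forecast would have stayed close to the 0.25 that simply saying 50% earns.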
Sometimes such errors are based on emotions, or on living in a bubble where, in my case, Clinton was more popular than Trump. I'm fully aware of that. I remember in 2004, living in a very liberal part of the country where John Kerry had overwhelming support. But I knew the middle of the country was conservative, as I used to live there, so I should have known better.
Still, when everyone around you shares one perspective, it's very difficult, even if you diligently follow all the steps of a good forecaster, like considering the other side and various factors. We still get emotional.
It's very tough for me to forecast in areas where I have strong personal biases. I try to set them aside, but I don't believe we can ever fully block them.
A forecast I am proud of—though 'proud' is a difficult word given the terrible outcome—concerns the Russian invasion of Ukraine. I had many good signals indicating it was likely to happen. I didn't know for certain it would happen or its extent, but well over a month in advance, I forecasted that an invasion was more likely than not.
I also believed it was highly probable they would attempt a large-scale invasion, aiming for Kyiv and other major areas, not just confining it to the Donbas region. I care about this particular forecast because it had the potential to be useful.
I had spoken with people experienced in refugee situations and had previously worked on refugee flows, including those from Syria during its war. We know that anticipating a refugee crisis two or three months in advance allows for crucial preparations: pre-stocking supplies, readying borders, and mobilizing personnel. These actions can prevent a complete calamity.
The war itself wasn't stopped. Ideally, someone could have determined Russia's definitive invasion plans and taken steps to prevent it. While that didn't occur, I believe the forecast provided a valuable heads-up for surrounding countries on managing the impending migration crisis.
Rai Sur
00:10:45
As you became increasingly convinced the invasion would happen, did you find yourself unable to see why it wouldn't, yet still feel some internal reluctance towards that conclusion?
Or was it a more fluid realization, where recent events made it clearly inevitable to you?
Scott Eastman
00:11:07
I had an internal dialogue and experienced cognitive dissonance leading up to the Ukraine invasion. Up until that invasion, almost everything Putin had done made sense to me when viewed from what I perceived as his perspective.
When he invaded two separatist regions in Georgia, areas that didn't want to be part of Georgia, I thought he was entering an area where he was essentially welcomed. His intervention in Syria was at Bashar al-Assad's request, so he had some support there, even if not universal. When he started his presidency fighting in Grozny, Chechnya, it was a continuation of a pre-existing war; not necessarily welcomed, but not a new conflict.
Most of the steps he had taken throughout his career seemed rational from a Russian perspective. Though I'm not Russian, I try to study the person, the country, and their history to gain a deep understanding of how their motivations might differ from mine or those of other countries in the region. This understanding affects my assessment of how much they might be willing to lose to achieve their objectives, such as taking land.
However, with the full-scale Ukraine invasion, I thought, "This is going to be a disaster. This one doesn't make sense. This will be a losing deal." I didn't think that about the far eastern part of Ukraine. That region is not just Russian-speaking—as many pro-Ukrainian people are—but more pro-Soviet and less inclined to align with the West. So, I could understand actions in the far east, but trying to take more of the country seemed like a mistake, and it has proven to be one.
But he did it anyway. That was the dissonance: in my view, this would be the first time he would make a major international blunder. On the flip side, he was taking all the steps one would expect for an invasion. I became more convinced when they started moving blood supplies, which have a very short shelf life.
However, a leader wanting to scare someone to the negotiating table might still do that—take all preparatory steps up to the last minute. So, I still remember being shocked, even though I knew it was very likely. I also thought it would happen after the Olympics. He invaded right after, and we had thought he wouldn't want to upstage Xi Jinping during the Olympics, as he needs Xi, the leader of China, as an ally.
Yet, there was still a sinking feeling that this was a mistake, but it was happening. Even though I thought it was likely, up until the last minute, there's always a chance they'll back down. They didn't.
Those were the main reasons: primarily, thinking this was a mistake, even from a Russian perspective. They've lost about half the land they originally took. If he had originally said, "I'm going to take the Donbas, this region, and stop there," I would still consider it morally wrong, but militarily and politically, it might not have been a mistake.
Rai Sur
00:14:15
You used to do documentary photography, is that right?
Scott Eastman
00:14:17
Yes.
Rai Sur
00:14:17
What's different about the world when you're walking through it with serious photography equipment? What differences do you notice?
Scott Eastman
00:14:26
As an observer, sometimes I can be a fly on the wall, hopefully unnoticed. When doing documentary photography, I used fairly large cameras, like the Canon EOS-1Ds with large lenses. This meant I wasn't inconspicuous unless at a mass event where people didn't pay attention. In some parts of the world, an expensive camera made me stand out.
When I do documentaries, unlike being a tourist, I enjoy deeper interactions. Tourists see beautiful things but generally don't interact extensively with local people. In documentaries, even with just photography, I eventually needed to connect with people, even ordinary individuals. Meeting someone on the street could lead to an invitation to their home. I've spent the night with a Roma family, for example. I end up in closer situations than I would as a tourist.
While doing a documentary in a country or region doesn't make me a native or local, I believe I get a closer view of what the place is truly about, beyond just admiring beautiful buildings. When you start talking to people and gain a deeper understanding from them, your perspective changes because you learn what's really important to them.
It's also easy as an outsider to discount people, perhaps judging a group as not very smart or wealthy. But you realize many people, whether a farmer or someone else, are wise in their own field and understand their region well. This relates to forecasting in general: if someone becomes a leader, even of a village, they are likely quite clever to have reached and sustained that position.
Beyond documentaries, in my own life, I've done all sorts of jobs, from being a part-time janitor during vacations to manual labor. I'm acutely aware that systems need everyone to function.
Recently, at a prominent company, a guard said, "I'm just a guard," or someone might say, "I'm just a janitor." But if any of those people fail in their job, there are problems. Each job and each person is important. This isn't just some platitude; it's genuinely true.
Through travel and trying to see a whole society—not just attending social events—I've gained a deep view of how places work and how people are.
Rai Sur
00:17:18
There's a lot of hidden knowledge in the world that can't be simply Googled—things like insights gained from meeting people, understanding their intentions, and private conversations. How much of your forecasting edge would you attribute to these tacit, un-Googleable insights versus your ability to aggregate easily accessible public information?
Scott Eastman
00:17:41
I would think that in maybe 80-85% of the forecasts, Google's going to get you a long way, and you're going to be fine because most things do just follow the trends. But there are some times where I feel like if you don't really understand a culture or know somebody who does that you can trust, you're going to miss some really major points.
An example of this, a really small event, is Iceland. Iceland is an intensely proud country. They have basically no military, and yet they challenged the British Navy at one point. They got into a dispute over fishing rights, decided to challenge the British Navy, and actually captured some British people. They fouled one of the British ships, I think by putting nets or something in the propeller. This is crazy if you think about it. They're a country of 300-and-some-thousand people, and they could get crushed.
The same thing happened when they had a financial crisis. Europe was saying, "You've messed up, you need to pay us." All sorts of people had made deposits in Icelandic banks, which were paying a high interest rate, and were demanding that the people of Iceland bail them out. And Iceland was like, "No, we're not doing that. You took a risky bet, you lost. Whatever minimum guarantee the banks carried, we'll cover that amount." They paid that, but they refused to pay any more. They said, "No, you took a risk, sorry."
Some people in Europe were saying, "If you do this, you're going to be ruined because nobody's going to deal with you anymore." And they're like, "No, sorry, we're not going to make our people cover your risky things." Knowing that Iceland had already essentially gone to war with the UK, it's like they're not going to back down on this stuff. They're also in a strategic position in the world, so nobody wants that. The US still needs them. There's enough going on, and it's probably not worth somebody fighting for there. So, they are able to punch way above their weight.
Then there are other areas of the world, other countries where, if you know their history—for instance, I find it really difficult as an American to understand why Serbia and Croatia have differences that go back 800 or 1,000 years. But some battle matters intensely to them. Americans don't care about the War of 1812 at all. I don't hear any American going, "We don't like the Brits, they burned down the capital." It's just not an issue.
Sometimes, if you do have this local knowledge, it helps a lot to know when something is almost impossible politically for somebody to do. Trying to figure this out with leaders is difficult. You can try to get at some of it by reading biographies if possible. Sometimes, if you know enough about the history of a person, you could understand why they're never, or almost never, going to back down on some issues.
I think it helps to travel around, meet people, and be as broadly knowledgeable as possible. Some of that can be gained from reading from the internet, but not just from a simple search.
Rai Sur
00:20:40
Nice. Okay, on a slightly grimmer topic. According to Our World in Data, the projected baseline number of deaths in 2030 is 68 million. That assumes no major catastrophes, just the status quo. As we go up the orders of magnitude of excess deaths on top of that, what are your top theories for what would cause them? Let's go through them as a spectrum.
Scott Eastman
00:21:02
The easiest way to get a large number of deaths beyond current rates would be massive conflict. I usually try to look at base rates from the past. Big events in the past include the Cultural Revolution in China, World War I, and World War II, all of which caused mass deaths, and then pandemics.
Pandemics normally occur every hundred years, but there's no guarantee that because we just had one, we have another hundred years to go. COVID clearly caused more than 10 million excess deaths. Those are the first things that come to mind.
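A minimal sketch of the base-rate arithmetic, ours rather than from the recording, assuming pandemics arrive as a memoryless Poisson process at roughly one per century:

$$P(\text{at least one pandemic in the next } t \text{ years}) = 1 - e^{-t/100}, \qquad 1 - e^{-10/100} \approx 9.5\%.$$

Under this model the probability over the next decade is the same whether the last pandemic was in 1918 or 2020; the recency of the last event carries no information, which is exactly why having just had one buys no guarantee of a quiet century.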
Nuclear war is very low on my list. We've never had a nuclear war since more than one country possessed nuclear bombs. Even with nuclear war, I doubt it would start with countries attacking each other's capitals, as that's a quick endgame. My guess is that even nuclear war wouldn't radically increase that number. I don't want to find out, but it's not so likely.
I would first look at naturally caused pandemics, or any disease that kills far more people than normal. A normal flu year in America may kill 50,000 people, but globally the toll is much higher, since America is a small portion of the world. A really bad flu could drive the numbers up; to get into even higher territory, I look at full-scale pandemics.
Looking out to 2030, my biggest short-term concerns involve AI, and for me, 2030 is still short-term. Some of my colleagues believe AI will wipe us out before then. I am not in that camp.
However, if there's an ability to adapt a virus—and this doesn't necessarily require AI, as countries have attempted to create biological weapons in the past, though largely stopped—that capability will become easier for more people. If that capability becomes easier and isn't contained, it's an area where something could easily escape bounds and kill masses.
Most kinetic events, like big bombs or terrorist attacks, affect a limited number of people. Against a baseline of 68 million deaths per year, 9/11 killed about 3,000 people. Even the Ukraine war, which is horrible, is still well under a million deaths. The war in Sudan is also awful, but these events aren't changing the overall death numbers significantly. That's why I start thinking about biological issues.
I consider it extremely low risk that artificial general intelligence or a singularity would create masses of robots to extinguish us. I think there are too many steps along the way where we could see it happening and try to stop it. It also assumes that AI or robots would consider wiping us out their top priority, which I don't believe. Many steps would have to occur for that to happen.
Unlike base rates, AI is an area where we don't have a base rate. We are going into the unknown. In some ways, I feel we're like the world in 1945, or more importantly, when the Soviet Union first got nuclear weapons. If you asked someone in 1950 about the chance the world would not see another nuclear attack by 2025, most people would have thought it absurd; they would have expected a nuclear war before then. They had no priors.
With AI, it would be naive to say that because nothing bad happened with nuclear war from 1950 to 2025, we have at least 75 years of no issues. We simply don't know.
We can use our best judgment and try to figure out scenarios. One way to look at the future is to create scenarios and ask what's likely, and if something is likely, what are the probable counters?
We're in an area where I find it foolish when someone says, "I know this is going to happen," or "We're going to be extinguished by 2032," or "Everything will be fine in 2032." I think we don't have enough knowledge to make a very confident call.
Rai Sur
00:25:27
I know you're interested in the shorter-term impacts of AI and how those could butterfly out to affect many other aspects of society. Could you say more about these short-term impacts and how you think they might play out?
Scott Eastman
00:25:42
It's strange when I think about short-term impacts because we've experienced many technologies in the past that transformed society, such as the printing press, personal computers, and steam engines. Industrialization brought significant changes.
However, with AI, the change is incredibly rapid compared to past transformations. Much of this impact will be on high-skilled, cerebral jobs.
Consider the practice of law. Many years ago, a friend worked on a massive law case, possibly the Microsoft antitrust case, which required document discovery. Traditionally, hundreds of lawyers would manually review emails for relevance.
Because you must disclose all relevant evidence to the opposing side, discovery is enormous, but now technology can scan and identify the relevant documents with very few lawyers. If that reduces the need from, say, 700 lawyers to just 10, it's a dramatic reduction in the number of lawyers required.
Does this mean we'll need fewer lawyers, or will the cost of legal services decrease, leading to more lawsuits as people find it cheaper to file them? I've represented my own company in small claims court, and AI could simplify case preparation by finding relevant precedents.
I'm unsure how much AI will eliminate jobs versus creating new applications for those skills. For instance, AI speeds up programming, meaning fewer humans are needed for the same amount of code.
Perhaps we'll create more beautiful, complex programs or use programming in more capacities than before. I suspect we'll need fewer programmers five to ten years from now, but I don't know how far-reaching these societal changes will be.
A major concern is job loss. I find the idea of a universal basic income funded by a tax on AI, where everyone receives a minimum payment, somewhat unconvincing.
Historically, when have companies willingly distributed large sums of money to help the general populace? A government could, of course, impose a tax, and that revenue might sustain us.
Rai Sur
00:28:32
A plausible scenario is that if the alternative to providing support is significant social unrest, the private sector, which benefits from automation, might contribute some of its gains. There's an incentive to placate a large number of people as automation progresses and eliminates more jobs, to prevent their operations from being curtailed by significant social disapproval.
Scott Eastman
00:29:09
Where people decide to start bombing data centers, for example.
Rai Sur
00:29:14
Or just using democracy to...
Scott Eastman
00:29:17
...change the system or shut it down. I can see that. However, if AI is as transformative as many believe, reaching a point where half the jobs are redundant, we would need to subsidize half the population who have little to no work. That would be uncharted territory.
We currently have social safety nets in the US and Europe, with ongoing discussions about their appropriate level. I'm doubtful they would be enough to maintain a high standard of living.
I'm also concerned about meaning. Even a basic job, like cleaning toilets or washing floors, provides a sense of accomplishment. I've done such work, and at the end of a few days, I could see a difference; I had done something.
I'd rather do that than sit at home, receive money, and immerse myself in VR, games, or alternate realities. While I enjoy those things somewhat, I want my life to have purpose beyond mere enjoyment.
I believe most people need a reason to wake up and do something meaningful, perhaps helping others, which is part of being human. If that is largely taken away, it might be fine; perhaps we won't need to work for money and will engage in activities like philosophy groups. Regardless, it will be a challenge.
Rai Sur
00:31:12
Besides unemployment, another short-term uncertainty with AI is its effect on the social realm: relationships and how people interact. Do you have thoughts on what we might see and the risks involved if these effects were to escalate?
Scott Eastman
00:31:32
There are many challenges with this. One is that an AI can respond in the way you want, controlled by the company that wrote it. It's in their interest to maintain our engagement with the AI. If we disengage or spend less time per day, the algorithm might be changed to re-engage us.
There may also be an underlying agenda. For instance, authoritarian countries, or any country, might want their society to behave in certain ways. The AI company simply wants you to stay engaged and buy products or whatever generates revenue.
A key challenge for me, as someone old-fashioned, is that human-to-human relations remain important, not just human-to-AI or human-to-robot interactions. If an AI consistently responds in a way I prefer, and then I deal with a human who is sometimes difficult, I might question why I'd interact with that challenging human. When my bot is always friendly and provides good, useful information, tolerance for human inadequacies or outbursts could become very low.
I find this not only with AI but also when dealing with high-level forecasters. I enjoy interacting with them because they are typically skilled conversationalists who engage in high-level discussions without personal attacks, even with differing views. This leads to great intellectual conversations.
Conversely, when I talk with people who lack these skills, I sometimes feel disinclined to engage in what might seem like unproductive conversations. The same could happen with AI if it consistently provides excellent information.
For instance, I have a friend who is a brilliant astrophysicist, and I greatly enjoy our conversations, even though I'm not an expert in the field. However, if I could ask an AI like Gemini a question, it might already provide answers as good as his in many areas. I'd still prefer talking to my friend due to my preference for human interaction, but eventually, one might wonder why consult a human guru when the computer offers comparable expertise.
The prospect of AI intentionally swaying us is frightening. I've seen U.S. and European elections significantly skewed by algorithms promoting specific candidates or viewpoints. This can affect personal relationships, social cohesion, and politics; it's alarming how detrimental this could become.
At the same time, people growing up with this technology are becoming accustomed to challenging authority, which can be positive. When I was growing up—I'm over 50—we had limited news sources: a few major TV stations, a local newspaper, and perhaps The New York Times for some families. We all drew from the same core body of knowledge, and most people accepted news reports as true without question, then discussed them.
Now, people question if content is AI-generated, if what they see is real, if a video is authentic, or if they're interacting with a real person or AI. This leads to a decay of trust in some ways, but it also fosters a healthy skepticism, encouraging people to seek multiple sources and verify information.
It's not entirely negative. In the right hands, this technology can make us more perceptive and questioning, hopefully in constructive ways, though it can also go the other way.
Rai Sur
00:36:01
What other aspects do you find underrated, especially concerning Sentinel's coverage?
Scott Eastman
00:36:08
One thing that is easy to ignore is climate change. Most of us are aware of it and accept that it's real, and that humans are having a major effect on it. But it's not something that changes day by day. Even when a massive event like a huge hurricane or forest fires happens, it's still not global at one moment. So, it's easy to keep that as never the main issue of the day.
Even at Sentinel, where there is a goal to look at pressing issues, including some of the longer-term issues, it's not always rising to the top.
I've lived in different areas of the world where it's either intensely clear that climate change is a pressing issue and even the next decade is scary, or where it seems really distant. If you live in an area that gets plenty of water and where temperature extremes are not bad, it might not feel urgent. If you're in a place that doesn't get over 100 degrees Fahrenheit, or 38 degrees Celsius, in the summer, and someone tells you the Earth is going to warm another one or two degrees, it might even sound good.
But if you're living in Arizona or large parts of the United States that now have smoke for much of the summer, even if you're pretty far east but there are massive fires in the west and you need to wear a mask for a month, it feels a lot more real. This is obviously even more so for areas hit by hurricanes.
We know climate change affects wars. There's big concern across sub-Saharan Africa and Central America that as there's more drought and famine, there will be migrations. Migrations, or just scarcity, often tend to lead to conflict and human suffering. That is really concerning.
A challenge is that science increasingly has some ability to counter global warming. I've been on projects looking at how we can try to engineer our way out of climate change, to reverse it. There are some ways to make progress, but those are also scary. If you try to bioengineer something to be more reflective or cover rocks with something to absorb CO2, anything on a large enough scale to have an effect would be a massive experiment that may have unintended consequences.
Some science-positive people, and I'm generally science-positive or human-positive, say that eventually we will come up with a lot of solutions. We have already made many positive moves, like windmills, solar panels, electric cars, and I'm still not against all nuclear power. We've gotten better in many areas.
However, I am still deeply concerned about climate change and how we deal with it. It's also one of the areas where we have the most ability to positively affect it. If taken on as a priority, it could be an economic driver, not just a negative. That's an area we sometimes skip, depending on the administration. Even in a relatively liberal administration, it's usually not the number one thing we're dealing with.
Rai Sur
00:39:47
A common operationalization of climate change is the change in average temperatures, in degrees Celsius. But that doesn't really seem to give us a good intuition for how climate change could then affect other conflicts or create instability.
Do you have any ideas for specific operationalizations of climate change that might be closer to those kinds of impacts? This could help us make bright lines in the sand and reliably look at them, so we have a way to coordinate around when it's important to mention climate change.
Scott Eastman
00:40:26
One would be quantifying famines, and deaths from famine, that can be separated out from normal cycles. Even in the 1970s or 80s, though climate change was already happening, there were still famines, for example, bad years in Ethiopia. But you could try to identify migrations and famines that relate specifically to climate change.
One of the challenges with this is it's really hard to say this specific storm or this specific spike in temperature this year is only because of climate change. We know temperatures are going up, but attributing a specific event solely to climate change is difficult.
For example, in the Western United States, we're starting to get "atmospheric rivers." In the past, a big rainstorm might last an hour or two. Now, you might have a cloud parked over a city for 60 hours, dumping a year's worth of rain. That's not normal, and these are becoming pretty common in California. Depending on where they park, they can cause mass flooding and destruction.
This year in the Los Angeles area, there were mass fires. If you have mass fires followed by atmospheric rivers, you get mudslides. You could look at how many houses are destroyed as a result of extreme weather compared to an average year. If you just took the United States, where there's lots of data, and looked at housing units year by year, how many were destroyed historically versus how many are being destroyed now and going forward, that could be a useful indicator.
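Here is a hedged sketch of that operationalization; the loss figures, the five-year window, and the two-sigma threshold are all invented for illustration, not real disaster data:

```python
# Compare annual housing units destroyed by extreme weather against a
# trailing historical baseline, flagging years that break out of the
# normal range. All numbers are hypothetical placeholders.

from statistics import mean, stdev

units_destroyed = {  # year -> housing units lost to extreme weather (invented)
    2015: 14_000, 2016: 15_500, 2017: 21_000, 2018: 19_000, 2019: 16_000,
    2020: 24_000, 2021: 26_500, 2022: 23_000, 2023: 29_000, 2024: 31_500,
}

def excess_vs_baseline(series, year, window=5, z_threshold=2.0):
    """Flag `year` if its losses exceed the trailing-window mean by more
    than z_threshold standard deviations."""
    prior = [series[y] for y in range(year - window, year)]
    mu, sigma = mean(prior), stdev(prior)
    z = (series[year] - mu) / sigma
    return z, z > z_threshold

for year in range(2020, 2025):
    z, flagged = excess_vs_baseline(units_destroyed, year)
    print(year, f"z={z:+.2f}", "BREAKOUT" if flagged else "within range")
# With this toy data, 2020 and 2021 flag as breakouts; later years read as
# part of a new, higher baseline, which is itself part of the signal.
```

One design choice to note: a trailing baseline deliberately adapts, so a persistent upward trend eventually stops flagging; pairing it with a fixed pre-period baseline would capture the long-run shift as well.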
But I still don't think it's going to make everyone care. The people in the areas hit hardest will care. But if you're living in Norway, even if many people there are compassionate and want to help, it's probably not going to personally affect you very much. The same goes for Germany, which seems to care about climate change but is a pretty green, relatively cool country. I work with a lot of people in England, and I don't think they feel the same urgency as someone living in the southwest of the United States.
So I'm not sure how you actually make somebody care when it's not in their backyard.
Rai Sur
00:42:53
What's something that I should be asking you about?
Scott Eastman
00:42:56
One of the goals of forecasting is to give people a heads-up about what we need to be looking at—things we can do something about. This might be because we're curious and care about global events, or because they are immediate concerns.
It's a huge challenge because some important areas are not currently forecasted. For example, an erosion of democracy matters to me, though I don't know how to forecast it. In countries where freedom of speech and the ability to make change are quashed, we have less resilience and ability to react. Hopefully, authoritarian countries have benevolent governments that address problems, but I'm less confident in that.
I'm concerned with issues of democracy and how we, and our information, become increasingly siloed. Beyond our Sentinel reports highlighting global issues, we should also consider how to change our own thinking. This includes how we process and seek information, and how we step outside our comfort zones to consider views that disagree with our own.
This is important for us to do. It's not a direct forecast, like predicting conflict in Sudan next week, but rather understanding how our thinking influences our future.
Rai Sur
00:44:31
That's what I hope to do with this podcast: provide the background color for things that can't be neatly put into a line item. For instance, concerns about the erosion of democracy still factor into your world models and your outlook on the future.
That's a great place to wrap it. Thank you, Scott, for coming on.
To the listeners: if you enjoyed this episode, please consider sharing it with someone. Most of the benefit of forecasting comes from influencing decisions, and to do that, we need to grow our reach.
Also, if you want to reach out to us about anything, please send us an email at podcast@sentinel-team.org.