Cascade Institute Director Thomas Homer-Dixon discusses how complexity science can help us make sense of today’s interconnected global challenges in this recent interview with In-Sight Publishing editor Scott Douglas Jacobsen. The two discuss how small shifts in complex systems can lead to major social and political change, and how understanding those dynamics can help us steer toward more resilient futures.
The version of record of this interview was published under a CC BY-NC-ND 4.0 license by In-Sight Publishing.
Scott Douglas Jacobsen: I’m grateful you could join me today—it means a lot. To start us off, how can complexity science help us make sense of immense global challenges like man-made climate change and widespread economic instability, and what tools does it give us to confront them more effectively?
Thomas Homer-Dixon: Right, you’re getting straight to the point. That’s a terrific question.
Like most people, I came to complex systems science somewhat indirectly. Within my disciplines—political science, conflict studies, and international relations—the conventional ways of thinking about causation didn’t help me untangle what was happening in my areas of study. They didn’t adequately explain the underlying causal dynamics.
Over about 15 years, I transitioned into complexity science and developed a much clearer understanding.
At its core, complexity science helps us understand non-linear phenomena—situations where relatively small changes in a system, whether in an economy, climate, geopolitical structure, or ecological system, can lead to significant and sometimes unexpected consequences. Conversely, it also helps us understand why, in some cases, considerable interventions appear to have little or no impact.
In complex systems, the proportionality between cause and effect breaks down. In our everyday world, we assume small causes produce minor effects and significant causes produce significant effects. So, there’s a proportionality.
But in complex systems, that proportionality fails. This means that complex systems—again, we’re talking about everything from ecologies to economies to the climate system to even the way the human brain works—have the capacity to flip from one state to another, from one equilibrium or stability zone to another, often in quite unpredictable ways.
The business of complexity science is identifying the various possible stability zones, what configuration of an economy or a political system will be stable, and what factors can reduce that stability and cause it to flip to another state.
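[Editor’s note: A standard textbook illustration of such a flip, not one Homer-Dixon cites here, is the fold (saddle-node) bifurcation. Consider a system whose state $x$ evolves as

$$\dot{x} = r + x - x^{3},$$

where $r$ represents a slowly varying external stress. For $|r| < 2/(3\sqrt{3}) \approx 0.38$ the system has two stable equilibria, two distinct “stability zones.” Once $r$ creeps past that threshold, one equilibrium vanishes and the state jumps abruptly to the remaining one. Near the threshold, a tiny change in the cause $r$ produces a disproportionately large change in the effect $x$: exactly the breakdown of proportionality described above.]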
To give a contemporary example, we’ve just seen a flip in the United States political system—a reconfiguration—from one equilibrium to something else yet to be determined. Mr. Trump generates enormous uncertainty, so the nature of that new equilibrium isn’t entirely clear yet. We have some ideas, but that is a classic example of non-linearity.
In an ecological system, a non-linearity would be something like the cod fishery collapse off the east coast of Canada in the late 1980s and early 1990s. That was once one of the most productive ecosystems in the world, and it has wholly reconfigured itself. It will never return to its previous, extraordinarily abundant level of biomass production.
The 2008–2009 financial crisis was another example of non-linearity. Complexity science aims to identify the factors that produce these sudden changes—these flips—and anticipate them. However, the other side of this work is that once we understand those connections and causal relationships better, we may be able to induce changes in a positive direction.
We might be able to cause positive flips—positive in a value sense—good flips instead of bad ones. At the Cascade Institute, we divide our work into two areas. One focuses on anticipating pernicious cascades or harmful non-linearities, and the other on triggering virtuous cascades that benefit humankind. We then drill down in these areas to identify threats and opportunities using complexity science.
Jacobsen: Around the world, ideological polarization seems to be intensifying, not only in the United States during the Trump years but across a range of societies. Complexity science suggests that when several tipping points are reached—whether all at once or in succession—they can unleash powerful non-linear effects. Do you see today’s deepening polarization as one of those moments, where competing ideologies could drive us into a new wave of unpredictable, destabilizing dynamics beyond the recent election?
Homer-Dixon: Yes. So, part of the framing of complexity science—and it’s almost inherent in complexity itself—is the recognition that a lot is happening. Within conventional social science, or even conventional science, there’s a strong emphasis on parsimony—identifying relatively straightforward relationships between causes and effects.
Within complexity science, there’s less emphasis on parsimony. There’s an initial recognition that the world is complex, with numerous factors operating and interacting in ways that are, at least at first, difficult to understand. You won’t develop a good understanding by focusing on single variables or isolated factors. You have to examine multiple elements simultaneously. That is the foundation of all complex systems work.
Frankly, that’s what initially attracted me to complexity science. I was grappling with the broader issue of the relationship between environmental stress and violent conflict. As I studied factors like water scarcity, forest degradation, and soil depletion—and how they interacted with conflict—it became clear that multiple causal pathways were involved. Many interconnected factors had to be taken into account. So, I needed a different framework rather than a simplistic approach that looked at single causes and effects.
That’s the background. Now, you can find more details on polarization on the Cascade Institute website. We have developed a set of hypotheses about the factors driving social polarization and deepening social divisions—factors that are far more complex than standard analyses suggest. We use a four-pathway model to explain polarization. The first pathway consists of economic factors—rising inequality and economic precarity—fueling polarization.
The second pathway involves social and managerial factors—specifically, the decreasing capacity of societies to address complex problems. Our technocratic elites and experts are increasingly perceived as incompetent in handling crises, whether related to healthcare, climate change, or managing the pandemic. This leads to a delegitimization of expertise and expert governance—a growing rejection of specialists and institutions.
The third pathway is connected to our information ecosystem—social media, information overload, and how these influence communication. These dynamics amplify emotional negativity, making people more inclined to engage only with those who share their views rather than those who think differently.
The fourth pathway is more fundamental: epistemic fragmentation. People increasingly live in their own knowledge bubbles, developing their own versions of reality and dismissing alternative perspectives on truth. This fragmentation fuels a breakdown in shared understanding.
We have four distinct pathways and are studying how they interact. These interactions can create precisely what you suggest—tipping points in people’s attitudes.
However, these four pathways can be considered underlying stresses in our social systems. Over time, these economic, managerial, informational, and epistemic factors make our social systems less resilient. They make people angrier, more afraid, and more distrustful of institutions.
Many of these changes can occur gradually, but then suddenly, you get a significant event—like the political shift in the United States—where the institutional arrangement of an election triggers a system-wide flip.
The best way to think about these polarization processes is that they have drained resilience from our social systems, making them more vulnerable to abrupt shifts that ultimately harm people. In this case, the flip was an institutional one. However, the long-term changes in people’s attitudes, ideologies, and belief systems haven’t been so much a flip as a gradual erosion of resilience.
That erosion manifests in institutions where a radical right-wing regime comes into power in the United States. This is a clear example of non-linearity—where long-term trends, or stresses, accumulate relatively linearly over time, much like tectonic pressure before an earthquake. Once they reach a certain threshold—bang—you get the quake, and the system flips to another state. In this case, that flip was a shift in control of federal institutions in the United States.
Jacobsen: Let me put this in two parts. First, do you think President Trump will go down as one of the most consequential presidents in American history? Second, there’s now a massive nine-figure investment on the table for artificial intelligence.
AI has moved well past being just a trendy buzzword—it’s become a driving force for high-tech firms, major investors, software development, and breakthrough innovation. Do you see these areas steering the development of AI, or is it more accurate to say that AI will end up reshaping them instead?
Homer-Dixon: Yes, 100%. These are related but distinct questions. Let’s talk about Trump first.
The answer is clearly yes—he is already one of the most consequential presidents in American history, alongside Lincoln and Washington. In a recent piece in The Globe and Mail, I argued that he would also be one of the most consequential figures in human history, and I laid out the reasons for that.
One reason is that he is one of the most influential individuals in the world—perhaps alongside Elon Musk. However, he and many people around him are profoundly ignorant of how global and national systems function, even at a basic level.
For example, he doesn’t understand how tariffs work or their economic consequences. That ignorance is deeply consequential because there will be moments when deep system knowledge and strategic intelligence are needed to navigate an acute crisis.
I often point to John F. Kennedy during the Cuban Missile Crisis as an example. He surrounded himself with top experts, forming what he called ExComm, the Executive Committee of the National Security Council, to carefully think through the U.S. response to the Soviet placement of nuclear-capable missiles in Cuba.
I can’t imagine Trump doing anything remotely similar. He has surrounded himself with individuals who are radically ill-equipped to manage the complex systems they now control.
They have their hands on the levers of these systems, yet they don’t know how to position those levers effectively. So, that’s point one.
Point two is that Trump’s relationship with his followers drives him in a more radical direction. I won’t go into all the details, but if he fails to implement his agenda, he will become more radical, not less. He will seek out more enemies, attempt to attack them, and crush and destroy both perceived enemies within the United States and those outside it.
Point three is that multiple global systems—climate, geopolitical structures, and more—are already highly stressed and near tipping points. Trump could push them past those thresholds in various ways. One prominent example is climate change. He is actively rolling back climate action.
Essentially, his policies amount to humankind giving up on addressing the climate crisis. That alone could change the trajectory of human history and civilization.
If he escalates tensions into a nuclear conflict, which his actions significantly increase the risk of, that too would mark a defining inflection point for humankind. So, when you take these three factors together—his radicalization, the fragility of global systems, and the existential risks he exacerbates—Trump is among the most consequential figures in human history.
That’s a controversial position, but it was interesting to see the response to my article, published three days before his inauguration; three weeks later, people are already reassessing and saying, “No, that view wasn’t exaggerated.”
Now, on artificial intelligence, which is equally relevant, AI dramatically accelerates what we call epistemic fragmentation. It enables the creation of multiple contradictory realities and allows for the substantiation of false narratives. People can manufacture evidence at will using AI, making it difficult—if not impossible—to discern whether information has any real-world grounding.
This is all part of the more significant shift toward anti-realism. Increasingly, people live in massively multiplayer game-like realities, and AI enhances the ability to generate convincing but completely false realities. Worse, these fabricated narratives can be weaponized against groups or political opponents.
So, regarding your point on AI, I am deeply concerned. I have been in contact with many experts who are central to this debate and the development of AI itself. One of the fundamental issues with our world today is that we simply don’t know what’s coming. Due to the inherent complexity of our systems, we are witnessing an explosion in possible futures.
Take, for example, DeepSeek, a breakthrough that dramatically changed AI energy consumption estimates overnight. We previously assumed AI required massive energy and material inputs into server farms, but suddenly, DeepSeek cut those estimates by 90%.
Yet, despite these developments, we don’t fully understand the pathways AI will take. There are still enormous unknowns across technological, political, and social dimensions. This uncertainty offers some potential for hope. Within that very uncertainty, there will be positive outcomes—opportunities we can’t see yet, even from AI.
However, I am profoundly concerned about AI’s ability to exacerbate epistemic fragmentation, further entrenching the creation of multiple conflicting realities. These alternative realities will not only shape the way people see the world but will also be weaponized against one another. AI is likely to worsen polarization rather than help us overcome it.
Jacobsen: Your comments call to mind the perspectives of two intellectual figures who represent strikingly different traditions of thought—Margaret Atwood, the Canadian novelist, and Noam Chomsky, the American linguist. Each has reflected on the relationship between ignorance and intelligence, and Atwood once distilled her view with a stark observation: “Stupidity is the same as evil if you judge by the results.”
Homer-Dixon: That’s very good. That’s true.
Jacobsen: I’ve been thinking about the points you’ve made so far, and they bring me back to a question that Chomsky once raised—though it actually traces to Ernst Mayr. He suggested that “intelligence is a kind of lethal mutation.” It’s an unsettling thought when you consider that beetles and bacteria are thriving quite well without it. So when we look at AI and its implications, the question still lingers: could intelligence itself prove to be a lethal mutation?
Homer-Dixon: Yes, we are modifying our environment to such an extent that we may ultimately cause extinction. You’ve encountered this in your discussions—the famous estimate regarding the longevity of intelligent life in the universe, which is embedded in the Drake Equation.
Frank Drake was the head of SETI—the Search for Extraterrestrial Intelligence. I once visited the SETI offices in the Bay Area. At least at one point, Drake had a custom license plate that read something like “IL = L,” “Intelligent Life = Longevity.”
In his equation, Drake included a series of factors that could contribute to the development of life: the size of planets, their distance from their stars, whether water exists on those planets, and other standard variables.
But the final factor, L, stood for longevity—essentially, the question of whether intelligent life would survive long enough to reach a stable and enduring state. That factor dominated everything else for him because intelligence might ultimately destroy itself.
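[Editor’s note: For reference, the Drake Equation is commonly written as

$$N = R_{*} \, f_{p} \, n_{e} \, f_{l} \, f_{i} \, f_{c} \, L,$$

where $N$ is the number of detectable civilizations in the galaxy, $R_{*}$ the rate of star formation, $f_{p}$ the fraction of stars with planets, $n_{e}$ the number of habitable planets per planetary system, $f_{l}$, $f_{i}$, and $f_{c}$ the fractions of those on which life, intelligence, and detectable communication respectively emerge, and $L$ the longevity factor Homer-Dixon describes: the length of time a civilization remains detectable.]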
But I don’t think human beings are a lethal mutation.
Human beings—and this is where I have a soft spot for accelerationists like Thiel and Musk—are extraordinarily creative, especially in moments of crisis and extreme stress. Things don’t look good right now, particularly with existential problems like climate change.
Consider The Peter Principle, the book Laurence J. Peter and Raymond Hull published in 1969.
The basic idea is that within bureaucracies and organizations, people get promoted to their level of incompetence—they rise until they reach a position where they can no longer do their job effectively, and then they stop advancing.
What we may be witnessing with problems like climate change is that humanity has reached its level of incompetence. We have solved every challenge up to this point, but eventually we may face one too complex to overcome.
It’s an open question.
I’m not prepared to count humankind out yet. I have two kids—one is 19, the other 16—and they are very worried. But I keep returning to this: the world is so complex that we can’t know how the game will play out.
There may be an explosion of possibilities, but we can’t see the adjacent possible. These could be technological, institutional, ideological, or belief-system shifts. We don’t know. That is precisely why the Cascade Institute exists. We are trying to identify those possibilities and which ones can be leveraged.
Jacobsen: Thank you very much for your time. I appreciate it. It was nice to meet you.
Homer-Dixon: Great questions.

