About Resilient Systems

As societies come to depend ever more on large, complex systems—such as world-spanning communications, finance, and production networks—the art and science of distributing political and physical risk becomes that much more important.

Sometimes we fall short. In recent years we have witnessed a terrifying new phenomenon—the crash of entire industrial and banking systems due to the cutoff of a single city or region, even of a single factory or bank. Many different events may trigger such a crash, but all crashes share a single characteristic: too much capacity, hence too much risk, concentrated in one place.

This project investigates the root causes and dynamics of this huge and growing danger; convenes engineers, business managers, policymakers, and security experts to identify solutions; and works with news media to deepen the public’s understanding of this threat. The aim is to promote the resilience of key systems by redistributing risk, power, and opportunity among more people.

Complex systems are as old as human society—indeed, they are the tools we use to make society. Agriculture has always required systems to direct and deflect great rivers; politics has always required systems to direct and deflect passions and power. From the beginning, people have proven adept at designing such systems to be resilient enough to survive big shocks. The key to such resilience is a simple principle: that all risk be distributed.

New technologies clearly play a role in making today’s systems more fragile, mainly by speeding things up. But the fundamental problem is that we have violated the principle of distributing risk. In crash after crash, we find the cause to be that too much of one thing—be it semiconductors or debt—is concentrated in one place.

This problem, in turn, is usually rooted in politics. The reason too much of one thing is concentrated in one place is usually that some powerful actor or group—in control of some big corporation or state—used its power to grab hold of some human activity and to lock it away from everyone else.

Experts describe the systems that result as “Too Big” or “Too Integrated” to fail. We believe it is more accurate to describe such systems as “Built to Break.” Failure is natural and inevitable. Any social system that fails to allow for failure places society itself at risk.

The goal of this project is to clarify the dangers posed by the instability of key systems, to detail what events might trigger a crash, and to identify potential fixes. This project focuses especially on industrial networks, but works closely with groups seeking to promote the resilience of financial, environmental, social, and economic systems.

The project is also designed to identify the political roots of this problem—namely the failure to prevent the few from capturing control over some activity of vital importance to the many.

The origins of this project lie in research and studies begun by Barry C. Lynn in late 1999, immediately following the world’s first industrial crash, triggered when an earthquake in Taiwan broke the flow of semiconductors to factories in the United States. Over the years, our work has been supported by the Rockefeller, Ford, Sloan, and Nathan Cummings foundations, as well as others.