Book Summary: Thinking in Systems by Donella Meadows

This summary of Thinking in Systems: A Primer by Donella Meadows explains how systems thinking can help us better understand and change the systems in our lives.

Buy Thinking in Systems at: Amazon | Kobo (affiliate links)

Key Takeaways from Thinking in Systems

  • What is a system?
    • Systems are things where the whole is more than the sum of its parts.
    • There are three essential components: Elements, Interconnections and a Purpose/Function.
    • In the real world, systems don’t have clear boundaries because almost everything is connected. For models to be useful, however, we need to draw boundaries.
  • Stocks and flows:
    • Flows are the material or information going in and out of an element.
    • A stock is the history of accumulated flows over time.
  • Feedback loops:
    • Balancing feedback loops adjust flows so as to keep stocks within a certain range.
    • Reinforcing feedback loops amplify flows, leading to exponential growth or collapse over time.
    • Resilient systems have multiple feedback loops and some degree of redundancy.
    • Delays in a feedback loop can lead to complexity and oscillations, making it hard to predict a system’s behaviour over time.
    • Complex systems may involve hierarchies of different subsystems, with feedback loops operating at different levels.
  • Before intervening in a system, we should try to understand it:
    • Everything we think we know about the world is merely a model of reality.
    • Understanding systems will likely require interdisciplinary thinking.
    • Some common unintuitive features include: non-linearities, causation, limiting factors, and bounded rationality.
    • Common system archetypes include: policy resistance; tragedy of the commons; drift to low performance; rich get richer and others.
  • Even when we understand a system, we may not be able to control it or predict its behaviour.
    • There’s a gap between knowing and doing.
    • Systems naturally tend to be resistant to change.
    • We’ll never fully understand systems — but we can redesign and learn from them.

Detailed Summary of Thinking in Systems

What is a system?

A system is a set of things interconnected in such a way that they produce their own pattern of behaviour over time.

Every system has three things:

  • Elements
  • Interconnections
  • Purpose or function

Elements

Elements are the easiest to spot, as many are visible, tangible things. For example, a university system is made up of students, professors, administrators, buildings, books, etc.

Often people blame problems on particular actors or events, even when the root cause is the system’s structure. The elements tend to be the least important part of a system because you can usually change the elements without changing the system’s overall behaviour.

There is an exception to this — sometimes changing an element also changes the interconnections or purpose/goal of the overall system. For example, sometimes the leader at the top of the system may have power to change the system’s overall purpose.

Interconnections

These are the relationships that hold the system together, and they often operate through the flow of information. (See below under “Feedback loops” for more.)

Purpose or function

A system’s purpose or function can be the hardest to spot as it’s not necessarily written down anywhere. The best way to find a system’s purpose is to observe its behaviour over time, rather than its rhetoric or stated goals. (See below under “Events vs behaviours vs structure” for more.)

A system may have a function or purpose even if no individual within the system has that function or purpose. For example, no one deliberately created poverty, drug addiction, or war, and many people want to fix them, but it’s difficult to do so because they are intrinsically systems problems.

… everyone or everything in a system can act dutifully and rationally, yet all these well-meaning actions too often add up to a perfectly terrible result.
— Donella Meadows in Thinking in Systems

A system is more than the sum of its parts

Systems have an integrity or wholeness about them. A set of things that are not interconnected and do not have an overall function is not a system. For example, sand scattered on the side of the road is not a system. You can add or take away sand, but it’s still just sand.

A football team’s elements, interconnections and purpose

In a football team:

  • the elements of the system are its players. If you change all the players in a football team, it will still be a football team.
  • the interconnections are the rules of the game. If you change the rules to basketball, you have a whole different game.
  • the purpose of the system is to win games. If the purpose were to lose games, it would profoundly change the system.

Boundaries

Real-world systems rarely have strict boundaries because everything is connected. As complexity grows, systems will often organise themselves into hierarchies. We often take this self-organisation for granted.

However, some degree of simplification will be inevitable as you try to understand a system — so boundaries are also inevitable. Hierarchies are often “decomposable”, meaning their subsystems can function autonomously when separated from other parts. For example, a doctor can usually treat liver disease by just looking at the liver, not at other organs or your personality. But there are also many exceptions to this — e.g. perhaps your job or lifestyle is causing the harm to the liver.

Boundaries of a car dealer’s inventory system

A diagram of a car dealer’s inventory system may show cars coming in from a “supplier” and going out to “customers”. If both supply and customer demand are guaranteed to be steady, this is fine.

But in real life, supply and demand will not always be fixed. If we forget that the boundaries of our system models are artificial, we could be surprised by future events.

Drawing the right boundaries can be tricky:

  • If boundaries are too narrow, the system will likely surprise you as you’ll fail to account for something outside of your model. The greatest potential for unintended consequences arises at a system’s “boundaries”.
  • If boundaries are too broad, your diagram gets too complicated. Systems analysts often fall into this trap, producing diagrams with so much small print it ends up inhibiting understanding rather than aiding it.

Where you draw the boundaries depends on what you want to understand as well as your timeframe (longer timeframes generally require broader boundaries). Ideally, we’d find the right boundary for thinking about each new problem, but we tend to get attached to the boundaries we’re used to.

The right boundary for thinking about a problem rarely coincides with the boundary of an academic discipline, or with a political boundary. Rivers make handy borders between countries, but the worst possible borders for managing the quantity and quality of the water.
— Donella Meadows in Thinking in Systems

Stocks and flows

Flows are the material or information going in and out of an element. A stock is the history of accumulated flows that have built up over time. For example, the amount of water in your bathtub is a stock while the water going in and out of it are the flows.

Stocks act as shock absorbers

A system’s “stock” acts as a buffer or shock absorber. For example, even if you turn off the faucet and pull the plug, it takes time for your bathtub to drain. This is one reason why systems are slow to change.

While this slowness can cause problems, it also helps keep the system stable even when flows are unpredictable and there are lags in the system. For example, if you’re a retailer, customer demand may vary and it takes time to receive orders from your supplier. Maintaining a stock (an inventory) allows you to keep selling constantly.
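
To make the stock-and-flow idea concrete, here is a minimal sketch in Python (my own illustration, not from the book). The stock only changes through its flows, so even after the tap is turned off the tub takes several steps to drain:

```python
# Minimal stock-and-flow sketch (illustrative only, not from the book).
# The stock accumulates the difference between inflow and outflow each step.

def simulate_stock(steps, inflow, outflow, stock=0.0):
    """Return the stock level after each time step."""
    history = []
    for _ in range(steps):
        stock += inflow - outflow   # the stock only changes via its flows
        history.append(stock)
    return history

# Fill the bathtub for 5 steps, then turn off the tap and pull the plug.
filling = simulate_stock(steps=5, inflow=5.0, outflow=0.0)
draining = simulate_stock(steps=5, inflow=0.0, outflow=5.0, stock=filling[-1])
print(filling)   # [5.0, 10.0, 15.0, 20.0, 25.0]
print(draining)  # [20.0, 15.0, 10.0, 5.0, 0.0] -- draining takes time
```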

We often focus too much on flows, instead of stocks or a system’s underlying structure

Events are the output of a system

We tend to see the world as a series of events — elections, battles, disasters, economic booms and busts. We ordinarily talk about specific events at specific times and places. The news media do this too — they rarely put news events into their broader context.

Yet events are merely the output of the black box of the system.

It’s endlessly engrossing to take in the world as a series of events, and constantly surprising, because that way of seeing the world has almost no predictive or explanatory value.
— Donella Meadows in Thinking in Systems (emphasis added)
Try to understand longer-term behaviour and structure instead

Instead of focusing on events, systems thinkers should first look at a system’s performance over time — its behaviour. Time graphs and data can reveal how elements vary or don’t vary together.

Over the long-term, that behaviour can help us understand the system’s stocks, flows, and feedback loops — its underlying structure. System structure is the source of system behaviour, so is key to understanding why events happen.

The key is to start with behaviour (the facts) and go back and forth between behaviour and structure (the theory). People are often far too confident of theories that are utterly unsupported by any data. We also have a tendency to define a problem not by the system’s behaviour but by leaping to our favourite solution.

Feedback loops

A feedback loop occurs when changes in a stock affect the flows into or out of that same stock. Many systems have feedback loops, but not all do.

Real life systems usually have multiple feedback loops of different strengths interacting in very complex ways. A loop that has a stronger impact on behaviour is said to dominate the other. But dominance can shift over time. Complex behaviour often arises as the relative strengths of feedback loops change.

There are broadly two types of feedback loops: balancing and reinforcing feedback loops.

Balancing feedback loops

Balancing loops adjust inflows or outflows to keep the stock at a certain level or within a certain range. For example, if you see your bank balance is low, you may work extra hours (increasing inflows) or cut your spending (reducing outflows), to keep your bank balance within the desired range.

Balancing loops oppose whatever direction of change is imposed on the system, so are sources of stability in the system.

Example: thermostat — two competing balancing loops

A thermostat regulates your room’s temperature by trying to keep it within a desired range with two competing balancing loops:

  • Heating loop. When the temperature falls below the desired range, the thermostat turns on the furnace; when the temperature rises back above that range, it turns the furnace off.
  • Cooling loop. Because heat leaks, the cooling loop always tries to make the room temperature equal to the temperature outside.

The two loops push in opposite directions. Which one dominates depends on things like your insulation quality, furnace size, and how cold it is outside.

Once a room reaches the set temperature, the furnace cycles on and off as heat slowly leaks outside. This is a state of dynamic equilibrium, where the system’s stock (the room temperature) does not change even with flows in and out of the system.
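
As a rough illustration (my own sketch with made-up numbers, not Meadows’ model), the simulation below turns the furnace on whenever the room is below the setpoint, while heat continuously leaks towards the outside temperature. Once warmed up, the furnace cycles on and off around the setpoint (a dynamic equilibrium), and with these numbers the long-run average ends up slightly below the setting, because each correction only happens after some heat has already leaked away (a point Meadows returns to under “Delays in feedback loops”).

```python
# Two competing balancing loops (illustrative sketch, not from the book):
# a heating loop (furnace pushes the room towards the setpoint) and a
# cooling loop (heat leaks out towards the outside temperature).

def simulate_thermostat(steps=50, setpoint=20.0, outside=0.0,
                        furnace_power=3.0, leak_rate=0.1, temp=10.0):
    history = []
    for _ in range(steps):
        furnace = furnace_power if temp < setpoint else 0.0  # heating loop
        leak = leak_rate * (temp - outside)                  # cooling loop
        temp += furnace - leak
        history.append(temp)
    return history

temps = simulate_thermostat()
print([round(t, 1) for t in temps[-6:]])   # cycles on and off around the setpoint
print(round(sum(temps[-20:]) / 20, 1))     # long-run average sits a bit below 20
```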

Reinforcing feedback loops

Reinforcing loops amplify, self-multiply or snowball, creating cycles that lead to exponential growth or collapse over time. This type of feedback loop generates more input to a stock the more there already is (and less input the less there already is). Examples include the inflation wage-price spiral, population growth and compound interest.
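
Compound interest is perhaps the simplest reinforcing loop to write down: the bigger the stock (the balance), the bigger the inflow (the interest). A trivial sketch:

```python
# Compound interest as a reinforcing loop: the inflow (interest) grows
# with the stock (the balance) itself.
balance = 1000.0
for year in range(10):
    balance += 0.05 * balance   # 5% interest reinvested each year
print(round(balance, 2))        # ~1628.89 -- exponential, not linear, growth
```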

However, any physical, exponentially growing system will eventually run into a counteracting balancing loop. No physical system can grow forever in a finite environment. A popular product will eventually saturate the market; a virus will run out of people to infect.

Growth in a constrained environment is so common that it’s called the “limits-to-growth” archetype. If limits to growth are not self-imposed, the system will impose them.

For any physical entity in a finite environment, perpetual growth is impossible. Ultimately, the choice is not to grow forever but to decide what limits to live within.
— Donella Meadows in Thinking in Systems
Example: population growth — reinforcing and balancing loops

A population has both a reinforcing loop (birth rate) and balancing loop (death rate). If birth and death rates are constant, the system will either grow exponentially or die off, depending on which loop dominates.

In real life, birth and death rates change over time. A population may grow exponentially for a while and then level off. Various feedback loops will influence birth and death rates, and some of those loops are themselves affected by the size of the population. In particular, the population is part of a larger system — the economy — and both systems influence each other.
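
A toy model of these loops (my own illustration with made-up rates, not Meadows’ equations): with constant rates, whichever loop dominates determines whether the population grows or dies off exponentially; adding a crude crowding term (a stand-in for the limits-to-growth archetype above) makes the growth level off instead.

```python
# Toy population model (illustrative only): births are a reinforcing loop,
# deaths a balancing loop, and an optional crowding term caps growth.

def simulate_population(steps, birth_rate, death_rate, pop=100.0, capacity=None):
    history = []
    for _ in range(steps):
        births = birth_rate * pop            # reinforcing loop
        deaths = death_rate * pop            # balancing loop
        if capacity is not None:
            # crude limits-to-growth: crowding suppresses births as the
            # population approaches the carrying capacity
            births *= max(0.0, 1.0 - pop / capacity)
        pop += births - deaths
        history.append(pop)
    return history

growing   = simulate_population(10, birth_rate=0.05, death_rate=0.03)   # births dominate
shrinking = simulate_population(10, birth_rate=0.03, death_rate=0.05)   # deaths dominate
levelled  = simulate_population(500, birth_rate=0.05, death_rate=0.03, capacity=10_000)
print(round(growing[-1]), round(shrinking[-1]), round(levelled[-1]))
# grows exponentially, shrinks exponentially, and levels off (here well below
# the nominal capacity, because deaths balance births before it is reached)
```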

Example: oil extraction

Oil extraction is an example of a stock-limited system, with a renewable stock (capital) constrained by a non-renewable stock (oil).

The oil extraction system has both reinforcing and balancing loops:

  • Reinforcing loop — extracting more oil yields more profits, which you can reinvest back into capital to extract even more oil.
  • Balancing loop — as you extract more oil, the next barrel of oil becomes harder and costlier to extract.

Since the system is stock-limited, a faster extraction rate just shortens the lifetime of the resource. If the extraction rate grows exponentially, even doubling the amount of oil won’t make a big difference to the lifetime of the oil well (though it does of course make a big difference in the amount of oil extracted).
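
A quick back-of-the-envelope check of that last claim (my own made-up numbers, not from the book): if extraction starts at 1 unit per year and grows 7% a year, doubling the resource from 100 to 200 units adds only about a decade to its lifetime.

```python
# How long a non-renewable stock lasts when extraction grows exponentially
# (illustrative arithmetic only; the numbers are made up).

def years_until_depleted(resource, extraction=1.0, growth=0.07):
    years = 0
    while resource > 0:
        resource -= extraction
        extraction *= 1 + growth   # reinforcing loop: profits reinvested in capital
        years += 1
    return years

print(years_until_depleted(resource=100))  # ~31 years
print(years_until_depleted(resource=200))  # ~41 years -- double the oil, only ~10 more years
```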

Example: fishing economy

Fishing is an example of a flow-limited system, with a renewable stock (capital — fishing boats, etc) constrained by another renewable stock (fish stocks). Again, both reinforcing and balancing loops are at play:

  • Reinforcing loop — as before, extracting more fish leads to more profits, which can be invested into more fishing boats.
  • Balancing loop — again, as fish become scarce, it costs more capital to get the same amount.

But there’s a further factor here since it’s a renewable stock: the fish reproduction rate, which varies depending on fish density:

  • High fish density — low reproduction rate as population is limited by available food and habitat.
  • Medium fish density — high reproduction rate as a reinforcing loop can lead to exponential population growth.
  • Very low fish density — low reproduction rate as fish may not be able to find each other, or another species moves into their habitat. Below a critical threshold, the stock may not be able to recover. Small or sparse populations can therefore be particularly vulnerable.

What ends up happening depends on the relative strength of the feedback loops and the critical threshold. Meadows walks through three scenarios (a rough simulation sketch follows this list):

  • Equilibrium — the balancing loop brings the harvesting rate into equilibrium with the fish stocks, allowing a steady harvesting rate indefinitely.
  • Oscillations — the balancing loop is weaker, as the industry manages to improve its harvesting efficiency. This causes the harvesting rate, capital stock, and fish stock all to oscillate around a stable value.
  • Collapse — technology overpowers the balancing loop, eventually collapsing the harvesting rate, capital stock and resource stock.
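
Here’s a rough simulation sketch of the fishery (my own toy model and numbers, not Meadows’) showing the two ends of that spectrum: a steady state with modest harvesting technology, and a collapse when the technology gets better at catching scarce fish. (Her oscillating case needs the harvesting efficiency to improve gradually over time, which isn’t modelled here.)

```python
# Toy renewable-resource model (illustrative only): fish regenerate
# logistically above a critical threshold, boats catch fish in proportion
# to fish density (the balancing loop), and profits are reinvested in
# more boats (the reinforcing loop).

def simulate_fishery(steps=300, catch_rate=1.0, fish=800.0, capital=20.0,
                     K=1000.0, regen_rate=0.3, threshold=200.0,
                     reinvest=0.1, depreciation=0.05):
    for _ in range(steps):
        regen = regen_rate * fish * (1 - fish / K) if fish > threshold else 0.0
        harvest = min(fish, catch_rate * capital * fish / K)    # scarcity limits the catch
        fish += regen - harvest
        capital += reinvest * harvest - depreciation * capital  # profits buy more boats
    return round(fish), round(capital)

print(simulate_fishery(catch_rate=1.0))  # settles to a steady fish stock and fleet
print(simulate_fishery(catch_rate=3.0))  # better catch technology -> stock collapses
```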

Resilient systems have multiple feedback loops and some redundancy

Resilience is a measure of a system’s ability to persist in the face of change and to repair or restore itself. Our bodies and ecosystems are examples of incredibly resilient systems.

A resilient system will have a structure of feedback loops that can restore it even after a large shock. Systems are more resilient if there are multiple feedback loops operating through different mechanisms and different time scales, with some redundancy. Feedback loops that can learn and evolve more complex restorative structures lead to even more resilience.

A common mistake is to confuse resilience with constancy. Constancy is easy to observe; resilience is not. You might only see whether a system is resilient when you push it to its limits or take a whole-system view. But constant systems can be remarkably unresilient, and resilient systems can be very dynamic. Because resilience can be hard to see, people often sacrifice it for more visible measures like stability or efficiency. For example, just-in-time supply chains increase efficiency at the cost of resilience.

Delays in feedback loops

Delays are inevitable

We seem to be constantly surprised by how much time things take. [See also Planning fallacy.] In a system, all stocks are delays. Most flows also have delays (shipping delays, processing delays, etc). A feedback loop cannot deliver information or have an impact fast enough to correct the behaviour that drove the current feedback. A flow cannot react instantly to another flow; it can only react to a change in a stock, and only after a slight delay.

A balancing feedback loop that tries to maintain a stock at a particular level must have its goal set to compensate for other inflows or outflows, or else it will miss its target. Recall the thermostat example — it takes some time for the furnace to correct for heat loss, so the equilibrium temperature will be slightly lower than the thermostat’s setting. You should therefore set the thermostat slightly higher than your desired temperature.

Which delays you care about will depend on your purpose and timeframe.

Delays can lead to oscillations and complexity

Delays can make a system’s behaviour complex and hard to predict. Many economic models assume that consumption or production can respond instantly to changes. But real economies are extremely complex systems, full of balancing feedback loops with delays. They are inherently oscillatory and don’t behave exactly like the models.

The example below shows how a system can behave counterintuitively when we start trying to change it.

Car dealer inventory and delays

A car dealer tries to keep enough inventory for 10 days’ worth of sales. If the sales rate (outflows) rises, the dealer increases her orders to the supplier (inflows) to try and match it. Assume for simplicity that she makes orders daily.

There are three types of delays in the system:

  • Perception delay. The dealer averages sales over 5 days to try and sort out real trends from temporary blips.
  • Response delay. The dealer spreads out the change over the next three orders, rather than making the whole adjustment in one order.
  • Delivery delay. The car supplier takes time to process the order and deliver the cars.

These combined delays will cause inventory levels to oscillate. Let’s say an increase in sales causes orders to go up. But it takes time for those orders to come in and, in the meantime, inventory goes down further (thanks to the higher sales rate) and the dealer orders even more cars. Once the orders come in, inventory levels more than recover, as the dealer ordered too much when inventories were low. She then cuts back but, again, because of the delays, she ends up with too little inventory.

If the dealer tries to smooth out inventory levels by averaging sales over two days instead of five (reducing her perception delay), this actually makes the oscillations slightly worse. If she instead tries to reduce her response delay by making the whole adjustment in one order instead of three, the oscillations get much worse. By contrast, if she increases her response delay from three orders to six, the oscillations are significantly dampened.
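
The sketch below is my own attempt to reconstruct this behaviour with a toy model and made-up numbers (not Meadows’ actual figures). Sales jump 10% on day 20; the dealer perceives sales as a moving average, targets ten days of inventory, spreads each correction over a number of days (the response delay), and orders arrive after a fixed delivery delay. Comparing the size of the resulting inventory swings for different response delays reproduces the counterintuitive result: the hastier the response, the bigger the swings. (I haven’t tried to reproduce her perception-delay comparison here.)

```python
# Rough reconstruction of the car-dealer example (my own toy model and
# numbers). Inventory oscillates because of perception, response, and
# delivery delays; reacting faster makes the swings bigger.

def simulate_dealer(days=100, perception=5, response=3, delivery=5):
    sales = [20.0] * 20 + [22.0] * (days - 20)  # 10% jump in demand on day 20
    inventory = 200.0                           # starts at 10 days of sales
    orders = [20.0] * delivery                  # orders already in the pipeline
    history = []
    for day in range(days):
        window = sales[max(0, day - perception + 1):day + 1]
        perceived = sum(window) / len(window)                # perception delay
        gap = 10 * perceived - inventory                     # target minus actual
        orders.append(max(0.0, perceived + gap / response))  # response delay
        inventory += orders[day] - sales[day]                # order placed `delivery` days ago arrives
        history.append(inventory)
    return history

for response in (1, 3, 6):
    inv = simulate_dealer(response=response)
    print(response, round(max(inv) - min(inv), 1))   # size of the inventory swing
# With these made-up numbers, the swing shrinks as the response delay grows:
# adjusting in one order swings wildly, spreading it over six damps it.
```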

Before intervening in a system, we should try to understand it

Before disturbing or intervening in a system, watch how it behaves and try to “get its beat”. But our understanding will never be perfect. Systems will always surprise us to some extent.

Everything we think we know is a model of reality

Everything we think we know about the world is a model of reality, not reality itself. Our models work in most cases, which is why humans are such a successful species. But they are not perfect. They don’t fully represent the world, which means we will often make mistakes.

There is a lot of value in making your models’ assumptions explicit and consistent, and inviting others to challenge them. It’s also good practice to set your models out with diagrams and equations to clarify your thinking, though words, lists or pictures work as well.

Systems require interdisciplinary thinking

Understanding systems will likely cross traditional disciplinary lines. To understand the system, you must learn from various experts — e.g. economists, chemists, psychologists and theologians.

At the same time, you should be careful not to get drawn into the distortions that come from their narrow and incomplete expertise. This can be hard. [In Range, David Epstein similarly talks about how specialists are better at kind problems, while generalists are better at wicked problems.]

Common unintuitive features in systems

Nonlinearities

A linear relationship between two things is represented on a graph by a straight line, with constant, proportional effects and returns. Linear thinking comes very naturally to us. But the world is full of non-linear patterns, which confound our expectations.

More importantly, non-linear relationships can change the relative strengths of feedback loops. This can completely flip a system’s behaviour. A dominant reinforcing loop may lead to exponential growth… until the balancing loop suddenly dominates and leads to rapid decline.

Budworm outbreaks

The spruce budworm is a pest that attacks fir and spruce trees in North America. Starting in the 1950s, governments sprayed forests with insecticide aimed at the budworm. By the 1980s, this seemed unsustainable: spraying costs were very high; citizens were objecting to the widespread poisonous sprays; and the budworm was still killing tons of trees.

Two academics, CS Holling and Gordon Baskerville, tried to get a system-level understanding of the problem. They discovered:

  • Budworms’ favoured food source is the balsam fir tree, followed by spruce.
  • Balsam fir trees grow like weeds, crowding out spruce and birch. Without budworms, forests would become monocultures of fir.
  • As fir populations build, the probability of a budworm outbreak increases non-linearly. The final trigger for an outbreak is several warm, dry springs, which make it easier for budworm larvae to survive.
  • Once an outbreak is triggered, budworm populations also increase non-linearly, faster than their predators can multiply.
  • But once fir populations drop (thanks to overconsumption by budworms), budworm populations crash — again, non-linearly. Spruce and birch trees then move into the spaces where firs used to dominate. The budworm starvation balancing loop begins to dominate the budworm reproduction reinforcing loop.

This cycle oscillates over decades, but is ecologically stable and can go on forever. Insecticides, however, mess this up. They kill not only the budworm, but the budworm’s natural predators, weakening the feedback loop keeping budworm populations in check. Moreover, as the density of fir trees remains high, budworm populations keep moving up to the point where they’re constantly on the edge of a population explosion.

Causation

Our minds like to think about single causes producing single effects. At most, we’re comfortable with dealing with a handful of things at a time.

But the real world is much more complex than that. Many causes (or inputs) will often combine to produce many effects (or outputs). [See also Causality is so much harder than we normally think.]

Limiting factors

Every growing thing has multiple limiting factors [aka “bottlenecks”]. We often fail to appreciate the importance of limiting factors. For example, economics evolved at a time when labour and capital were the most common limiting factors to production. Most economic production functions therefore only account for these two factors, and overlook other limiting factors such as clean water/air, dump space or raw materials.

The interaction between growth and its limiting factors is dynamic — i.e. growth itself will change which factor is limiting. When one factor stops being limiting, growth occurs; but growth then changes the relative scarcity of factors until it hits another limiting factor. For example, if a city meets its inhabitants’ needs better than any other city, people will flock there until some limit brings down the city’s ability to satisfy people’s needs.

Bounded rationality

People make reasonable decisions based on the information they have. However, there are many examples of people acting rationally in their short-term best interests and producing negative results in aggregate. This is because of bounded rationality, due to:

  • Imperfect information. We usually have incomplete or delayed information, particularly on more distant parts of the system.
  • Satisficing. We tend to satisfice instead of optimise. We don’t see the full range of possibilities, so we stick to choices we can live with instead of seeking a long-term optimum.
  • Misperceptions. Even when we have information, we don’t always interpret it correctly. We misperceive risk, discount the future too heavily, and are vulnerable to things like confirmation bias.

Meadows argues that the solution in these cases is rarely to blame the individual or replace them with another person. Instead, we should restructure the system’s information flows, goals, incentives and disincentives so that separate, bounded, rational actions produce outcomes that are in the common interest.

Common system archetypes

Some systems may look very different on the surface but have common feedback loop structures that produce similar dynamic behaviours. These are also known as “archetypes” — common system structures that produce characteristic patterns of behaviour.

I’ve included a very brief summary of the archetypes discussed in the book (Meadows goes into much more detail):

  • Policy resistance (fixes that fail). When there are inconsistent subsystem goals, if one subsystem manages to move the system stock in one direction, other subsystems may just increase their efforts to pull it in the other direction.
  • Tragedy of the commons. This occurs when there is growth in a commonly shared, erodable environment. The commons structure makes selfish behaviour much more profitable than pro-social behaviour.
  • Drift to low performance. In a reinforcing loop, negativity bias causes us to gradually lower our standards, which ends up becoming a self-fulfilling prophecy.
  • Escalation. Competing actors trying to get ahead of each other in a reinforcing loop can build exponentially and quickly reach extremes.
  • Rich get richer. When the winners of a competition get an advantage that helps them compete even more effectively in the future, it results in a reinforcing loop. If the winners’ gains are at the losers’ expense, the losers are eventually bankrupted or forced out. [Meadows calls this “success to the successful” but that sounds far clunkier imo.]
  • Addiction. An actor wants to maintain a stock through a balancing loop and resorts to some intervention (e.g. drugs) to do so. However, the intervention merely fixes the symptoms of the underlying problem. This can cause issues if the intervention undermines the system’s original capability to maintain itself — then more intervention will be needed, leading to more atrophy, etc.
  • Rule beating. Rule beating is evasive action that abides by the letter of the law, but not its spirit. It often occurs when the lower levels in a hierarchy don’t understand or agree with rules imposed from above. This becomes a problem if it causes unnatural behaviour that would not make sense in the absence of the rules.
  • Seeking the wrong goal. This is just about the limitations of proxy measures and Goodhart’s law. In some ways, it’s the opposite of rule beating.

Even when we understand a system, we may not be able to control it or predict its behaviour

There’s a gap between knowing and doing

New systems thinkers often mistakenly think that understanding a system will allow them to predict and control it. Meadows herself made this mistake when she was a student.

But she quickly discovered that there was a big difference between understanding how to fix a system and actually fixing it. This was as true for her own systems as for broader social systems — she understood the structure of addiction yet couldn’t give up her coffee.

Systems tend to be resistant to change

An important function of almost every system is to ensure its own perpetuation. [I think this is due to selection bias, since systems that are not resistant to change would quickly cease to exist in that form.]

If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves ….
— Robert Pirsig, as quoted by Donella Meadows in Thinking in Systems

Systems tend to be resistant to change because of:

  • Stocks (since, as noted above, stocks act as shock absorbers); and
  • Balancing feedback loops.

Meadows suggests a list of 12 possible leverage points, where small changes can cause large shifts in behaviour. Leverage points are often counterintuitive. For example, the Club of Rome (a bunch of business leaders, policymakers, economists and scientists) asked Jay Forrester to develop a complex model to try and understand the nature of the world’s systems and how to solve problems such as poverty, environmental degradation, etc. He did, and came up with a clear leverage point: growth. But decreasing growth (both economic and population growth), not increasing it. World leaders correctly focused on growth as the solution — just in the wrong direction.

In order from lowest leverage to highest, the leverage points are:

  1. Numbers and parameters
  2. Buffers
  3. Stock-and-flow structures
  4. Delays
  5. Balancing feedback loops
  6. Reinforcing feedback loops
  7. Information flows
  8. Rules
  9. Self-organisation
  10. Goals
  11. Paradigms
  12. Transcending paradigms

The book goes through each of these with examples, with Meadows explaining why she placed each one where she did (with the caveat that her list is a work in progress). I won’t attempt to summarise it all here.

My main takeaways were:

  • We often focus a lot on “Numbers and parameters”, which tend to be the short-term flows. But since these numbers generally won’t change the system’s underlying structure, they’re the lowest-leverage item on the list.
  • Changing the “Buffers”, “Stock-and-flow structures” and “Delays” has bigger impacts, but these tend to be very hard or costly to change.
  • Feedback loops are powerful leverage points. Slowing the growth of a reinforcing loop is usually more effective than strengthening the counteracting balancing loops (of which there could be many), which is why “Reinforcing loops” are higher on the list.
  • All the leverage points from “Information flows” up tend to be pretty powerful.
  • “Paradigms” and “Transcending paradigms” seem to be about changing your/other people’s mindset or assumptions. (Meadows didn’t clarify, but I assume these only apply to social systems, not biological or physical systems.)

We’ll never fully understand systems — but we can redesign and learn from them

Complex systems are inherently unpredictable. Each new insight raises more questions, many of which people from other disciplines have been asking for years. We can only understand systems in the most general way and control them temporarily, at best.

That said, we can still design and redesign systems. We just have to expect surprises along the way and learn from them.

We can’t control systems or figure them out. But we can dance with them!
— Donella Meadows in Thinking in Systems

Faced with dynamic and changing systems, we must be flexible and adapt depending on the state of the system. The best policies should have meta-feedback loops — loops that alter, correct, and expand other feedback loops — so that learning is incorporated into the process. [This reminded me of the construction examples in The Checklist Manifesto and Chris Blattman advocating for taking an incremental approach in Why We Fight].

Other Interesting Points

  • Meadows wrote Thinking in Systems in 1993, but the book was not published until 2008, seven years after Meadows unexpectedly died.
  • It’s difficult to discuss systems only in words because words follow a linear order. Systems, on the other hand, happen all at once, with connections in multiple directions simultaneously. Meadows uses various diagrams in the book, which I have not reproduced because of copyright.
  • Sailboat racing started out being for fun — people just raced whatever boats they already had. The sport became more standardised as people discovered that races are more interesting if competitors are roughly equal in speed and manoeuvrability. People then started designing boats for winning races in particular classes. As races got more serious, rules became stricter, and boats got more and more weird-looking. Today, racing sailboats are nearly unseaworthy — you’d never use them for anything but racing.

My Review of Thinking in Systems

I really enjoyed Thinking in Systems. I’d already thought a lot about many of the ideas in it (bottlenecks, delays, feedback loops, etc) in a loose, scattered way. Meadows’ book helped me put that all together into a more coherent framework. In some ways, it felt like when I started learning basic economics — while some of the ideas may sound kind of “obvious” taken individually (externalities, principal-agent problems, etc), it’s still incredibly useful to have an overarching framework to support your thinking.

Generally, I found Meadows to be a trustworthy writer. She explains at the outset that the book is biased and incomplete, dealing only with the core of systems theory and not the leading edge. Meadows also admits to being regularly surprised by systems herself. And, for the most part, her proposed solutions also contain a good dose of humility.

That said, the book is not perfect. I thought several of the practical examples (especially the short ones) were a bit more arguable than Meadows made them seem. Some may also accuse Thinking in Systems of being “too left-wing”. This didn’t particularly bother me since Meadows admits at the outset she has biases and I actually agree with her on many issues, though I can see how others with different views might find it annoying. However, I found it interesting that Meadows’ conclusions tend to be more left-wing (e.g. protect the environment, tax the rich), but she reaches them using more conservative lines of reasoning (i.e. don’t disturb systems you don’t understand).

I’ve also posted some detailed thoughts about economic issues and examples raised in Thinking in Systems here.

Have you read Thinking in Systems? Share your thoughts on this summary in the comments below!

Buy Thinking in Systems at: Amazon | Kobo <– These are affiliate links, which means I may earn a small commission if you make a purchase. I’d be grateful if you considered supporting the site in this way! 🙂

2 thoughts on “Book Summary: Thinking in Systems by Donella Meadows”

  1. Thanks, this is interesting. The Thinker’s Toolkit has some rudimentary stuff on this in the “Causal Flow Diagrams” chapter, with some interesting examples. I think it’s an incredible skill to think this way.

    However, I think there’s one red flag:

    “For example, the Club of Rome (a bunch of business leaders, policymakers, economists and scientists) asked Jay Forrester to develop a complex model to try and understand the nature of the world’s systems and how to solve problems such as poverty, environmental degradation, etc. He did, and came up with a clear leverage point: growth. But decreasing growth (both economic and population growth), not increasing it. World leaders correctly focused on growth as the solution — just in the wrong direction.”

    To take poverty – I think if systems thinking suggests that economic growth causes poverty then there’s probably something wrong in the development of the system model.

    I’m reading Enlightenment Now, by Steven Pinker (very good so far) and it’s got a good perspective on poverty with some great quotes. In particular:

    “Poverty has no causes. Wealth has causes” (Peter Bauer)

    He also has a quote from historian Carlo Cipolla:

    “In preindustrial Europe, the purchase of a garment or of the cloth for a garment remained a luxury the common people could only afford a few times in their lives. One of the main preoccupations of hospital administration was to ensure that the clothes of the deceased should not be usurped but should be given to lawful inheritors. During epidemics of plague, the town authorities had to struggle to confiscate the clothes of the dead and to burn them: people waited for others to die so as to take over their clothes – which generally had the effect of spreading the epidemic”.

    That is, prior to economic growth, we were all just desperately poor.

    I’ve noticed the “degrowth” narrative is somewhat popular these days but I do think it’s very wrong. There’s some good Noah Smith stuff on this here:
    https://www.noahpinion.blog/p/people-are-realizing-that-degrowth

    Definitely interested in more discussion of economic issues and examples – could be that Meadows has a more sophisticated argument than what I’ve understood from the excerpt in the summary.

    1. Thanks, Phil. Yeah, I thought you might pick up on this point. I do go into it more in my economics post which I’ll post later this week.

      Meadows’ argument in this book is not much more sophisticated than what I put above, but she wrote a whole other book called “Limits to Growth”, which I haven’t read, but which may well be more nuanced.

      Spoiling my later post somewhat, but – basically, I think Meadows is right to point out that there *are* limits to growth in a constrained environment. However, I think she’s wrong in jumping to *decreasing* growth as the answer.
