Reining in the ‘Unaccountability Machine’
Public and private efforts at simplification have instead created a crisis of decision-making impotence.
By Lars Erik Schönander, Research Fellow at the Foundation for American Innovation
The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind by Dan Davies
Imagine you’re on an automated phone line, punching through an opaque menu of options in an effort to reach the one person who can resolve the reason you called. Such struggle and tedium just to connect to a human is a quintessential American consumer experience. No one enjoys the process, but it hardly seems like the end of the world.
But these types of opaque systems also exist in the public sector, on larger scales, and with far more damaging consequences. Take the project permitting process—the gauntlet that determines whether companies can build anything at all—which is filled with laws whose application prolongs projects to no end. New projects can be endlessly delayed by endangered animals that don’t even exist. The National Environmental Policy Act (NEPA), a mandatory system of environmental review, can delay new builds for years and subject would-be builders to thousands of pages of paperwork with little to no explanation. New York City’s recent environmental review for congestion pricing took years of work and produced thousands of pages to be reviewed.
What connects the automated phone tree to NEPA? Both are what Dan Davies would call accountability sinks: mechanisms that siphon accountability out of systems, leaving outsiders struggling to discover the responsible parties when those systems fail. In The Unaccountability Machine: Why Big Systems Make Terrible Decisions—and How the World Lost Its Mind, Davies undertakes a journey to figure out how decision-making systems have created a world of unintended consequences, and how attempts to simplify complex systems instead led those systems to go haywire.
***
To illustrate what an accountability sink looks like in practice, Davies recalls an incident in 1999 in which 440 squirrels transported to Schiphol airport in the Netherlands met their demise in an industrial poultry shredder. The squirrels’ deaths resulted from the combination of several policies, each understandable in isolation, but made without considering how they would interact with one another—or with the well-being of the cargo.
The plane that transported the squirrels had already departed before anyone realized there wasn’t a permit for the cargo. KLM Royal Dutch Airlines, not knowing what to do about the furry passengers, shunted decisions about animal paperwork to the airport’s agricultural department. But that department, concerned with biosecurity rather than animal welfare, had a policy of euthanizing animals without paperwork—not of figuring out why the animals had been deliberately put on the plane. The result of 440 squirrels with no paperwork entering the airport, with modern processes operating in isolation? 440 squirrels needlessly shredded by low-level employees.
While accountability sinks are obviously bad, their existence is understandable, Davies explains. People create processes for organizations to standardize routine decisions, making them less dependent on individual whims. Standardized processes let an organization grow, because attention no longer needs to be spent on problems that have already been solved.
But Davies’s main focus is on why people create unaccountable systems in the first place, and what follows from that choice. Understanding it requires a grasp of the work of British management theorist Stafford Beer, who serves as our guide to cybernetics—the study of self-regulating systems, or systems that can make adjustments when needed, like a thermostat.
Beer stumbled into the theory almost by accident. He practiced what people now call “operations research” during World War II, specifically to improve the education of unruly troops. After the war, Beer learned more about best practices in training from American mathematician Norbert Wiener’s book Cybernetics, which provided an analysis of feedback systems. Hooked, Beer set out to apply these ideas to management consulting, dedicating his life to implementing the approach everywhere from traditional factory production lines to less-conventional markets—even once applying the theory to the economy of an entire country.
Beer’s fundamental (and human) issue—his inability to read the minds of the soldiers he was tasked with educating—was a classic “black box” problem: a complex system whose inner workings are difficult, time-consuming, or even dangerous to understand. Rather than attempt to scrutinize the inscrutable, cyberneticians look at the outputs of a given system to see what happens when one of its inputs changes. During the war, Beer would make one change to his unit and evaluate whether it improved the unit’s educational level or not—no individual feedback necessary.
What Beer can be credited for is a pithy explanation of black boxes and how they can be used to analyze systems: “The purpose of a system is what it does,” abbreviated as POSIWID. Davies applies this to KLM. The airline, the airport, and the Dutch government did not intend for 440 squirrels to be shredded. Unfortunately, the overlap of their policies did just that. The KLM incident encapsulates an ominous observation about modern decision making: In many crises, there are no easy villains, just people working in systems that they do not truly control or understand. If 440 shredded squirrels are the result of three small policies overlapping, what happens when far larger systems interact with one another, with limited understanding of where everything could go awry?
Davies closes with another of Beer’s ideas, the “viable system model,” which maps the flow of information throughout an organization. The model is a recognition that, without effective methods of managing information, corporate and institutional decisions are made without reference to reality.
Organizations deform as their decision-making systems lose their grounding in the real world. Once reality intrudes, it is often too late for an organization to reform itself, and disaster ensues.
***
But how do organizations become so out of touch? What makes Davies’s book interesting and valuable is its account of how oversimplification separates institutional decisions from reality, and of how that problem creeps into so many other aspects of our modern world.
The offenders Davies starts with are the dismal scientists—economists—because of their outsize influence on public policy. Two methods borrowed from other fields gave them such power: modeling issues as optimization problems with a “simple” solution, and going out and actively collecting data to answer empirical questions. With these tools, economists can model proposals and, if they like the outcome, help organizations enact them. But what happens when these oversimplified models create an inaccurate picture of reality?
These powerful tools can create blind spots, particularly in policy recommendations derived from modeling. The “Ricardian Vice,” as described by the economist Joseph Schumpeter, is the tendency to assume that a simplified model can capture the complexity of the real world, and that solving a model on paper means the problem can be solved beyond theory. This often triggers a feedback loop in which models compound on one another, growing ever more abstract and isolated from reality.
But the more critical flaw is the discipline’s surprising ignorance of fields integral to economics itself, like financial accounting and business operations. Davies notes that economists typically learn little about the other players within the economy and how they operate; the majority of economists, he points out, will never take a course in accounting. If your specialty is understanding the economy, yet you are illiterate in the ways its actors actually operate, it’s hard to imagine your advice will have merit.
Despite this, economists score a utility victory over other social scientists in the eyes of policymakers for a simple reason: numeracy. Numbers let you make a quantitative argument rather than a qualitative one. For data to be useful, it has to be actionable, and economists provide data in the form of dollars to be saved or gained as a result of a policy change. A politician can easily use a cost-benefit analysis to explain why a program should be cut—even if the conclusion is wrong. A more nebulous case study is harder to make tangible than something tied to numbers.
To prove a point about unrealistic modeling, Davies highlights how one country, Chile, tried to apply cybernetic management to its entire economy through what it dubbed “Project Cybersyn.” Beer was brought on to develop a computerized set of tools to track the Chilean economy, with the goal of efficiently managing the industries the Allende government nationalized after it took power.
But instead of the beautiful vision of autonomous worker control and management by exception, the new management faced the same problems the previous managers had—just with another layer of redundancy. “Interventors” passed down orders from the government and had the same troubled relationship with workers as the management they replaced. The socialist effort had simply invented a new class of managers with no added utility.
What the experience made clear was that attempting to automate the management of an industrial economy was a doomed effort, though Pinochet’s 1973 coup against Allende brought the project to an end before much could be learned. But this type of simplification isn’t limited to socialist pipe dreams. The most important aspect of the saga is that it highlights similar failures of simplification in mainstream economic theory, particularly in the work of Milton Friedman.
In his seminal essay “The Social Responsibility of Business Is to Increase Its Profits,” Friedman argues that businesses do not have responsibilities; only the managers within those businesses do. Firms, in his view, are just groups of individuals making decisions, and his argument requires managers to pretend they are enacting the will of individual shareholders.
But the number of relationships among even a small firm’s employees is enormous, and larger still once one adds in the firm’s shareholders, each with their own interests. How could economists smooth over all of this complexity without a care for the specifics?
But instead of being laughed out of the room, these theories quickly became a widespread ideology, sometimes with disastrous consequences. Davies ticks through a number of other examples (particularly General Electric under Jack Welch), but the culprit is the same: People designed systems with the goal of simplification, yet their efforts did not make the world simpler. Instead, the abstractions they built were convincingly deceptive.
***
No failure better captures how attempts to oversimplify reality can cause destruction than the 2008 financial crisis. Economists thought the economy was in great shape up until the moment it came tumbling down, even calling the epoch the “Great Moderation.” The argument seemed reasonable: Inflation was low and GDP was growing steadily, precisely the simplified metrics that economists suggested were definitive indicators of strength. And, as then-Federal Reserve Governor Ben Bernanke described it, we were living through a 20-year period of declining economic volatility. But hiding just beneath the surface were problems that policymakers never incorporated into the models that were supposedly such good diviners of the future: rising personal debt levels that fueled the housing bubble of the early 2000s, and private-sector debt, which central bankers did not track as intensely as public debt.
Even practitioners of the dismal science now recognize the failure. Bernanke, writing about the aftershocks of the financial crisis, admitted that economists as a profession failed to predict “the full nature and dimensions of the crisis—including its complex ramifications across markets, institutions, and countries.” Economists erred not out of malfeasance, but because they ignored new information that would have complicated their models, biased as they were by the success of economies from the 1990s through the early 2000s. By not incorporating information about rising debt levels until it was too late, economists handed policymakers a model of reality that failed to capture reality.
Alas, the book’s conclusion is light on solutions, and the few it offers target only the private sector. That’s a shame, because public policy is filled with programs and processes built on oversimplified models out of step with reality, creating unintended consequences. In fairness, Davies’s recent work has begun to address these types of problems.
And while the ideas in the book flow together well enough, there is a sense of disjointedness when jumping between its thematic parts. At least Davies is self-aware about the final-chapter problem.
Davies closes with a warning: The world is not going to get simpler, so we had better get used to complex systems and to failures that have no clear culprit. But he does provide a toolkit for understanding how we got here. What’s sorely needed is to apply it to the sclerotic state of American governance.
***
In recent years, there have been serious discussions of how to escape the self-created complexity that Davies points to, often under the banner of the “abundance agenda,” best summarized in Ezra Klein and Derek Thompson’s book Abundance. The agenda’s theory of decline is that the United States is so focused on process that it has become impossible to actually make what people need and want. Examples of process run amok abound. A bus shelter that provides no shade. A high-speed rail project so poorly managed that the company left for a project in Morocco, arguing that the North African nation was “less politically dysfunctional” than California. And the less said about the permitting process, the better.
Beer’s idea that “the purpose of a system is what it does,” for example, can serve as a pithy but practical measure for paring back burdensome, unnecessary regulation. Take the Paperwork Reduction Act (PRA), which aims to reduce the amount of paperwork the government makes individuals, businesses, and itself process. Far from reducing paperwork, the PRA has increased the amount of ink agencies have to generate: Approval from the Office of Information and Regulatory Affairs (OIRA) can take six to nine months. And while OIRA has taken steps to ease PRA requirements so agencies can more easily complete tasks like user research, if completing a task requires a nine-month process that generates even more paperwork, then the paperwork wasn’t reduced, was it?
Another, more tragic example involves NEPA and wildfires. NEPA is one of several environmental laws created in the 1970s to protect the environment; it requires federal agencies to assess the environmental consequences of their actions. Congress had plenty of reasons to pass laws like NEPA after the environmental catastrophes of the 1960s. But today these regulations have unintended results that sometimes harm, rather than protect, the environment. In fact, most NEPA problems stem from one part of the law: its environmental reviews. Because Congress never defined how detailed a “detailed statement” has to be, agencies have interpreted the requirement as conservatively as possible. How conservative? Thousands of pages and years’ worth of work. This conservatism has consequences. The Foundation for American Innovation’s Thomas Hochman found four cases in which NEPA delayed wildfire-prevention projects long enough that the wildfire happened before mitigation work even began.
With a new administration focused on building and innovation, this is an ideal moment for Congress to get more serious about how it interacts with agencies. This is not about micromanagement; it is about making sure Congress has visibility into the otherwise-unaccountable systems that can wreak havoc.
From my own experience working as a Senate staffer, it is a challenge to get information from federal agencies, even to ask cursory questions. It can take months of back-and-forth to actually meet with the executive branch staff responsible for the programs Congress is tasked with overseeing. Increasing visibility—getting a better look into the proverbial squirrel shredders before things go sideways—can help improve the underlying processes.
Fortunately, Congress would not be reinventing the wheel in providing this type of oversight. As FAI policy analyst Kevin Hawickhorst wrote for The American Conservative, Congress has historically used bodies like the Cockrell Committee, set up in the 1880s to examine how agencies functioned on a day-to-day basis, to see executive agencies’ problems with its own eyes. A modern-day Cockrell Committee would allow congressional staffers to see hands-on how various programs work—how an agency employee is onboarded, or how a grant is disbursed.
The end of the Chevron doctrine also means that federal courts need no longer defer to federal agencies’ interpretations of the law, opening up more space to critique agency decision making. That puts a premium on Congress making its intent clear; sense of Congress resolutions (statements of Congress’s opinion on a given matter) within a bill are one way to do so, spelling out the intent behind new laws that agencies are required to implement.
Davies’s book helps resurrect the tools of cybernetics, which can help diagnose the problems in our many broken systems and offer insight into how to rebuild them. Past efforts to rein in our own unaccountable machines—by Congress, previous administrations, and civil society—haven’t worked, and the damage is all around us, only getting worse. Perhaps this rediscovered wisdom can lead us in a better direction.