The Pretence of Knowledge, Part 5
"When we’re dealing with incredibly complex systems—like economies or societies—we have to accept that some things just can’t be predicted."
If you're just joining us, we've been translating Friedrich Hayek's major works into plain English for modern readers. Last month we completed The Use of Knowledge in Society, which you can [download as a PDF here]. Now we're working through his 1974 Nobel Prize lecture, The Pretence of Knowledge. This is Part 5. You can find Part 1 here, Part 2 here, Part 3 here, and Part 4 here.
People today expect way too much from science—especially when it comes to fixing society’s problems. And that’s a big problem. Even if the best scientists understand the limits of what they can actually do when it comes to human behavior, politics, or the economy, the public still expects miracles. And because those expectations exist, there will always be people willing to step up and promise more than they can deliver. Some of them are frauds, others are probably sincere—but all of them oversell science.
The problem is, it’s incredibly hard for most people to tell the difference between good science and pseudoscientific nonsense—especially when both come wrapped in charts, graphs, and institutional prestige. Take the famous Limits to Growth report: it got huge media attention because it claimed to use science to predict ecological and economic doom. But what you probably didn’t hear about were the sharp, devastating critiques of that report by actual experts. And it’s not just economics that’s plagued by this “we can control everything” mindset. You’ll find the same blind faith in “scientific” control in fields like psychology, psychiatry, sociology, and even so-called philosophies of history. These fields often treat science like a set of formulas that can be applied to human life—with very shaky results.
To protect the reputation of science itself, we need to challenge this kind of pseudoscientific overreach—especially when it becomes entrenched in universities or public policy. Thankfully, thinkers like Karl Popper have given us tools to separate real science from imposters. His ideas about falsifiability—whether a theory can be tested and potentially proven wrong—are crucial for holding science to its proper standard. And honestly, I think some of the most accepted ideas today wouldn’t pass that test.
But beyond Popper’s filter, there's something deeper going on in the social sciences. When we’re dealing with incredibly complex systems—like economies or societies—we have to accept that some things just can’t be predicted. In fact, pretending that we can predict or control everything using “science” might actually make us worse at understanding and improving human life. That false sense of certainty might be the biggest obstacle of all.
Here’s the main thing to keep in mind: the reason physics has progressed so quickly is that it deals with problems that can be explained using just a few key variables. That’s a huge advantage. In physics, once you identify a few basic facts or probabilities, you can predict what’s going to happen with great accuracy. That’s not the case in the social sciences, where we’re dealing with complex systems made up of tons of interconnected parts. These “essentially complex phenomena”—like economies or societies—don’t behave like simple machines.
The challenge in economics and other social sciences isn’t so much coming up with theories to explain what we see. We’ve actually done a decent job of that. The real challenge is applying those theories to the real world, because that means gathering a huge amount of specific information about countless little details—details we usually can’t get. Theories are only useful when you can plug in the real-world data. But in the social sciences, gathering that data is often impossible, or so time-consuming and complex that it might as well be. Computers can crunch numbers all day, but they can’t help you gather facts that nobody knows in the first place.
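To make that concrete, here is a minimal sketch in Python (my illustration, not anything from Hayek's lecture). The toy supply-and-demand model and every number in it are invented; the point is just that a correct theory only produces precise answers when you can supply the particular facts it needs.

```python
# A toy market model: demand = a - b*p, supply = c + d*p.
# The theory is complete; the hard part is knowing a, b, c, d.

def equilibrium_price(a, b, c, d):
    """Solve a - b*p = c + d*p for the market-clearing price p."""
    return (a - c) / (b + d)

# With the particular facts in hand, the theory is razor-sharp:
print(equilibrium_price(a=100, b=2, c=10, d=1))   # 30.0

# But those four numbers stand in for millions of individual preferences
# and costs that no statistician can actually observe. Knowing only rough
# ranges, the same correct theory returns only a rough range of answers:
print(equilibrium_price(a=80, b=3, c=20, d=2))    # 12.0
print(equilibrium_price(a=120, b=1, c=5, d=0.5))  # about 76.7
```

The computer solves each version instantly; what it cannot do is tell us which of those numbers actually describes the real economy.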
Let me give you a simple example. Imagine a sports game between two evenly matched teams. In theory, if you knew everything about each player’s mental state, reflexes, heart rate, and muscle condition at every moment, you could predict the winner. That’s because you’d have enough data to apply your theory. But we can’t know all that, so the game becomes unpredictable—scientifically speaking—even though we might have a general idea about what matters most in the outcome.
This doesn’t mean we’re totally in the dark. If we know the rules of the game, we can at least predict the general flow of events. We’ll know what types of actions are possible and which ones aren’t. But we won’t be able to say who will score next or who will win. That’s the difference: we can make pattern predictions, but not precise predictions—and that distinction is crucial in understanding the limits of science when it comes to human affairs.
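If it helps to see that distinction run, here is another minimal sketch in Python (again my own illustration, not Hayek's). It simulates thousands of games between two evenly matched teams: the long-run win rate, the pattern, is easy to forecast, while the score of any single game is not.

```python
import random

def play_game(num_plays=101, skill_a=0.5):
    """One simulated game: each play hinges on particulars we model as chance."""
    score_a = sum(random.random() < skill_a for _ in range(num_plays))
    return score_a, num_plays - score_a

random.seed(42)
games = [play_game() for _ in range(10_000)]

# Pattern prediction: the aggregate behavior is stable and forecastable.
win_rate_a = sum(a > b for a, b in games) / len(games)
print(f"Team A win rate over 10,000 games: {win_rate_a:.3f}")  # close to 0.500

# Precise prediction: no amount of theory tells us this one result in advance.
a, b = play_game()
print(f"Score of the next game: {a}-{b}")
```

The rules of the simulation constrain what can happen, so the distribution of outcomes is stable and predictable even though each individual outcome stays out of reach.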
Core Ideas from Part 5 of "The Pretence of Knowledge"
Science advances best in simple systems: The physical sciences succeed where outcomes depend on a few measurable variables. Social sciences deal with many more variables and complex interconnections.
Social phenomena require specific knowledge: Predicting events in the economy (or society) requires knowing a massive number of particular facts. Theories alone aren’t enough—we need the data, and that data is usually inaccessible.
Pattern prediction vs. precise prediction: We can often forecast the general behavior of complex systems (like knowing what a sports game might look like) but not specific outcomes (like who will win or score next).
Scientific methods have limits in social systems: Computers and equations are powerful, but they’re only useful when we know what to plug into them. In human affairs, the missing data is often the hardest problem of all.
Overconfidence in data harms understanding: Expecting the same level of precision in economics as in physics leads to false confidence and potentially disastrous policies.