How to Thrive in Uncharted Times: An Interview with Margaret Heffernan

By Kay Corry Aubrey, User Researcher and Trainer, Usability Resources, Inc., Bedford, Massachusetts, kay@usabilityresources.net
Margaret Heffernan, author of seven books and three TED Talks

Margaret Heffernan is a former tech executive, TED speaker, and BBC producer who has written several internationally acclaimed books. We talked recently about some of the key themes in her latest book, Uncharted: How to Navigate the Future.

Kay: You mentioned that forecasting is a prime example of how reality is complex and cannot be reduced to an algorithm.

Margaret: I became really interested in the degree to which even the best forecasting techniques today have limited accuracy and reliability. The people who have studied forecasting and forecasting methods most carefully say that probably the best we can do is look about 400 days out, and that's if you are a "super forecaster": you spend a great deal of your life doing this, you read widely with no ideological bias, and you constantly compare the forecast to what, in fact, happens, so that you learn where your errors and mistakes and biases might be. If you're like the rest of us, it's probably closer to 150 days.

I thought this was quite remarkable. The whole way we think about decision-making is predicated on a model that just doesn't work. I looked at economic forecasting. I looked at political forecasting. I looked at the use of profiling to predict events such as terrorist attacks, or at how we might forecast where great scientific breakthroughs will occur. I talked to a lot of people in the intelligence community about forecasting violence and the outbreak of wars. I talked to a lot of historians about whether history repeats itself. Finally, I went head-to-head with geneticists who say that your DNA is the forecast of your future. None of these stood up.

Now, that isn’t to say they don’t have their uses, but what I found was, if you really want rock-solid forecasting about the future, these won’t deliver that. That just led me to think this could cause a big reset. What can we do? What are the things that we could do that might be better than the stuff that we can’t predict?

What is dangerous is believing the forecasts that people tell you. Philip Tetlock, who is probably the most eminent academic forecaster, pointed out that the more famous the forecaster is, the less likely they are to be right. So, should you get rid of all your forecasting pinups? We can’t not think about the future. The way our brains work is we’re constantly toggling between the past, the present, and the future, and we’re recruiting experiences in the past to try to envisage what the future might look like. This is a creative activity. We have to approach all forecasts with extreme skepticism and critical thinking. We have to keep asking, “Okay, so what if they’re wrong?” In business, we largely run management in a way that aims at maximum efficiency. Efficiency is great when we can predict things with accuracy, like on an assembly line, which is a microenvironment that we can control. But when it comes to the wider world, which we can’t control, and which is inherently complex, efficiency becomes dangerous because it erodes our margins for error and surprise.

We’ve seen this in the coronavirus pandemic where many health systems the world over were run efficiently, which is why when this horrible surprise hit them, they were utterly ill-prepared, without enough machinery, without enough safety equipment. In many cases, without enough doctors and nurses and beds. We have to step back and learn from this very visceral experience of uncertainty. When things are uncertain, you need significant margins. You need robust preparation for surprises, which is quite different from efficient planning for predictable outcomes.

Kay: I was also struck by your examination of the dangers around allowing technology to seep into our lives and control our institutions.

Margaret: When the internet arrived, the whole business model was predicated on this notion that if we have enough data, we can predict everything. What we’ve found after thirty years of trying is that it doesn’t work. It turns out this is a much harder problem, and we can’t solve it. The thrust of technologies is moving toward trying to force us to do what companies want us to do, rather than predict our own choices. This is the technology paradox, which is that the more we depend on technology, the more we lose the skills that we’re essentially outsourcing to machines.

I ran tech companies for nine years, so I’m not anti-tech, but on every front, the more we adopt technology, the more we risk de-skilling ourselves. We have to think very carefully when we use some of this stuff. Is it something I genuinely don’t care if I never do again, or is it something that it really matters to me or to the society I inhabit?

I can’t tell you most of my friends’ phone numbers, and twenty years ago, I could. I’ve outsourced that knowledge to my phone. I’m very happy living without it. I don’t think this is a grave loss to the spiritual life of me or my friends. But when it comes to making decisions about who gets parole, or which children might be at risk because they’re not being looked after well by their parents, or which teachers should be hired and which should be fired, it’s very dangerous for two reasons. One is that all these systems make mistakes, and it’s impossible to find out how a mistake occurred. The person at the receiving end learns nothing, and neither do the people delivering the decisions. This de-skills learning, debate, discussion, and redress. These are not things we want to lose. The other is that this is an outsourcing of responsibility to commercial entities that will not explain how their systems work because it is a trade secret. I do not think this is socially acceptable, and I don’t think we have to buy it just because it’s efficient.

Kay: You mention Shoshana Zuboff’s book The Age of Surveillance Capitalism in Uncharted. She talks about how you either accept personal data collection or you don’t get to participate in modern society.

Margaret: Shoshana’s book is a real masterpiece. She’s spent her entire life studying the ways in which we work, and this book is a summation of that work and a very brilliant synthesis of it. The rhetoric of inevitability is absolutely at the heart of Silicon Valley propaganda. “It’s happening, and there’s nothing you can do about it.” This is deeply obnoxious. It comes dressed up as a forecast, which it is not. It is marketing. It depends on our ignorance of how good and robust and fail-safe more human systems are. If it’s a piece of technology where your only answer is, “I don’t know…that’s what the machine said,” then in effect, you’ve surrendered not only your autonomy, but your responsibility, and that’s unacceptable. We have a choice about whether or not to do that.

There is no authority that any of these commercial companies have to be manipulating human behavior. There are all these other ways we can make decisions, all these other ways we can think about the future, and they’re rich and they’re human, and they’re robust, and they’re flexible, and they’re adaptable, and they can’t be commandeered by a handful of companies. We have the capacity to deal with uncertainty and ambiguity; we have done that since human history began. We need not, now, lose sight of our own capabilities and intelligence and imagination.

Kay: You describe how humans can regain control by coming together in “cathedral projects.”

Margaret: Stephen Hawking came up with the idea of cathedral projects, which are destined to last more than a human lifetime and are started without a clear idea of where they will go. I live in England, about nine miles from one of the great gothic cathedrals in Europe. These buildings were started without a plan. They had no architect. They evolved, or emerged, as people worked on them generation after generation. The people who worked on them knew they wouldn’t see them finished. If people could do something like that 800 years ago with almost no tools, then we ought to be able to face uncertainty and ambiguity with a lot more courage, a lot more intelligence, and a lot more determination.

Institutions like CERN, like the Human Genome Project, even the completion of Gaudí’s Sagrada Familia in Barcelona, have a huge amount to teach us about how institutions can flex and change as the world changes. Their ability to do that is really what keeps them relevant. We have to think about healthy institutions being those that change as we do and as society does. Writing that chapter left me with a question: Is democracy a cathedral project? If it is, do we need to think more carefully about how it needs to flex and change and adapt to stay relevant? Have we been, perhaps, a little bit too smug in thinking that the democracy that we had in the 1950s or 1960s was responsive enough and adaptive enough for now?

It struck me that cathedral projects start big and they stay big in their conception. What businesses do is they start small and they grow. As they grow, the jobs get sliced and diced into smaller and smaller and smaller pieces, which become increasingly meaningless. There are many questions to be asked about how you keep the grandeur and the meaning and the importance of work. The concept of a purpose in a business is a big idea. What is a genuinely collaborative, reciprocal relationship between society and business in which we both preserve the best of each other?

Kay: How can qualitative researchers respond to the situation we’re in at this point in human history?

Margaret: There are two things. First, they need to think hard about the ambiguity in numbers. One of the magical things about numbers is that they appear to be so concrete and unambiguous. The great masterful statisticians will tell you that it’s never quite as clear-cut as that. We have to think about the ambiguity in numbers and what got left out. I mean, especially when we’re looking at data: what isn’t there, who hasn’t been asked, what have we not looked at that might not be included? When you are looking at any hard problem, the potential dataset is so immense that we start to think, “Okay, this data matters and that data doesn’t.” These are not objective decisions; these are subjective decisions. Therefore, the analysis is going to be a value judgment. So we have to keep very, very clear-eyed and very transparent about what has been left out and what the value judgments about what matters really are and really say.

A second key is collaboration. In one of my earlier books, Willful Blindness, I told the story of physician Alice Stewart, who did one of the earliest studies into childhood cancer and found that one reason cases were increasing was that mothers were being x-rayed while pregnant. She fought for twenty-five years to get this practice outlawed, which it eventually was. I wondered, wow, how do you keep going for twenty-five years when the whole medical establishment is against you? This is tough! What I discovered was that she had a fantastic collaboration with a statistician named George Kneale. George was very different from her, so there’s a clue in there: work with people who are not like you. He once said, “My job is to prove that she’s wrong. Because if I can’t, if there is no other explanation I can find anywhere, then she has the confidence to keep going.” This is one of the most beautiful levels of collaboration I have ever encountered: a thinking partner who is not an echo chamber, not just a mirror image, but who really is looking under tables, over chairs, around your back, everywhere, to see if there can be a better way, a different way, to understand the problem or to find the solution.

Kay: So, gain a deeper perspective by seeking out people who are not like you.

Margaret: One of the most important things is just to be able to be the greatest listener imaginable. Listen for what people are not saying. What are they avoiding? Listen to the kinds of words, metaphors, and analogies that they use. What do those choices really mean about them? Get them to focus as much as humanly possible on their lived experience. Don’t tell me your opinions. Don’t tell me your generalizations. Tell me what’s happened to you. Because that’s ideology-free. That is just what happened, told by somebody who was there. It’s not what people are thinking, it’s what’s happening to them, and in what order, and how they experience it.

Be wary of projecting your own narrative or your own experience onto it. So, the other thing I would say is find people as different from yourself as possible and hang out with them; get a sense of how the people around you are not you. I mean, one reason I live in the middle of nowhere in England is that I want to be in an environment that challenges everything I see and everything I hear. That keeps my language as jargon-free as humanly possible. It is a constant reminder that what I see is not what everybody sees, and what people tell me in airplane lounges is never, ever the whole truth. From those contradictions and paradoxes, great creative ideas and insights emerge.

In recognizing that there are multiple experiences of the same thing, you’re confronting the ambiguity that is endemic to being human. You are not trying to pretend that it does not exist. This is crucial. The minute we start thinking we can control human beings and we know exactly what is going on with them, we have missed 95 percent of them. Keeping oneself alert to the complexity of human life and social existence seems to be at least one way of keeping critical challenges alive and avoiding some of the really terrible mistakes that we’re seeing in things like AI, in marketing, and in advertising. There is always a different answer and a different perspective, and the question is: Have you seen it yet?

Kay: Thank you, Margaret!

Margaret: Well, thank you. Thank you for such wonderful questions.
