Heuristic principles for scanning

As a futures scanner ‘back in the day’ (as they say), working in the corporate area of Swinburne responsible for undertaking organisational foresight and planning, and in the time since, I came to employ several heuristics or principles of scanning that I found empirically useful. There are about ten of these that I can think of right at this moment, which I will enumerate below. Before I proceed, though, let me first explain one of the (many) models we used in our teaching in the old Swinburne Master of Strategic Foresight (MSF).

Competence, conscious and not

In the very first unit of the MSF we used to introduce a well-known model from psychology, the so-named “conscious competence” sequence, in order to presage (and perhaps inoculate against) the feelings of overwhelm that students were very likely to feel (according to our experiences teaching it) as they encountered the unbounded vastness of the Futures field. We used to say – only half-jokingly, and maybe even less – that “futures studies begins with the sum total of all human knowledge”. It’s for this reason that we used to use Wilber’s integral model as one possible map with which to orient oneself, because it not only shows the territory that is being mapped, but also the very mechanisms that exist within the mind of the map-maker, which are equally important for understanding how human knowledge has been acquired or created. (It is also, in fact, no less than a model of cosmic evolution itself as it has played out here on planet Earth, or what has more recently become known as ‘big history’).

The sequence of stages through which anyone moves who is learning either new knowledge or a new skill is:

  1. Unconscious incompetence. ‘You don’t know what you don’t know about’. We live in this state for most things that human beings ‘know’ as a species. That is, most of us know nothing about most aspects of collective human knowledge or skills – until we encounter a topic, and discover that we possess
  2. Conscious incompetence. ‘You now know that you don’t know about it’. This is the awakening to ignorance that being introduced to a new topic area or skill produces. For many people this is usually uncomfortable, especially if their sense of self is defined by their knowledge or skill-set, but it is a necessary phase of learning. It pays in spades to develop the skill of being comfortable with being unskillful, or with not knowing.
  3. Conscious competence. ‘If you concentrate you can probably (mostly) do it’. With some familiarity with the new knowledge area or skill, it becomes possible to (usually haltingly and imperfectly) have a go at doing it. Picture learning to ride a bicycle or drive a car. At first it is not easy or pretty, and we tend to be hyper-vigilant while trying, which is quite exhausting. But with what Cal Newport describes as “deliberate practice”, it is possible to improve our ability. This requires a concerted effort to operate at the very edge of, and beyond, what we feel competent in. Anyone who has learned to play a musical instrument, or similarly acquired a physical skill, is familiar with this need. It is less common when applied to cognitive expertise, for precisely the reason that undertaking it pushes one squarely into conscious incompetence, something which knowledge workers whose identity is tied up with being competent are generally reluctant to do. However, to paraphrase an aphorism due to science-fiction author Arthur C. Clarke (his so-named ‘second law’): “the way we find the limits of our competence is by deliberately moving beyond it into incompetence”. After some time and sufficient practice we might even end up with
  4. Unconscious competence, where we can undertake the activity with a recognised high level of skill without having to think about it very much or even at all. The skill or expertise has become so well ingrained as to be effectively invisible or effortless. How many long drives have you had where you don’t quite remember the details of the driving, and yet somehow managed to safely undertake that very complex activity? Watching people do what they are unconsciously competent at is often mesmerising, for precisely the reason that it appears so magical.

Many people who have heard of scanning will likely have what I would consider a somewhat incorrect idea of what it entails – that it is something to do with what’s ‘out there’ in the wider world and with how to choose the ‘right’ framework, method or tool to employ. Well, yes and no. While having some sort of framework or method is certainly important and preferable to none, it is less important than having an appropriate overall general attitude, mindset or approach to scanning. And it is that general attitude, mindset and approach – at least, as I see it – which I will try to encapsulate in a few heuristic principles below. The reason I mention the above sequence is because it applies, not surprisingly, to futures scanning as well. Therefore, if you intend to use some or all of these principles in your own scanning practice, be aware that it will probably feel very artificial at first to try to do this – and that is entirely to be expected: conscious competence always feels forced and unnatural until such time as it gives way to unconscious competence. So, give it time, and it will become easier, like riding a bicycle or driving a car.

Principles for scanning

Note well the term that I am using in the title here: ‘heuristic’. These are not ‘laws’ or rationally-produced ‘rules’ that can be inferred or derived from some sort of optimal model of what scanning is and how it is done. They are, instead, practice-based ‘rules-of-thumb’, and as such are impossible to justify a priori. Like the ‘art’ of scanning itself, they rely on the knowledge and experience of the scanner, most of which is pretty idiosyncratic and almost entirely non-reproducible. That said, if you speak with people who do or have done scanning for a living, they will likely describe a set of ideas for how they do it which might well coalesce around a few key principles or practices. These are some of mine. Their justification comes a posteriori – that is, from experience and what seemed to work in practice.

In light of this admittedly outlandish claim, an experiment I am planning to carry out here over an extended sequence of posts is to revisit and re-examine the scanning ‘hits’ I reported upon two decades ago as a foresight scanner for Swinburne. This will be done by posting them in their original form in order to reflect (and perhaps even invite commentary) upon how those signals identified back then played out in eventual lived reality. Or maybe just to show what was said and simply leave it at that for the reader to form an opinion. We’ll see how it turns out. For now, though, here are ten principles I have employed (and still do) in my futures and scanning practice. If you haven’t already, it might pay to have a quick (re-)read of the Foresight Primer to get into the right headspace, as it outlines a number of foundational assumptions that are not explicitly stated here.

The Suzuki Principle (初心)

“In the mind of the beginner there are many possibilities, but in the mind of the expert there are few.”

This is a slight paraphrase of a very famous quote from the Zen teacher (roshi) Shunryu Suzuki taken from the Prologue epigraph of the book Zen Mind, Beginner’s Mind (1970). This principle provides a reminder to always approach the future with beginner’s mind – and therefore to remain open to the many possibilities which may lie there, as opposed to the relatively few that, for example, an expert scenarist might have developed. In futures presentations I frequently show quotes from various experts making statements that have turned out to be very wrong indeed, not to belittle them, but rather to simply show that even great expertise in one’s chosen field is no guarantee of correctness when it comes to the future. More importantly, it also gives one permission to be consciously incompetent about the future (or the future of the topic area or domain being investigated), since a beginner is always at a state very far away from that of unconsciously competent ‘mastery’. And this allows one to engage in ‘deliberate practice’ with respect to the future, which is just as important. The glyph shown above in the heading is ‘shoshin’ – (‘beginner’s mind’) – and counsels humility about the future, reminding us of how much ‘mastery’ of it we might have even if we are ‘experts’ in our field – which is to say, not much at all. Let us always remember this very important, and indeed foundational, insight, by always keeping in mind the Suzuki Principle: shoshin (初心).

The Bartlet Principle

“Look for smart people who disagree with you.”

This statement is a paraphrase of some dialogue in an episode of the TV series The West Wing (Season 2, Episode 4, ‘In This White House’, 2000). In the episode, one of the aides of the Democrat president (Josiah Bartlet) is roundly trounced in a television debate by a young Republican commentator, who is thereafter summoned to the White House by the Chief of Staff, Leo McGarry, and offered a job. She is a bit taken aback by this, and somewhat flummoxed, not understanding why a Democrat White House is asking a Republican to come onto the staff. McGarry clarifies: “The President likes smart people who disagree with him.” This principle also encapsulates an approach that the philosopher Ken Wilber utilised in developing his Integral Theory, namely, that no-one is smart enough to be 100% wrong. Therefore, we should seek out people who have alternative views to our own, and do them the courtesy of not thinking that they are idiots for not sharing ours. We do this in order to ensure we are seriously attempting to view something from more than one limited viewpoint. And, per the Suzuki Principle, we should never become so arrogant as to imagine we are so obviously right that we couldn’t possibly be wrong. One might contrast this principle with a related one suggesting the converse: “look for dumb people who agree with you.” The (recent ex-President’s) name one might give to this converse version of the principle is left as an exercise for the reader…

The Kahn Principle

“The most likely future isn’t.”

This is another famous quote, this time from the archetypal ‘genius’ forecaster Herman Kahn, a physicist-turned-military strategist who also coined the well-known term “thinking about the unthinkable” for analysing the possibilities of nuclear war (and was thereby reputed to have been one of the inspirations for Stanley Kubrick’s Dr Strangelove). This principle deals with the observation that people often tend to form a view of the future based upon some sense of present dynamics continuing to play out as they have been (the ‘most likely’ or ‘baseline’ future), and then use that view as the basis for imagining ‘the’ future that is coming. Organisations also frequently fall into the trap of thinking only about the ‘official’ future that is sanctioned by the organisational culture, and not about any alternatives to that official future. The ‘two futures’ approach utilised in the Shell Scenarios since the 1990s has often been of this ‘official future plus an alternative’ structure, precisely in order to counteract the tendency to regard an ‘official’, ‘most likely’ or ‘baseline’ future as the future to be planned for. Hence, this principle reminds us to be wary and cautious of ‘most likely future’-type thinking as the basis for preparing for the future, and to always recall the key foundational axiom of Futures Studies: that there are multiple alternative futures to consider. Of course, when it comes to The Limits to Growth, however, it does indeed seem that the ‘do nothing’ baseline is the future that is coming (but that’s something for another time…).

The Gibson Principle

“The future is already here – it’s just not evenly distributed.”

This is another very famous quote (seeing a pattern here?), this time widely attributed to science fiction author William Gibson, although an exact first attribution is difficult as it appears to have emerged over a period of time in a variety of contexts (including in interviews, which is typically where it is quoted from). It essentially refers to what is known in futures as ‘precursor analysis’ or to so-named ‘bellwether indicators’, wherein one may (theoretically) look for places where ‘the future’ seems to be arriving ‘early’, as it were, in comparison with ‘reality’ (whatever that is) as a whole. For example, the Scandinavian countries are usually regarded as being ‘ahead’ in some aspects of social policy compared to other Western nations, so they are sometimes regarded as bellwether indicators of how social policy will eventually end up in other places. For our purposes, this principle reminds us, essentially, that the future does not arrive unbidden as a whole but rather ‘signals’ itself in more-or-less isolated pockets or corners of the present. It is therefore the conceptual foundation of Graham Molitor’s earlier method of Emerging Issues Analysis – or equivalently, ‘Horizon 3’ in the ‘Three Horizons’ framework – which, effectively, seeks after these very “pockets of the future found in the present.”

The ‘Cocktail Party’ Principle

“The prepared or attuned mind can more easily pick out weak signals from cacophonous noise.”

This, together with the Gibson Principle, forms the conceptual basis of the idea that one may, as it were, look for evidence of the future in the present. That is, if we regard scenarios (broadly conceived) as hypotheses about how the future might turn out, we can use them to examine present-moment scanning ‘hits’ as potential indicators, or possibly even ‘evidence’, of how – and perhaps even the direction in which – the future may be emerging from the dynamics of the present. The name comes from the observation that, when at a cocktail party and surrounded by a lot of noise and chatter, we may nevertheless hear someone speak our name fairly quietly in a corner of the room behind us and still be able to almost magically ‘pick it out’ of the cacophony. I often remarked in teaching in the MSF that when reporting our foresight findings to others we need to “tune our transmitter to the frequency of the receiver” in order to assure better ‘reception’ of the information we are providing to them. This ‘attunement’ to outside information of course operates within ourselves, as well, something captured in the humorous maxim that the most interesting radio station, for our reticular activating system, is WIIFM – ‘What’s In It For Me?” This ability to discern a weak signal amid lots of noise has an important analogue in futures scanning: a mind prepared or attuned to a variety of alternative futures or scenarios may be able to pick up much weaker signals of those emerging futures than one that is not similarly attuned. Therefore, part of scanning should be to try to ‘recognise’ weak signals of ‘known’ (i.e., scenario-ed) emerging futures (usually considered ‘monitoring’), while also being equally assiduously on the lookout for entirely novel signals as well (see The Feynman Principle, below).

The Sturgeon Principle

“90% of everything is crud.”

This is yet another famous quote, this one by the science-fiction (SF) author Theodore Sturgeon, who famously pushed back against the widespread criticism of SF for its allegedly low quality compared to other genres – criticism in which SF was derided with the claim that “90% of science fiction is crud.” He countered by retorting:

They’re right. 90% of science fiction is crud. But then, 90% of everything is crud, and it’s the 10% that isn’t crud that’s important.

Of course, this statement took off and eventually became more widely known as “Sturgeon’s Law”. In essence, this principle reminds us that much, if not most, of the information we are likely to find, about any and all target topics, is likely to be somewhat less than useful. This may be due to it being mere uninformed opinion – which can now certainly be found in even more abundance these days thanks to the pervasiveness of social media – or to more prosaic reasons, such as a lack of good data or poor critical analysis. The key point, then, is that not all information turned up by scanning is created equal. And so the trouble – or trick – is finding the 10% or so of this material which actually is worth looking at, as opposed to the very plentiful but generally not very useful crud that is always rather easier to find even with quite minimal and/or cursory effort. Futures scanners sifting through the information torrent need to be fully aware of the pessimism inherent in The Sturgeon Principle, and not lose heart at – or their minds in – the enormous quantity of crud they are inevitably going to encounter. Futures scanning is like running a marathon through treacle, which is one reason why it is so difficult to do well. This is also why the much easier – and thus much more common – form of very limited just-in-time ‘quickie scanning’ is almost universally so awful. It confuses volume/noise with insight/signal, and can easily end up being worse than useless due to the false sense of (only apparent) utility that might be imputed to it. Like an iceberg, the (useful) 10% we do see rests upon the (useless) 90% we don’t.* You don’t get the former without the latter, since the former cannot exist without the latter, and so the core skill lies both in recognising that this is so in the first place, and then in extracting the rare useful from the abundantly not. One way is by way of:

The Goldfinger Principle

“Once is happenstance; twice is coincidence. The third time it is enemy action.”

This is the ‘Rule of Three Hits’ that former students in the MSF will find very familiar. It comes from something the eponymous character in the Ian Fleming novel Goldfinger says to James Bond (“they have a saying in Chicago, Mr Bond…”; indeed, it also forms the overall three-part structure of the book, something only barely hinted at in the film). Since Fleming actually worked in military intelligence during WWII, and based much of his characterisations on his experiences, it is quite possible (or perhaps merely preposterous) that this quote actually derives from something no less a personage than Winston Churchill himself may have said. At any rate, it is a useful principle to help avoid being overwhelmed by the continuous stream of data and noise out of which we are attempting to pick up signals. While not perfect, it is nonetheless a practical and workable threshold condition, since it allows one to simply note something interesting during scanning, and then put it aside; note a second instance as having already pinged; and to use the third instance as a trigger to go ahead and explore it further and more fully. As we used to put it in the MSF: ‘three strikes and it’s in.’ Note how there was a prompt above after the third instance of using a famous quote to formulate one of these principles …
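For illustration only – this is not part of the original MSF materials, and the topic names and threshold are hypothetical – the Rule of Three Hits can be sketched as a tiny tally-keeper: first sighting noted and set aside, second registered as already having pinged, third triggering fuller exploration.

```python
from collections import Counter


class ThreeHitTracker:
    """A toy sketch of the 'Rule of Three Hits' threshold condition."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.counts = Counter()

    def record(self, topic):
        """Record one sighting of `topic` and return the suggested action."""
        self.counts[topic] += 1
        n = self.counts[topic]
        if n == 1:
            return "note and set aside"      # once is happenstance
        if n < self.threshold:
            return "already pinged"          # twice is coincidence
        return "explore further"             # three strikes and it's in
```

In practice, of course, deciding whether two sightings are really instances of ‘the same’ emerging signal is the hard, irreducibly judgement-laden part; no counter can do that for you.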

The de Bono Principle

“Consciously and deliberately seek out novelty.”

This principle, unlike the majority of those above, does not come from a quote but, rather, from a practice popularised by Edward de Bono in his techniques of lateral thinking and creativity. Specifically, it comes from the practice of ‘random input’ and its closely-related practice of the ‘provocation operation’ or “po” (see, e.g., Serious Creativity, HarperCollins, 1992). In this approach, one attempts to disrupt and destabilise what may be rutted patterns in thinking by deliberately introducing a novel element – choosing a random word from a dictionary, say – and by allowing the resulting instability to ‘resolve’ itself into a new idea. In our case, we build into our scanning practice a deliberate and conscious structure designed to force the appearance of novelty. An example from my early pre-academic days scanning for Swinburne comes to mind: each day, while driving to work, I would move the radio tuner along by exactly one station in the direction on the dial which I was currently following (all the way up, turn around, all the way down, turn around, etc …). The only allowed options were to leave it on or to turn it off (in case of headache, etc – though it would then not be shifted the next day); no change beyond the movement to the new station was permitted. This forced me to listen to radio stations I would not ordinarily listen to, thereby exposing me to viewpoints and perspectives that I might not otherwise hear or seek out. (MSF students might recall this as the ‘DILLIGAF discussion’). This is, in a certain sense, a variant of the Bartlet Principle, although I have to confess that talk-back radio very seldom yielded disagreeable smart people to listen to! I was very happy to discover later that – so the story goes – the legendary Buckminster Fuller used to do the same thing: at the airport he would buy a random magazine selected from the shelves in the bookstore and then read it cover to cover during his flight. If it’s good enough for Bucky…
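As a playful sketch of that radio-tuner routine (the station list here is invented, and the real dial naturally had no need of code), the one-station-per-day sweep – up the dial, turn around, down, turn around – might look like:

```python
def dial_rotation(stations):
    """Yield one station per day, sweeping up the dial and then back down,
    reversing direction whenever either end of the dial is reached."""
    i, step = 0, 1
    while True:
        yield stations[i]
        # turn around at either end of the dial
        if i + step < 0 or i + step >= len(stations):
            step = -step
        i += step
```

The point of the structure is precisely that it removes choice: novelty is forced by the rule, not selected by preference.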

The Feynman Principle

“The thing that doesn’t fit is the thing that’s most interesting.”

As a physics student, Richard Feynman was one of my scientific heroes, largely for his highly original and totally idiosyncratic approach to physics (and life in general), whose autobiographical reminiscences I read as an undergraduate. (In fact, as a second-year student I even convinced the physics workshop technicians to build a ‘Feynman reverse sprinkler’ in order – as Feynman would say – to “do the experiment!” and determine which of the various predictions for its motion was correct.) The above quote comes from an extended interview with Feynman called The Pleasure of Finding Things Out, as accurate a statement of the joy of science as one could hope for. (Feynman also commented elsewhere that: “Physics is like sex: sure, it may give some practical results, but that’s not why we do it.” But I digress …) The same basic principle was applied much earlier by Johannes Kepler and led to his discovery that the orbits of the planets around the Sun are not circles but ellipses, something he found by not ignoring a small discrepancy in the expected data for the orbit of Mars. By focussing on the datum that didn’t fit, and remaining rigorously beholden to the truth rather than to what he desired, he was finally led to one of his great discoveries. It was also the principle employed by Sherlock Holmes in the story ‘The Adventure of Silver Blaze’ when he remarked upon “the curious incident of the dog in the night-time”. Given that in futures scanning we are specifically looking for early and weak signals of impending change or emerging futures, the Feynman Principle reminds us to pay close attention to the odd datum or thing that seems somehow out of place in the gushing data stream we are seeking to plumb. Why it is out of place or doesn’t fit is always of interest, and following-up on that piqued interest might even lead us to a much deeper insight.
In the Houston school of futures scanning, while (baseline) confirming and (alternatives) countering/resolving hits are useful in their place, it is the (novel) creating hits that really are of the most interest, as they suggest new futures. That is the Feynman Principle in action.

The Sagan Principle

“The path to the future lies through the corpus callosum.”

Carl Sagan was another of my scientific heroes; indeed, I credit his 1980 book/series Cosmos as being one of the main reasons my interest in science was rekindled after being almost-completely extinguished by my early experiences studying at university. The above is a quote from chapter 7 of Sagan’s book The Dragons of Eden (1977), which discusses early scientific work which had shown that the right and left hemispheres of the human brain tended, respectively, towards intuitive, holistic and logical, critical thinking. Connecting the two hemispheres is a thick bundle of nerve fibres running between them – the corpus callosum. This somewhat obscure quote needs to be seen in the broader context of his other writings, such as in the first chapter/episode of Cosmos (1980): “we need imagination and skepticism both”. The central premise, both here and in many other related writings, is that in order to do science well, one needs to be both open minded and sceptical – an uneasy alliance, as it were, of the complementary proclivities of the hemispheres of the brain, made possible and facilitated by the exchange of information across the corpus callosum. Open-minded, “but not so open your brains fall out”; and sceptical, but not so much that “no new ideas get through to you”. In futures scanning, where we often need to spend a great deal of time out on the fringe looking for new ideas among those who tend to have them first, it is altogether too easy to ‘go native’, lose our critical faculties and buy uncritically into whatever it is we might find there. But we could also just as easily go the other way and become jaded with the quite often loony ideas we may encounter out there, losing the ability to allow any new ideas in at all.
The Sagan Principle reminds us that we need to be willing to entertain even the most outlandish ideas about the future as hypotheses (per the Cocktail Party Principle) and to then also subject them to very rigorous but unflinchingly fair-minded scepticism. Both are vital and necessary in order to, as Sagan put it, winnow “great thoughts from great nonsense”.


So, there you have it: a small (initial) set of principles for conducting futures scanning in particular, but also for doing foresight work in general. I’ve always considered my training in theoretical physics to have been a perfect preparation for undertaking serious futures work – it instilled the need to both imagine new possibilities and hypotheses and to subject them to unflinchingly sceptical evaluation and critique, without making the mistake of falling in love with them. As Feynman also said: “the first principle is that you must not fool yourself and you are the easiest person to fool”. Perhaps that one should be Principle number 11, or perhaps even ‘0’ 😉 .

Whether these can be considered justified as ‘useful’ heuristic principles will probably depend on the results of the experiment (“do the experiment!”) which is re-examining around 75 or so scanning hits from two decades ago, as a kind of ‘longitudinal retrospective scanning study’. It should hopefully be interesting. Stay tuned.

Note

* In reality, it’s not necessarily exactly these proportions, owing to the relative densities of ice and salty seawater, but this is close enough for our purposes here.

Image credit: Photo by Linus Nylund on Unsplash
