During our futures scanning – doing what I sometimes like to call ‘futures intelligence analysis’, which therefore makes us ‘futures intelligence analysts’ – we need a way to capture and retain information about the scanning ‘hits’ we find.
The indispensable book Teaching About the Future (Bishop and Hines 2012) – to which I’ve already referred elsewhere – describes the basic format of a ‘scanning hit form’ that can be used to report a scanning hit (Figure 6.1, p.182; Appendix 3, p.288). While it is certainly excellent as a way to report the hit, it nonetheless comes a little way down the track of the full life-cycle of a scanning hit. There needs to be a way to effectively capture and store the hit first, before analysing it for its degree of relevance, timeliness and so on for the organisation, ahead of ultimately committing it to institutional memory as a formally-submitted scanning hit.
Starting out
When I started out doing futures scanning for Swinburne 20-odd years ago, the question of how to collect, collate, store and report the scanning hits I was finding popped up pretty quickly as an issue. I had begun to print and file hits according to a set of target categories, but this approach was rapidly becoming unmanageable, and it was clear it would not scale at all well to large volumes over time. The high-profile futurist Peter Schwartz – famous in the futures field for The Art of the Long View (1996) – had said in that book (p.81) that he no longer rigorously tracked everything that came across his desk – there was just too much. Rather, he simply let it ‘pass through’ his mind and make connections with whatever was already there from previous reading and scanning. At some point these would somehow ‘gel’ into themes which would eventually ‘crystallise’ into useful forms during later scenario work. This approach seemed a little too heuristic for what needed to be positioned at the time – it was taking place in a university, after all – as a very rigorous research process, so another solution was needed.
As part of its membership benefits, The World Future Society had since 1979 published a monthly Future Survey magazine that provided 50 abstracts of future-oriented information sources, whose principal scanner, Michael Marien, offered invaluable advice for anyone undertaking scanning practice (e.g., Marien 1983, 1991). The categories used in Future Survey had greatly informed the choice of scanning categories I had started with. As well, the Futures Program at the OECD had recently produced a new version of a CD-ROM of scanning called Future Trends (OECD 2000) that used a scanning hit ‘1-pager format’ that I very much wanted to emulate, but it was not yet clear to me then in late 2000 how to combine the core ideas and category schema of these two major scanning resources into the form I wanted.
Enter an article by Verne Wheelwright (2000) in Futures Research Quarterly, which made the case for using bibliographic citation manager software for capturing scanning hits. This was one of those “of course!” moments, when something un-thought-of seems very obvious in retrospect (as plate tectonics did in the 1960s). I had spent a great deal of time during my PhD compiling a reference bibliography using BibTeX, the bibliographic system paired with LaTeX – the software that allows typesetting of complicated mathematical equations from a plain-text typescript (and with which my PhD, being 50% text and 50% mathematics, was typeset). BibTeX uses plain text files to store information, which makes it effectively independent of any specific program to maintain, since any text editor can be used. Although initially fairly basic, it has been developed and extended over the years to the point where one can now use BibTeX-based citation managers (such as JabRef) for quite complex research work, while still running over the top of ordinary plain text files.
From BibTeX to ProCite …
At the time, Swinburne had a licence for the reference manager software ProCite. It was a very powerful and useful program that allowed considerable latitude to modify the workforms that stored reference information for different types, such as journal articles or book chapters. I eventually ended up modifying it a great deal to produce a set of customised workforms that would map onto the reference types used in BibTeX, which also required writing ProCite output style (.pos) files to render the information into the formats that I wanted. One of these was a BibTeX export style that would allow me to cut-paste reference information from the ProCite preview window directly into my main BibTeX .bib file which I still used for writing papers. I had not left the LaTeX/BibTeX habit behind at that time even though by then I had had to use MS Word for other university work for years (and a quick look will show that a good number of the reprints of my earlier papers are hosted at Swinburne’s ResearchBank in their LaTeX form, converted to a PDF output). I still recall ProCite with some fondness, although by about 2006 or so its fundamental structure was no longer able to accommodate any new fields or the need to be able to embed URLs into more than just the single field which was available in a given record.
ProCite was the platform, therefore, for collecting and storing scanning hits during the 2½ years I spent as a futures intelligence analyst in the Foresight and Planning Unit (FPU) doing foresight analysis for the University, and thus was the form in which the so-named Strategic Scanning Database (SSDb) was maintained. Because of the site-wide licence, anyone employed at Swinburne could install ProCite, and so anyone could request a copy of the SSDb for their own information and use. A master record showed the current version date and the extent of the database (i.e., how many records), as well as contact information for FPU to request updates, etc. Each record had a unique auto-generated record ID (initially numerical, although after I left FPU I changed these IDs to custom alphanumeric labels – see below), so that specific sources were always uniquely identifiable. The later practice was an offshoot of the manner in which individual sources are uniquely distinguished in BibTeX, and eventually the ProCite RecordID became the BibTeX label I used as the citation key for entries in the .bib file (although, of course, this happened in reverse, since I had come from BibTeX initially to ProCite later). The point is that every record of a source had a unique identifier – whether stored in a BibTeX file or in a ProCite database – and so in principle could be identified independently of the particular platform in which it was stored.
… to EndNote …
This last point became important when it became clear, some years later after I had become a foresight researcher and educator, that ProCite was destined to be ‘end-of-life’-d by its maker and replaced at Swinburne by EndNote. Since ProCite had forced me to store electronic files in a separate directory (linking to them via a filename path), it had instilled the idea of making file storage independent of the reference manager software – that is, existing in a directory outside the directory structure of the software itself. Thus, a directory called /References could hold all the electronic files, with filenames based on the (unique) ProCite RecordID / BibTeX citation key to avoid collisions or over-writes, which allowed the citation database and the reference files to be backed up independently. It is here that the unique citation key approach required by BibTeX came into its own.
Very early on, I had decided to use a citation key pattern based on the first author’s surname, the last two digits of the year, and the initials of those major words of the main title (not the subtitle) which are capitalised in what is known as ‘headline’ capitalisation (i.e., not ‘the,’ ‘of,’ ‘an,’ etc.). This was done both for brevity and to reduce the load on the processing variables in the early mainframe-based versions of LaTeX/BibTeX I had used. Thus, a reference like this one from my PhD:
Einstein, Albert, and Leopold Infeld. ‘On the Motion of Particles in General Relativity Theory.’ Canadian Journal of Mathematics 1, no. 3 (1949): 209–41. doi:10.4153/CJM-1949-020-8
has the citation key einstein+49mpgrt, where the ‘+’ indicates more than one author (in effect, ‘+’ means ‘et al.’). The citation key forms the base filename for electronic versions of the reference, so the filename for a PDF of this article is einstein+49mpgrt.pdf, while a Word document would be einstein+49mpgrt.doc, and so on. In the (fairly rare) cases where identical keys are produced by different sources, these are distinguished by numbers or letters post-fixed to the key.
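For the curious, the key-generation scheme just described can be sketched in a few lines of Python. This is an illustrative reconstruction only – the stop-word list and the subtitle handling here are assumptions made for the sketch, not an exact specification of the rules I used:

```python
# Illustrative sketch of the citation-key pattern described above:
# first author's surname + last two digits of the year + initials of the
# 'major' (headline-capitalised) words of the main title, with '+'
# marking multiple authors. The stop-word list is an assumption.

STOP_WORDS = {"the", "a", "an", "of", "on", "in", "and", "for", "to", "with"}

def citation_key(authors, year, title):
    """Build a key like 'einstein+49mpgrt' from basic reference details."""
    surname = authors[0].split(",")[0].strip().lower()
    multi = "+" if len(authors) > 1 else ""
    yy = f"{year % 100:02d}"
    main_title = title.split(":")[0]          # keep main title, drop subtitle
    initials = "".join(
        word[0].lower()
        for word in main_title.split()
        if word.lower().strip(".,") not in STOP_WORDS
    )
    return f"{surname}{multi}{yy}{initials}"

key = citation_key(
    ["Einstein, Albert", "Infeld, Leopold"],
    1949,
    "On the Motion of Particles in General Relativity Theory",
)
print(key)            # -> einstein+49mpgrt
print(key + ".pdf")   # -> base filename for the stored PDF
```

The same function also yields the base filename for any stored electronic copy of the source, exactly as described above.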
When it came time to make the switch to EndNote, it was ‘fairly’ easy to write an export format that would match up (to the degree possible) the analogous fields in the two programs, as well as export the ProCite RecordID to become the EndNote Label – which, by design, EndNote uses when doing a BibTeX export: an unexpected bonus. Crucially, however, both ProCite and EndNote could look at the same /References directory, so I was able to inter-operate them during the process of swapping over. This took some time to complete (as I described in some detail in an email to the ProCite user mailing list, in response to a question about how to do such a thing), so continued inter-operability during the change-over window was a strict non-negotiable. At the end of the process, though, I had the same directory with the same files, albeit now viewed by a different software package doing the same basic job as the previous one. The raw materials had not changed, only the processing machinery used to access them. This would not have been possible had the files been subsumed inside the directory structure of the software.
So, while EndNote does allow the user to ‘import’ electronic files directly into the directory structure of the program, this is something I have always strenuously resisted the temptation to do (convenient as it is), because of the potential end-of-life issues with that software – not necessarily of the program itself, but with access to the software. And I am glad I did, because as part of my ‘CoViD redundancy’ last year, it was necessary to return the laptop computer Swinburne had issued to me – even though I had offered to buy it outright (denied) – and with it the access to EndNote that came with it by dint of it being a university computer running site-licensed software. Of course, it would have been possible to just buy another laptop and continue to access and install EndNote software through my Swinburne Adjunct status, but that, too, has an uncertain time-window associated with it. Hence, I made the decision last year to forgo proprietary software systems entirely – such as Windows and MacOS – and to instead enter the world of FOSS – free, open-source software; to wit, a privacy-focused Debian-based Linux machine running open-source versions of the types of programs that I use.
… to Zotero …
Enter, therefore, Zotero. I had been investigating the possibility of and the requirements for switching over to Zotero since 2018, as part of making a stance against the lock-in that proprietary software often brings with it (a stance that takes me way back to my Netscape days in the 1990s). The time to do so finally came in the latter part of 2020, and I transitioned my EndNote database across to Zotero by exporting it to the standard RIS format, editing the required tags in a standard text editor, and importing that file into Zotero – a process that would take several hours due to the thousands of references being transferred (I did it a few times to make sure the import worked). Fortunately, the Forums on the Zotero web site are highly active with experts (including the program developers) willing to help and answer questions from end-users, so the technique to transition while keeping the citation key labels intact was explained in detail by people who knew how to do it.
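To give a flavour of that editing step – and this is a toy sketch only, with assumed tag names (the ‘LB’ label tag is my placeholder for wherever your own EndNote export puts the old key) – the idea was to rewrite each record’s old citation key into a form that survives the trip into Zotero, such as a ‘Citation Key:’ line, which Better BibTeX treats as a pinned key once it lands in Zotero’s Extra field:

```python
# Toy sketch of cleaning up an exported RIS file before import.
# ASSUMPTIONS: the old citation key arrives in an 'LB  - ' tag, and we
# re-emit it as a 'Citation Key:' line. The real tag mapping for your
# export may well differ, so treat the tag names as placeholders.

def pin_citation_keys(ris_text):
    """Append a 'Citation Key:' line to every RIS record that has a label."""
    records_out = []
    for record in ris_text.strip().split("ER  -"):
        if not record.strip():
            continue
        lines = [ln for ln in record.strip().splitlines() if ln.strip()]
        label = None
        for ln in lines:
            if ln.startswith("LB  - "):
                label = ln[len("LB  - "):].strip()
        if label:
            lines.append(f"N1  - Citation Key: {label}")
        lines.append("ER  - ")
        records_out.append("\n".join(lines))
    return "\n".join(records_out) + "\n"

sample = """TY  - JOUR
AU  - Einstein, Albert
AU  - Infeld, Leopold
PY  - 1949
TI  - On the Motion of Particles in General Relativity Theory
LB  - einstein+49mpgrt
ER  - 
"""
print(pin_citation_keys(sample))
```

In practice I did this editing by hand in a text editor; the point is simply that RIS, like BibTeX, is plain text, so any tool that can read and write text can massage it.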
As with any transition from one system to another, there are some disconnects in functionality between them – some features from the old are absent from the new, while the new brings features of its own – and it takes a bit of time to re-jig the database to suit the new format. That process is continuing and, on the whole, I am glad I made the switch. Zotero’s user base includes many people who have developed all sorts of add-ins to the base program. The one I particularly want to mention here is Better BibTeX (BBT) by the Zotero user @emilianoheyns (whose several very useful add-ins can be found on GitHub). This add-in is specifically designed for, as Emiliano puts it, “us LaTeX hold-outs.” It allows for seamless creation and maintenance of BibTeX citation keys in Zotero, along with a number of allied functions. One spectacularly useful function is the ability to export the entire reference database to a file – and keep that file updated whenever the main database changes – so that third-party programs can use the exported format, such as a standard text-based .bib file or other more complex formats. Primarily, this allows interoperability between Zotero and LaTeX/BibTeX implementations that require a text .bib file for the reference information, with the citekey correctly exported to the proper place in the standard BibTeX record. For example, the Einstein and Infeld journal article above is stored in a basic .bib file in the following way (the braces in the title prevent bibliographic styles from changing those words to lower case):
@article{einstein+49mpgrt,
title = {On the motion of particles in {G}eneral {R}elativity {T}heory},
author = {Einstein, Albert and Infeld, Leopold},
journal = {Canadian Journal of Mathematics},
year = {1949},
volume = {1},
number = {3},
pages = {209--241},
doi = {10.4153/CJM-1949-020-8}
}
There may also be other fields in the record, such as note, or more specialised ones, such as arxiv for the physics preprint archive, but this gives the general flavour of a BibTeX .bib file – a collection of records of various types (such as @article, @book, @inbook, @inproceedings, etc.), each with various fields defined (such as author, editor, title, booktitle, etc.). The key point is that the Zotero database is the main repository of the source information, while the BBT add-in allows for export and continual updating of the exported formats for operating with other programs. Thus, the Zotero database becomes the centre and linchpin of a web of inter-operating programs, all using the contents of the database, exported to the format(s) they need, and kept continuously updated as changes are made to the main database. As the wonderful (late) Hans Rosling put it at the end of a BBC TV documentary segment showing how 120,000 data points could be rendered to provide deeper insight into the last 200 years of history (BBC 2010): “pretty neat, huh?”
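To show how readily third-party code can consume that auto-exported file, here is a deliberately naive Python sketch of reading such a .bib file. A proper BibTeX parsing library would be far more robust – the regular expressions here only cope with simple one-field-per-line records like the example above:

```python
# Naive sketch: extract the entry type, citation key and simple one-line
# fields from a BibTeX .bib file such as the one Better BibTeX keeps
# auto-exported. Not a real parser -- multi-line fields need more care.
import re

def parse_bib(text):
    entries = {}
    for match in re.finditer(r"@(\w+)\{([^,]+),", text):
        entry_type, key = match.group(1), match.group(2)
        # take the body up to (and including) the line holding the closing brace
        body = text[match.end():text.find("\n}", match.end()) + 1]
        fields = dict(re.findall(r"(\w+)\s*=\s*\{(.*?)\},?\s*\n", body))
        entries[key] = {"type": entry_type, **fields}
    return entries

bib = """@article{einstein+49mpgrt,
  title = {On the motion of particles in {G}eneral {R}elativity {T}heory},
  author = {Einstein, Albert and Infeld, Leopold},
  journal = {Canadian Journal of Mathematics},
  year = {1949},
}
"""
entry = parse_bib(bib)["einstein+49mpgrt"]
print(entry["type"], entry["year"])   # -> article 1949
```

The same always-current file can equally be fed to LaTeX/BibTeX itself, or to any other program that understands the format – which is precisely the interoperability point being made here.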
This interoperability with third-party programs brings us to the final section – working with zettelkästen.
… with Zettlr
In 2018, as part of the same stance described above, I had considered the benefits of doing my work in less proprietary formats than those imposed by lock-in software systems such as Windows or EndNote. One such format is Markdown – a text-based language for formatting text using only symbols found on a standard keyboard. The benefit is that it uses only plain text, so any text editor can be used to write Markdown files – no particular program is needed – and, importantly, it is also easy to read, even in its un-rendered ‘raw’ form.
I had also by then come across an approach to note-taking (Ahrens 2017) known as a ‘zettelkasten,’ from the German for ‘note’ (or, more exactly, ‘slip’) ‘box.’ A zettelkasten (plural zettelkästen) is literally a box (kasten) filled with paper slips (singular and plural ‘zettel’), each of which has notes written on it (usually one key idea per zettel), cross-indexed with other zettel to create a web of interconnections between the ideas they contain. The approach was popularised in the 20th Century CE by the German sociologist Niklas Luhmann, who credited his prodigious research output to the use of a zettelkasten – in his case one that grew to over 90,000 zettel (Schmidt 2018), regarded in some of his writing almost as a research partner rather than a tool (Luhmann 1981).1 The approach has spawned a considerable movement on-line, with at least a couple of dedicated web sites – such as zettelkasten.de and TakeSmartNotes.com – as well as, of course, at least one subreddit, www.reddit.com/r/Zettelkasten/. Clearly, the hyperlinking nature of the zettel in a zettelkasten makes it an ideal approach to note-taking for digitising with software. And that’s where Zettlr comes in.
Zettlr is, first and foremost, a Markdown editor and viewer – which is a very useful way of taking notes in a program-independent way. But it also has several features that make it an excellent digital zettelkasten. I had briefly looked at a few potential zettelkasten programs a couple of years ago, but nothing seemed suitable at the time (I don’t recall seeing Zettlr back then, though). This year, however, upon returning to research after some time away from it, I looked again seriously for a program that would allow for smart note-taking in a platform-independent way, that was also FOSS, and available on Linux. And while there are many more around now which are perfectly fine and would do the job, Zettlr ended up fitting the bill very well for me – the more so because it seamlessly interoperates with Zotero as the underlying program dealing with research reference management, as described above. In other words, Zettlr is an open-source program that is transparently interoperable with Zotero, in precisely the way mentioned above: using an always-updated secondary file exported from Zotero to hold the references, which can be drawn into it using the open-standard citeproc schema – a standard being promulgated to ease the infuriating problems of getting citations to work across different programs on different platforms (an issue I have 25+ years of experience dealing with, as above).
The upshot
So, my (research, writing and scanning) workflow these days consists of Zettlr as my primary note-taking and note-making system, which can pull in references from Zotero (via an external file auto-generated by Better BibTeX) with a simple keystroke sequence, mark up and format the note using simple-to-use Markdown, and export the note to a – FOSS, of course – word processing program, LibreOffice (for which there is also an add-on for interoperability with Zotero). Or Zettlr can instead export the note formatted as HTML – which is in fact how this entire post was written over a couple of days, complete with headings, emphasised text, web links and the References list (and even footnotes) that appear below. The close inter-operation of Zettlr and Zotero, with Better BibTeX providing the continuously updated reference file to Zettlr, means that my research workflow is now far more streamlined and seamless than it has ever been in the 35-odd years I’ve been a researcher. Oh that I had had this system way back when I was doing my PhD, and later when searching for and compiling scanning hits!
Peter Schwartz’s approach notwithstanding, letting an idea get past us without making a note and capturing the hit is precisely what we should never do as scanners. Weak signals are weak, so we can’t afford to let any of them sneak past without making sure that we not only see them, but that others can, too. We need to capture each idea that pings one or more of our scanning heuristics, because you never know how it might play out over time – it is only later that it may or may not mature into something more substantial, something that is not necessarily clear in the moment. If you don’t capture it, you can’t reflect upon it and then evaluate it for its potential relevance. And that is the very point of the scanning retrospective experiment I’m undertaking right now: to see how well the heuristics I used worked, in order to refine and update them if necessary.
A lookout who didn’t announce the coming-into-view of objects on the horizon would not last long in that job (nor should they). As Bishop and Hines put it (quoted in an earlier post): “the scanner is to the future what the lookout is to the ship.” To make our scanning as effective as we can, we need tools that let us work as seamlessly as possible. Your mileage may vary, but my set-up of Zettlr/Zotero with interoperable add-ons works very well for me. Give them a try. Maybe they could work for you, too.2
And, by using FOSS, we automatically enter an informational and cognitive ecosystem founded on the core principle of co-operative interoperability, not proprietary lock-in – or, more to the point, lock-out. Because, in a world increasingly dominated by the insidious and ever-spreading blight that is surveillance capitalism, any act of dissent becomes a small but vital grain of “friction” against the machinery of this pernicious cancer of the information economy (Sardar 1999; Slaughter 1999; Zuboff 2019; Zettlr 2019; Parker 2020; Doctorow 2021).
And that, I’m sure I don’t need to tell you, is not a weak signal any more…
Notes
1. A translation of the Zusammenfassung (summary) shows this almost-personification of the zettelkasten as collaborator.
2. I do not have any financial stake in either Zotero or Zettlr, although I am considering different ways to support them.
Image credit: Photo by Susan Q Yin on Unsplash
References
Ahrens, Sönke. 2017. How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking – for Students, Academics and Nonfiction Book Writers. Amazon Online: TakeSmartNotes.Com.
BBC. 2010. Hans Rosling’s 200 Countries, 200 Years, 4 Minutes – The Joy of Stats – BBC Four. BBC Official YouTube Channel. https://youtu.be/jbkSRLYSojo.
Bishop, Peter C., and Andy Hines. 2012. Teaching about the Future. Basingstoke, UK: Palgrave Macmillan. doi:10.1057/9781137020703.
Doctorow, Cory. 2021. “How to Destroy Surveillance Capitalism.” Cory Doctorow’s Craphound.com. January 23, 2021, viewed May 9, 2021. https://craphound.com/destroy/.
Luhmann, Niklas. 1981. “Kommunikation mit Zettelkästen: Ein Erfahrungsbericht [Communication with note boxes: A field report].” In Öffentliche Meinung und sozialer Wandel [Public opinion and social change], edited by Horst Baier, Hans Mathias Kepplinger, and Kurt Reumann, 222–28. Wiesbaden: VS Verlag für Sozialwissenschaften. doi:10.1007/978-3-322-87749-9_19.
Marien, Michael. 1991. “Scanning: An Imperfect Activity in an Era of Fragmentation and Uncertainty.” Futures Research Quarterly 7(3):82–90.
———. 1983. “Touring Futures: An Incomplete Guide to the Literature.” The Futurist 17(2):12–21.
OECD. 2000. Future Trends: An Information Base for Scanning the Future (version 6). Paris: Organisation for Economic Co-operation and Development (OECD). http://www.oecd.org/futures/.
Parker, Carey. 2020. Firewalls Don’t Stop Dragons: A Step-by-Step Guide to Computer Security and Privacy for Non-Techies. 4th ed. New York: Apress Media. doi:10.1007/978-1-4842-6189-7. https://firewallsdontstopdragons.com/book-links-v4/
Sardar, Ziauddin. 1999. “Dissenting Futures and Dissent in the Future.” Futures, Special Issue: Dissenting Futures, 31(2):139–46. doi:10.1016/S0016-3287(98)00123-2.
Schmidt, Johannes F.K. 2018. “Niklas Luhmann’s Card Index: The Fabrication of Serendipity.” Sociologica 12(1):53–60. doi:10.6092/issn.1971-8853/8350.
Schwartz, Peter. 1996. The Art of the Long View: Paths to Strategic Insight for Yourself and Your Company. Sydney: Prospect Media.
Slaughter, Richard A. 1999. “Towards Responsible Dissent and the Rise of Transformational Futures.” Futures, Special Issue: Dissenting Futures, 31(2):147–54. doi:10.1016/S0016-3287(98)00124-4.
Wheelwright, Verne. 2000. “Software for Futurists – Scanning.” Futures Research Quarterly 16(2):63–70.
Zettlr. 2019. Developing Open Source Software Is a Political Act. Zettlr YouTube Channel. https://youtu.be/A7N4NJWtq-s.
Zuboff, Shoshana. 2019. “Surveillance Capitalism and the Challenge of Collective Action.” New Labor Forum 28(1):10–29. doi:10.1177/1095796018819461.