Digital history provides new opportunities, but digital tools should not blind historians to existing challenges. An uncritical belief in the power of digital tools is as mistaken as an outright dismissive stance.
The very first digital history project I was introduced to during my Bachelor studies was called ‘European Navigator’ (ENA), developed by the CVCE (Centre virtuel de la connaissance sur l’Europe). Though our professor encouraged its use for our assignment, every time I cited or referred to articles published on the ENA, I indicated only the link to the main page instead of the links to the specific articles, which, understandably, cost me some points.
This happened more than seven years ago, and the project has come a long way since then: the site has been overhauled, renamed (cvce.eu) and enriched with new material. It was one of the rare projects we were shown at the time. We had never heard of digital history or digital humanities, never engaged critically with digital tools, and were rarely introduced to digital (public) history projects. In fact, digital humanities were still new to most scholars, even though, as Niels Brügger notes, the appearance of the digital dimension in the humanities is “a major shift that has slowly affected the humanities since the late 1960s”. It was only around 2014 that I was first introduced to digital humanities, to big data, to the challenges of digital history, to digital tools, to metadata and to concepts like ‘distant reading’. My traditional education in history suddenly clashed with this new perspective.
Digital history creates new possibilities, not only for historians’ own research, but also for presenting that research, making sources accessible and engaging with the public. Web design and hyperlinks make non-linear narratives possible. Digital history projects such as Inventing Europe, European History Online or the CVCE can connect various types of sources, enriching articles with audiovisual material. Many cultural institutions invest in digitising their collections or parts of them, such as the Bibliothèque nationale de France (Gallica). Following a bottom-up perspective, Europeana uses crowdsourcing to grow its collection, with the necessary metadata for each source.
In 1973, Emmanuel Le Roy Ladurie wrote:
L’historien de demain sera programmeur ou ne sera plus. (“Tomorrow’s historian will be a programmer, or will no longer exist.”)
Recently, this quote came back to me as I tinkered with a program previously unknown to me. The starting point was an article about the Metropolitan Museum of Art collections, now freely accessible as a dataset. The accompanying graphs, based on the database, provided interesting insights into the collection and acquisition policy of the Met over time. This attracted my attention, as I would like to include a case study of a museum in my dissertation project – and why not a quantitative analysis of the collection? Hence, I was curious to know what software the author had used and asked him. He explained that the graphs were created with ggplot2 in R. This answer, however, made the traditional historian inside me panic, as I had never heard of ‘ggplot2’ or ‘R’ before. After some research, I discovered that ggplot2 is a package for R, which in turn is a programming environment for statistical computing. I also stumbled upon RStudio, an integrated development environment (IDE) that makes it easier to use R. At first, I thought that ggplot2 was a standalone programme, and I was not aware that RStudio needed R to be installed first. I only grasped this after trying to open RStudio without R installed – a consequence of not reading the instructions on the web page carefully. Thus, the whole installation process was slightly frustrating, even before I actually started using the software. Once everything was installed and ready to use, I followed a guide and tried out some very basic commands for creating a simple graph. Since then, I have not come very far, and the graph I created could just as well have been rendered in Excel. I am certain that ggplot2 in R can be quite powerful, but then again, do I need a programme that will cost me a lot of time to learn and understand if I could use Excel or similar, easier-to-use software for comparable results?
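For readers curious what such a first attempt looks like, here is a minimal sketch of the kind of basic command I mean, assuming ggplot2 has been installed (via install.packages("ggplot2")). The dataset and its numbers are invented purely for illustration and have nothing to do with the actual Met data.

```r
library(ggplot2)

# Hypothetical acquisition counts per decade (invented data)
acquisitions <- data.frame(
  decade  = c(1950, 1960, 1970, 1980, 1990),
  objects = c(420, 610, 380, 550, 700)
)

# ggplot2 builds a plot from three parts: the data, an aesthetic mapping
# (which columns go on which axes) and a geometry layer (here, bars)
p <- ggplot(acquisitions, aes(x = decade, y = objects)) +
  geom_col() +
  labs(title = "Acquisitions per decade (invented data)",
       x = "Decade", y = "Number of objects")

print(p)  # in RStudio, this renders the bar chart in the Plots pane
```

A bar chart like this could indeed be made in Excel in less time; the appeal of ggplot2 only shows once the underlying dataset and the layering of the plots become more complex.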
This is an important question I need to ask myself, and it depends on what I would like to show.
In The History Manifesto, Joe Guldi and David Armitage observe in the context of long-term history that “the emergence of the digital humanities as a field meant that a range of tools are within the grasp of anyone, scholar or citizen, who wants to try their hand at making sense of long stretches of time”. Tinkering is an important aspect of digital history (and of research in general). Yet historians should not be blinded by the sparkling promises of technology. In their study on art museums, the French sociologists Pierre Bourdieu and Alain Darbel used the expression “charismatic ideology”, referring to the conviction that less cultivated classes, because of their “cultural innocence”, could assimilate the highest art forms simply by looking at them. Though the context is very different, I think we could just as well speak of the charismatic ideology of digital tools: the belief that their mere use makes everything better and nicer. In the larger context of modern technology, Evgeny Morozov introduced the notion of solutionism (also discussed by Anita Lucchesi in her blogpost), referring to the belief that modern technology, data and algorithms can solve all of our problems. Historians should be aware of this issue, and even though playing around with innovative ways of doing history can be inspiring and enriching, we also need to reflect on which tools best serve our specific needs in a specific context. Our use of digital tools should nourish a critical, but not outright dismissive, stance towards them – a point I already made in another blogpost about my experience with the software Nodegoat. A good mechanic has a hammer and a screwdriver in his toolbox, but this does not mean that he uses both every time (and certainly not simultaneously) simply because they are within reach.
In the same blogpost mentioned above, Anita Lucchesi also argues for overcoming the opposition between technology and humanities. In Homo Deus, Yuval Noah Harari dedicates a chapter to what he calls the “data religion” or “Dataism”, i.e. the belief that “the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing”. On this view, data should be processed by electronic algorithms, not by human beings, who lack the necessary capacities. Harari does not take a clear stance for or against “Dataism” and leaves it to readers to decide for themselves, but I think that an uncritical and overenthusiastic belief in the power of algorithms will not make research better. The human aspect is still needed; to use the words of Guldi and Armitage: “Cautious and judicious curating of possible data, questions, and subjects is necessary. We must strive to discern and promote questions that are synthetic and relevant and which break new methodological ground”. Algorithms cannot do that, but historians, and human beings in general, can. This must also include possible failures and frustrations, but it enables research to move forward, to explore new ways and to learn from past experiences. In his blogpost on how to teach digital humanities, Max Kemman suggests that students “should explore the tools and engage with all their messiness”. I can only agree, and I think this applies not only to students, but to every researcher who is confronted with digital tools.
So, will I engage with all the “messiness” of ggplot2 in R? I might, but only if it proves useful for my future research and my dissertation project. If not, I will continue to look out for other potentially useful tools and explore them.
This blogpost was also published on the page of the C2DH.