A New Tool for Digital Manuscript Facsimiles: Introducing the Manicule Web Application

Aylin Malcolm, DM Postgraduate Subcommittee

Much of my work in digital manuscript studies has been informed by a simple question: is this something I can show to my parents? I am the only person among my family and childhood friends to pursue graduate studies in the humanities, and when others take an interest in my work, I try to provide resources that do not depend on specialized knowledge or institutional subscriptions. This question can also be framed in broader terms for scholars interested in public engagement: how can we make our research accessible and engaging for nonspecialists? How can scholars working on the material culture of previous periods demonstrate the relevance of such studies now? And how can digital resources enable us to learn from communities outside the traditional bounds of academia?

I recently confronted these questions while examining a late-fifteenth-century astronomical anthology, written in German and Latin close to the city of Nuremberg, and now identified as Philadelphia, University of Pennsylvania, Kislak Center for Special Collections, Rare Books and Manuscripts, LJS 445. This codex, which you can see in my video orientation below, is remarkable for its inclusion of material from three incunables, making it a clear example of the transmission of knowledge from print to manuscript.

For more videos like this one, see the Schoenberg Institute YouTube channel.

My own fascination with LJS 445 began when I opened it for the first time and saw a charming sketch of a man on the first page. Turning to the second folio, I was struck by its whimsical doodles of gardens and doors. What were these doing in a book dealing mostly with astronomical calculations and predictions about the Church?

Detail of fol. 2r of LJS 445.

My non-medievalist mother knew the answer immediately. “They’re children’s drawings,” she observed, pointing out the uneven writing and repetition of common motifs, such as trees. And turning to the 1997 catalogue description by Regina Cermann, I found that she was right: this book can be traced to two of the sons of a Nuremberg patrician, Georg Veit (1573-1606) and Veit Engelhard (1581-1656) Holtzschuher. Veit Engelhard left numerous marks in it, including the year “1589” (fols. 95v, 192r, and 222v), suggesting that he inscribed this book when he was around eight years old. Thus began my efforts to find out more about the contents and uses of this book, from its faithful copies of print editions to its battered and often mutilated constellation images. Perhaps my favourite discovery occurred as I was reading German genealogical records, when I came across an engraving of Veit Engelhard as an adult.

This digitized portrait of Holtzschuher is from the Herzog August Bibliothek Wolfenbüttel. It was also printed in Die Porträtsammlung der Herzog August Bibliothek Wolfenbüttel, vol. 11, ed. Peter Mortzfeld (Munich: K.G. Saur, 1989), no. A 100058, p. 266.

To make this remarkable manuscript more accessible to the public, I created a digital edition of it using Manicule, a web application built by Whitney Trettien and Liza Daly. Manicule, which is available on GitHub at https://github.com/wtrettien/manicule, allows scholars and students to create accessible, dynamic web editions of manuscripts and other rare books. It offers three modes of entry into a digitized text: a “Browse” function, whereby the viewer scrolls through pages of the facsimile alongside marginal notes written by the editor; a series of editor-curated “Tour Stops,” which provide commentaries on pages of particular note; and a “Structure” view, which draws on Dot Porter’s VisColl data model to depict the physical makeup of the manuscript, including missing, inserted, and conjoint leaves. Manicule can be downloaded and deployed on Mac OS systems using the instructions on the GitHub repository; Whitney is also available to provide advice and resolve issues.

The finished edition of LJS 445, available at aylinmalcolm.com/ljs445 under a Creative Commons Attribution 4.0 International License, is a true collaboration. In writing the text and creating the digital resource, I have built on the labours of many other researchers, including Regina Cermann; Whitney Trettien and Liza Daly; Dot Porter, whose tools for generating a collation model and image list are also available on the VisColl GitHub repository; and an entire digitization team at the University of Pennsylvania, from photographers to data managers and programmers. The result is also an evolving resource that can be adapted and augmented as new information about this manuscript emerges. Please feel free to contact me at malcolma[at]sas.upenn.edu if you have suggestions or queries, and I hope that you’ll enjoy exploring this unique manuscript.

DM at the IMC 2019

Session report

At this year’s International Medieval Congress (IMC), which took place from 1 to 4 July, Digital Medievalist sponsored two sessions and a round table focusing on “Digital Materiality”. The IMC has become the world’s largest annual conference dedicated to medieval studies. This year, “materialities” was chosen as the special thematic focus, a topic that proved interesting to tackle from a variety of perspectives and with regard to a wide range of material objects.

The main building on the campus of Leeds University

The first of the DM sessions, organised by Georg Vogeler (Graz) and chaired by Franz Fischer (Venice), was dedicated to “The Digital Edition and Materiality” (#s224). After a brief introduction to the Digital Medievalist community, its focus, and its work (such as the “gold standard” open access journal), the session started with a paper by Vera Isabell Schwarz-Ricci (co-authored by Antonella Ambrosio, both Naples) entitled “A Dimorphic Edition of Medieval Charters: The Documents of the Abbey Santa Maria della Grotta (near Benevento)”. In her talk, Schwarz-Ricci presented the hybrid approach taken in their project to produce both a print and an online edition. Aiming at two different outputs and trying to accommodate both as well as possible requires the development of a sophisticated, integrated workflow. The encoding is based on CEI-XML, a TEI derivative designed specifically for charters, and the XML data also serves as the basis for the printed edition. The two outputs serve different needs and have different strengths: while a printed edition that conforms to the common standards for editing charters offers usability, stability, and acceptance in the field, the digital version has its advantages in availability, data integration, and analysis.
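As a very rough illustration of such a “single source, two outputs” workflow (not drawn from the project itself), the Python sketch below queries one XML encoding to produce both a compact regest line for print and a structured record for an online index. The element names and namespace imitate CEI-style markup but are simplified stand-ins, not the project’s actual schema.

```python
from lxml import etree

# An invented, minimal charter record loosely modelled on CEI-style markup;
# real CEI documents are considerably richer than this.
charter_xml = """
<charter xmlns="http://www.monasterium.net/NS/cei">
  <issued><date value="1174-05-03">3 May 1174</date></issued>
  <abstract>The abbot of Santa Maria della Grotta confirms a donation of land.</abstract>
  <tenor>In nomine Domini ...</tenor>
</charter>
"""

NS = {"cei": "http://www.monasterium.net/NS/cei"}
doc = etree.fromstring(charter_xml)

# Output 1: a compact regest line, as it might appear in the printed edition.
date_text = doc.findtext(".//cei:issued/cei:date", namespaces=NS)
abstract = doc.findtext(".//cei:abstract", namespaces=NS).strip()
print(f"{date_text}: {abstract}")

# Output 2: structured data for the online edition (searching, indexing, analysis).
record = {
    "date": doc.find(".//cei:date", namespaces=NS).get("value"),
    "abstract": abstract,
}
print(record)
```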

In the second paper, entitled “Artificial Intelligence, Handwritten Text Recognition (HTR), Distant Reading, and Distant Editing”, Dominique Stutzman (Paris) provided insights into recently finished and ongoing projects concerned with developments in handwritten text recognition, natural language processing (NLP), machine learning, distant reading of manuscripts, and script identification. The growing number of interdisciplinary approaches and projects has also brought computer scientists into the field, opening up opportunities for further research. In recent years, computer-aided approaches have made great progress in these domains: HTR has become more accurate and can now be applied to different scripts and hands. The majority of medieval texts handed down to us in manuscript remain unedited, and given the vast amount of material, it may not be possible to edit them by manual work alone in the (near) future. Hence, artificial intelligence can become a game-changer for medievalists’ research. Inspired by the term “distant reading”, coined by Franco Moretti for the quantitative analysis of textual data, Stutzman suggested “distant editing” as a complementary approach, based on databases and search engines for querying the source texts.
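As a hedged sketch of what such database-driven “distant editing” could look like in practice (our illustration, not a description of the speaker’s tools), the snippet below loads a few invented HTR transcriptions into a SQLite full-text index and queries them across witnesses rather than reading each manuscript in turn.

```python
import sqlite3

# Invented sample: page-level HTR output from several manuscript witnesses.
htr_pages = [
    ("Paris, BnF, lat. 1234", "12r", "in principio erat uerbum et uerbum erat apud deum"),
    ("Leeds, Brotherton, MS 5", "3v", "et deus erat uerbum hoc erat in principio apud deum"),
    ("Graz, UB, Ms 100", "45r", "omnia per ipsum facta sunt"),
]

con = sqlite3.connect(":memory:")
# FTS5 gives us a simple search engine over the transcriptions.
con.execute("CREATE VIRTUAL TABLE pages USING fts5(shelfmark, folio, text)")
con.executemany("INSERT INTO pages VALUES (?, ?, ?)", htr_pages)

# "Distant editing": query the whole corpus for a form instead of collating by hand.
for shelfmark, folio in con.execute(
    "SELECT shelfmark, folio FROM pages WHERE pages MATCH ?", ("principio",)
):
    print(shelfmark, folio)
```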

The final paper of this session was given by Daniela Schulz (Wuppertal), who focused on the potentials and limitations of modelling the material features of medieval manuscripts with the CIDOC Conceptual Reference Model (CRM), an event-centric modelling framework. She started with a brief introduction to the issues connected with the term “materiality” in textual scholarship: although “materiality” has featured prominently for many years, no commonly accepted definition yet exists. To narrow down which material features of a manuscript can be modelled and why it is useful to do so, she referred to Jerome McGann’s definition of “bibliographic codes”. Focusing on one specific manuscript (Cod. Guelf. 97 Weiss.), Schulz demonstrated the application of the CRM and some of its extensions to model the manuscript’s material features in connection with the history of the codex. The suggested approach seems promising, although Schulz also drew attention to the additional effort required for proper modelling and encoding, which makes the approach problematic for editorial projects with limited resources (time, money, etc.).
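To give a rough idea of what event-centric modelling looks like, here is a minimal sketch (not the encoding used in the paper) that uses rdflib to state that a codex was produced in a production event at a given place. The class and property names echo the published CIDOC CRM, but the resource URIs and the choice of place are purely illustrative.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
EX = Namespace("http://example.org/")   # placeholder namespace for our own resources

g = Graph()
codex = EX["cod-guelf-97-weiss"]
production = EX["production-of-cod-guelf-97-weiss"]
place = EX["weissenburg-abbey"]          # illustrative place resource

# The codex as a physical object, produced by an event that took place somewhere.
g.add((codex, RDF.type, CRM["E22_Human-Made_Object"]))
g.add((codex, RDFS.label, Literal("Cod. Guelf. 97 Weiss.")))
g.add((production, RDF.type, CRM["E12_Production"]))
g.add((codex, CRM["P108i_was_produced_by"], production))
g.add((production, CRM["P7_took_place_at"], place))

print(g.serialize(format="turtle"))
```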

The second DM session was organised by Roman Bleier and chaired by Sean Winslow (both Graz). It was dedicated to the question “How to Represent Materiality Digitally in Palaeography and Codicology?” (#324). It started with a paper by Peter A. Stokes (Paris) entitled “Towards a Conceptual Reference Model for Palaeography”. Stokes briefly introduced the idea of a conceptual reference model and outlined the need to define what writing is. On closer examination, the answer to the question of what a grapheme is (commonly defined as the smallest significant graphic unit that differentiates meaning) turns out not to be straightforward, and it becomes more problematic still when we consider the level of shapes. Since a sign has multiple functions and can be represented by different shapes, modelling multigraphism can help us clarify the fundamental concepts on which palaeographic research is based. As linguistics and palaeography have so far neglected the meaning conveyed by the use of different letter shapes, the development of a conceptual model for palaeography seems a promising way to address these problems.

The second paper was given by Caroline Schreiber (Munich). In her talk “Book Covers as Material Objects: Possibilities and Challenges in the Brave New Digital World”, Schreiber reported on her experiences in digitizing book covers at the Bayerische Staatsbibliothek. In the course of these digitization projects, a modular standard for the description of elaborate book covers such as treasure bindings has been developed. Besides the development of a multilingual thesaurus for iconographical and general features, Linked Open Data approaches have also been applied in this context, using LIDO as well as a semantic wiki for documentation. She also provided deeper insights into the analytical methods used and the technical advances made during digitization, and described their respective potentials and limitations.

Some of the DM representatives
(back: Jamie B. Harr, James Cummings, Daniela Schulz;
front: Franz Fischer, Sean Winslow)

In his talk “On the Epistemological Limits of Automatic Classification of Scripts”, Marc H. Smith (Paris) discussed the consequences and limits of AI-based methods in the classification of scripts. These new digital approaches not only seem promising for facilitating future research, but also provide an opportunity to rethink the analytical categories on which our research has been based in the past and is still based today.

The number of papers with the index term “Computing in Medieval Studies” has increased in recent years, and with it the interest of scholars working in the field of Medieval Studies. This was also evident in the fact that the room was packed for both DM sessions, and some people even had to be turned away because no chairs were left. Given this success, a continuation of DM-sponsored sessions, jointly organized by the DM board and its recently founded subcommittee, is planned for IMC 2020, whose special thematic strand will be “borders”. See the CFP here (deadline: Sept. 15).

Materiality in Digital Editing – State of the Art – Panel at IMC 2019

At this year’s International Medieval Congress in Leeds, Digital Medievalist organised a panel on the relationship between Materiality and Digital Scholarly Editing. Alberto Campagnolo, James Cummings, Franz Fischer, Daniela Schulz, and Georg Vogeler presented their impressions of the state of the art and of future directions. Here, you can find the slides of this presentation:
Materiality in Digital Editing – State of the Art_DM@IMC2019

The database as a methodological tool

Written by Dr Matthew Evan Davis

The traditional role of the database in scholarship has been as a repository – a place to store information for later retrieval.  Over the past couple of years, however, I’ve found myself becoming more interested in the methodological use of the database not simply to store information, but to clarify points of tension between the questions we’re asking and the information we’re using to attempt to find answers.

My scholarship attempts to reassess medieval and early Tudor texts by treating paratextual and contextual elements as equal to the text itself in examining questions of staging and hagiography. I do this for a couple of reasons: first, I think that our disciplinary and sub-disciplinary silos tend to get in the way of understanding how literary, devotional, and performed texts would have functioned as part of the larger culture of late medieval England. Second, accepting that context requires us to examine not just the text as a platonic ideal, but also the means of its production, reception, and dissemination. In short, I treat the medieval text holistically. This work involves thinking not only about the ways the text doesn’t fit our general expectations (performance and non-codex witnesses, for example, do not fit neatly into the categories we’ve created to deal with the codex book alone), but also about the inscription, reception, and re-inscription of ideas.

As an aid to my thinking I’ve shamelessly stolen the idea of the network, taken from John Law’s explanation of Actor-Network Theory, and applied it to the written or performed text. Law describes ANT as

a ruthless application of semiotics. It tells that entities take their form and acquire their attributes as a result of their relations with other entities. In this scheme of things entities have no inherent qualities: essentialist divisions are thrown on the bonfire of the dualisms. […] it is not, in this semiotic world-view, that there are no divisions. It is rather that such division or distinctions are understood as effects or outcomes. They are not given in the order of things.[1]

Beginning with this viewpoint may seem counter-intuitive when talking about the use of a database as a methodology. After all, as Law notes, under ANT there are no inherent qualities or essential divisions, yet the process of rendering the analogue digital is fundamentally a process of reducing something to its most essential properties – at its most basic, the 0 and 1 or true and false of binary – and then presenting those as representative of the whole. For example, an algorithm is trusted to do the work of stripping out non-essential information from a song, reducing the audio waveform from a series of curves to a series of steps.[2] This aspect of the digital, however, is what actually makes a database useful as a tool for thinking through texts as part of larger questions regarding medieval culture.
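By way of illustration, here is a minimal sketch of that reduction from curves to steps: a continuous sine wave is sampled and its amplitude quantized to a handful of discrete levels. The sample rate and bit depth are arbitrary choices made only for the example.

```python
import numpy as np

# A continuous signal (here a 440 Hz sine) sampled at discrete points in time.
sample_rate = 8000          # samples per second (arbitrary for illustration)
duration = 0.01             # seconds
t = np.arange(0, duration, 1 / sample_rate)
signal = np.sin(2 * np.pi * 440 * t)

# Quantize the amplitude to 4 bits: the smooth curve becomes a staircase of 16 levels.
bits = 4
levels = 2 ** bits
quantized = np.round((signal + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

print(signal[:5])     # the "curve": finely varying amplitudes
print(quantized[:5])  # the "steps": the same moments reduced to 16 possible values
```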

One aspect of Actor-Network Theory that’s often talked about is the idea of the “black box,” a term that should be familiar to technologists as well.  A black box is basically anything that takes in input and generates output, but doesn’t allow the observer to discern its underlying workings. In terms of ANT, a black box is an attempt to simplify a complex relationship for the sake of making the understanding of relations something that is manageable by the average person. We don’t really think about all the things that go into what makes our televisions, automobiles, and air conditioners work, for example. They simply do what we expect them to do when we turn them on – provide the input – and give us the expected results. Or, as Law puts it, “if a network acts as a single block, then it disappears, to be replaced by the action itself and the seemingly simple author of that action. At the same time, the way in which the effect is generated is also effaced: for the time being it is neither visible, nor relevant.”[3]

What happens, though, when your air conditioner breaks down, you try to watch television and the back of the unit sparks, or your car won’t start? Suddenly, that black box no longer functions as a single discrete unit, but instead is reduced to the many moving parts that actually made it up and which we’d conveniently ignored in an attempt to simplify our lives. In attempting to solve the problem, we’re forced to acknowledge the pieces that make up the seemingly singular unit, their relationships, and the ways in which those relationships produced the effect that no longer works. The top has been removed from the black box and we can see how the sausage is made.

If the black box is thought of as an analogue process, then the removal of the top of the black box and the examination of all the moving pieces can be seen as akin to the process of digitization. You can no longer simply elide everything together and make assumptions based on the item of your study as a single unit. Instead, everything has to be categorized, considered, and put in its proper place to determine what went wrong and to repair the whole. The relationship between these pieces – something that was assumed before – becomes of paramount importance. And it is in dealing with relationships that a database can excel.

Much like the process of creating an edition, reading a text or texts with the intent of entering their relationships into a database forces you to think about them in a way you may not have considered before. For example, when I was trying to understand the staging of the fifteenth-century Digby Mary Magdalene play, I created a database to record every location and character described in the manuscript. Where even with careful close reading I might have been content to accept that certain locations in the play are merely referenced by characters and don’t really exist, the fact that I had to concretely record that a character visited a particular location forced me to recognize how the various locations related to each other, and to consider not just whether certain locations exist or don’t exist in performance, but why they have to exist or not and what the implications of those relationships were for the overall narrative of the play, for the heterodox way it presents the legendary material from the Magdalene’s vita, and for the likelihood that it was performed. This recognition of that interplay was then read back onto the larger cultural context of fifteenth-century East Anglia and the Magdalene cult in order to determine not just what is required to be represented in performance, but also why certain assumptions regarding location in prior scholarship could not be valid.
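A minimal sketch of what such a staging database might look like (the schema and the sample rows here are illustrative inventions, not the actual database): characters and locations each get a table, and a linking table records who appears where, with a note of whether a location is physically staged or only referred to.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE characters (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE locations  (id INTEGER PRIMARY KEY, name TEXT,
                         staged INTEGER);          -- 1 = physically represented, 0 = only referred to
CREATE TABLE appearances (character_id INTEGER REFERENCES characters(id),
                          location_id  INTEGER REFERENCES locations(id),
                          line_ref TEXT);          -- where in the manuscript the visit occurs
""")

con.executemany("INSERT INTO characters VALUES (?, ?)",
                [(1, "Mary Magdalene"), (2, "Rex Marcyll")])
con.executemany("INSERT INTO locations VALUES (?, ?, ?)",
                [(1, "Castle of Magdalene", 1), (2, "Jerusalem", 1), (3, "Rome", 0)])
con.executemany("INSERT INTO appearances VALUES (?, ?, ?)",
                [(1, 1, "l. 81"), (1, 2, "l. 563"), (2, 3, "l. 925")])

# Which staged locations does a given character actually visit?
query = """
    SELECT DISTINCT l.name FROM appearances a
    JOIN locations l ON l.id = a.location_id
    JOIN characters c ON c.id = a.character_id
    WHERE c.name = ? AND l.staged = 1
"""
for (location,) in con.execute(query, ("Mary Magdalene",)):
    print(location)
```

Even in this toy form, the point of tension is visible: every appearance must be pinned to a concrete location record, so a place that is only ever mentioned has to be entered and flagged rather than quietly assumed.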

Nowhere in this process was my intent simply to create a tool; that is an approach to using databases that I have seen in other scholarship and have followed myself as circumstances warrant. The list of manuscript witnesses in my Lydgate archive project and the force-directed graph attached to them, for example, are generated by a database that serves primarily to automate the process of keeping track of where manuscripts are in the transcription/display workflow. There, the database simply exists to speed up a process that I could do manually and to serve a record-keeping function. It’s basically transactional. The database for the Digby staging piece – and the one for the work on East Anglian wills I’m undertaking in preparation for another piece – existed entirely to help me think through the various relationships (between characters and locations, or between testators and their beneficiaries), rather than simply to record things as they currently stand. Likewise, while the database for the Lydgate archive was created with a particular output in mind, the staging database was not originally intended to have an output beyond simply being a way for me to reference connections in the play as I thought through the staging problem. The subsequent “output” that has been created – a visualization of where characters are at particular points in the play – was thus not the original end goal, but rather a kind of quick shorthand for myself. That’s not to say that the database is static: it has since been expanded to include the characters and locations of the Castle of Perseverance, and my hope is to include other place-and-scaffold works as time permits.

Obviously, as a methodological tool a database is not a Swiss army knife. It shouldn’t be considered something you can use for every single aspect of examining a text. What it does do, however, is refuse ambiguity. This means that the decisions you make have to be documented (if nowhere else, then in your own notes), but it also means that points of tension are revealed that might otherwise have been overlooked. Those points of tension, in turn, can yield interesting results upon further examination. In that way, despite not involving “making” qua “making”, it serves the same purpose methodologically as Critical Making does with physical computing. It uses a digital tool – in this case, the database itself – for critical thinking about a problem, and as a method it helps to complicate notions of data as objective or agnostic. In this way it serves not only as a tool to further the study of texts, but also as a way to interrogate and understand the underlying data structures that classify so much of our daily lives.


[1] John Law, “After ANT: complexity, naming, and topology” in Actor Network Theory and after. John Law and John Hassard, eds. Blackwell, 1999. 3.

[2] In fact, this reduction has occurred at least twice in most digital audio files – the initial recording has been rendered digitally, and then that format has been further altered through transferring the file into different formats, some of which are further compressed so as to make the file size smaller.

[3] “Notes on the Theory of the Actor-Network: Ordering, Strategy and Heterogeneity,” Systems Practice 5 (1992): 385.


Dr. Matthew Evan Davis currently serves as a Postdoctoral Fellow with the Lewis and Ruth Sherman Centre for Digital Scholarship at McMaster University. Prior to this he was a Lindsey Young Visiting Faculty Fellow at the University of Tennessee‘s Marco Institute and served as the Council for Library and Information Resources/Mellon Postdoctoral Fellow in Data Curation for Medieval Studies at North Carolina State University. His scholarly work deals primarily with late medieval English drama, hagiography, and material textuality, and his interest in digital scholarship comes in two flavors: first, he’s acutely interested in the “thingness” of digital presentation. Beyond that, though, he’s also interested in ways that digital tools and methods serve as shadow theories, influencing scholarship without being overtly recognized as doing so. The practical application of these interests can be seen both in his standard publications (most recently in Theatre Notebook and the Journal of Medieval Religious Cultures) and in his visualization (here and here) and 3D modeling projects.  He is also developing (slowly) an online archive of the minor works of the poet John Lydgate.

What do digital medievalists do?

As is often the case in the Digital Humanities landscape, outsiders find it difficult to imagine what kind of work a digital medievalist would engage with. If the term Digital Humanities is often perceived as an oxymoron, this is even more so for Digital Medievalist. Digital and medieval do not seem to go together, and yet, as we know, they complement each other in our projects.

Digital Medievalist (DM) was born in 2003 as a project and an international ‘community of practice’ dedicated to the development and dissemination of best practice in the use of technology in Medieval Studies[1]. In 2005, the Digital Medievalist Journal (DMJ) was added as a more formal component of DM. A review of the papers published in DMJ and of the posts and webpages here at digitalmedievalist.org provides an overview of our scholarly activities. Digital archives, digital palaeography and codicology projects, medieval corpora, textual analysis, and editions are among the most prominent activities, but DM does not wish to be involved solely in medieval manuscript culture. A recent review[2] of a project on Gothic architecture is an example of the breadth of activities carried out by digital medievalists.

To begin a reflection on the scholarly interests and endeavours led by members of our community, we are launching a series of blog posts written by digital medievalists from around the world, focussing on aspects of their research and showcasing their particular views of the Digital Medievalist landscape.

We begin with contributions by a group of early career researchers who are (or have been) engaged in research projects as part of a series of postdocs in Data Curation for Medieval Studies, organized by the Council on Library and Information Resources (CLIR) and funded by the Mellon Foundation.


We would like to take this opportunity to encourage other researchers engaged in projects that fall within the umbrella of Digital Medievalist interests to contact us and submit blog-post proposals.


[1] Paul O’Donnell, D., (2005). Welcome to The Digital Medievalist. Digital Medievalist. 1. DOI: http://doi.org/10.16995/dm.1

[2] Werwie, K., (2017). Stephen Murray and Andrew Tallon, 2012-. Mapping Gothic France. http://mappinggothic.org/. Digital Medievalist. 10. DOI: http://doi.org/10.16995/dm.54