Biblical Scholarship and Humanities Computing:
Data Types, Text, Language and Interpretation
1 Description and Aims
"What are the requirements for text databases to allow for the systematic study of ancient texts, especially Hebrew, Aramaic or Greek biblical texts, that confront researchers with a centuries-long history of production, transmission and translation?"
This workshop was set up as a meeting place between specialists in the Hebrew Old Testament and scholars of the Greek New Testament. Information specialists were also present to engage with the digital aspects of the issues, and to challenge the literary scholars with emerging paradigms in computing science.
The focus and methods of Greek New Testament scholarship differ from those of Hebrew Old Testament research. This is largely due to the unequal manuscript situation of the two testaments: the number and variety of manuscripts for the Old Testament is far smaller than for the New Testament. Old Testament scholarship is therefore based on one authoritative manuscript, the Leningrad Codex, which dates from about the year 1000 and is the product of rabbinic tradition. In Hebrew studies one thus effectively works with a ‘textus receptus’. The additional evidence from the Dead Sea Scrolls and the ancient translations is used first of all for literary-historical analysis, and only secondarily for proposing more original readings of the Hebrew Bible, since these texts often represent stages in the production of the Hebrew Bible rather than stages in its transmission. Agreeing on a late text thus makes it easier to reach consensus on a standard text of the OT, but harder to observe the effects of history. A standard text is conducive to setting up a programme of computer-aided linguistic analysis. In the NT case, by contrast, computational methods are primarily invoked to make sense of the 5000+ manuscripts and to reconstruct their history. In daily research practice, Hebrew and Greek scholars do not interact that often.
This workshop has built bridges between the OT and NT lines of research. The fact that both types of research need access to the source materials proved a unifying concern. Currently, accessibility leaves much to be desired: openness, transparency and permanence all fall short of what the new research questions now emerging require.
Other dividing lines were addressed as well: those between the worlds of research, education and application. Commercial Bible software packages embody a great deal of applied scholarship, but their usefulness in a research context is very limited. Even in educational settings the use of this software was felt to be counterproductive. Up to a certain level of knowledge these applications do an excellent job, but the development of new features is not driven by the needs of academics, whether in research or in education.
Perhaps it was more important to articulate the distinct concerns here than to try to bridge them. Yet between research and education there is a natural continuity that can be exploited, provided that tool development is once again driven by academic concerns. In our discussions we explored to what extent open-source tool development could coexist with commercial software manufacturing. There is certainly potential within academia to produce tools, but it will take time before they reach the same sophistication in their user interfaces. More importantly, it is difficult to ascertain to what extent research tools are legally allowed to use and redistribute the original resources.
When we mention tools, it is important to distinguish between tools for producing research data, such as analytical databases, and tools that present the data to end users. Beyond these, there are tools that facilitate collaboration between researchers.
A distinction was made between digitized scholarship and digital scholarship. Whereas digitized scholarship uses digital means to improve the efficiency of the classical research process, digital scholarship fully exploits the revolutionary potential of the digital paradigm. Networking and visualisation are important trends here.
Networking can help to direct the effort of many towards work that exceeds the capacity of lone researchers. Visualisation can help to highlight significant patterns in masses of data. The digital paradigm is on the rise in many humanities departments. We asked ourselves: what can we do to make this happen in biblical scholarship?
Most importantly, the sources and the fruit of biblical scholarship should be made readily available to others: for inspiration, for checking, for application, and as the raw material for new kinds of research. In that way, the work of many sustains an ecology where results become cumulative.
There is a clear incentive here to liberate scholarship from the entanglement of commercial interests, and to reclaim the sources for research and education.
4 Aha Experiences
When studying a text tradition, it might seem logical first to solve the basic problem of what has been written, and only then to deal with higher-level questions such as the interpretation of what has been written. Indeed, a particular phenomenon in a text that can be fully explained in terms of the linguistic system should not be explained in terms of an author’s special intention or a particular religious interest. But in fact we do not always know whether textual phenomena belong to linguistics, literary studies or the history of textual transmission. Problems at the basic level can often only be solved by dragging the higher levels into the equation.
At the same time, the quest for the one true version of what has been written has been abandoned in favour of an interest in the rich history of reading and (re)writing that can be gleaned from the texts.
Now, if we can find ways to perform linguistic analysis without recourse to one standard text, Greek and Hebrew scholarship will find common methodological ground. There will also be far less dependency on copyrighted editions of the source texts, which is good for the ecology of research.
All days of the workshop had a consistent structure: two morning lectures, one from Hebrew scholarship and one from Greek. After the coffee break there was a reflective and/or challenging lecture from computing science. In the afternoon we broke out into subgroups and reported back in a final plenary session. Only one afternoon was left completely open. We think that the participants made the most of those afternoons, whether in subgroups or on their own. It was certainly quality time.
Although not a revolutionary format, it served very well to elicit much of what was on our minds and to establish in-depth communication across our usual disciplinary boundaries.
We are grateful for the excellent setting and organisation of this workshop as provided by the Lorentz Center. Even the organisers could immerse themselves fully in the subject matter, as they had hardly any worries about the logistics and day-to-day running of the workshop. There was a refreshing lack of housekeeping notices.
Jan Krans (Amsterdam, Netherlands)
Bert Jan Lietaert Peerbolte (Amsterdam, Netherlands)
Wido Van Peursen (Leiden, Netherlands)
Dirk Roorda (Den Haag, Netherlands)
Ulrik Sandborg-Petersen (Aalborg East, Denmark)
Eep Talstra (Amsterdam, Netherlands)