Is There a Computer in this Class?

Susan Hockey
Canadian Institute for Research Computing in Arts, University of Alberta
Susan.Hockey@ualberta.ca

September 1999

This title lends itself to my talk for two very different and equally serious reasons. Firstly, I think that it highlights the dilemma facing anyone in a humanities department who is planning to offer academic courses in humanities computing, since it encapsulates the differences, which I believe do exist, between the arts and the sciences. The sciences are mostly about doing it, about carrying out experiments and empirical work which is usually reported in fairly short papers, often with many authors. Enough detail is provided to ensure that the results can be replicated and verified. The results make a factual contribution of new knowledge, often in a form that can be exploited to bring benefit to a wider community of people. As indeed was mentioned at one of the panels at ACH/ALLC 99, the emphasis in the humanities is much more on talking about it. Publications in the humanities are lengthier than those in the sciences. They often have single authors and are most certainly more discursive. Talking about the Internet, culture, computing and related topics is very popular at the moment, but I find that few of the writers on these topics seriously relate the talking to the implementation. The work of Janet Murray (1997) is one exception that comes to mind; so is that of Jim O'Donnell (1998). I should make it clear now that I am by no means advocating not talking about it, but for humanities computing the talking makes much more sense to me if it is firmly grounded in the doing. It is important for us to reflect and speculate on the implications of our work, but I would like to see this reflection and speculation make a genuine contribution towards the better doing of the work.

The actual doing is often described as "technical", a word which seems to send rather different signals to people in different disciplines. Work that is technical, or perhaps I should say scientific, since the Latin word scientia really means "knowledge", tends to be elevated in the sciences and viewed as inferior in the humanities. Of course I am generalizing a lot here and not everyone holds these stereotyped views, but I do think that there is a real difference in academic cultures across disciplines, and we need to address this in order to develop cross-disciplinary work. This difference has been very evident in humanities computing conferences. Those of us who have been involved in organizing the ALLC/ACH conference programmes over the years have seen it again and again in the endless discussions about the length of abstracts. In computing science it is normal to submit a complete paper for consideration. The programme committee can then ensure that the work has been done and that the results make sense. In the humanities an abstract of 500 words is often considered adequate, with the expectation that what you really have to say will not be revealed until you deliver the paper. In the humanities papers are normally read out, often without any visual aids. In the sciences it is assumed that the audience already has the printed paper, and the presentation goes over the main points, usually with illustrations. The undercurrent here again is the difference between doing it (the norm for a scientific paper) and talking about it (the humanities paper). Students are trained to work in the ways that their discipline expects, and they can find it disconcerting to adapt to different ways if they move to another area. More generally, an academic culture and the expectations it generates can really help to define a discipline.

My second reason for choosing this title goes back to its origins, for I think that Fish, together with Chomsky, effectively extinguished a lot of computer-based humanities research in North America just at the time when many possibilities were beginning to emerge. Fish's (1980) condemnation of stylistics (especially computer-based work) turned many scholars away from the close study of texts and shut the door on work that might have led to developments with far-reaching implications. The re-emergence of humanities computing in North America coincided with the advent of the World Wide Web and the possibility of image- and text-based electronic publishing. Before the World Wide Web and the widespread use of the Internet, Thomas Corns (1991) accurately predicted this change of emphasis from the analysis of information to the access, retrieval and delivery of electronic information. In this paper Corns describes Fish as "deeply damaging", but he ends with a prediction that the new emphasis will have a much more profound effect on scholarship. This is now turning out to be true, and it has been made much easier by the Internet, but it is true across all disciplines. We must ask ourselves where humanities computing fits into this changed arena.

Similarly, Chomsky closed the door on much empirical work in corpus linguistics, arguing that corpora could never be entirely representative and that intuition and invented examples were a better foundation for linguistic research (Leech 1991; McEnery and Wilson 1996: 4-6). All this was at a time when computers could only process text, and thus all research was focussed on textual data. Things have obviously changed since then, but one point I want to make now is that Fish and Chomsky were not quite so influential outside North America. With one or two exceptions, notably Biber's (1988) work on language variation and register, corpus-based work in North America largely stopped with the Brown Corpus and began again only in the early 1990s when computational linguistics "discovered" corpora. However, it developed continuously in Europe, notably with work by Geoffrey Leech, John Sinclair, Stig Johansson, and the group directed by Antonio Zampolli in Pisa. Research carried out by these scholars and others in corpus and computational linguistics has its origins in humanities computing, but it is now at the forefront of developments in human language technology and has implications far beyond academia. This research is making it possible for everyone, not just academics (students, citizens and those with little formal education), to speak to computers and to ask questions of them in natural language.

I will return to these research issues later, but I wanted to include them now so that you will have some idea where, with my European background, I am coming from. With academic origins in Classics and Ancient Near Eastern Languages, I have worked in a research computing laboratory (where I was the only non-scientist), in an academic computing service where I taught courses and managed a variety of national resources for humanities computing, and in a library where I learnt a lot about the delivery and management of information. Now I am the Director of a Research Institute in Arts Computing and am half-time in an English Department, at present teaching academic courses in that department. I have also had plenty of experience of project design and management in humanities computing. Since my husband is an accountant in a large firm dealing with corporate and business tax advice, I have some knowledge of the world of work beyond universities and academia. I have also now lived and worked in three countries, all very different from each other.

After this background material, it is perhaps useful to look at the purpose of a degree in the humanities. In the institution where I work now, the emphasis is very much on critical thinking. The Dean's message on the University of Alberta Faculty of Arts Web pages (http://www.humanities.ualberta.ca/arts/) lists a number of skills "that are enduringly associated with liberal arts education". These include:

I have listed all of these not only because I think that all of them are important but also because I believe that humanities computing can help to develop all of them, not just the last one. It offers so much scope for problem-solving, for communication, and for conducting original research, as well as for time management, team work and tolerance of cultural diversity. It offers the possibility of creating things, as well as of "exercising independent judgement" in assessing the success and effectiveness of what has been created. The challenge is how to harness these possibilities into an academic programme that serves the needs of all the constituencies that might need to be involved.

One of the exciting aspects of being in at the beginning of something new is that you can define it for posterity. I assume that is one major purpose of these seminars. But it also means that you are the ones who make the mistakes and the ones who have to fight to make the new thing happen. Sometimes those who start something fall by the wayside because they are ahead of their time - I could cite a number of examples in humanities computing here. General acceptance of what they have started only comes much later after the early adopters have lost momentum often because they did not have a large enough base for sustainability. Because of the infrastructure requirements, which I shall come to later on, computing activities can be very susceptible to loss of sustainability. Trying to predict the future in computing is very difficult, but planning does need to look at what happens when a plateau is reached after the development phase, and how to maintain the plateau whilst also being very flexible and responsive to changing requirements.

I have already touched on interdisciplinary work, which is very popular at the moment. Humanities computing is clearly that, but it also cuts across almost all disciplines including many that have not traditionally worked together so much. It can be about applying solutions developed in the sciences to humanities material, or it can be about humanistic approaches to science and information technology. It can also involve the social sciences where there is substantial expertise in evaluation and assessment of systems, and it is certainly about the management and organization of knowledge which fits into library and information studies. There are thus endless possibilities for turf wars to ignite and so it seems to me that flexibility in institutional structures and effective procedures for the management of change are crucial for starting a new activity that is seriously interdisciplinary.

It is perhaps useful now to look at the ingredients of a successful academic programme. I will take as a given that there are enthusiastic and knowledgeable faculty. Otherwise there would not be any plans to start such a programme. The programme also needs students and an appropriate infrastructure for teaching the students, both of which I shall come to later on. But in my view the real focus of an academic programme is its research agenda. This is what gives it the intellectual excitement to keep moving and developing, and where ultimately the sustainability can reside. If there are faculty and students who find this intellectually stimulating, they will work to maintain and develop the programme.

In humanities computing, we have an extremely large research agenda. However, I do not think we have worked as hard as we could have to promote this agenda, since too many of our colleagues seem to think that humanities computing is about creating web pages or making concordances, which they tend to regard as mechanical processes. Of course those of us in this field know that these are not entirely mechanical processes, but they are rather simple processes which are reasonably well understood. We need to look well beyond these to much more advanced research areas and promote these areas in relation to a range of humanities disciplines.

In my view, the core of this research agenda is knowledge representation and manipulation or, to use a term that has a broader application in the world beyond academia, information management. If you are going to make an electronic representation of something, you have to understand what that thing is and to know what its important properties and functions are. This may sound obvious, but it is often only when you start to convert something into electronic form that you realize the complexity of the material and the decisions that need to be made about its nature. In the humanities we are dealing with a variety of extremely complex information (Hockey 1996). This information can have a very long life-span, and so electronic representations of it must also last for a long time, far longer than the computer system on which they were created. The information can also be studied for many different purposes, and we therefore need to research and develop computer tools not just to deliver the information to people but also to help them study and manipulate it in different ways. We also need to find out how effective these tools are and to feed the results of evaluations into new and better systems.

With the TEI, humanities computing has already made some significant contributions to knowledge representation. I have been privileged to be present at some of the TEI technical meetings, where the atmosphere and discussion were among the most intellectually stimulating experiences of my life. I am aware that not everybody agrees with the TEI's view of text, but I think that what the TEI has done is to define in detail and to concretize a many-faceted view of text on which there was some consensus - and nothing better was available at the time. Nothing like this had ever been done before. A markup scheme that is something of a dinosaur, but a very powerful and flexible dinosaur, emerged from this project, and it has fostered a lively debate in a broad community on the nature and representation of humanities information. The existence of the TEI scheme has also encouraged a good deal of experimentation with the TEI view of text. I would prefer to call this research rather than experimentation, since it has led to a recognition that more complex data models are needed in order to deal with various representations, including overlapping structures as well as images and other multimedia formats. And the more complex the data model, the more complex the software needed to handle it. More research will then be needed on how to present this software to users in such a way that they can get started and achieve their research objectives easily.
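
To make the overlap problem concrete: verse lines and sentences cross one another, so only one of them can serve as the encoding's hierarchy, and the other must be reduced to empty "milestone" elements that software has to reassemble. The following sketch is my own toy illustration and is not TEI-conformant markup; the element names and the two-line fragment are invented.

```python
import xml.etree.ElementTree as ET

# Invented markup: <l> elements carry the (primary) metrical hierarchy;
# empty <sb/> milestones mark sentence boundaries that cut across it.
SAMPLE = """<poem>
  <l>The curfew tolls the knell of parting day.<sb/> The lowing herd</l>
  <l>wind slowly o'er the lea.<sb/></l>
</poem>"""

def lines(root):
    # The primary hierarchy is trivial to recover: one element per line.
    return ["".join(l.itertext()).strip() for l in root.iter("l")]

def sentences(root):
    # The overlapping structure must be reassembled from milestones:
    # accumulate text across line boundaries, splitting at each <sb/>.
    out, buf = [], []
    for l in root.iter("l"):
        if l.text:
            buf.append(l.text)
        for child in l:          # only <sb/> children in this toy model
            out.append(" ".join("".join(buf).split()))
            buf = []
            if child.tail:
                buf.append(child.tail)
        buf.append(" ")          # line break counts as a word boundary
    leftover = " ".join("".join(buf).split())
    if leftover:
        out.append(leftover)
    return out

root = ET.fromstring(SAMPLE)
print(lines(root))      # two metrical lines
print(sentences(root))  # two sentences, one spanning the line break
```

Recovering the primary hierarchy is a one-liner; recovering the structure that overlaps it needs stateful reconstruction, and that asymmetry is precisely how a more complex data model drives more complex software.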

Even without the added complexity of overlap, there are still many research issues associated with SGML/XML-encoded material. We do not have any affordable desktop software to process this material. Before this software gets written we need to do a lot more research on what humanities scholars want to do with electronic information and on the cognitive models needed to present that information to users. Too many tools seem to me to be based on the model of the book, that is, a linear sequence of material with links to what would appear as footnotes or bibliography in a printed book. With its hierarchic structure, SGML tends to promote this model, but SGML also allows the encoding of pieces of information that can be mixed, matched and linked in many different ways. This is what we are doing in the Orlando Project at the University of Alberta, where biographies and historical events related to British women's writing are being written with deep SGML tagging to permit reordering and restructuring of the material in many different ways (Brown, et al. 1997). This will make it possible to create many different subsets of the material organized by time or place or person or cultural formation or by any other feature that is tagged. But we are now facing some tough challenges in working out how to deliver this material to literary scholars. Most of the material is at present in Dynatext but, although this is much better than our earlier SGML tool Panorama, it by no means does justice to the research and scholarship embedded in the tagging. The literary scholars on the project want to do a lot more than is possible with Dynatext, and so we need to do more research on the design of the computational structures to model their needs and to deliver them efficiently.
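
The Orlando tagging is far deeper and richer than anything I can reproduce here, but a toy sketch may suggest the general idea of reordering and subsetting by tagged features. The element and attribute names below are invented for illustration and are not those of the Orlando DTD.

```python
import xml.etree.ElementTree as ET

# Invented element names, far simpler than Orlando's actual tagging:
# each <event> carries the features (date, place, person) that deep
# markup makes available for reordering and subsetting.
EVENTS = """<records>
  <event date="1792" place="London" person="Mary Wollstonecraft">
    Publishes A Vindication of the Rights of Woman.</event>
  <event date="1818" place="London" person="Mary Shelley">
    Frankenstein appears anonymously.</event>
  <event date="1811" place="Chawton" person="Jane Austen">
    Sense and Sensibility is published.</event>
</records>"""

root = ET.fromstring(EVENTS)
events = root.findall("event")

# The same tagged material restructured two different ways:
# one subset ordered by time, another restricted by place.
for e in sorted(events, key=lambda e: int(e.get("date"))):
    print(e.get("date"), e.get("person"), "-", " ".join(e.text.split()))

in_london = [e.get("person") for e in events if e.get("place") == "London"]
print("London:", in_london)
```

Nothing in the sketch is specific to one ordering: any tagged feature could drive the sorting or filtering, which is the point of putting the scholarship into the markup rather than into one fixed presentation.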

The research that is developing in corpus linguistics, which I mentioned earlier, is going to play a significant role in the development of better tools for manipulating text. Most writers about humanities computing mention Father Busa's pioneering work on the Index Thomisticus. Given the practicalities of working with computers at the time he started, this was indeed an achievement, but I find Busa's earlier papers on this project very illuminating. He recognized that his concordance would be much more useful to scholars if it were lemmatized, and so he carried out this lemmatization of around 11,000,000 words mostly by hand, with only a little machine assistance. If he were starting now he might have more machine assistance for this process, but he would need to look to groups like the Pisa team to provide it. Busa's more recent publications lament the fact that so little intellectual progress has been made in this area (Busa 1992; 1998). I quote from his 1998 publication:

"... we derived the lexicological map of the vocabulary documented by these 11 million words. .... I would stress its usefulness: for me, the necessary foundation on which all hermeneutical enquiry has to be built is, I repeat, the text in vertical with inner hypertexts - summarized into forms - summarized into lemmas.
I did no more than that. Conceptually it is very simple, but it is time-consuming. Its logic did not allow me any short cuts. If someone could find a way of shortening the human labour it demands and getting the same results, I would be the first one to applaud and thank God." (p.8)

This kind of lemmatization facility is unlikely to be available in desktop concordance programs for some time. I have recently been re-reading the proceedings of a conference on literary data processing that was held in 1964 (Bessinger and Parrish 1965), as well as some essays in Studies in Bibliography (which, incidentally, is very nicely available in electronic form from the e-text centre here in Virginia) and other papers from the 1960s (Parrish 1962; Fogel 1962; Wisbey 1963; Markman 1964). It is interesting to note how many of the problems discussed in these early essays still exist. We are much further ahead than we were in the 1960s in getting text into and out of computers, but we are not very much further ahead in processing the text, except that we have learnt that, in order to do more intelligent processing, we have to go through the costly process of putting intelligence, in the form of markup, into the text. These early papers discuss problems of variant spellings, homographs, variant readings, proper names, numerals and the like, as well as lemmatization. I do not see much discussion of these issues now in humanities computing circles except in terms of markup. I suspect this is because everyone recognizes that they are so difficult to deal with automatically that we will have to live with them. These issues are very similar to problems that the natural language processing community is addressing, and there some progress has been made. However, 98% or so accuracy might work for a language understanding system, but it is almost certainly not enough for humanities scholarship.
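
A toy keyword-in-context concordance may make the point concrete. The hand-made lemma table below stands in for the lexical knowledge Busa built up manually: it folds variant and inflected forms into a headword, but a homograph such as "wind" (noun or verb) defeats a simple lookup, which is exactly why full automation remains out of reach. All names and data here are illustrative.

```python
import re
from collections import defaultdict

# Hand-made form-to-lemma table: inflected forms and variant spellings
# are mapped to a headword. Anything not in the table, including the
# homograph "wind", falls back to its raw form.
LEMMAS = {"tolls": "toll", "tolled": "toll", "lowing": "low",
          "o'er": "over", "over": "over"}

def concordance(text, width=3):
    """Group keyword-in-context lines under lemmas, not raw forms."""
    words = re.findall(r"[\w']+", text.lower())
    index = defaultdict(list)
    for i, w in enumerate(words):
        lemma = LEMMAS.get(w, w)
        left = " ".join(words[max(0, i - width):i])
        right = " ".join(words[i + 1:i + 1 + width])
        index[lemma].append(f"{left} [{w}] {right}")
    return index

text = ("The curfew tolls the knell of parting day, "
        "the lowing herd wind slowly o'er the lea.")
for lemma, contexts in sorted(concordance(text).items()):
    for line in contexts:
        print(f"{lemma:>8}  {line}")
```

The mechanical part, sorting contexts under headwords, is trivial; it is building and disambiguating the table itself that consumed Busa's decades and still resists short cuts.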

We also have a big research agenda in the area of digital imaging. I think that this area is more problematic, as the technology is still advancing very fast. At the lowest level, to represent text we really have a choice only between ASCII and Unicode, and conversion between them is possible. We have also made some advances in markup systems for text, even though these have raised a whole new set of unanswered questions. For images, the technical capacity of scanners and cameras is increasing all the time, and there will still be developments in the technology of working in colour. But as well as the actual creation of images, there is much research to be done in the analysis and delivery of images. Image manipulation programs can help to identify previously illegible material, as for example in the Beowulf project (Kiernan 1991). But we are still a very long way from being able to offer a computer program a collection of images and ask the program to select certain images by inspecting their content. This can only be done at present by associating pieces of text with every image, and thus we are back again with the big question, that of the organization of knowledge. I know much less about other multimedia formats (video, audio, etc.), but I imagine that with these even newer media the problems are compounded, and thus offer more opportunities for research.
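
A very small sketch may illustrate that last point: since no program can yet select images by inspecting their content, "image retrieval" is at present really retrieval of the descriptive text associated with each image. The filenames and descriptions below are invented.

```python
# Content-based retrieval by proxy: each image file is paired with
# descriptive text, and "searching the images" is really searching
# that text. The catalogue entries are invented for illustration.
catalogue = {
    "manuscript_f132r.tif": "damaged folio, fire-singed edge, verse text",
    "manuscript_f133v.tif": "scribal correction, erasure visible under UV",
}

def find_images(query):
    """Return the filenames whose associated description mentions query."""
    return [name for name, desc in catalogue.items() if query in desc]

print(find_images("erasure"))   # -> ['manuscript_f133v.tif']
```

Everything therefore depends on the quality and organization of the associated text, which is why imaging leads straight back to the organization of knowledge.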

Metadata is another area where humanities computing research is beginning to have an impact, but where I believe that much work is still to be done. This is again related to the organization of knowledge, where most of the expertise lies in library and information studies. There are recognized and widely used standards for organizing knowledge that is disseminated in print form: the library catalogue and the front matter in modern books. But the library catalogue model is intended to help you find the object, on the assumption that you know what to do with the object when you have got it - to read it, for example. The librarian does not ask why you want the object or interfere with what you do with it (unless, I suppose, you damage it or lose it). Neither the publisher nor the bookstore gets involved with what you do with the object once they have sold it to you. With electronic information the situation is completely different. The electronic properties of the information (file format, encoding scheme, software requirements) determine what you can do with that information. These need to be described in the metadata, which must also provide enough detail about the source of the electronic information if it is a transcription or digitized version of something that already exists in another format.

The TEI header was the first systematic attempt to provide this kind of metadata for electronic texts and many of the elements in the header were requested by scholars who had used electronic texts. The humanities scholars who were involved in the initial specification of the Dublin Core metadata elements also pressed hard for the inclusion of provenance data so that enough information was available about the source of the object. It is not difficult to end up with metadata that is larger than the object it describes. Many research issues on the nature, size, and usability of metadata still remain to be addressed and our understanding of the complex material we work with in the humanities can contribute a lot to this research.
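
As a sketch of what such metadata might contain, consider the following illustrative record. The field names are loosely modelled on Dublin Core, extended with the technical and provenance information discussed above; they do not follow any formal standard, and even this toy record hints at how quickly the description can grow relative to a small object.

```python
# Illustrative metadata record for an electronic text. Field names are
# invented for this sketch: descriptive fields roughly follow Dublin
# Core, while the source and technical fields carry the provenance and
# usability information that electronic objects require.
record = {
    # descriptive (roughly Dublin Core)
    "title": "The Poems of Thomas Gray: An Electronic Edition",
    "creator": "Gray, Thomas",
    "date": "1999",
    # provenance: the print original this transcription reflects
    "source": "Gray, Poems (London, 1768), copy in a university library",
    # technical properties that determine what a user can do with it
    "format": "text/sgml",
    "encoding": "TEI P3, ISO 8859-1",
    "software": "requires an SGML-aware browser or search engine",
}
for field, value in record.items():
    print(f"{field:>9}: {value}")
```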

I have only touched on a few research topics - there are many more - and so there is definitely plenty to do here. But we must also consider the political aspects of getting the research agenda moving. Our colleagues in other humanities disciplines need to understand that there is research to be done here, rather than simply accept the computer systems they use now as all there is and all there ever will be. This attitude certainly seems to be prevalent towards web pages, where HTML is being used in all kinds of ways for which it was never intended. In its current implementation the Web is an extremely cumbersome way of doing most things. In many ways it reminds me of the pre-SGML markup days, when the only markup schemes that existed were far too simple for the material being encoded. The inevitable result was simplification, and therefore loss of information, before any data was processed. But can we get our colleagues past the stage of either being excited that so many things can come to their desktop, or of not using the Web because they perceive the quality of the material on it as too poor? Can we begin to encourage them to think imaginatively about what they want to do?

The funding of research in humanities computing is also an issue. Many funding agencies seem to prefer to fund the creation of electronic resources, especially if they have a Web presence. This gives them something visible and concrete to attach their name to, and of course institutions like this too, for the same reasons. The funding of specific resources fits neatly into existing disciplines. The impetus for these projects usually comes from those disciplines, and there is often no "technology transfer" into other disciplines. The TEI was one of the few examples of research infrastructure funding in the humanities, but NEH had enough previous experience of failed projects to recognize the value of developing and promoting guidelines and standards. Otherwise there has been less emphasis, at least in North America, on funding basic research that focusses on how to do it and that creates and evaluates methodologies applicable across disciplines. I would like to see more emphasis and value placed on this kind of infrastructure research. I would also like to see more value placed on publications that provide a serious critical assessment of electronic projects, including their shortcomings as well as their successes. These assessments need to be made from various standpoints: technical, scholarly content, and cost-effective sustainability and preservation. Without them, we are going to make progress only very slowly.

Research in humanities computing is also mostly collaborative. It has to be, because almost never can any one person be competent in all aspects of a project. Collaborative work is also a new experience for the humanities, but one that I think an academic programme with a laboratory practical component can help to foster. Collaborative research projects are also much favoured by funding agencies at present, even though it must be recognized that inter-institutional projects can be expensive in terms of the time needed to keep everyone up to speed on what is happening in the project. But these models provide an opportunity for PhD work in the humanities which is much more like that in the sciences, where students work on part of a larger research project in a lab-based environment. They learn a lot from working side by side with their peers and can see how their part of the project fits into and benefits the larger whole. All this experience will equip them well for their future employment.

Why might students want to pursue humanities computing? On the basis of the fairly small samples I have taught in different institutions, I would offer two reasons: (1) intellectual curiosity and (2) employability. The intellectually curious will find endless satisfaction (or perhaps frustration) in engaging with computers. It is a challenge to make a new computer-based project work and a source of much gratification when it does. Then there are more and more ideas about how to make the project better. The intellectually curious will also be able to bring a critical perspective to what they are doing, to sit back and assess the results and the effectiveness of the work, and to make informed suggestions for possible improvements. Simply because of their endless flexibility, computers can consume a lot of time - a point of which our colleagues who prefer more traditional ways of working are well aware, and which can lead them to warn their students away from computer-based work. But the possibilities for the imaginative student are boundless.

Many of the students in my 400-level course last year (mostly English majors) were concerned about employability. This was obviously uppermost in their minds as they reached the end of their undergraduate careers. We have to accept that most of our students will not end up as academics, or indeed in universities. A few will become academics and a few more will have other positions in universities, but many more will take up positions in business or government. At present there is a tremendous skills shortage in information technology, with many vacant positions and high salaries being offered. I understand that this shortage of skilled staff is expected to get worse in the next few years. In the courses I have designed at the University of Alberta I have tried to come up with something that is firmly grounded in the humanities, that satisfies the Dean's set of skills, but that also helps to make my students more employable. In these courses the students look at various applications, carry out a short practical project and write a critical assessment of the project. In the critical assessment I am really looking for how well they have understood the potential and the limitations of the technology. They build a web site to act as a shell for their work (I am grateful to Willard McCarty for this idea) and they use PowerPoint in their class presentations. I feel quite strongly that the focus of my teaching is critical thinking applied to computer-based work, but in the process of the course, and in order to apply this critical thinking, the students also acquire a range of computing skills. They will also have experience of completing a computer-based project. We all know the problems of time management with such projects: how they can easily expand in all directions and become unmanageable, which in the business world means uncontrollable costs, probably leading to a company going out of business. If the students should choose to take up a career in information technology, they will have on their resumes what employers in this area look for most: a list of software that they have used and a description of projects that they have carried out.

The range of possibilities for employment for students who have done humanities computing is very broad. There will be some positions in teaching and further research in humanities computing, and also in support and development work for humanities computing projects in universities. Another area is electronic publishing, which has been taken up by a number of my students as well as by people who have worked in my group. An employee with a humanities background will have the rare capability of approaching work in design, development, marketing and customer support from the perspective of both the creator and the user of the product. At present there seem to be many more digital library projects than people to carry them out, and so there are plenty of possibilities for development work in digital library systems. Museums and other cultural heritage organizations are increasingly using electronic systems in exhibitions as well as in general documentation and management. Another area is technical writing, where a background in humanities computing will really help the writer translate the technical specifications of a computer system into language that the user can understand. Web site design and development has become an industry in itself, and there are also many other opportunities in more general information technology and knowledge management, in marketing, customer support and training.

To finish, I have a few remarks on infrastructure for an academic programme in humanities computing. These remarks perhaps address the topic of the seminar less directly, but I think that an efficient infrastructure is fundamental to a successful programme. This does not come without some ongoing costs that are rather different from the other costs of supporting programmes in the humanities. I am assuming now that the programme will have a practical component; the infrastructure thus means computers, software and people to support them. Organizing all of this can take up a lot of time and human resources, and it is very different from the more limited resources needed to support the traditional "reading list and chalk" approach to teaching. There seems to be a tendency, at least among people who have not been involved in this before, to think that all you need to start such a programme is one or two new faculty members and a room full of hardware: money can be got for the hardware from some kind of startup fund, and then everything else will happen of its own accord. The provision of computer hardware is probably the least demanding of these requirements, but it is also an ongoing cost. To be up to date, the computers really need to be replaced every three years, and it thus makes sense for equipment to be part of the recurrent budget. Computers do not run themselves, and the technical support people are, in my view, the most important element here. In too many institutions faculty members are doing the technical support as well as teaching and research, but this is obviously not the best use of their time, nor indeed what they are employed to do.

Whether students should be required to own a computer is a subject of some political debate at my institution. Some of the science departments are moving in that direction, but the cost of a computer in the Canadian publicly funded higher education system is not insignificant when added to student fees. It may be much easier for this to happen in a high-fee private institution, where it does not add so much to the overall cost of a degree. In practice I have found that many of my students do own computers, but very few have the same configuration as the ones in the classrooms. Last year's graduate course was based on PCs and Windows 95 software, but over half the students already owned Macintosh computers, about which I know very little. My university does at present recommend some hardware and software, but the recommendations are very general. They include both PCs and Macintosh systems and, for software, they mention only Microsoft Office and some basic network tools (Eudora, Telnet, Netscape and Internet Explorer). A rather smaller set of more detailed hardware and software configurations might be more appropriate, but the indications are that some people are wedded to their existing tools and might view this as an infringement of their academic freedom. Most of the students preferred to work at home when they could, but they did not have all the software we used, and it was rather expensive for them to buy it. One or two did their work using other tools that they already knew. Those who did not own a computer definitely felt disadvantaged.

There are a whole lot of questions about the choice of software to use in a course. In the ideal world a course might be built around a superior tool which illustrates very well the functionality of the application and offers the students plenty of opportunity for critical thinking and assessment of the methodologies. In the real world practical issues tend to come to the fore, especially if the computer lab is being used by other courses as well. I am required by Computing and Network Services to purchase a software licence for every workstation in the classroom, not for every student in the class, which may be a much smaller number. This is the only practical way in which a lab can operate. I managed to run both my courses in the same lab last year and thus saved the cost of twenty licences for two pieces of software.

I chose to use most of the components of Microsoft Office, mostly because this is what students are most likely to encounter outside the university, but also because it was already in the lab and I did not have to find money for it. Some of the students already knew a lot about some of the Office programs, because they had used them in jobs outside the university. Others owned computers that were too old to run Office 97 and thus could not easily transport data from the classroom to home. I also bought licences for Microsoft FrontPage and for a concordance program, and I have just bought another set of licences for a different concordance program for this year.

There is something to be said for providing every student with a CD-ROM of shareware tools, if suitable tools exist. They probably only exist in the Unix world, and I do not want to use Unix in my courses, as the learning time before you can do anything intellectually interesting is so long. The students also expect their instructor to know everything about every piece of software being used and can be somewhat surprised if the instructor cannot answer a "how do I do this?" question. Students are always very good at testing and experimenting with software, but my experience is that you need to use a piece of software in about six different courses before you feel really comfortable with it. These days it is very rare for software to last that long without upgrades and revisions. In general, more sharing of experiences in managing these issues would be very helpful.

I could continue with an endless catalogue of practical issues, but perhaps I should just note all the tasks needed for this term's course, over and above those for a course taught in a more traditional way: set up the course web site (I am now competent enough to do this quickly) and check its links periodically; exchange various e-mails with Computing and Network Services (CNS) about specific software requirements; purchase extra licences for the concordance program, with various e-mails to the author of the program about the online mechanism for paying for it; spend a morning in the lab with a CNS person doing the software installation (it was much better for me to be there for testing, as he did not know my software); spend more time in the lab to see how the machines have been configured and to check that the other software I plan to use is there and is the version I know; set up the course e-mail discussion list and change its configuration to a private list (this took some time, as I had to revisit the discussion list documentation, having only done this once before); organize accounts for the students on the humanities NT server; and set up empty FrontPage webs for the students on the humanities server. And we have yet to get into acquiring data from elsewhere and, no doubt, reformatting it so that it will work with our software.

However I do all of this because, as has no doubt become apparent from my remarks, I think that it is very important to have computers in the class. I find that real engagement with the machines encourages the intellectual curiosity and excitement as well as the critical thinking and analytic skills that are so important in the humanities. The doing makes the talking about it so much more interesting, relevant and useful. I hope that those of us who are the pioneers in developing these new areas of teaching and research can continue to share our experiences and that we can work towards some commonality, whilst still respecting the traditions and practices that have grown up in our different institutions and higher education systems. I feel that I have been privileged to be one of the pioneers in this new field and look forward to seeing it develop in the future. There is no doubt that we still have plenty to do.

References

Bessinger, Jess B. and Stephen M. Parrish. Eds. (1965). Literary Data Processing Conference Proceedings. White Plains: IBM.

Biber, Douglas. (1988). Variation Across Speech and Writing. Cambridge: Cambridge University Press.

Brown, Susan, et al. (1997). "SGML and the Orlando Project: Descriptive Markup for an Electronic History of Women's Writing." Computers and the Humanities, 31: 271-84.

Busa, Roberto, S.J. (1992). "Half a Century of Literary Computing: Towards a 'New' Philology." Literary and Linguistic Computing, 7: 69-73.

Busa, Roberto, S.J. (1998). "Concluding a Life's Safari From Punched Cards to World Wide Web." The Digital Demotic: Selected Papers From DRH97, Digital Resources for the Humanities Conference, St Anne's College, Oxford, September 1997. Eds Lou Burnard, Marilyn Deegan, and Harold Short. London: Office for Humanities Communication, 3-11.

Corns, Thomas N. (1991). "Computers in the Humanities: Methods and Applications in the Study of English Literature." Literary and Linguistic Computing, 6: 127-30.

Fish, Stanley. (1980). Is There a Text in This Class? The Authority of Interpretive Communities. Cambridge, Mass: Harvard University Press.

Fogel, Ephim G. (1962). "Electronic Computers and Elizabethan Texts." Studies in Bibliography, 15: 15-31.

Hockey, Susan. (1996). "Knowledge Representation." Research Agenda for Networked Cultural Heritage. Ed David Bearman. Santa Monica: Getty Art History Information Program, 31-34.

Kiernan, Kevin S. (1991). "Digital Image Processing and the Beowulf Manuscript." Literary and Linguistic Computing, 6: 20-27.

Leech, Geoffrey. (1991). "The State of the Art in Corpus Linguistics." English Corpus Linguistics: Studies in Honour of Jan Svartvik. Eds Karin Aijmer and Bengt Altenberg. London: Longman, 8-29.

Markman, Alan. (1964). "A Computer Concordance to Middle English Texts." Studies in Bibliography, 17: 55-75.

McEnery, Tony and Andrew Wilson. (1996). Corpus Linguistics. Edinburgh: Edinburgh University Press.

Murray, Janet H. (1997). Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: Free Press.

O'Donnell, James J. (1998). Avatars of the Word: From Papyrus to Cyberspace. Cambridge, Mass: Harvard University Press.

Parrish, Stephen M. (1962). "Problems in the Making of Computer Concordances." Studies in Bibliography, 15: 1-14.

Wisbey, R. (1963). "The Analysis of Middle High German Texts by Computer - Some Lexicographical Aspects." Transactions of the Philological Society, 28-48.