Christof Schöch – Linked Open Data for Literary History

Thanks a lot.

Thanks for the invitation and the occasion to be here.

It's really wonderful to witness the start of digital humanities in earnest at FAU.

Very happy to be here.

And, yes, I will talk about linked open data for literary history and how linked open data

may help us think differently about literary history, but also about how wanting to do

literary history differently helps us think about linked open data as well.

You can follow along with the slides if you'd like using the link shown here and at the

bottom of the slide.

So this is what I would like to cover.

First of all, I will talk a little bit about text and data mining or machine learning,

linked open data, and literary history as separate bits and pieces.

And what I then will try to show is how we brought these three pieces together in a project

that ran until last year: Mining and Modeling Text.

And in "Mining and Modeling", you already have both text and data mining and linked open data

modeling in there.

I will just show one example of what kinds of questions you can address with this kind

of approach. But there's a huge variety of data in the resulting database.

So let's get right into it.

Text and data mining plus LOD plus literary history equals what?

But first of all, a little bit of background.

I think that you could say that there are three modes of data and digital humanities.

Qualitative digital humanities, where data sets are typically small, carefully curated,

from very specific domains, often heavily annotated.

And this could be called smart data, because there are many intelligent, nuanced ways that

we can interact with them.

And the prototype of this kind of data is digital scholarly editions, especially genetic

editions or critical editions, where every comma is marked up in every different variant.

And so the prototypical example for me is the Faust edition that really pushed us to

the limits, I think.

And then there's quantitative digital humanities, where data sets are typically large, more

or less annotated or only simply annotated.

They may have errors, they may have all kinds of biases, but they're big and generic, so

they have their uses.

And a typical way of doing this kind of digital humanities would be, for example, doing a

topic modeling on Project Gutenberg, the whole thing, with all its strange biases and lacking

metadata and you know.

But I think there's a third way for digital humanities, which is bigger, smarter data

in the humanities.

And I don't mean this as a compromise, you know, a little bit bigger, but not quite as

well annotated.

I really think we can bring together scale and nuance or detail.

And one approach to doing that is to use text mining and machine learning for annotation and

information extraction, and to model everything in linked open data, so that you really have

richly contextualized knowledge when you perform your analysis.

And one attempt to do this is mining and modeling text.

So, a little bit of background.

What is text and data mining?

What is machine learning?

Basically what we're trying to do with machine learning is to discover relations between

Accessible via: Open Access

Duration: 00:30:55 min

Recording date: 2024-11-22

Uploaded on: 2025-05-12 09:06:03

Language: en-US

Presented on November 22, 2024 during the Digital Humanities Training Day at FAU Erlangen-Nürnberg. Prof. Dr. Christof Schöch, Professor of Digital Humanities at the University of Trier, discusses the use of Linked Open Data in literary history research.
