Hello! In this video nugget, we will introduce the AI subarea of knowledge representation, which is concerned with representing knowledge, that is, things the agent has perceived and/or knows a priori about the world, and then working with that knowledge. In AI, this is typically thought of as a logic-based endeavor.
If we look at what other people think about knowledge, you very often hear about the knowledge ladder. One version of it, of which I have a picture, basically says that we have different levels of information. These range from glyphs, which are characters, little pictures, and so on; think of digits, commas, and the like. Then you go to the level of syntax: you add rules that allow you to combine all these little pieces into bigger things, for instance the number 0.95 in this case; these rules turn glyphs into data. If you add context to that, for instance an exchange rate definition that says something like one dollar is actually 0.95 euros, you get something they call information. Then, if you have networking, a network of other information, say you know about the market mechanisms that work with exchange rates and what it means to have one dollar, what you can buy for it, then you end up with knowledge. For the purposes of this course, we will take knowledge to be the information that is necessary to support intelligent reasoning, and that is exactly what the networking part, the last tier of the knowledge ladder, brings in.
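To make the ladder a bit more tangible, here is a small, hypothetical Python sketch of the dollar/euro example; the variable names and the toy "network" of facts are illustrative assumptions made for this sketch, not part of the lecture slide.

```python
# Glyphs: just characters, with no meaning attached yet.
glyphs = ["0", ".", "9", "5"]

# Syntax: rules that combine the glyphs into a well-formed piece of data.
data = float("".join(glyphs))           # 0.95

# Context: the datum is interpreted as an exchange rate, 1 USD = 0.95 EUR.
information = {"from": "USD", "to": "EUR", "rate": data}

# Networking: connect the information with other facts so we can reason with it.
knowledge_base = {
    ("USD", "EUR"): information["rate"],
    ("EUR", "USD"): 1 / information["rate"],
}

def convert(amount: float, source: str, target: str) -> float:
    """A tiny piece of 'intelligent reasoning' over the networked facts."""
    return amount * knowledge_base[(source, target)]

print(convert(20, "USD", "EUR"))        # 19.0
```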
If we think of what these tiers look like at the level of grammar: if we have, for instance, a set of words, then we can check whether a word is admissible. If we go to a more elaborate structure, like a list of words, we can rank a word, say as the fifth or the seventh. If we go from that to a lexicon, we can translate a word or determine its grammatical function, and if we have the structure of words, then we can actually get at the function of a word. It is the same idea of tiered levels of information that in the end lead up to knowledge.
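As a hedged illustration (the word list and the tiny lexicon below are invented for this sketch), the same tiers can be mirrored in simple Python data structures:

```python
# Set of words: only supports an admissibility check.
words = {"the", "cat", "sat", "on", "mat"}
print("cat" in words)                    # True: 'cat' is admissible

# List of words: additionally supports ranking (position in the list).
word_list = ["the", "cat", "sat", "on", "the", "mat"]
print(word_list.index("sat") + 1)        # 3: 'sat' is the third word

# Lexicon: maps a word to its grammatical function (and could hold translations).
lexicon = {"the": "article", "cat": "noun", "sat": "verb",
           "on": "preposition", "mat": "noun"}
print(lexicon["cat"])                    # 'noun'
```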
You can ask yourselves: what is knowledge representation, for instance with respect to data structures? A representation gives us, as we say, structure and function: the structure determines the content theory, that is, what is the data, and the function determines the process model, that is, what can we do with the data. So again we have these two aspects: what is the data, which is one of the lower levels, and what can we do with it. We use the term knowledge representation rather than, say, data structures, where we have the sets and lists and so on from above, because data structures only give us so much. And we call it knowledge representation rather than information representation, even though it is information, because it builds a separate layer on top of that; there is no really good reason for this other than that this is what AI does. The intuition here is that data is generally simple, if you think of lists and grammars and all those kinds of things, so it supports many algorithms, whereas knowledge really is complex, and so it has a distinguished process model that it adheres to.
There are a couple of paradigms for representing knowledge in AI, and it is always a nice contrast to look at them in natural language processing, which is a part of AI, or at least used to be. First, there is good old-fashioned AI, which is what we are doing, the symbolic AI that we are talking about this semester. Here we have a symbolic knowledge representation, and the process model is based on heuristic search; remember, in everything we have done so far, we have knowledge with which we represent states and then search for certain goal states (see the small sketch after this overview). Second, there are statistical and corpus-based approaches, where we still have a symbolic representation, but the process model is based on machine learning: the knowledge is divided into a symbolic part, while the search knowledge is actually statistical and learned from data. We are going to see some of that in the next semester. And finally, there is what is called the connectionist approach. This is a sub-symbolic representation; think of neural networks, where we cannot really localize a concept like that of a chair. The process model is again based on primitive processing elements, the neurons and the links between them.
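To connect the symbolic paradigm back to what we have done so far, here is a minimal sketch; the route-finding domain and all names are assumptions made for this example, and for simplicity it uses plain breadth-first search rather than a heuristic search, but the division of labor is the same: symbolic knowledge about states and transitions, plus a search-based process model that looks for a goal state.

```python
from collections import deque

# Symbolic knowledge: states (city names) and transitions between them.
connections = {
    "Erlangen": ["Nuremberg", "Bamberg"],
    "Nuremberg": ["Munich", "Erlangen"],
    "Bamberg": ["Erlangen"],
    "Munich": ["Nuremberg"],
}

def find_route(start: str, goal: str):
    """Process model: search the symbolic state space for a goal state."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in connections.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(find_route("Bamberg", "Munich"))   # ['Bamberg', 'Erlangen', 'Nuremberg', 'Munich']
```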