In this video nugget, we are going to lay the foundations for logic-based knowledge
representation.
Remember that in our introduction, we had two kinds of intuition-building examples.
One of them was semantic networks, which capitalized on the network-like structure of knowledge.
We now want to actually do it in a logic-based way.
Remember that in semantic networks, we had lamented the fact that all these relations
mean something to the human reader, but not to the machine.
We want to follow up on this idea that if we actually implement things in logic, then
we can use deduction as an inference procedure.
Maybe we can even optimize the logic and the calculus to do this.
Indeed, that is what has been done for knowledge representation on the Semantic Web, where
we basically treat the whole web as one huge ABox and need highly optimized reasoning
procedures to handle it.
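As a minimal sketch of what "deduction as an inference procedure" means (not from the lecture; the facts and rules are hypothetical toy examples), one can implement naive forward chaining over propositional Horn rules:

```python
# Hypothetical sketch: deduction as an inference procedure, implemented
# as naive forward chaining over propositional Horn rules.

def forward_chain(facts, rules):
    """Derive everything entailed by rules of the form
    (set_of_premises, conclusion), iterating to a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Toy knowledge base (hypothetical example facts).
rules = [({"penguin"}, "bird"), ({"bird"}, "has_wings")]
print(forward_chain({"penguin"}, rules))
# Derives: penguin, bird, has_wings
```

Note that this procedure is also monotonic in the sense discussed below: adding more facts or rules can only grow the derived set, never shrink it.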
The nice thing about logic is that we have a well-defined semantics, meaning we have
a specification in terms of a class of models that makes knowledge explicit.
This gives more transparency and understanding than statistical or neural-network methods,
which do not make their knowledge explicit in this way.
So the symbolic methods we are employing are monotonic, in the sense that if something
has been established as true, then it will remain true.
And they behave systematically:
We can prove theorems about our systems.
We know, for instance, that if we have proven soundness and completeness of a calculus,
then we can derive all statements that are true in a certain class of worlds with certain
properties, and every statement we derive will turn out to be true in those worlds.
Of course, there are problems.
Of course, there's the problem of where the world knowledge comes from.
That's a largely unsolved problem because there is a huge amount of knowledge, and most
of the things we've tried so far have been largely unsuccessful.
There was a project called Cyc in the 1980s, in which Doug Lenat said: well, if you give
me a largish couple of million dollars, I will run a project that writes down all of the
estimated one million frames of knowledge known by an average American adult.
They are actually still working on Cyc, but they did not succeed in that original plan.
The large couple of million dollars were not enough, and they are now basically
selling ontologies.
They are doing knowledge representation, but just getting together a lot of PhDs and
writing down the knowledge of just one American has proven to be too big an effort.
There are other efforts.
For instance, the DBpedia project I already mentioned said: oh, we will just screen-scrape
Wikipedia.
That contains much of the knowledge of the world, and they have done nice things with it.
There is of course also the problem that if we want to do inference by essentially a search
process, then we have to know how to guide the search.
If we do not have good ways of guiding the search induced by logical calculi, then we
run into the problem of combinatorial explosion, which we have already seen in our logic
chapters.
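To make the combinatorial explosion concrete (a hypothetical sketch, not from the lecture): even the most naive inference method for propositional logic, enumerating all truth assignments, must inspect 2^n cases for n variables.

```python
# Hypothetical sketch: unguided search over n propositional variables
# enumerates all 2**n truth assignments -- combinatorial explosion.
from itertools import product

def count_models(n_vars, formula):
    """Count satisfying assignments by brute force over all 2**n cases.
    `formula` is a predicate on a tuple of n_vars booleans."""
    return sum(1 for assignment in product([False, True], repeat=n_vars)
               if formula(assignment))

# Example: the formula "at least one variable is true" over 10 variables.
print(count_models(10, lambda a: any(a)))  # 1023 of 1024 assignments
```

Each additional variable doubles the work, which is exactly why calculi need good search guidance rather than blind enumeration.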
If we balance all of these considerations and optimize the logic accordingly, we arrive
at something that we now call description logic, which we will now take a closer look at.
Presenters:
Accessible via: Open access
Duration: 00:24:24 min
Recording date: 2021-01-02
Uploaded on: 2021-01-02 14:59:37
Language: en-US
Explanation of how propositional logic can be used to describe sets using concept axioms. Translation examples are also given.