Okay, let's restart. We're learning Prolog today. Yay, I hear, yes. There's a good chance you'll hate it at first. It grows on you eventually. And if it doesn't, you have to do more of it until it does.

So we started out yesterday with an overview of what we would do. The general idea in the first semester is that we're going to use symbolic methods, which essentially means that we will have representations of the world, of the state of the world, which we can write down as symbols. Symbols are anything we can give a name, which makes them objects we can do things with: I can communicate them to you, put them into my pocket, store them somewhere, get them back. Okay? That's a very simple conceptualization of what goes on in the brain: anything out there in the world has a physical symbol, which we can connect to some kind of structure in our brain being in some kind of state. And the idea early in AI was that we would manipulate physical symbols in a clever way and reach AI that way. Those are the techniques we're going to look at in this semester.
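To make the idea of manipulating symbols a little more concrete, here is a minimal Prolog sketch; everything in it (block_a, on/2, above/2) is invented for illustration, not something we have defined in the course:

    % Hypothetical toy world: the atoms block_a, block_b, table are symbols
    % that name objects in the world.
    on(block_a, block_b).
    on(block_b, table).

    % Rules manipulate the symbols, not the objects themselves.
    above(X, Y) :- on(X, Y).
    above(X, Y) :- on(X, Z), above(Z, Y).

The query ?- above(block_a, table). succeeds purely by shuffling symbols around.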
In the next semester, we're going to do the same thing, only that the state the world is in might not be known, because our sensors are buggy, our actions don't work, there are things we can't observe, and many things are non-deterministic anyway. Okay? But still, essentially, we're doing state-based stuff. And then we'll use the techniques we develop next semester to go to what is often called sub-symbolic AI, where you give up the idea that you have these physical symbols you can manipulate; instead, the symbols are kind of smeared over neural structures. That's a different conception, more oriented to learning from brains, and it turns out it can do different things. There are some things that symbolic AI can do well, and other things that machine learning and neural structures can do well. And so you get to learn both. We do the easy stuff, or at least the stuff I consider easy, in this semester.
Yeah, so we're going to start out by giving ourselves a framework to work and to program in. Programming will be in Prolog. The conceptual framework will be intelligent agents. And then we're going to go through a series of algorithms that become increasingly more complex, and that actually get more complex as we add more structure to our world representations. That's going to be the recurring theme: the more we know about the world, the more guided our algorithms can be. And sometimes the price we have to pay for more complex systems actually pays off, because our algorithms become more guided. That's kind of what's been happening in symbolic AI: more and more and more knowledge about the world.
Okay. First step: we know nothing about the world, except of course that world states have that form: they have a name, essentially. We can recognize that we've been in a state already, sometimes.
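As a hedged illustration of states that are nothing but names, here is a tiny Prolog sketch; the state names s0, s1, s2 and the predicates successor/2 and seen_before/2 are invented for this example:

    % Hypothetical atomic states: each state is just an atom, a bare name.
    successor(s0, s1).
    successor(s0, s2).
    successor(s1, s2).

    % The only thing we can do with such states is compare names, e.g. to
    % check whether we have visited a state before (member/2 is from the
    % standard list library).
    seen_before(State, Visited) :- member(State, Visited).

For instance, ?- seen_before(s2, [s0, s1, s2]). succeeds, while ?- seen_before(s3, [s0, s1, s2]). fails.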
And then we use that paradigm for game playing. Then we add a little bit of structure to the world representations, which we call factored representations. That drives interesting new algorithms.
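To hint at what a little bit of structure might look like, here is a small, purely illustrative Prolog sketch of a factored state; the features and values (location, what is being held, battery level) are made up:

    % Hypothetical factored state: a term whose arguments are feature values,
    % state(Location, Holding, BatteryLevel), instead of a bare name.
    example_state(state(room1, nothing, low)).

    % Algorithms can now look inside the state rather than only at its name.
    needs_recharge(state(_, _, low)).

The query ?- example_state(S), needs_recharge(S). succeeds because the rule inspects the battery feature of the state.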
Then we add a lot of structure to our world representations, and we get into the realm of logics. Essentially, instead of doing computation at the level of world states, we do computation at the level of descriptions. We go one meta level up and gain efficiency that way.
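As a hedged illustration of computing with descriptions rather than with individual states, here is one more tiny Prolog sketch; safe/1 and contains_hazard/1 are invented names, reusing the made-up factored state from above:

    % A hypothetical hazardous situation, described once.
    contains_hazard(state(burning_room, _, _)).

    % One rule that covers every safe state at once, instead of listing each
    % safe state by name.
    safe(State) :- \+ contains_hazard(State).

Here ?- safe(state(room1, nothing, low)). succeeds and ?- safe(state(burning_room, nothing, low)). fails, without ever enumerating the individual states.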
And in the last part, in January or so, we're going to add time, because the real world has time, so we'd better do something about it. Good. We add uncertainty to it in the summer semester, which essentially means we throw away logic, which really only has true or false, and get real and say, well, we know this with 70% probability. Okay. We kind of smear the truth values into a whole interval, which means we have to look at everything again. And then we use those statistical techniques to do machine learning. Any questions? We have our toy. Okay.
We talked about strong AI versus narrow AI, just to give a set of categories to the discussion that is currently going on. And I gave you a couple of caveats about strong AI. What we will be doing is narrow AI: limited situations where we can actually do good stuff with computer science methods. Okay. And the last thing we talked about was what research in AI is like. And I've given