The following content has been provided by the University of Erlangen-Nürnberg.
Welcome to AI in 2019. I wish you a very happy and successful New Year,
which of course, as I'm sure you're aware, includes the AI-1 exam. We will give you a mock exam this week,
I hope in the next few days, just so that you can see what the exam will be like. That is completely
optional. You can solve it; there will be a master solution, so you can see how you're doing.
Or you can just glance at the master solution, which is not a good idea, but we're not going to
enforce any of that. There will not be points for it, because we are going to give the master
solution out along with the exam. Any questions about that? Okay. Any other questions?
What course am I in right now, or something like that? It's been a break, so let me
briefly situate where we are at the moment. We're doing Artificial Intelligence 1,
which means symbolic artificial intelligence. Remember, we had these two ways of looking at
artificial intelligence. One is where we represent objects or world states and all of those
kinds of things, and then use techniques that think about changing world states and inferring
knowledge and all of those kinds of things to solve the problems. There is a different kind of
artificial intelligence, which is sometimes called subsymbolic artificial intelligence,
or neural-inspired artificial intelligence; when the press today says, oh, AI is great,
that is usually what they mean. Machine learning is essentially: throw a huge amount of
data at the problem, throw in a bunch of well-connected neurons, and see what happens.
The interesting bit is that often good things happen. I'm convinced
if we want to do AI, we need both. We need learning-like things and we need symbolic methods.
One of the places where we've seen this succeed is AlphaGo or AlphaZero, where we're combining
a search method, Monte Carlo Tree Search, which we looked at, with learning the right heuristics,
learning the right evaluations, learning when to sample a little bit more information or when to
actually act. It's just one of the successful combinations and I think we're going to see more
of those. It's important that you get a feeling for both of these, which is why we're going to do
all the statistics stuff, which then feeds into machine learning next semester, because
those are the techniques there. This semester, we're doing good old-fashioned AI,
that is, symbolic AI.
To localize a little bit further, we have this notion of an agent, which is a big box.
We have sensor data coming in and actions going out, and in between, something happens;
we have looked at different models of what might actually happen there.
We started out with simple perception-action rules:
if we see this, then we do that. If this is too hot, I pull back my arm.
If I'm feeling hungry, I eat. That is a very simple kind of agent, and for some things, like keeping
the temperature in this room constant, it is all we need. Is this intelligence? Debatable.
We very quickly graduated to a new kind of agent, which had a very important component,
which we can call the state: some kind of model of the world,
which we maintain and use to determine the next best action.
That is the central innovation of a fruit fly over a bacterium: you need a world model, a state.
Everything we've done since then has essentially been with this architecture.
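Not from the lecture itself, but a minimal Python sketch contrasting the two agent designs just described: a simple reflex agent that maps percepts directly to actions via condition-action rules, and a model-based agent that maintains an internal state. The rule conditions, the `temperature` field, and the threshold of 25 are illustrative assumptions.

```python
def reflex_agent(percept):
    """Condition-action rules only: no memory of the past."""
    if percept == "too hot":
        return "pull back arm"
    if percept == "hungry":
        return "eat"
    return "wait"


class ModelBasedAgent:
    """Maintains an internal world model (state) updated from percepts."""

    def __init__(self):
        # The world model: here just a temperature reading (assumed example).
        self.state = {"temperature": 20}

    def act(self, percept):
        self.state.update(percept)  # update the model from sensor data
        # The next action depends on the model, not only the latest percept.
        if self.state["temperature"] > 25:
            return "turn on cooling"
        return "do nothing"
```

The point of the second class is only that actions are chosen against the maintained `state`, so past percepts can influence future behavior even when the current percept alone would not decide the action.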
We have seen three kinds of world models. We had
atomic world models, where we only know that this state follows on that state
and we cannot look inside them. That essentially leads you to search: all kinds of search,
informed and uninformed search strategies. We've seen that even though what we're doing
looks good, once you try it in practice, the exponential growth of search trees comes and hits
you very hard. I had this URL: who watched these search strategies actually go around in mazes?
For the rest of you: do it. It gives you a feeling for what's happening.
Everything is wonderful up to depth four or five, and then you kind of grow a beard before it comes to an end.
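The exponential growth just mentioned can be made concrete with a little arithmetic (the branching factor of 10 is an assumed example, not a number from the lecture): a search tree with branching factor b has b**d nodes at depth d, so node counts explode even for modest depths.

```python
def nodes_at_depth(b, d):
    """Number of nodes at exactly depth d in a tree with branching factor b."""
    return b ** d


def total_nodes(b, d):
    """Total nodes in the tree up to depth d: 1 + b + b^2 + ... + b^d."""
    return sum(b ** i for i in range(d + 1))
```

With b = 10, depth 6 already means a million nodes at that level alone, which is why uninformed search visibly stalls after the first few levels.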
Presenters
Accessible via
Open access
Duration
01:24:34 min
Recording date
2019-01-09
Uploaded on
2019-01-09 16:17:17
Language
en-US