The following content has been provided by the University of Erlangen-Nürnberg.
Okay, welcome to the second week of AI.
The course is still bigger than last year.
People are still signing up on StudOn. I think we're somewhere near 165.
Which is good. The more, the merrier.
There might be slight delays in grading. After all, your TAs are humans as well.
If you know somebody from last year's course who could be persuaded to help with TAing, we have money and we can hire them.
That would help.
So if there are delays in grading, you only have yourselves to blame. That's easy.
Good.
As always, I'm starting off with a review. We started off with an overview of the course.
We looked at the components of AI, what you at least need to build AI, from a phenomenological point of view.
I tried to convince you that we needed learning. Certainly, AI has to adapt to changing environments.
It's also much nicer to have a system that is kind of born as a baby and then learns from there.
We need to be able to draw inferences, make new knowledge from old knowledge.
We need perception. AIs need to be embodied in the world,
which means they need to perceive the world and be able to act on it.
Language plays a big role and emotion. We discussed that in a little bit of detail.
Feel free to ask questions, especially during reviews. That's what they're for.
I tried to convince you that AI is here today. We have lots and lots of applications that have AI in them.
Many of the techniques behind our advanced programs were actually pioneered in AI and then trickled down to computer science,
where, of course, they were developed further and the engineering was done properly.
But at their core, many of those things are AI techniques. That's one of the things AI does for computer science.
Since AI is kind of the science, or the art, of the too-difficult questions, things we can't really answer right now,
many interesting things are invented in AI, because we often need something completely new just to address the problems.
That actually drives forward CS as a whole research area. Applications in medicine, in households, in healthcare, in security,
you name it: self-driving cars, personal digital assistants, and so on.
Yes, and of course we're seeing advances in AI. In the last two or three years, machine learning has progressed by leaps and bounds.
We've seen a program called AlphaGo beat a Go grandmaster; Go was kind of the last board game holding out against AI dominance.
It uses a combination of methods. I'll talk about that in a while.
The last thing we looked at was that there are two general ways of doing AI. One is where you take very restricted situations
and try to do deep analysis, like humans can do it. The other is to use statistical methods to build very wide-coverage solutions.
The current state of the art is that if you have deep analysis you have to restrict yourself to narrow situations where you have a limited vocabulary, a limited set of concepts.
But where you then can do deep analysis. Or if you insist on having wide coverage, which means a huge vocabulary and huge sets of concepts, then you can only do shallow analysis.
Of course, that's not what we want. What we want is to reach humans. We're not there yet. I'm counting on you.
My private belief is that we will need some cooperation between shallow, data-driven statistical methods, which is something we're going to look at next semester,
and deep, narrow, knowledge-based methods, symbolic AI, which is what we're going to do this semester.
There is research here, and there are many people who try to understand this at the framework level; much of it I consider underwhelming, which means I don't think very highly of it.
But then, I can't do it better either. There's one line of work where I have the feeling that it could be it.
Unfortunately, I don't understand the mathematics. But I know the people who are actually working on this quite well.
So that's fascinating. But it's probably ten years out before anything very useful, or useful at all, will come out of that.
So this is an example where this cooperation works very well. And it is indeed AlphaGo.
So AlphaGo uses a symbolic technique, Monte Carlo tree search, which is something we will learn in a couple of weeks and discuss,
which is kind of a symbolic framework where we think of game playing as a special case of search.
And the problem is that search is an exponential problem; all algorithms we know are exponential,
which means you have to look through huge spaces of possible solutions. You can still do that if you have a very good sense of smell:
at each intersection you know, oh, I'm going to take the middle one, because it smells the best.
If you have the intuitions of where to go, this is easy. And what they were doing is using neural networks, learning methods we'll cover next semester, to supply exactly those intuitions.
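To make the "sense of smell" point concrete, here is a toy sketch (my own illustration with made-up numbers, not AlphaGo's actual figures or algorithm) of why exhaustive game-tree search explodes, and how a heuristic that keeps only the few most promising moves per position tames it:

```python
def exhaustive_nodes(branching: int, depth: int) -> int:
    # Exhaustive search visits every position down to the given depth:
    # b + b^2 + ... + b^d nodes, i.e. exponential in the lookahead depth.
    return sum(branching ** d for d in range(1, depth + 1))

def guided_nodes(branching: int, depth: int, kept: int) -> int:
    # A heuristic (the "sense of smell") that keeps only the `kept` most
    # promising moves per position shrinks the effective branching factor.
    k = min(kept, branching)
    return sum(k ** d for d in range(1, depth + 1))

if __name__ == "__main__":
    b, d = 10, 6                      # modest branching factor and lookahead
    print(exhaustive_nodes(b, d))     # 1111110 positions to examine
    print(guided_nodes(b, d, 2))      # 126 positions if we keep the best 2 moves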
Accessible via: Open access
Duration: 01:23:18 min
Recording date: 2018-10-24
Uploaded on: 2018-10-24 16:12:29
Language: en-US