2 - FAU MoD Course (2/5): Frequency Principle/Spectral Bias [ID:57604]

Thanks a lot for coming. Today I'm going to talk about the first phenomenon we actually observed, back in early 2018, which we call the frequency principle. The whole purpose of this talk is to give you some sense that deep learning is not a complete black box at this stage. I also have some questions, some problems: if you can really answer them just from these understandings, you will have the feeling that, okay, now I can predict certain aspects of the deep learning algorithm. In particular, from these phenomena alone you already have a very rich idea about how it works.

First of all, let's go back to my first lecture: why don't we really have a nice framework to help us understand deep learning? It is because we may not have accumulated enough of these pieces, and therefore there is no hope of seeing the whole. In particular, at the time of 2018 we understood much less. We collected pieces, just like these ones; we had a little hint about something underlying, but there was no piece that really pointed towards how that thing exactly looks. We did not yet have these key pieces. However, through all these years, seven years of effort, we have uncovered much more than we had at that time, and in particular there are two pieces that are really important. One is the frequency principle, which I will tell you about this time, and the next one is condensation. I will give three lectures on condensation, because it is such a rich phenomenon that it not only shows one piece but tells us there are a lot of neighboring pieces, and there are lots of different pieces we can probably still uncover. You see, there are lots of theorems about deep neural networks, and each may capture some aspect of the problem, but what kind of piece should we search for that could be more informative?
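As a concrete illustration of the frequency principle, here is a minimal sketch, assuming PyTorch; the target function, network width, and optimizer settings are illustrative choices, not the lecture's own experiment. A small network is trained on a 1D target with one low and one high frequency, and the Fourier modes of the training residual show the low frequency being fitted first.

# Minimal sketch of the frequency principle (spectral bias), assuming PyTorch.
# Target y = sin(x) + 0.5*sin(5x): the network fits the sin(x) component
# (Fourier mode 1) long before the sin(5x) component (mode 5).
import torch
import torch.nn as nn

torch.manual_seed(0)
# 256 uniform samples over one period [-pi, pi), so FFT mode k matches sin(kx).
x = (torch.arange(256) / 256 * 2 * torch.pi - torch.pi).unsqueeze(1)
y = torch.sin(x) + 0.5 * torch.sin(5 * x)

model = nn.Sequential(nn.Linear(1, 200), nn.Tanh(), nn.Linear(200, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def mode_errors(pred):
    # Magnitude of the training residual at Fourier modes 1 and 5.
    spec = torch.fft.rfft((pred - y).squeeze(1)).abs()
    return spec[1].item(), spec[5].item()

for step in range(5001):
    opt.zero_grad()
    pred = model(x)
    ((pred - y) ** 2).mean().backward()
    opt.step()
    if step % 1000 == 0:
        lo, hi = mode_errors(pred.detach())
        print(f"step {step:5d}   mode-1 residual {lo:8.3f}   mode-5 residual {hi:8.3f}")

On a typical run, the mode-1 residual collapses within the first thousand or so steps while the mode-5 residual lingers much longer; that ordering, low frequencies first, is the qualitative signature of the phenomenon.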

I don't know how many of you know Edmund Husserl. Raise your hand, how many of you? I think he is a very famous German philosopher, and he opened up a new direction in philosophy called phenomenology. There is a famous quote of his: natural objects, for example, must be experienced before any theorizing about them can occur. He emphasized the fact, and this applies particularly to the field of physics, that we first observe the phenomena, and then these phenomena guide us towards something deep underlying them. However, from the time of Einstein, for example, there is probably a different approach: you theorize that something should be, and then you derive the consequences. But for a very long time, we first observed lots of phenomena, so we had a strong feeling that there are laws underlying all these phenomena we observe, and then we find ways, or find some theoretical basis, for that.

Why do I particularly try to emphasize this? Because there are lots of things you can do about deep learning. It is such a complicated system that you can try to analyze it, or look at it, from all kinds of different angles: from a mathematical viewpoint, an optimal transport viewpoint, a physics viewpoint, or even a psychology viewpoint. Each of these views may give you a little bit of the thing, just like blind men trying to understand an elephant. However, I have always said it is a danger to impose a certain framework on a new object: you keep trying to carry over tools that are effective for certain problems and impose them on the new object. Husserl suggests, or actually he warns against, this inversion of the process, where we first have a theoretical framework, we start from some theory, and then we try to say, okay, probably this theory can help us understand this picture. This inversion of the process, these theories, can actually eclipse, misshape, or entirely ignore the vital qualities we encounter in direct perception.

That's why, for me, it is kind of easy to judge that many theories are actually irrelevant, in the sense that, because I do lots of experiments, I observe lots of things. Initially I don't really understand them, but I observe enough of them that I have an instinct, a kind of intuition, about what kind of theory really explains the things I observe, instead of looking deep into all these proofs to say whether they are technically correct. I'm not doing that. I'm

Accessible via: Open Access
Duration: 01:43:23 min
Recording date: 2025-05-05
Uploaded on: 2025-05-06 07:37:48
Language: en-US

Date: Fri. – Thu. May 2 – 8, 2025
FAU MoD Course: Towards a Mathematical Foundation of Deep Learning: From Phenomena to Theory
Session 2: Frequency Principle/Spectral Bias
Speaker: Prof. Dr. Yaoyu Zhang
Affiliation: Institute of Natural Sciences & School of Mathematical Sciences, Shanghai Jiao Tong University
Organizer: FAU MoD, Research Center for Mathematics of Data at FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
Overall, this course serves as a gateway to the vibrant field of deep learning theory, inspiring participants to contribute fresh perspectives to its advancement and application.
Session Titles:
1. Mysteries of Deep Learning
2. Frequency Principle/Spectral Bias
3. Condensation Phenomenon
4. From Condensation to Loss Landscape Analysis
5. From Condensation to Generalization Theory
 

Tags: FAU, FAU MoD, FAU MoD Course, deep learning, Bias, Frequency, artificial intelligence