Welcome back to Beyond the Patterns.
It's my great pleasure to introduce today's speaker, Professor Xiaolin Huang, whose exceptional
career and groundbreaking research make him a true leader in the field of machine learning
and optimization.
Professor Huang began his academic journey at Xi'an Jiaotong University, where he earned
his bachelor's degree in 2006, followed by his PhD at the prestigious Tsinghua University
in Beijing.
His passion for advancing the frontiers of knowledge took him to ESAT-STADIUS at KU Leuven,
where he excelled as a postdoctoral researcher.
Soon after, he was honored with an Alexander von Humboldt Fellowship, which brought him
to our very own Pattern Recognition Lab at Friedrich-Alexander-Universität Erlangen-Nürnberg.
In 2016, Professor Huang joined Shanghai Jiao Tong University, where his role has grown steadily
in prominence, culminating in his appointment as full professor in 2023.
Today he serves as the Vice Dean of the Department of Automation, guiding the next generation
of scholars and engineers.
Professor Huang's research encompasses a diverse range of topics, with a primary focus
on generalization in machine learning, bridging theoretical insights with practical
applications.
His works have appeared in top-tier venues such as JMLR, IEEE Transactions on Pattern
Analysis and Machine Intelligence, NeurIPS, ICLR, CVPR, and IEEE Transactions on Medical
Imaging.
Notably, his survey on piecewise linear neural networks in Nature Reviews Methods Primers
underscores his ability to distill complex concepts into profound insights.
Beyond his scholarship, Professor Huang contributes actively to the community, serving as an Editor
for Machine Learning and as an Area Chair for leading conferences such as ICLR, CVPR, and
ICCV.
Today, Professor Huang will guide us through a fascinating exploration of generalization
in machine learning and the emerging concept of machine unlearning, a critical topic in
the age of AI, ethics, privacy and trust.
Dear Xiaolin, the stage is yours.
Okay, thanks a lot for the introduction.
Actually, I always say that I enjoyed the time I spent in Erlangen, and I have always wanted
a chance to go back.
But before I can go back physically, today I go back, so to speak, online to give this talk.
So I'm very glad to have this opportunity to introduce some recent ideas on machine learning
and machine unlearning.
Maybe it is not a finished story, but rather a starting point for understanding something.
So I hope that we can think about it together; maybe this is the main purpose
of this talk, and I hope we will have plenty of opportunity to discuss it.
So, machine learning and unlearning in the view of generalization: I will talk about
four parts.
First, why we should rethink the generalization capability of deep learning.
There are a lot of interesting results, and some puzzles that we cannot yet understand very well.
From those puzzles, we propose some new ideas.
First, the low dimensionality of training dynamics, for learning and also for unlearning,
where we have some works, and then the conclusion.
So I think we now have very powerful machine learning methods.
However, think about the basic setup: the way we communicate with the computer, with
AI, is the same as in previous times, in that we still first sample
some data from an unknown distribution.
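The setup the speaker refers to, sampling data from an unknown distribution and then fitting a model to it (empirical risk minimization), can be sketched as follows. The toy distribution, the linear model, and the squared loss here are purely illustrative assumptions, not anything specific from the talk.

```python
import numpy as np

# Classic supervised-learning setup: draw samples from an (unknown)
# distribution, then minimize the empirical risk on those samples.
rng = np.random.default_rng(0)

# "Unknown" distribution: x ~ N(0, 1), y = 2x + small noise.
x = rng.normal(size=200)
y = 2.0 * x + 0.1 * rng.normal(size=200)

# Empirical risk minimization for a linear model y ~ w * x with squared
# loss; in one dimension, least squares has this closed form.
w_hat = np.dot(x, y) / np.dot(x, x)
print(w_hat)  # close to the true slope 2.0
```

The point of the sketch is that the learner never sees the distribution itself, only the finite sample, which is exactly why generalization to new data becomes the central question.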
Presenters
Accessible via
Open Access
Duration
01:00:55 min
Recording date
2025-01-20
Uploaded on
2025-01-20 18:26:05
Language
en-US
It’s a great pleasure to welcome Prof. Dr. Xiaolin Huang back at the Pattern Recognition Lab!
Title: Machine Learning and Machine Unlearning in the View of Generalization
Abstract
Generalization is a critical challenge in machine learning, particularly for deep learning models, which often achieve high training accuracy but exhibit varying performance on new data. Our research focuses on improving the generalization capability of deep learning models and has revealed that training dynamics can be effectively captured within a low-dimensional space. This insight has led to advancements in training speed and generalization performance. Together with sharpness-aware minimization (SAM), another efficient method to enhance the generalization, we have successfully applied these approaches to training deep neural networks in industrial applications. Exploring generalization also contributes to the field of machine unlearning, an emerging and intriguing topic with both practical and theoretical implications.
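The abstract mentions sharpness-aware minimization (SAM) as one method for improving generalization. As a rough illustration of the general SAM idea, not of the speaker's specific implementation, here is a minimal NumPy sketch of the two-step SAM update on a toy quadratic loss; the loss function, learning rate, and perturbation radius `rho` are illustrative assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step:
    1) ascend to an approximate worst-case point within an L2 ball
       of radius rho around w,
    2) take an ordinary gradient step using the gradient evaluated
       at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed point
    return w - lr * g_sharp

# Toy loss L(w) = ||w||^2 / 2, whose gradient is simply w; minimum at 0.
grad_fn = lambda w: w
w = np.array([3.0, -4.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
print(np.linalg.norm(w))  # shrinks toward the minimum
```

By stepping against the gradient of the perturbed (worst-case) point rather than the current point, SAM biases the optimizer toward flat regions of the loss landscape, which is the mechanism generally associated with its improved generalization.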
Short Bio
Xiaolin Huang received his BS degree from Xi’an Jiaotong University, Xi’an, China, in 2006, and his PhD degree from Tsinghua University, Beijing, China. From 2012 to 2015, he worked as a postdoctoral researcher with ESAT-STADIUS, KU Leuven, Leuven, Belgium. After that, he was selected as an Alexander von Humboldt Fellow, working with the Pattern Recognition Lab at Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany. In 2016, he joined the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China, where he became a Full Professor in 2023.
His research focuses on machine learning and optimization, with a particular emphasis on generalization analysis through both practical and theoretical approaches. He has authored dozens of papers in top-tier journals and conferences, including JMLR, IEEE TPAMI, ACHA, NeurIPS, ICLR, CVPR, IEEE TMI, etc. Additionally, he has published a survey on piecewise linear neural networks in Nature Reviews Methods Primers. Currently, he serves as Vice Dean of the Department of Automation at Shanghai Jiao Tong University, an Editor for Machine Learning, and an Area Chair for prestigious conferences such as ICLR, CVPR, ICCV, etc.
Video released under CC BY 4.0.
For reminders when new videos are released, follow us on Twitter or LinkedIn. Also, join our network for information about talks, videos, and job offers in our Facebook and LinkedIn Groups.
Music Reference:
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)