Thanks for tuning in again and welcome to deep learning. In this short video we will
look at the organizational matters and conclude the introduction. The module that you
can obtain here at FAU comprises five ECTS in total, covering the lecture plus the
exercises. So it is not sufficient just
to watch all of these videos; you also have to pass the exercises. In the exercises you will
implement everything that we are talking about here in Python, and we will start from scratch.
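To give a flavour of what "from scratch" means here, a Rosenblatt-style perceptron in plain NumPy might be sketched as follows (an illustrative sketch, not the course's reference solution):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Perceptron learning rule; inputs X of shape (n, d), labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # update weights only when the sample is misclassified
            if yi * (xi @ w + b) <= 0:
                w += yi * xi
                b += yi
    return w, b

# linearly separable toy problem (logical AND with +/-1 labels)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
pred = np.sign(X @ w + b)  # perfectly separates the training data
```

The perceptron convergence theorem guarantees that this update loop terminates on linearly separable data; its limitations on non-separable problems are exactly what the later lectures address.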
You will implement perceptrons and neural networks up to deep learning, and at the very end we
will even move towards GPU implementations and large deep learning frameworks. So
this is a mandatory part; it is not sufficient to only pass the oral exam. The content of
the exercises is Python. We will give an introduction to Python in case you have never used it,
because Python is one of the main languages that deep learning implementations use today, and you
will really develop a neural network from scratch. There will be feed-forward neural
networks and convolutional neural networks. You will look into regularization
techniques, i.e., how you can constrain the weights such that they have specific properties,
and you will see how to combat overfitting with certain regularization techniques. We will
of course also implement recurrent networks. Later we will use the PyTorch framework and
apply it to large-scale classification. For the exercises you should bring basic knowledge
of Python and NumPy, and you should know linear algebra, such as matrix multiplication.
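As a quick self-check of this linear algebra prerequisite: the forward pass of a fully connected layer is nothing more than a matrix multiplication plus a bias, for example in NumPy (shapes chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(4, 3))  # a batch of 4 inputs with 3 features each
W = rng.normal(size=(3, 5))  # weight matrix mapping 3 features to 5 outputs
b = np.zeros(5)              # one bias per output

out = X @ W + b              # shape (4, 5): one 5-dimensional output per input
```

If the shapes and the broadcasting of `b` over the batch dimension look familiar, you are well prepared for the exercises.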
Experience with image processing is a definite plus: you should know how to work with images.
Of course, the requirements for this class are the fundamentals of pattern recognition,
ideally from having attended our other pattern recognition lectures. You should also bring a passion for coding. You
have to code quite a bit, but you can also learn it during the exercises. If you have
not done a lot of programming before this class, you will spend a lot of time with the
exercises. But if you complete them, you will be able to implement things in deep
learning frameworks, and this is very good training: you will not just be able to download
code from GitHub and run it on your own data, but will also understand the inner workings
of the networks, how to write your own layers, and how to extend deep learning algorithms
on a very low level. So pay attention to detail, and if you are not
used to programming, it will cost a bit of time. There will be five exercises
throughout the semester, and there are unit tests for all but the last exercise; these unit
tests should help you with your implementations. The last exercise will be a PyTorch
implementation in which you face a challenge: you have to solve an image recognition
task in order to pass the exercise. Exercise deadlines are announced in the respective
exercise sessions, for which you have to register in StudOn. What we have seen
in the lecture so far is that deep learning is more and more present in daily life,
so it is not just a technique used in research; we have seen it emerge into
many different applications, from speech recognition to image processing and so on. Autonomous
driving is something that is probably going to happen very soon, perhaps toward
the end of this year, but I would be shocked if it is not next year at the
latest that having a human intervene will decrease safety. And it is a
very active area of research: if you are taking this lecture, you will have very good preparation
for a research project with our lab, with industry, or with other partners. We also looked into
the perceptron as a binary classifier and its relation to biological neurons. And
it is all going to happen: we are going to reach human-level intelligence, or call it
what you will, artificial general intelligence, at some point, and that is certainly
going to change our place in the food chain, because a lot of the tedious things that we
do now, we will have machines do. So next time on deep learning we will actually
start with the next lecture block, which means we will extend the perceptron to a universal
function approximator. We will look into gradient-based training algorithms for these models,
and then we will also look into the efficient computation of gradients. Now if you want to prepare for
Access: Open Access
Duration: 00:06:29 min
Recording date: 2020-04-14
Uploaded: 2020-04-14 11:26:02
Language: en-US
Deep Learning - Introduction Part 5: This video introduces the topic of deep learning, presents the course's requirements and grading procedures, and summarises the first unit. Video references: Lex Fridman's channel.
Further Reading: A gentle Introduction to Deep Learning