5 - Deep Learning - Plain Version 2020 [ID:20840]

Thanks for tuning in again and welcome back to Deep Learning. In this video we want to talk about

the organizational matters, if you want to obtain the certificate, and we want to conclude the

introduction. So looking forward to a couple of exciting minutes on the topic of Deep Learning.

So let's have a look at the organizational matters. Now the module consists of five ECTS.

This is the lecture plus exercises. So it's not just sufficient to watch all of those videos.

You have to pass the exercises. In the exercises you will implement everything that we are talking

about here also in Python. We will start from scratch. So we will need to implement perceptrons,

neural networks, back propagation up to Deep Learning. In the very end we will even move

ahead towards GPUs and also large Deep Learning frameworks. So this is a mandatory part and it's

not just sufficient to pass the written exam. The content of the exercise is Python. You will do

an introduction to Python if you have never used it because Python is one of the main languages

that are used in Deep Learning implementations today. You will really develop a neural network

from scratch. There will be feed-forward neural networks. There will be convolutional neural

networks. You will look into regularization techniques, how you can adjust the weights

such that they have specific properties, and you will see how you can prevent overfitting with those

regularization techniques. Of course you will also implement recurrent neural networks.
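To give a flavour of what "from scratch" means here, the following is a minimal sketch of a Rosenblatt perceptron trained on the logical AND function in plain Python. This is an illustrative toy example with made-up names, not the course's actual exercise code.

```python
# Minimal Rosenblatt perceptron trained on logical AND (a toy sketch,
# not the official exercise code).

def perceptron_train(samples, labels, epochs=20, lr=1.0):
    """Learn weights w and bias b with the classic perceptron update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, labels):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Update only on misclassification: w <- w + lr * (t - y) * x
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]  # logical AND is linearly separable
w, b = perceptron_train(X, T)
```

Note that the update rule only fires on a misclassification, exactly as in Rosenblatt's original algorithm; because AND is linearly separable, the loop converges to a correct decision boundary.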

Later we use the PyTorch framework and also go for large-scale classification. So for the exercises

you should bring basic knowledge of Python and NumPy. You should know about linear algebra; this is
very important. Image processing is a definite plus. You should know how to process

images and of course requirements for this class are actually pattern recognition fundamentals and

you should have attended other lectures of our lab already. If you haven't you might want to consult

additional references to follow this class. We are also providing the recordings of the other classes

starting from this semester by the way. So you should bring passion for coding and you will have

to code quite a bit but you can also learn quite a bit about coding in Python during the exercises.

If you haven't done a lot of programming before this class you might need quite a bit of time in

the exercises but if you complete the exercises you will also be able to implement things in deep

learning frameworks and I think this is a very good training. So it's not just theory but you will

also do all these things practically. After this course you will not just download code from GitHub
and run it on your own data, but you will also understand the inner workings of networks,

how to write your own layers and how to extend deep learning algorithms also on a very low level.
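As an example of what writing your own layers involves, here is a sketch of a fully-connected layer with explicit forward and backward passes in NumPy. The class and attribute names are hypothetical illustrations, not the official exercise interface.

```python
import numpy as np

# A hand-written fully-connected layer with explicit forward and backward
# passes -- the kind of component you implement yourself in the exercises.

class FullyConnected:
    def __init__(self, in_features, out_features, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(0.0, 0.1, size=(in_features, out_features))
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x                       # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Gradients w.r.t. the parameters (consumed by an optimizer) ...
        self.grad_W = self.x.T @ grad_out
        self.grad_b = grad_out.sum(axis=0)
        # ... and w.r.t. the input, passed on to the previous layer.
        return grad_out @ self.W.T
```

The backward pass returns the gradient with respect to the layer's input so that layers can be chained into a network, while the cached parameter gradients are left for an optimizer to apply.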

So pay attention to detail and if you're not well used to programming it will cost some additional

time. There will be five exercises throughout the semester. The unit tests for all the exercises

except the last one. In the last exercise there will be a PyTorch implementation and you will be

facing a challenge. You have to solve image recognition tasks in order to pass the exercise.

Deadlines are announced on the respective exercise sessions but you will have to register for them

on our online platform StudOn. In order to participate in the exercises you have to be an

enrolled student of the Friedrich-Alexander-Universität Erlangen-Nürnberg. So this will not be accessible to

everybody who is watching these videos from other sources. We're sorry about that, but we

provide some open source training assignments and I will put the link into the description of this video.

So what we've seen in the lecture so far is that deep learning is more and more present in daily

life. It's not just a technique that's done in research. We've seen this emerging really into

many many different applications ranging from speech recognition, image processing and so on

up to autonomous driving. It's a very active area of research. If you're doing this lecture

you have a very good preparation for research projects with our lab but also for industry

and other partners. So typically students who graduated from this class are very popular with

industry partners, with research partners and so on. So if you manage to pass this class you will

also see that the name deep learning will appear in your transcript of records and you will see

that there is considerable interest in your skills. Okay so far we looked into the perceptron and its

relation to biological neurons. Next time on deep learning we will actually start with the next

lecture block which means that we will extend the perceptron to a universal function approximator.

Part of a video series
Accessible via: Open access
Duration: 00:10:17 min
Recording date: 2020-10-04
Uploaded on: 2020-10-04 16:16:18
Language: en-US

Deep Learning - Introduction Part 5. This video introduces the topic of Deep Learning and presents the course's requirements, grading procedures, and summarises the first unit. For reminders to watch new videos, follow on Twitter or LinkedIn.

Further Reading: A gentle Introduction to Deep Learning · Transcript (Summer 2020) · Free LME Deep Learning Resources