Deep Learning - Common Practices Part 3

Welcome back to deep learning. Today we want to continue talking about our common practices.

The methods we are interested in today deal with class imbalance. A very typical problem is that one class, in particular the very interesting one, is not very frequent. This is a challenge for all machine learning algorithms. Let's take the example of fraud detection: out of 10,000 transactions, 9,999 are genuine and only one is fraudulent. So if you classify everything as genuine, you get 99.99% accuracy. Obviously, we also run into trouble in less severe situations. Let's say you only have one fraudulent transaction out of a hundred; then you would still very easily construct a model with 99% accuracy if you classify everything as non-fraudulent. This is of course a very hard problem. In particular, in screening applications you have to be very careful, because just assigning everything to the most common class would still get you very, very good accuracy.
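
As a quick numerical illustration, here is a minimal sketch in Python, with the made-up numbers from the example above, of how a trivial classifier that labels every transaction as genuine already reaches 99.99% accuracy while never finding the single fraudulent case:

    import numpy as np

    # Hypothetical labels: 9,999 genuine (0) transactions and 1 fraudulent (1) transaction.
    y_true = np.array([0] * 9999 + [1])

    # A trivial "classifier" that always predicts the majority class.
    y_pred = np.zeros_like(y_true)

    accuracy = (y_pred == y_true).mean()
    print(f"Accuracy: {accuracy:.2%}")  # 99.99%, yet the fraud case is never detected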

It doesn't have to be credit cards. For example, detecting mitotic cells is a very similar problem. A mitotic cell is a cell undergoing cell division. These cells are very important, as we already heard in the introduction: if you count the cells under mitosis, you know how aggressive the associated cancer is. So this is a very important feature, but you have to detect the mitotic cells correctly, and they make up only a very small portion of the cells in the tissue. So the data of this class is seen much less during training, and measures like accuracy, the L2 norm, and the cross-entropy do not reflect this imbalance. So they are not very responsive to it.

One thing that you can do, for example, is resampling. The idea is that you balance the class frequencies by sampling the classes differently. As you can imagine, this means that you have to throw away a lot of the training data of the most frequent class. This way you train a classifier that is balanced between the classes, because the frequent class is now seen approximately as often as the other class. The disadvantage of this approach is that you are not using all the data that has been recorded, and of course you don't want to throw away data.
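
As a rough sketch of such undersampling, assuming a simple binary NumPy label array with the minority class coded as 1 (the helper name is made up for illustration):

    import numpy as np

    def undersample(X, y, seed=0):
        # Randomly discard majority-class samples until both classes are equally frequent.
        rng = np.random.default_rng(seed)
        idx_minority = np.flatnonzero(y == 1)
        idx_majority = np.flatnonzero(y == 0)
        keep = rng.choice(idx_majority, size=len(idx_minority), replace=False)
        idx = np.concatenate([idx_minority, keep])
        rng.shuffle(idx)
        return X[idx], y[idx]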

So another technique is oversampling: you simply sample more often from the underrepresented classes. In this case, you can use all of the data. The disadvantage is, of course, that it can lead to heavy overfitting towards the less frequently seen examples.
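
A corresponding oversampling sketch under the same assumptions (again only a toy illustration, not a reference implementation): minority samples are drawn with replacement until they match the majority-class count, so every recorded sample is used, but the rare examples are repeated many times.

    import numpy as np

    def oversample(X, y, seed=0):
        # Repeat minority-class samples (with replacement) until the classes are balanced.
        rng = np.random.default_rng(seed)
        idx_minority = np.flatnonzero(y == 1)
        idx_majority = np.flatnonzero(y == 0)
        drawn = rng.choice(idx_minority, size=len(idx_majority), replace=True)
        idx = np.concatenate([idx_majority, drawn])
        rng.shuffle(idx)
        return X[idx], y[idx]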

Combinations of over- and undersampling are also possible. This then leads to advanced resampling strategies that try to avoid the shortcomings of undersampling, such as synthetic minority oversampling (SMOTE), which is rather uncommon in deep learning. Underfitting caused by undersampling can be reduced by taking a different subset after each epoch; this is quite common, and you can also use data augmentation to help reduce overfitting for underrepresented classes. So you essentially augment the samples that you have seen less frequently more heavily.
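
One way to sketch the "different subset after each epoch" idea is a training loop that redraws the majority-class subset every epoch (the model interface below is a hypothetical placeholder, not the lecture's code):

    import numpy as np

    def train_with_epochwise_undersampling(model, X, y, n_epochs, seed=0):
        rng = np.random.default_rng(seed)
        idx_minority = np.flatnonzero(y == 1)
        idx_majority = np.flatnonzero(y == 0)
        for epoch in range(n_epochs):
            # Draw a fresh majority subset every epoch, so all recorded data is used over time.
            subset = rng.choice(idx_majority, size=len(idx_minority), replace=False)
            idx = np.concatenate([idx_minority, subset])
            rng.shuffle(idx)
            model.fit_one_epoch(X[idx], y[idx])  # hypothetical "one epoch" training call
        return model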

Instead of fixing the data, you can of course also try to adapt the loss function to be stable with respect to class imbalance. Here you then weight the loss with the inverse class frequency. You can, for example, create a weighted cross-entropy where you introduce an additional weight w that is simply determined as the inverse class frequency.
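
As a small sketch of this idea for the binary case (an illustrative NumPy implementation, not the exact formulation from the slides), the weight of each sample is set to the inverse frequency of its class:

    import numpy as np

    def weighted_cross_entropy(y_true, p_pred, eps=1e-7):
        # Per-class weights proportional to the inverse class frequency.
        classes, counts = np.unique(y_true, return_counts=True)
        class_weight = dict(zip(classes, len(y_true) / counts))
        w = np.array([class_weight[c] for c in y_true])
        # Probability assigned to the true class, clipped for numerical stability.
        p = np.clip(np.where(y_true == 1, p_pred, 1.0 - p_pred), eps, 1.0 - eps)
        return float(np.mean(-w * np.log(p)))

In PyTorch, for example, the weight argument of torch.nn.CrossEntropyLoss serves the same purpose.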

More common in segmentation problems are losses like the Dice loss, which is based on the Dice coefficient. Here you adjust the loss according to the area overlap rather than the class frequency; the Dice coefficient is a very typical measure for evaluating segmentations. Weights can also be adapted with regard to other considerations, but we are not discussing them here in this lecture.
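
A minimal soft Dice loss sketch for a binary segmentation mask (again only an illustration, assuming per-pixel probabilities as the prediction):

    import numpy as np

    def dice_loss(y_true, p_pred, eps=1e-7):
        # 1 - Dice coefficient: penalizes low area overlap instead of low per-pixel accuracy.
        intersection = np.sum(y_true * p_pred)
        dice = (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(p_pred) + eps)
        return 1.0 - dice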

This already brings us to the end of this part. In the final section on common practices, we will discuss measures of evaluation and how to evaluate our models appropriately. So thank you very much for watching this short video, and I hope you enjoyed it. I am looking forward to seeing you in the next one. Bye bye.


