Hi, it's time for an explicit algorithm for computing the SVD.
We know that the SVD has the form U times Sigma times V transpose, where U is m times m, Sigma is m times n and V transpose is n times n, where A has dimensions m times n.
So how do we get u, how do we get v and how do we get sigma?
Sigma has the entries sigma 1 through sigma m on its diagonal and then zero columns as padding, U has the columns u1 through um, and V transpose carries v1 through vn as row vectors.
So that's what we want to find out.
We want to find out the singular values, which are the diagonal entries of sigma and we want
to find out the left singular vectors and the right singular vectors.
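To make those shapes concrete, here is a minimal NumPy sketch (NumPy and the 3 by 5 example matrix are my choices, not from the lecture) that checks the dimensions of the full SVD and rebuilds A from it:

```python
import numpy as np

# Hypothetical example matrix with m = 3 rows, n = 5 columns (m <= n as in the lecture).
m, n = 3, 5
A = np.arange(m * n, dtype=float).reshape(m, n)

# full_matrices=True gives the "full" SVD: U is m x m, V^T is n x n.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(U.shape, s.shape, Vt.shape)   # (3, 3) (3,) (5, 5)

# Rebuild the m x n Sigma: sigma_1..sigma_m on the diagonal, zero columns as padding.
Sigma = np.zeros((m, n))
Sigma[:m, :m] = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))   # True: A = U Sigma V^T
```

Note that `np.linalg.svd` returns only the vector of singular values, so the m by n Sigma with its zero padding has to be assembled by hand.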
We will just say that m is less than or equal to n, for concreteness.
You can do the same things for matrices which have more rows than columns: the easiest way is to take the transpose of the matrix, and the transposed matrix then has the required dimensions.
How do we proceed?
What is A transpose A?
Well, if we had the SVD, then this would be V times Sigma transpose times U transpose
times U times Sigma times V transpose.
Since U transpose U is the identity, this is V times Sigma transpose Sigma times V transpose.
And by orthogonality of V this is the same thing as V times Sigma transpose Sigma times
V inverse.
And this is the diagonalisation of A transpose A, which exists because A transpose A is
symmetric and positive semi-definite.
So you can see that Sigma transpose Sigma is the diagonal matrix in the diagonalisation
of A transpose A. And similarly we can write A times A transpose,
which is U times Sigma times V transpose times V times Sigma transpose times U transpose,
and V transpose V is again the identity matrix.
So this is U times Sigma Sigma transpose times U inverse.
This means that Sigma Sigma transpose is the diagonal matrix in the diagonalisation of A A transpose, which is also symmetric
and positive semi-definite.
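We can check this numerically: the eigenvalues of A transpose A and of A A transpose are the squared singular values. A small sketch, assuming NumPy and a random example matrix of my choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # m = 3, n = 5

s = np.linalg.svd(A, compute_uv=False)   # singular values, in descending order

# Eigenvalues of the symmetric positive semi-definite matrices A^T A and A A^T,
# sorted into descending order to match the singular values.
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # n = 5 eigenvalues
eig_AAt = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]  # m = 3 eigenvalues

print(np.allclose(eig_AAt, s**2))        # True: all m eigenvalues are sigma_i^2
print(np.allclose(eig_AtA[:3], s**2))    # True: the largest m of the n eigenvalues agree
print(np.allclose(eig_AtA[3:], 0))       # True: the remaining n - m are (numerically) zero
```

The n minus m extra eigenvalues of A transpose A are zero, which matches the zero padding we are about to see in Sigma transpose Sigma.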
So now it depends on the specific case we're in: m smaller than or equal to n means that one
of these two matrices, namely A A transpose, is the smaller one and easier to work with.
From these diagonalisations we can read off the eigenvectors
and eigenvalues of both matrices.
Let's look at that more closely.
Let's call this matrix D, D for diagonal matrix.
Why is it a diagonal matrix? Well, what does it look like?
Sigma is sigma 1 through sigma m on the diagonal with zero columns next to them, and Sigma transpose is the same block with the zero padding down at the bottom.
So Sigma times Sigma transpose is the m times m diagonal matrix with entries sigma 1 squared through sigma m squared.
So this is in R m times m. And we can also look at Sigma transpose Sigma: now it's the same two matrices
multiplied the other way around.
And this is again sigma 1 squared through sigma m squared, but now there are more entries on the diagonal,
and those extra entries are of course zeros.
It is the same matrix but padded to n times n dimensions.
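A tiny sketch of that padding behaviour, with made-up singular values of my choosing:

```python
import numpy as np

m, n = 3, 5
s = np.array([3.0, 2.0, 1.0])    # hypothetical sigma_1..sigma_m

Sigma = np.zeros((m, n))
Sigma[:, :m] = np.diag(s)        # [diag(s) | 0]: zero columns pad to the right

print(Sigma @ Sigma.T)           # m x m: diagonal entries 9, 4, 1
print(Sigma.T @ Sigma)           # n x n: diagonal entries 9, 4, 1, 0, 0
```

Both products are diagonal; the second is the same matrix padded with zeros to n times n.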
So if that is the diagonalisation of A transpose A or of A A transpose, then this means that
the vectors in V and in U respectively are the eigenvectors of A transpose A or of A
A transpose, and the diagonal entries of Sigma transpose Sigma or Sigma Sigma transpose, that is, the squared singular values, are the corresponding eigenvalues.
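Putting this together gives an explicit algorithm. A sketch in NumPy (function name and the full-row-rank assumption are mine): diagonalise the smaller m times m matrix A A transpose to get U and the sigma i, then recover the right singular vectors via v i = A transpose u i divided by sigma i.

```python
import numpy as np

def svd_via_eigendecomposition(A):
    """Thin SVD of A (m x n, m <= n, assumed to have full row rank),
    computed by diagonalising the smaller symmetric PSD matrix A A^T."""
    m, n = A.shape
    assert m <= n

    # 1. Diagonalise A A^T: eigenvalues are sigma_i^2, eigenvectors are the u_i.
    eigvals, U = np.linalg.eigh(A @ A.T)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]      # re-sort into descending order
    eigvals, U = eigvals[order], U[:, order]

    # 2. Singular values (clip guards against tiny negative round-off).
    s = np.sqrt(np.clip(eigvals, 0.0, None))

    # 3. Right singular vectors: v_i = A^T u_i / sigma_i (needs sigma_i > 0).
    V = A.T @ U / s                        # n x m, columns v_1..v_m
    return U, s, V

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
U, s, V = svd_via_eigendecomposition(A)

print(np.allclose(A, U @ np.diag(s) @ V.T))   # True: A is reconstructed
print(np.allclose(U.T @ U, np.eye(3)))        # True: the u_i are orthonormal
print(np.allclose(V.T @ V, np.eye(3)))        # True: the v_i are orthonormal
```

This only produces the first m columns of V; the remaining n minus m columns of a full V would be an orthonormal basis of the null space of A, which pairs with the zero columns of Sigma and is not needed to reconstruct A.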
Accessible via: open access
Duration: 00:41:15 min
Recording date: 2021-11-15
Uploaded on: 2021-11-15 12:06:03
Language: en-US