- Matrix B is the best rank one approximation to A if the error matrix B - A has the minimal Frobenius norm.
- Frobenius norm of matrix X: \|X\|_F = \sqrt{\sum_{i,j} x_{ij}^2} (note: square root)
- This norm is just the Euclidean norm \|x\|_2 of the matrix considered as a vector x \in R^{mn} (a quick numerical check follows below)
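A minimal NumPy sketch of the two points above; the matrix A and the rank-one candidate B = x y^T are made-up examples, and the closing comment about the SVD is a standard fact (Eckart-Young) not derived in these notes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # arbitrary example matrix
x = rng.standard_normal(4)
y = rng.standard_normal(3)
B = np.outer(x, y)                # a candidate rank-one matrix B = x y^T

# Frobenius norm of A = Euclidean norm of A flattened into a vector in R^{mn}
assert np.isclose(np.linalg.norm(A, "fro"), np.linalg.norm(A.ravel(), 2))

# The quantity to minimize over rank-one B: the Frobenius norm of the error B - A
print(np.linalg.norm(B - A, "fro"))
# (It is a standard fact, not derived in these notes, that the minimizer is the
#  rank-one truncation of the SVD of A.)
```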
~~~~~~~~~ inner product related ~~~~~~~~~~~~~~~~
* Inner product of two matrices: X \cdot Y = \sum_{i,j} x_{ij} y_{ij} (pointwise multiplication, then sum)
- \|X\|_F^2 = X \cdot X (squared Frobenius norm = self inner product)
- inner product of matrices can be viewed in three ways (checked numerically below):
  (1) sum of the inner products of corresponding rows
  (2) sum of the inner products of corresponding columns
  (3) sum of the products of corresponding entries
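A short NumPy check of the definition, the self-inner-product identity, and the three views; X and Y are arbitrary example matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))   # arbitrary example matrices
Y = rng.standard_normal((4, 3))

ip = np.sum(X * Y)                # (3) sum of products of corresponding entries

# (1) sum of inner products of corresponding rows
assert np.isclose(ip, sum(X[i] @ Y[i] for i in range(X.shape[0])))
# (2) sum of inner products of corresponding columns
assert np.isclose(ip, sum(X[:, j] @ Y[:, j] for j in range(X.shape[1])))
# squared Frobenius norm = self inner product
assert np.isclose(np.linalg.norm(X, "fro") ** 2, np.sum(X * X))
```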
* For rank one matrices xy^T, uv^T:
xy^T \cdot uv^T
= [x y_1, ..., x y_n] \cdot [u v_1, ..., u v_n] ~~~~~ write both matrices in columns form
= \sum_i (x y_i) \cdot (u v_i) ~~~~~~ sum of inner products of corresponding cols
= \sum_i (x \cdot u)(y_i v_i) ~~~~~~ move the scalars out
= (x \cdot u)(y \cdot v) ~~~~~~ write in vector inner product form
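A numerical spot-check of this identity; x, y, u, v are arbitrary example vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
x, u = rng.standard_normal(5), rng.standard_normal(5)
y, v = rng.standard_normal(3), rng.standard_normal(3)

lhs = np.sum(np.outer(x, y) * np.outer(u, v))   # matrix inner product (x y^T) . (u v^T)
rhs = (x @ u) * (y @ v)                         # (x . u)(y . v)
assert np.isclose(lhs, rhs)
```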
