卖便当的少年吧


Floor 1 · Guangdong · 2016-02-12 16:19
      Data compression using reduced rank approximations
      - the rank of a matrix specifies the number of linearly independent columns (or rows)
      - rank is a measure of redundancy: a matrix of low rank has a large amount of redundancy
      * for a matrix B of rank one, the columns are all multiples of one another
      - take u as a basis vector in R^m; then B = [v_1 u, ..., v_n u] = uv^T
      - storing u and v takes m+n numbers vs. m×n for B
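A minimal sketch of the saving above, assuming NumPy (the sizes and vectors are made-up examples): a rank-one matrix B = uv^T is recovered from m+n stored numbers instead of m×n.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 4
u = rng.standard_normal(m)   # basis vector in R^m
v = rng.standard_normal(n)   # coefficients: column j of B is v[j] * u

B = np.outer(u, v)           # B = u v^T, an m x n matrix

# Every column of B is a multiple of u, so the rank is one.
print(np.linalg.matrix_rank(B))   # 1
# m + n = 9 stored numbers recover all m * n = 20 entries.
print(u.size + v.size, B.size)    # 9 20
```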


      Floor 3 · Guangdong · 2016-02-12 16:31
        - Matrix B is the best rank one approximation to A if the error matrix B - A has the minimal Frobenius norm.
        - Frobenius norm of matrix X: \|X\|_F = \sqrt{\sum_{ij} x_{ij}^2} (note the square root)
        - this norm is just the Euclidean norm \|x\|_2 of the matrix considered as a vector in R^{mn}
        ~~~~~~~~~ inner product related ~~~~~~~~~~~~~~~~
        * Inner product of two matrices: X \cdot Y = \sum_{ij} x_{ij} y_{ij} (pointwise multiplication, then sum)
        - |X|^2 = X\cdot X (squared Frobenius norm = self inner product)
        - inner product of matrices can be viewed in three ways
        =1= sum of the inner product of corresponding rows
        =2= sum of the inner product of corresponding columns
        =3= sum of corresponding entries
        * For rank one matrices xy^T, uv^T:
        xy^T \cdot uv^T
        = [y_1 x, ..., y_n x] \cdot [v_1 u, ..., v_n u] ~~~~~ write both in column form
        = \sum_i (y_i x) \cdot (v_i u) ~~~~~~ inner product of corresponding columns
        = \sum_i y_i v_i (x \cdot u) ~~~~~~ move the scalars out
        = (x \cdot u)(y \cdot v) ~~~~~~ write as vector inner products
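The identity derived above is easy to check numerically; a small sketch assuming NumPy, with made-up random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
x, u = rng.standard_normal(m), rng.standard_normal(m)
y, v = rng.standard_normal(n), rng.standard_normal(n)

# Matrix inner product: pointwise multiply, then sum all entries.
lhs = np.sum(np.outer(x, y) * np.outer(u, v))
# Product of the two vector inner products: (x . u)(y . v).
rhs = np.dot(x, u) * np.dot(y, v)
print(np.isclose(lhs, rhs))   # True
```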


        Floor 4 · Guangdong · 2016-02-12 17:00
          - Matrix A in SVD form: A = \sum_i \sigma_i u_i v_i^T

          - Frobenius norm of matrix A (the 4th equality uses the rank-one result from 4L)
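The derivation itself is not preserved in the post, but a standard reconstruction, in which the fourth equality is exactly the rank-one inner-product rule from 4L and the last step uses orthonormality of the singular vectors, would be:

```latex
\|A\|_F^2
= A \cdot A
= \Big(\sum_i \sigma_i u_i v_i^T\Big) \cdot \Big(\sum_j \sigma_j u_j v_j^T\Big)
= \sum_{i,j} \sigma_i \sigma_j \,(u_i v_i^T) \cdot (u_j v_j^T)
= \sum_{i,j} \sigma_i \sigma_j (u_i \cdot u_j)(v_i \cdot v_j)
= \sum_i \sigma_i^2
```

so the Frobenius norm of A is the square root of the sum of its squared singular values.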


          Floor 5 · Guangdong · 2016-02-13 11:15
            matlab mathworks moler eigs.pdf
            ===================================
            Definition: eigenvalue \lambda, eigenvector x:
            Ax = \lambda x
            Definition: singular value \sigma, singular vectors u, v:
            Av = \sigma u      (1)
            A^H u = \sigma v   (2)
            ===================================
            H: Hermitian (conjugate) transpose
            From (2), A A^H u = \sigma A v = \sigma^2 u, that is, A A^H u = \sigma^2 u, so
            \sigma^2 is an eigenvalue of A A^H and u is the corresponding eigenvector.
            Similarly,
            from (1), A^H A v = \sigma A^H u = \sigma^2 v, that is, A^H A v = \sigma^2 v, so
            \sigma^2 is an eigenvalue of A^H A and v is the corresponding eigenvector.
            ===================================
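The derivation above can be checked numerically; a small sketch assuming NumPy, with a made-up complex matrix A:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# Singular values of A (returned in descending order).
s = np.linalg.svd(A, compute_uv=False)
# Eigenvalues of the Hermitian matrix A^H A (returned in ascending order).
w = np.linalg.eigvalsh(A.conj().T @ A)

# sigma^2 are exactly the eigenvalues of A^H A.
print(np.allclose(np.sort(s**2), np.sort(w)))   # True
```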


            Floor 6 · Guangdong · 2016-09-26 22:47