                                           Math 416 - Abstract Linear Algebra
                                                     Fall 2011, section E1
                                                    Least squares solution
             1. Curve fitting
             The least squares solution can be used to fit certain functions through data points.
             Example: Find the best fit line through the points (1,0),(2,1),(3,1).
Solution: We are looking for a line with equation $y = a + bx$ that would ideally go through all the data points, i.e. satisfy all the equations
\[
\begin{cases}
a + b(1) = 0 \\
a + b(2) = 1 \\
a + b(3) = 1.
\end{cases}
\]
In matrix form, we want the unknown coefficients $\begin{pmatrix} a \\ b \end{pmatrix}$ to satisfy the system
\[
\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
=
\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}
\]
             but the system has no solution. Instead, we find the least squares fit, i.e. minimize the sum of
             the squares of the errors
\[
\sum_{i=1}^{3} \left| (a + b x_i) - y_i \right|^2
\]
which is precisely finding the least squares solution of the system above. Writing the system as $A\vec{c} = \vec{y}$, the normal equation is
\[
A^T A \, \vec{c} = A^T \vec{y}
\]
and we compute
\[
A^T A =
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \end{pmatrix}
\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}
=
\begin{pmatrix} 3 & 6 \\ 6 & 14 \end{pmatrix}
\]
\[
A^T \vec{y} =
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \end{pmatrix}
\begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}
=
\begin{pmatrix} 2 \\ 5 \end{pmatrix}.
\]
The normal equation has the unique solution
\[
\vec{c} =
\begin{pmatrix} 3 & 6 \\ 6 & 14 \end{pmatrix}^{-1}
\begin{pmatrix} 2 \\ 5 \end{pmatrix}
= \frac{1}{6}
\begin{pmatrix} 14 & -6 \\ -6 & 3 \end{pmatrix}
\begin{pmatrix} 2 \\ 5 \end{pmatrix}
= \frac{1}{6}
\begin{pmatrix} -2 \\ 3 \end{pmatrix}
=
\begin{pmatrix} -\frac{1}{3} \\[2pt] \frac{1}{2} \end{pmatrix}
\]
so that the best fit line through the data points is $y = -\frac{1}{3} + \frac{1}{2}x$.
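As a sanity check on this result, the same minimization can be handed directly to `numpy.linalg.lstsq`, which solves the least squares problem without forming the normal equation explicitly (a minimal sketch, not part of the original notes; the array names are mine):

```python
import numpy as np

# Design matrix and data from the example: points (1,0), (2,1), (3,1).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([0.0, 1.0, 1.0])

# lstsq minimizes ||A c - y||^2, i.e. the same sum of squared errors.
c, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(c)  # approximately [-0.3333, 0.5], i.e. a = -1/3, b = 1/2
```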
           Remark: If we hate the formula for the inverse of a 2 × 2 matrix, or if we need to solve a
           bigger system, we can always use Gauss-Jordan:
\[
\left(\begin{array}{cc|c} 3 & 6 & 2 \\ 6 & 14 & 5 \end{array}\right)
\sim
\left(\begin{array}{cc|c} 3 & 6 & 2 \\ 0 & 2 & 1 \end{array}\right)
\sim
\left(\begin{array}{cc|c} 3 & 0 & -1 \\ 0 & 2 & 1 \end{array}\right)
\sim
\left(\begin{array}{cc|c} 1 & 0 & -\frac{1}{3} \\[2pt] 0 & 1 & \frac{1}{2} \end{array}\right).
\]
The unique solution to the system is indeed $\begin{pmatrix} -\frac{1}{3} \\[2pt] \frac{1}{2} \end{pmatrix}$.
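For bigger systems, the elimination above is exactly what a numerical linear solver does for us; a minimal sketch on the normal equation from this example (variable names are mine):

```python
import numpy as np

# Normal equation A^T A c = A^T y computed in the example above.
AtA = np.array([[3.0, 6.0],
                [6.0, 14.0]])
Aty = np.array([2.0, 5.0])

# solve performs Gaussian elimination (LU factorization) internally.
c = np.linalg.solve(AtA, Aty)
print(c)  # approximately [-0.3333, 0.5]
```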
           2. Arbitrary inner product spaces
Just like Gram-Schmidt, the least squares method works in any inner product space $V$, not just $\mathbb{R}^n$ (or $\mathbb{C}^n$). Assume that the subspace $E \subseteq V$ onto which we are projecting is finite-dimensional.
Example: Consider the real inner product space $C[0,1] := \{ f \colon [0,1] \to \mathbb{R} \mid f \text{ is continuous} \}$ with its usual inner product
\[
(f,g) = \int_0^1 f(t)\,g(t)\,dt.
\]
Find the best approximation of the function $t^2$ by a polynomial of degree at most one.
Solution using least squares: We are looking for a polynomial of degree at most one, $a + bt$, that would ideally satisfy
\[
a + bt = t^2
\]
which is clearly impossible, i.e. $t^2 \notin \operatorname{Span}\{1,t\}$. The best approximation is the vector in $\operatorname{Span}\{1,t\}$ minimizing the error vector
\[
a + bt - t^2
\]
which is achieved exactly when the error vector is orthogonal to $\operatorname{Span}\{1,t\}$. This imposes two conditions:
\[
\begin{cases}
(a + bt - t^2,\, 1) = 0 \\
(a + bt - t^2,\, t) = 0
\end{cases}
\]
which we can rewrite as
\[
\begin{cases}
a(1,1) + b(t,1) = (t^2, 1) \\
a(1,t) + b(t,t) = (t^2, t)
\end{cases}
\]
or in matrix form:
\[
\begin{pmatrix} (1,1) & (t,1) \\ (t,1) & (t,t) \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix}
=
\begin{pmatrix} (t^2,1) \\ (t^2,t) \end{pmatrix}.
\]
(This is the normal equation. The coefficient matrix here plays the role of $A^TA$ in the previous example, i.e. the square matrix of all possible inner products between vectors in the basis of $E$, in this case $\{1,t\}$. Likewise, the right-hand side plays the role of $A^T\vec{y}$ in the previous example, i.e. the list of all possible inner products between the basis vectors $\{1,t\}$ of $E$ and the vector $t^2$ not in $E$ which we want to project down to $E$.)
Computing the inner products involved, the system can be written as the augmented matrix
\[
\left(\begin{array}{cc|c} 1 & \frac{1}{2} & \frac{1}{3} \\[2pt] \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \end{array}\right)
\]
             which we now solve:
\[
\left(\begin{array}{cc|c} 1 & \frac{1}{2} & \frac{1}{3} \\[2pt] \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \end{array}\right)
\sim
\left(\begin{array}{cc|c} 1 & \frac{1}{2} & \frac{1}{3} \\[2pt] 0 & \frac{1}{12} & \frac{1}{12} \end{array}\right)
\sim
\left(\begin{array}{cc|c} 1 & \frac{1}{2} & \frac{1}{3} \\[2pt] 0 & 1 & 1 \end{array}\right)
\sim
\left(\begin{array}{cc|c} 1 & 0 & -\frac{1}{6} \\[2pt] 0 & 1 & 1 \end{array}\right).
\]
The least squares solution is $\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -\frac{1}{6} \\[2pt] 1 \end{pmatrix}$ so that the best approximation of $t^2$ by a polynomial of degree at most one is $-\frac{1}{6} + t$.
             Remark: If we hate Gauss-Jordan, we can always use the formula for the inverse of a 2 × 2
             matrix, so that the unique solution to the system is
\[
\begin{pmatrix} 1 & \frac{1}{2} \\[2pt] \frac{1}{2} & \frac{1}{3} \end{pmatrix}^{-1}
\begin{pmatrix} \frac{1}{3} \\[2pt] \frac{1}{4} \end{pmatrix}
= \frac{1}{1/12}
\begin{pmatrix} \frac{1}{3} & -\frac{1}{2} \\[2pt] -\frac{1}{2} & 1 \end{pmatrix}
\begin{pmatrix} \frac{1}{3} \\[2pt] \frac{1}{4} \end{pmatrix}
= 12
\begin{pmatrix} \frac{1}{9} - \frac{1}{8} \\[2pt] -\frac{1}{6} + \frac{1}{4} \end{pmatrix}
= 12
\begin{pmatrix} -\frac{1}{72} \\[2pt] \frac{1}{12} \end{pmatrix}
=
\begin{pmatrix} -\frac{1}{6} \\[2pt] 1 \end{pmatrix}.
\]
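The inverse-formula computation is easy to verify with exact rational arithmetic; the following sketch applies Cramer's rule (equivalent to the 2 × 2 inverse formula) to the same system, with variable names of my own choosing:

```python
from fractions import Fraction as F

# The 2x2 system from above: [[1, 1/2], [1/2, 1/3]] (a, b)^T = (1/3, 1/4)^T.
g11, g12, g22 = F(1), F(1, 2), F(1, 3)
r1, r2 = F(1, 3), F(1, 4)

det = g11 * g22 - g12 * g12       # 1/3 - 1/4 = 1/12
a = (r1 * g22 - g12 * r2) / det   # (1/9 - 1/8) * 12 = -1/6
b = (g11 * r2 - r1 * g12) / det   # (1/4 - 1/6) * 12 = 1
print(a, b)  # -1/6 1
```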
             http://www.youtube.com/watch?v=lBdASZNPIv8
             Solution using Gram-Schmidt: In a previous exercise, we obtained the orthonormal basis
$\{u_1 = 1,\; u_2 = \sqrt{3}(2t-1)\}$ of $\operatorname{Span}\{1,t\}$. Using this, we compute the projection
\[
\begin{aligned}
\operatorname{Proj}_{\{1,t\}}(t^2) &= \operatorname{Proj}_{\{u_1,u_2\}}(t^2) \\
&= (t^2, u_1)\,u_1 + (t^2, u_2)\,u_2 \\
&= (t^2, 1)\,1 + \bigl(t^2, \sqrt{3}(2t-1)\bigr)\,\sqrt{3}(2t-1) \\
&= \tfrac{1}{3} + 3\bigl(2(t^2,t) - (t^2,1)\bigr)(2t-1) \\
&= \tfrac{1}{3} + 3\bigl(2 \cdot \tfrac{1}{4} - \tfrac{1}{3}\bigr)(2t-1) \\
&= \tfrac{1}{3} + \tfrac{1}{2}(2t-1) \\
&= -\tfrac{1}{6} + t.
\end{aligned}
\]
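Both solutions give the same answer, and the defining property is easy to verify exactly: the error $t^2 - (t - \frac{1}{6})$ should be orthogonal to both $1$ and $t$. A sketch of that check with rational arithmetic (the helper function is mine):

```python
from fractions import Fraction as F

def inner(p, q):
    """(p, q) = ∫_0^1 p(t) q(t) dt for polynomials given by ascending coefficients."""
    return sum(pi * qj * F(1, i + j + 1)
               for i, pi in enumerate(p)
               for j, qj in enumerate(q))

# Error polynomial t^2 - (t - 1/6) = 1/6 - t + t^2, ascending coefficients.
err = [F(1, 6), F(-1), F(1)]

print(inner(err, [F(1)]))        # (err, 1) = 0
print(inner(err, [F(0), F(1)]))  # (err, t) = 0
```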