Thursday, March 6, 2008

March 7, 2008

MAIN POINTS
We discussed in class how Taylor polynomials aren't well suited to interpolation, since they only use information about the function at a single point; instead, we use polynomials built to agree with the data at several points. Finding a polynomial passing through (x0,y0) and (x1,y1) is the same as solving for a function with f(x0) = y0 and f(x1) = y1. The text defines L0 and L1 and then builds P(x) out of these two functions; they are set up precisely so that P(x0) = y0 and P(x1) = y1. This is generalized to any number of data points on p.105. Thm 3.2 says that this interpolating polynomial exists with degree at most n, and defines it using the form given on p.105. Example 1 shows this method making much better predictions than Taylor polynomials did, and then the section shows the MAPLE interp command. A quick sketch of how the pieces fit together is below.
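
To make sure I follow how the L functions combine, here is a quick sketch I wrote myself (in Python rather than the MAPLE from the text; the function name and the data points are just made up for illustration):

def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through the points (xs[k], ys[k]) at x."""
    total = 0.0
    for k in range(len(xs)):
        # Build L_{n,k}(x): it is zero at every node except xs[k], where it equals 1.
        basis = 1.0
        for i in range(len(xs)):
            if i != k:
                basis *= (x - xs[i]) / (xs[k] - xs[i])
        total += ys[k] * basis
    return total

# Two-point case: the line through (x0, y0) = (2, 4) and (x1, y1) = (3, 9).
print(lagrange_eval([2, 3], [4, 9], 2.5))  # prints 6.5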

CHALLENGES
I understand why the numerator in the L{n,k} function is the way it is (it's what puts the zeroes at all the other data points), but why is there also this very long denominator?
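
For reference, the function I mean (writing out the general form from p.105 as I understand it) is

L_{n,k}(x) = \prod_{i=0,\ i \neq k}^{n} \frac{x - x_i}{x_k - x_i}
           = \frac{(x - x_0)\cdots(x - x_{k-1})(x - x_{k+1})\cdots(x - x_n)}{(x_k - x_0)\cdots(x_k - x_{k-1})(x_k - x_{k+1})\cdots(x_k - x_n)}.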

Also, I was expecting to see gigantic matrices like we saw in class... how is what they're doing here different?

REFLECTIONS
Lagrange sure has a lot of things named after him. This method seems more convenient than Taylor polynomials in a lot of ways, but I'm curious about situations in which this might be suboptimal.
