Sunday, March 30, 2008

March 31 2008

MAIN POINTS
The text describes how we can estimate the definite integral of a function through "numerical quadrature," which is basically performed by selecting distinct points within the desired interval, evaluating the function at them, finding the Lagrange interpolating polynomial through those points, and then integrating the resulting polynomial. The error term for this integral looks nearly impossible to estimate, because it sits inside both a product notation and an integral.
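To pin this down for my own notes, here is the formula as I read it, in LaTeX-ish notation (my transcription, not copied from the text): the integral is approximated by a weighted sum of function values, where each weight comes from integrating a Lagrange basis polynomial,

    \int_a^b f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i), \qquad a_i = \int_a^b L_i(x)\,dx

and the error term that looks so hard to estimate is

    E(f) = \frac{1}{(n+1)!} \int_a^b \prod_{i=0}^{n} (x - x_i)\, f^{(n+1)}(\xi(x))\,dx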
The text describes the trapezoidal rule next, which is taught in high school, and just builds neighboring trapezoids out of function values. Simpson's rule instead integrates the second-degree Lagrange polynomial through three function values (the two endpoints and the midpoint). Its error term includes a fourth derivative, so knowing the fourth-derivative behavior of the function can give an estimate of how exact the answer will be.
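A minimal sketch of both rules on a single interval, in Python (my own toy code, assuming f is an ordinary Python function; this isn't from the text):

    import math

    def trapezoid(f, a, b):
        # Trapezoidal rule: integrate the degree-1 interpolant through (a, f(a)) and (b, f(b)).
        return (b - a) * (f(a) + f(b)) / 2.0

    def simpson(f, a, b):
        # Simpson's rule: integrate the degree-2 interpolant through a, the midpoint, and b.
        m = (a + b) / 2.0
        return (b - a) * (f(a) + 4.0 * f(m) + f(b)) / 6.0

    print(trapezoid(math.exp, 0.0, 1.0))  # ~1.8591
    print(simpson(math.exp, 0.0, 1.0))    # ~1.7189; exact is e - 1 ~ 1.7183

On e^x over [0,1], Simpson's rule already lands much closer than the trapezoid, which matches the error terms (fourth derivative and h^5 versus second derivative and h^3).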
Finally, the text generalizes this into the class of Newton-Cotes methods. I don't really understand how this generalization works, or how the n=... values of these methods are built up.

CHALLENGES
As written above, how are the Newton-Cotes formulas built up? What's the difference between open and closed? Is it basically the Lagrange interpolating polynomial of n points being integrated? Is the basic integration model with rectangles under the curve the model for n=0?
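My current best guess at the open/closed distinction (standard conventions as I understand them, not confirmed against the text): closed formulas use the interval endpoints as nodes, open formulas keep every node strictly inside the interval, and the midpoint ("rectangle") rule is the n = 0 open case:

    \int_{x_0}^{x_1} f(x)\,dx \approx \frac{h}{2}\,[f(x_0) + f(x_1)]            (closed, n = 1: trapezoid)
    \int_{x_0}^{x_2} f(x)\,dx \approx \frac{h}{3}\,[f(x_0) + 4 f(x_1) + f(x_2)]  (closed, n = 2: Simpson)
    \int_{x_{-1}}^{x_1} f(x)\,dx \approx 2 h\, f(x_0)                            (open, n = 0: midpoint)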

REFLECTION
This is more interesting than the normal approximation we learned in HS calc. As n increases, the amount of calculation seems to increase. The error term may or may not drop; it seems to depend on h and on the higher-derivative behavior of f(x).

Sunday, March 9, 2008

March 10 2008

MAIN POINTS
The reading begins with Thm 3.3, which defines the error term in much the same way that it was defined for Taylor polynomials; xi(x) lies within the bounds (a,b). The proof for this takes up a page and uses Rolle's Theorem. Example 2 I don't quite understand. Example 3 I understand better: they're comparing the error when using higher and higher degree polynomials, using the steps they've explained already. The text also explains that one often has to calculate successive-degree polynomials to find the one that reaches the correct accuracy.
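For my own notes, the error form from Thm 3.3 as I read it, where xi(x) is some point in (a,b) that depends on x:

    f(x) = P(x) + \frac{f^{(n+1)}(\xi(x))}{(n+1)!}\,(x - x_0)(x - x_1)\cdots(x - x_n)

which really does look like the Taylor remainder, except the single power (x - x_0)^{n+1} is replaced by a product over all the interpolation nodes.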

CHALLENGES
I don't understand what they're doing in Example 2. They're taking a table of values of e^x and interpolating between them, but using a Lagrange polynomial of degree 1: does this mean they select two points from the table and make a polynomial out of them? A sketch of my guess is below. Also, what is the Bessel function?
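Here's a minimal Python sketch of what I think is going on, with made-up table points (I don't have the text's actual table in front of me):

    import math

    # Hypothetical table entries for e^x (not the text's actual values).
    x0, x1 = 0.2, 0.3
    y0, y1 = math.exp(x0), math.exp(x1)

    def p1(x):
        # Degree-1 Lagrange polynomial through (x0, y0) and (x1, y1).
        return y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)

    x = 0.25
    actual = abs(math.exp(x) - p1(x))
    # Thm 3.3 bound: |f''(xi)| * |(x - x0)(x - x1)| / 2, with f'' = e^x largest at x1.
    bound = math.exp(x1) * abs((x - x0) * (x - x1)) / 2.0
    print(actual, bound)  # the actual error comes out below the bound

If this is right, then yes: they pick the two table points bracketing x, build the line through them, and use the theorem to bound the error.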

REFLECTION
Lagrange polynomials strike me as being useful in the computational domain because computer data deals with discrete data points (in sound, pictures, video), and any time we want to treat these as continuous data, we have to make a transformation. The error terms here seem somewhat difficult to calculate exactly, so that could be a problem in more open-ended applications of Lagrange polynomials. However, in A/V applications, it's probably possible to map out the domain of the problem well enough to predict what sort of errors could be encountered.

Thursday, March 6, 2008

March 7 2008

MAIN POINTS
We discussed in class how Taylor polynomials aren't suitable for interpolation, so interpolating polynomials are used instead. Finding a function passing through (x0,y0) and (x1,y1) amounts to solving for a function such that f(x0)=y0 and f(x1)=y1. The text defines L0 and L1 and then builds P(x) out of these two functions; they are formulated in a way that makes P(x0)=y0 and P(x1)=y1. This is generalized to any number of data points on p.105. Thm 3.2 says that this approximating polynomial exists with degree at most n, and defines it based on the form given on p.105. Example 1 shows this method making much better predictions than Taylor polynomials did, and then shows the interp MAPLE command.
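As a sanity check on the two-point construction, a minimal Python sketch (my own code and made-up sample points, not the text's):

    def lagrange_two_point(x0, y0, x1, y1):
        def L0(x):
            # Equals 1 at x0 and 0 at x1.
            return (x - x1) / (x0 - x1)
        def L1(x):
            # Equals 0 at x0 and 1 at x1.
            return (x - x0) / (x1 - x0)
        def P(x):
            # P(x0) = y0 and P(x1) = y1 by construction.
            return y0 * L0(x) + y1 * L1(x)
        return P

    P = lagrange_two_point(2.0, 4.0, 5.0, 1.0)
    print(P(2.0), P(5.0))  # 4.0 1.0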

CHALLENGES
I understand why the numerator in the L{n,k} function is the way it is (it places the zeroes at the other nodes), but why is there also this very long denominator?
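Writing it out helps (my transcription of the definition):

    L_{n,k}(x) = \prod_{i=0,\, i \neq k}^{n} \frac{x - x_i}{x_k - x_i}

If I plug in x = x_k, every factor becomes 1, so I think the denominator is just the numerator evaluated at x_k: it's there to force L_{n,k}(x_k) = 1, which is what makes P(x_k) = y_k come out right.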

Also, I was expecting to see gigantic matrices like we saw in class... how is what they're doing here different?
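If the class approach was solving directly for the coefficients c_0,...,c_n of the interpolating polynomial (my assumption about what those matrices were), the system would be the Vandermonde setup below; Lagrange's form seems to write down the same polynomial without ever solving a matrix:

    \begin{pmatrix} 1 & x_0 & \cdots & x_0^n \\ \vdots & \vdots & & \vdots \\ 1 & x_n & \cdots & x_n^n \end{pmatrix}
    \begin{pmatrix} c_0 \\ \vdots \\ c_n \end{pmatrix}
    =
    \begin{pmatrix} y_0 \\ \vdots \\ y_n \end{pmatrix}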

REFLECTION
Lagrange sure has a lot of things named after him. This method seems more convenient than Taylor polynomials in a lot of ways, but I'm curious about situations in which it might be suboptimal.