Thursday, April 24, 2008

April 25 2008

MAIN POINTS
Sarah defined chaos as behavior that is aperiodic and unpredictable over the long term, yet deterministic (i.e., governed by differential equations), and very sensitive to initial conditions. In 3-space, for instance, a starting point of (0.99, 1, 1) will have very different long-term behavior than (1, 1, 1), with the difference between the two trajectories growing exponentially as t increases. We saw this in our last lab. The Lorenz equations were derived to model weather systems, which helps explain why only short-range weather forecasts are accurate: small errors in measuring the initial condition quickly snowball out of control.

Sarah used this sensitivity in composing music. She mapped a piano piece to a pitch sequence and assigned that pitch sequence onto an existing strange attractor (I'm assuming it was the attractor starting at (1,1,1) with h=0.01, for instance). Then she chose a very similar starting point and created the new piece by approximating the differential equation at each discrete step t using the Runge-Kutta numerical method. The new piece started similarly to the old piece, but quickly deviated. However, it maintained some of the same tonal qualities. Due to the limitations of the model, the rhythm stayed the same, in discrete eighth notes or whatever they were. It sounded great, and the talk was packed with music people.
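
As a sanity check on that sensitivity claim, here is a minimal sketch (my own, not Sarah's code) that integrates the Lorenz system with RK4 from (1, 1, 1) and (0.99, 1, 1) using h = 0.01. The parameters sigma = 10, rho = 28, beta = 8/3 are the standard textbook choice, an assumption on my part since the talk didn't list them; the separation grows quickly until the two runs are effectively unrelated.

    # Sketch: two nearby starting points on the Lorenz system diverge quickly.
    # Parameters sigma=10, rho=28, beta=8/3 are the standard textbook choice
    # (an assumption; the talk didn't specify them).

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def rk4_step(f, state, h):
        k1 = f(state)
        k2 = f(tuple(s + h / 2 * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + h / 2 * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
        return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                     for s, a, b, c, d in zip(state, k1, k2, k3, k4))

    h = 0.01
    a, b = (1.0, 1.0, 1.0), (0.99, 1.0, 1.0)
    for step in range(1, 3001):
        a, b = rk4_step(lorenz, a, h), rk4_step(lorenz, b, h)
        if step % 500 == 0:
            dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * h:5.1f}   separation = {dist:.4f}")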

CHALLENGES
I didn't quite see how she was able to map the pitch sequences onto the Lorenz attractor, and then shift the notes based on the new attractor... She must have carefully chosen the mapping to make the new piece have a similar key and chords, because I would think that the key would start to jump all over the place. I'd like to know how many pieces she created before she came across the one she settled on.

REFLECTIONS
The project raises some questions: when a composer creates variations on a theme, how are these variations different? What is the creative process there? Can chaos be used to model this creative process? This project sidestepped probabilistic methods entirely and used the deterministic behavior built into the Lorenz equations instead.

Thursday, April 10, 2008

April 11 2008

MAIN POINTS
The text defines truncation error, a measure of the difference between the approximation and the exact solution of the differential equation. This error is shown to behave as O(h) for Euler's method, where h is the step size.
The text then extends Euler's method. Euler's method is based on the n=1 Taylor polynomial, and the text extends this method to any value of n. This creates the "Taylor method of order n," which is just a substitution of the nth Taylor polynomial (sans error term) into Euler's method. This involves calculating n-1 derivatives, which the book is nice enough to carry out for 2nd and 4th order Taylor methods. The error of the 4th order method is impressively low.
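
To see the orders side by side, here is a small sketch (my own toy problem, not the book's example) comparing Euler with the Taylor method of order 2 on y' = y, y(0) = 1, whose exact solution is e^t. For this f, the extra derivative needed by the order-2 method is just y again, so the order-2 step is simple.

    # Compare Euler (Taylor order 1) with the Taylor method of order 2 on
    # y' = y, y(0) = 1, whose exact solution is e^t.  (Test problem is my
    # own choice, not the book's example.)  For f(t, y) = y the total
    # derivative f'(t, y) is y as well, so the order-2 step is w*(1 + h + h^2/2).
    import math

    def solve(order, h, t_end=1.0):
        w, steps = 1.0, round(t_end / h)
        for _ in range(steps):
            if order == 1:
                w += h * w                      # Euler step
            else:
                w += h * w + (h ** 2 / 2) * w   # Taylor order-2 step
        return w

    for h in (0.1, 0.05, 0.025):
        e1 = abs(solve(1, h) - math.e)
        e2 = abs(solve(2, h) - math.e)
        print(f"h = {h:5.3f}   Euler error = {e1:.2e}   Taylor-2 error = {e2:.2e}")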

CHALLENGES
How is this truncation error different from the errors we've studied before? How can it measure the difference between the exact solution and the approximation if we don't know the exact solution? What is the significance of the final theorem: why does the exponent in the error bound, O(h^n), grow with the order of the method?

REFLECTION
Do you ever use this method when you're solving differential equations with a computer, or do you use the Runge-Kutta method from the next chapter? I'd like to see some real uses of differential equations, because I've only seen somewhat trivial ones. Are there any demos?

If you're doing anything like this with your lab over the summer, I'd like to check it out some time— I'll be staying here doing an internship (in scientific comp!) at the U.

Tuesday, April 8, 2008

April 9 2008

MAIN POINTS
A differential equation describes the change of some variable with respect to another. Because they are often difficult or impossible to solve exactly, they are good candidates for numerical methods. This chapter aims to find approximations at certain points, rather than the entire solution. The text defines the "Lipschitz condition," whose significance I don't immediately see. The later theorems dealing with convex sets and Lipschitz conditions confuse me.
Euler's Method is said to be seldom used in practice. It gives an approximation to a well-posed initial value problem. The method is derived through Taylor's theorem and is a fairly simple algorithm that uses recursion. The error is shown to grow as we get farther from the initial value (typical of anything built on Taylor-style thinking). A lot of this reading was lost on me because it defines theorems without first showing why they are important, and sometimes without sufficiently defining terms.
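
For my own notes, here is the recursion as I understand it, written out from the standard Taylor-based derivation (with h = t_{i+1} - t_i):

    % Taylor-expand the true solution about t_i, with h = t_{i+1} - t_i:
    %   y(t_{i+1}) = y(t_i) + h y'(t_i) + (h^2/2) y''(\xi_i)
    % Dropping the remainder and using y' = f(t, y) gives Euler's recursion:
    \begin{align*}
      w_0     &= y(t_0), \\
      w_{i+1} &= w_i + h\, f(t_i, w_i), \qquad i = 0, 1, 2, \dots
    \end{align*}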

CHALLENGES
The concept of "convex" works for me, but I have a hard time imagining what the "set" looks like. Pages 251-253 really confuse me. I still don't have a very good grasp of how a differential equation works. I will definitely need to go over a lot of these things in class, and I predict other people in the class will feel the same. Euler's method seems fairly simple, though.

REFLECTION
I've never done anything with differential equations. I'm not even really sure what we'd be approximating by using Euler's method. Could we be approximating the location of a pendulum, for instance? In Euler's method we again see the use of Taylor polynomials to derive a theorem. Most of what we've used in the class has been either polynomial interpolation or Taylor-based... are there any other schools of thought in approximating functions like this?
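
To try to answer my own pendulum question, here is a rough sketch (a toy setup of my own, not an example from the book): Euler's method applied to theta'' = -(g/L) sin(theta), rewritten as the first-order system theta' = omega, omega' = -(g/L) sin(theta).

    # Rough sketch of the pendulum idea above: Euler's method on
    # theta'' = -(g/L) sin(theta), written as the first-order system
    # theta' = omega, omega' = -(g/L) sin(theta).  (Toy setup of my own,
    # not an example from the book.)
    import math

    g, L = 9.81, 1.0          # gravity (m/s^2) and pendulum length (m)
    h = 0.001                 # step size
    theta, omega = 0.5, 0.0   # initial angle (rad) and angular velocity

    t = 0.0
    while t < 2.0:
        theta, omega = theta + h * omega, omega - h * (g / L) * math.sin(theta)
        t += h
        if round(t / h) % 500 == 0:
            print(f"t = {t:4.1f} s   theta = {theta:+.4f} rad")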

Thursday, April 3, 2008

April 4 2008

MAIN POINTS
Simply using one instance of Newton-Cotes over a large domain is impractical and full of error. Dividing the domain up into sections and doing a Newton-Cotes integration on each piece is much more accurate. The approximation is thus a sum of the integrals of however many polynomials come from splitting up the domain, and the total error term is the sum of the pieces' error terms. The composite midpoint rule is described by Thm 4.6 as having a nice, clean error term. The midpoint rule is a common rule in calculus classes. The text runs this rule on a sample function and works through some Maple code.
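
Here is a quick sketch of the composite midpoint idea on a test integral whose exact value is known (this is the generic sum-of-midpoint-rectangles form, not necessarily the exact indexing Thm 4.6 uses); the error shrinks by roughly a factor of four each time the number of subintervals doubles, consistent with an O(h^2) error term.

    # Sketch of the composite midpoint rule on a test integral whose exact
    # value we know: the integral of sin(x) from 0 to pi equals 2.
    import math

    def composite_midpoint(f, a, b, n):
        """Split [a, b] into n subintervals and sum f at each midpoint."""
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    exact = 2.0
    for n in (4, 8, 16, 32):
        approx = composite_midpoint(math.sin, 0.0, math.pi, n)
        print(f"n = {n:2d}   approx = {approx:.8f}   error = {abs(approx - exact):.2e}")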

CHALLENGES
Why do they show the Composite Midpoint Rule after the Composite Simpson's rule? I'd think the Midpoint rule would be the simpler one... everything else seems to follow directly from the Newton-Cotes formulas (error terms etc).

REFLECTION
I'd like to try this with discrete data points and no explicit formula. There is then a limit to the number of divisions you can make, and there is no error bound because you can't be sure of the derivative at any given point.
This seems similar to what Excel might use to draw curves through discrete points, but the graph in Fig 4.7 has abrupt corners (sudden changes in derivative), whereas Excel produces smooth-looking curves.
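
A quick sketch of what I have in mind: the composite trapezoidal rule applied directly to a handful of made-up data points (the values below are invented for illustration), with no underlying formula and therefore no derivative to plug into an error bound.

    # Composite trapezoidal rule on plain data points with no formula
    # behind them (the sample values are made up for illustration).
    xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
    ys = [1.2, 1.9, 2.1, 1.7, 1.4, 1.6, 2.3]

    area = sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))
    print(f"composite trapezoid estimate: {area:.3f}")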

Sunday, March 30, 2008

March 31 2008

MAIN POINTS
The text describes how we can estimate the definite integral of a function through "numerical quadrature," which is basically performed by selecting distinct points within the desired interval, evaluating the function at them, finding the Lagrange interpolating polynomial through those points, and then integrating the resulting polynomial. The error term for this integral looks nearly impossible to estimate, because it sits inside both a product and an integral.
The text describes the trapezoidal rule next, which is taught in high school and just creates neighboring trapezoids out of function values. Simpson's rule integrates the second-degree Lagrange polynomial through three function values. Its error term includes a fourth derivative, so knowing the fourth-derivative behavior of the function gives an estimate of how exact the answer will be.
Finally, the text generalizes this class of Newton-Cotes methods. I don't really understand how this generalization works, or how the methods for larger n are built up.
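
For reference, the two closed Newton-Cotes formulas mentioned above, written out in their standard forms (h is the node spacing and xi is some point in the interval):

    % n = 1, the trapezoidal rule, with h = b - a:
    \int_a^b f(x)\,dx = \frac{h}{2}\bigl[f(a) + f(b)\bigr] - \frac{h^3}{12}\, f''(\xi)
    % n = 2, Simpson's rule, with h = (b - a)/2 and x_1 the midpoint:
    \int_a^b f(x)\,dx = \frac{h}{3}\bigl[f(a) + 4 f(x_1) + f(b)\bigr] - \frac{h^5}{90}\, f^{(4)}(\xi)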

CHALLENGES
As written above, how are Newton-Cotes formulas built up? What's the difference between open and closed? Is it basically the Lagrange interpolating polynomial of n points being integrated? Is the basic integration model with the rectangles inside the function the model for n=0?

REFLECTION
This is more interesting than the normal approximation we learned in HS calc. As n increases, the amount of calculation seems to increase. The error term may or may not drop; it seems to depend on h and on the behavior of the higher derivatives of f(x).

Sunday, March 9, 2008

March 10 2008

MAIN POINTS
The reading begins with Thm 3.3, which defines the error function in much the same way it was defined for Taylor polynomials; xi(x) lies within the bounds (a,b). The proof for this takes up a page and uses Rolle's Theorem. Example 2 I don't quite understand. Example 3 I understand more: they're comparing the error when using higher and higher degree polynomials, using the steps they've already explained. The text also explains that one often has to calculate successively higher-degree polynomials to find the one that reaches the desired accuracy.
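
Writing out the error formula from Thm 3.3 makes the Taylor parallel easier for me to see (this is the standard statement, with P the degree-n interpolating polynomial through the nodes x_0, ..., x_n):

    % Interpolation error, with \xi(x) somewhere in (a, b):
    f(x) = P(x) + \frac{f^{(n+1)}(\xi(x))}{(n+1)!} \prod_{i=0}^{n} (x - x_i)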

CHALLENGES
I don't understand what they're doing in Example 2... so they're taking a table of values of e^x and interpolating between them... but using a Lagrange polynomial of degree 1. Does this mean they select two points from the table and make a polynomial out of them? What is the Bessel function?
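
Here is a tiny sketch of what I think "degree 1" means there: pick the two tabulated points that bracket the target, build the line through them in Lagrange form, and evaluate it (the table values below are just e^x computed directly, not the book's actual table).

    # Degree-1 Lagrange interpolation between two tabulated values of e^x.
    # (The "table" here is just e^x computed directly, not the book's table.)
    import math

    x0, x1 = 0.1, 0.3
    y0, y1 = math.exp(x0), math.exp(x1)

    def p1(x):
        """Degree-1 Lagrange polynomial through (x0, y0) and (x1, y1)."""
        return y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)

    x = 0.2
    print(f"interpolated e^{x} ~ {p1(x):.5f}   true value = {math.exp(x):.5f}")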

REFLECTION
Lagrange polynomials strike me as being useful in the computational domain because computer data deals with discrete data points (in sound, pictures, video), and any time we want to think of these as continuous data, we have to make a transformation. The error terms here seem sort of difficult to calculate exactly, so that could be a problem in more open-ended applications of Lagrange polynomials. However, in A/V applications, it's probably possible to map out the domain of the problem really well to predict what sort of errors could be encountered.

Thursday, March 6, 2008

March 7 2008

MAIN POINTS
We discussed in class how Taylor polynomials aren't suitable for interpolation, so interpolating polynomials are used instead. Finding a function passing through (x0,y0) and (x1,y1) is like solving for a function such that f(x0)=y0 and f(x1)=y1. The text defines L0 and L1 and then defines P(x) from these two functions; they are formulated in a way that makes P(x0)=y0 and P(x1)=y1. This is generalized to any number of data points on p.105. Thm 3.2 says that this approximating function exists with degree at most n, and defines it based on the form given on p.105. Example 1 shows this method making much better predictions than Taylor polynomials did, and then shows the interp Maple command.
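
For my notes, the general form from p.105 as I understand it (standard notation):

    % Lagrange basis functions and the interpolating polynomial:
    L_{n,k}(x) = \prod_{\substack{i = 0 \\ i \neq k}}^{n} \frac{x - x_i}{x_k - x_i},
    \qquad
    P(x) = \sum_{k=0}^{n} f(x_k)\, L_{n,k}(x)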

CHALLENGES
I understand why the numerator in the L{n,k} function is the way it is (to define the zeroes), but why is there also this very long denominator?

Also, I was expecting to see gigantic matrices like we saw in class... how is what they're doing here different?

REFLECTIONS
Lagrange sure has a lot of things named after him. This method seems more convenient than Taylor polynomials in a lot of ways— but I'm curious about situations in which this might be suboptimal.