Thursday, April 24, 2008

April 25 2008

MAIN POINTS
Sarah defined chaos as behavior that is aperiodic and unpredictable over the long term, yet deterministic (i.e., governed by differential equations) and very sensitive to initial conditions. In 3-space, for instance, a starting point of (0.99, 1, 1) will have very different long-term behavior than (1, 1, 1), with the difference between the two trajectories growing exponentially as t increases. We saw this in our last lab. The Lorenz equations were derived to model weather systems, which helps explain why only short-range weather forecasts are accurate: small errors in measuring the initial conditions quickly snowball out of control.

Sarah used this sensitivity in composing music. She mapped a piano piece to a pitch sequence, and assigned that pitch sequence onto an existing strange attractor (I'm assuming it was the attractor starting at (1, 1, 1) with h = 0.01, for instance). Then she chose a very similar starting point and created a new piece by approximating the solution of the differential equations at each discrete step using the Runge-Kutta numerical method. The new piece started similarly to the old piece, but quickly deviated. However, it maintained some of the same tonal qualities. Due to the limitations of the model, the rhythm stayed the same, in discrete 8th notes or whatever they were. It sounded great and the talk was packed with music people.
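To convince myself of that sensitivity, here's a minimal sketch (my own toy code, not Sarah's) that integrates the Lorenz equations with classical fourth-order Runge-Kutta from the two starting points above. The parameters sigma = 10, rho = 28, beta = 8/3 are my assumption, since the talk didn't specify them:

```python
import numpy as np

# Lorenz system with the classic chaotic parameters (my assumption;
# the talk didn't say which parameters Sarah used).
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(state)
    k2 = f(state + h / 2 * k1)
    k3 = f(state + h / 2 * k2)
    k4 = f(state + h * k3)
    return state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, steps = 0.01, 3000
a = np.array([1.0, 1.0, 1.0])    # reference trajectory
b = np.array([0.99, 1.0, 1.0])   # the nearby starting point from above
for n in range(steps):
    a, b = rk4_step(lorenz, a, h), rk4_step(lorenz, b, h)
    if n % 500 == 0:
        print(f"t = {n * h:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```

The separation grows by orders of magnitude before leveling off at roughly the diameter of the attractor, which is exactly the behavior the pitch sequences would inherit.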

CHALLENGES
I didn't quite see how she was able to map the pitch sequences onto the Lorenz attractor, and then shift the notes based on the new attractor... She must have carefully chosen the mapping to make the new piece have a similar key and chords, because I would think that the key would start to jump all over the place. I'd like to know how many pieces she created before she came across the one she settled on.

REFLECTION
The project raises some questions— when a composer creates variations on a theme, how are these variations different? What is the creative process there? Can chaos be used to model this creative process? This project sidestepped probabilistic methods entirely and used the deterministic behavior built into the Lorenz equations instead.

Thursday, April 10, 2008

April 11 2008

MAIN POINTS
The text defines truncation error, a measure of the difference between the approximation and the exact solution of the differential equation. This error is shown to be O(h) for Euler's method, where h is the step size.
The text then extends Euler's method. Euler's method is based on the n = 1 Taylor polynomial, and the text extends this method to any value of n. This creates the "Taylor method of order n," which is just a substitution of the nth Taylor polynomial (sans error term) into Euler's method. This involves calculating n-1 derivatives of f, which the book is nice enough to carry out for the 2nd- and 4th-order Taylor methods. The error of the 4th-order method is impressively low.
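As a concrete sketch of the order-2 case (my own toy IVP, not necessarily the book's example): with f' denoting the total derivative f_t + f_y * f worked out by hand, the method steps by w_{i+1} = w_i + h * (f + (h/2) * f'):

```python
import math

# Toy IVP (my choice): y' = y - t^2 + 1, y(0) = 0.5,
# whose exact solution is y(t) = (t + 1)^2 - 0.5 * e^t.
def f(t, y):
    return y - t * t + 1

def df(t, y):
    # total derivative f' = f_t + f_y * f for this particular f
    return -2 * t + (y - t * t + 1)

def taylor2(t0, y0, h, steps):
    """Taylor method of order 2: w_{i+1} = w_i + h*(f + (h/2)*f')."""
    t, w = t0, y0
    for _ in range(steps):
        w += h * (f(t, w) + (h / 2) * df(t, w))
        t += h
    return w

h = 0.2
approx = taylor2(0.0, 0.5, h, 10)               # integrate out to t = 2
exact = (2 + 1) ** 2 - 0.5 * math.exp(2)
print(f"approx = {approx:.6f}   exact = {exact:.6f}   error = {abs(approx - exact):.2e}")
```

The hand-computed derivative in df is the price of the method: every extra order means differentiating f again.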

CHALLENGES
How is this truncation error different from the errors we've studied before? How can it measure the difference between the exact solution and the approximation if we don't know the exact solution? What is the significance of the final theorem— why does the error seem to grow larger as the order of the method increases (O(h^n))?

REFLECTION
Do you ever use this method when you're solving differential equations with a computer, or do you use the Runge-Kutta method from the next chapter? I'd like to see some real uses of differential equations, because I've only seen somewhat trivial ones. Are there any demos?

If you're doing anything like this with your lab over the summer, I'd like to check it out some time— I'll be staying here doing an internship (in scientific comp!) at the U.

Tuesday, April 8, 2008

April 9 2008

MAIN POINTS
A differential equation describes the change of some variable with respect to another. Because they are often difficult or impossible to solve exactly, they are a good candidate for numerical methods. This chapter aims to find approximations at certain points, rather than the entire solution. The text defines the "Lipschitz condition," whose significance I don't immediately see. The further theorems dealing with convex sets and Lipschitz conditions confuse me.
Euler's method is said to be seldom used in practice. It gives an approximation to a well-posed initial value problem. The method is derived through Taylor's theorem and is a fairly simple algorithm that uses recursion. The error is shown to grow as we get farther from the initial value (typical of methods built on Taylor-style thinking). A lot of this reading was lost on me because it states theorems without first showing why they are important, and sometimes without sufficiently defining terms.
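For what it's worth, here's a minimal sketch of the recursion on a toy problem of my own choosing (y' = -2y, y(0) = 1, exact solution e^{-2t}); halving h roughly halves the error at t = 1, matching the O(h) truncation error:

```python
import math

# Euler's method for y' = f(t, y), y(t0) = y0:
# the recursion is w_{i+1} = w_i + h * f(t_i, w_i).
def euler(f, t0, y0, h, steps):
    t, w = t0, y0
    for _ in range(steps):
        w += h * f(t, w)
        t += h
    return w

for h in (0.1, 0.05, 0.025):
    approx = euler(lambda t, y: -2 * y, 0.0, 1.0, h, round(1.0 / h))
    print(f"h = {h:5.3f}   error at t = 1: {abs(approx - math.exp(-2)):.4e}")
```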

CHALLENGES
The concept of "convex" works for me, but I have a hard time imagining what the "set" looks like. Pages 251-253 really confuse me. I still don't have a very good grasp of how a differential equation works. I will definitely need to go over a lot of these things in class, and I predict other people in the class will feel the same. Euler's method seems fairly simple, though.

REFLECTION
I've never done anything with differential equations. I'm not even really sure what we'd be approximating by using Euler's method— could we be approximating the location of a pendulum, for instance? (See the sketch below.) In Euler's method we see again the use of Taylor polynomials to derive a theorem. Most of what we've used in the class has been either polynomial interpolation or Taylor polynomials... are there any other schools of thought in approximating functions like this?
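To partly answer my own pendulum question with a sketch (my own toy setup, not from the text): rewrite theta'' = -(g/L) * sin(theta) as a first-order system and step it with Euler's method, and you do indeed get the pendulum's (approximate) location over time:

```python
import math

# Simple pendulum theta'' = -(g/L) * sin(theta), rewritten as the
# first-order system theta' = omega, omega' = -(g/L) * sin(theta).
# Parameters and step size are my own toy choices.
g, L = 9.81, 1.0
h, steps = 0.001, 2000

theta, omega = math.pi / 4, 0.0   # released from 45 degrees at rest
for n in range(steps):
    theta, omega = theta + h * omega, omega - h * (g / L) * math.sin(theta)
    if n % 500 == 0:
        print(f"t = {n * h:4.2f} s   theta = {theta:+.4f} rad")
```

(Euler's error creeps in here too: the computed swing slowly gains amplitude, which a real pendulum wouldn't do.)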

Thursday, April 3, 2008

April 4 2008

MAIN POINTS
Simply using one instance of Newton-Cotes over a large domain is impractical and full of error. Dividing the domain into sections and doing a Newton-Cotes integration on each piece is much more accurate. The approximation is thus a sum of however many polynomial integrals there are from splitting up the domain, and the error term is the sum of the pieces' error terms. The composite midpoint rule is described by Thm 4.6 as having a pretty clean error term. The midpoint rule is a common rule in calculus classes. The text runs this rule on a sample equation and practices with some Maple code.
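Here's a minimal sketch of the composite midpoint rule in its simple calculus-class form (my own indexing and test integrand, not necessarily Thm 4.6's bookkeeping): split [a, b] into n subintervals of width h and sum h * f(midpoint); the error shrinks about 4x each time n doubles, consistent with an O(h^2) error term:

```python
import math

def composite_midpoint(f, a, b, n):
    """Composite midpoint rule: h * sum of f at each subinterval midpoint."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Toy check (my choice of integrand): the integral of sin on [0, pi] is exactly 2.
for n in (4, 8, 16):
    approx = composite_midpoint(math.sin, 0.0, math.pi, n)
    print(f"n = {n:3d}   approx = {approx:.6f}   error = {abs(approx - 2):.2e}")
```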

CHALLENGES
Why do they show the Composite Midpoint Rule after the Composite Simpson's rule? I'd think the Midpoint rule would be the simpler one... everything else seems to follow directly from the Newton-Cotes formulas (error terms etc).

REFLECTION
I'd like to try this with discrete data points and no explicit formula. There is then a limit to the number of divisions you can make, and there is no error bound, because you can't be sure of the derivative at any given point.
This seems similar to what Excel might use to draw continuous graphs out of discrete points— but the graph in Fig 4.7 has abrupt corners (instantaneous changes in derivative), whereas Excel produces smooth curves.
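As a tiny sketch of that idea (made-up sample data; I'm using the composite trapezoidal rule, since fixed samples don't give you midpoints):

```python
# Composite trapezoidal rule over fixed discrete samples. With no formula
# for f we can't refine h further, and the usual error bound is unusable
# because it needs a derivative of f.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 1.6, 2.1, 1.9, 1.2]   # hypothetical measurements

integral = sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(xs) - 1))
print(f"approximate integral: {integral:.4f}")
```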