Welcome to the homepage for Numerical Analysis (Math 4500/6500)! I will post all the homework assignments for the course on this page. Our text for the course is Cheney and Kincaid, Numerical Mathematics and Computing (seventh edition). If you want to save money, you’ll be fine with the much cheaper 6th edition. During Fall 2013, the class meets TTh 11:00-12:15 in Boyd 221.
Numerical Analysis is the art and science of doing mathematics with a computer. At this point in your mathematical career, you have been trained to think of things like row-reducing a matrix, taking the derivative of a simple function, and computing an integral as simple, slightly boring, computational procedures. It may come as a surprise to learn that doing each of these operations in practice (i.e. using a computer or calculator) is actually quite subtle and interesting.
The essential problem is that while mathematicians generally prove theorems over the real numbers, computers work in a finite approximation to the real numbers, storing “floating point” numbers in a fixed number of binary digits. This means that only a finite collection of real numbers is actually representable. The computer defines addition and multiplication of these numbers so that the result is the closest representable number to the actual sum or product one is trying to compute. Unfortunately, the difference between the correct answer and the number returned by the computer (called “roundoff error”) is not always insignificant, especially when thousands or millions of computations are performed in sequence.
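To make this concrete, here is a quick illustration. It is written in Python rather than Mathematica, but any language using standard double-precision floats behaves the same way:

```python
# 0.1 and 0.2 have no exact binary representation, so their computed
# sum is the closest representable double to 0.3 -- but not 0.3 itself.
a = 0.1 + 0.2
print(a == 0.3)        # False
print(abs(a - 0.3))    # a tiny but nonzero difference

# Roundoff also accumulates: adding 0.1 a million times does not
# give exactly 100000.
s = 0.0
for _ in range(1_000_000):
    s += 0.1
print(abs(s - 100000.0))  # small, but noticeably nonzero
```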
In this class, we will learn the fundamentals of numerical mathematics, covering the basics of numerical arithmetic, finding the roots of equations, and numerical differentiation and integration. The next semester (MATH 4510) will cover more advanced and current topics including numerical linear algebra (a key ingredient in data compression schemes like MP3), numerical solution of ordinary and partial differential equations (a fundamental part of fluid and climate models), and numerical minimization of functions of many variables (used in everything from engineering to mathematical biology).
Mathematica will be an integral part of the course. The mathematics department has a site license for Mathematica and you can get a copy for your laptop or home computer free of charge. I encourage you to work through some introductory material on Mathematica (Programming with Mathematica: An Introduction looks pretty good) in order to get grounded. Some of the homework assignments and projects will require you to write Mathematica programs, and we’ll start the course with an introduction to Mathematica programming.
Syllabus and Policies
Please examine the course syllabus. If you think you can get by with this copy, save a tree! Don’t print it out.
The course syllabus lists the various policies for the course (don’t miss the attendance policy). But one that’s worth stating up front is the WF/WP policy. It’s this:
At the time of withdrawal, you must have earned at least 50% of the total points assigned before the date of withdrawal to be considered “passing” and receive a grade of WP. An exception to this rule is made for the midterm: for two weeks after the date of the midterm, the midterm grade will not be counted. (That is, you can still receive a WP regardless of your midterm grade if you have 50% of the homework points assigned before the first exam.)
Here are links to my lecture notes for the course. Each lecture is usually accompanied by one or more Mathematica notebooks explaining and demonstrating the concepts from class. These were originally Mathematica 7 notebooks, but I’m updating as we go.
- Introduction. This lecture has one demonstration: Precision and the solution of linear equations.
- Floating Point Numbers. There are several demonstrations for this lecture: Machine Numbers, Conversion to single and double precision floating point numbers (try this with some digits of Pi, such as 3.141592653589793238462643).
- Mathematica Programming. This set of lectures covers: Entering and evaluating expressions. Functions. Plots. Calculus. Series Expansions. Patterns and Transformation Rules. Iteration. Operations on Lists. Interactivity.
- Finding the roots of Equations. This set of lectures comes with demonstrations of the bisection method, bisection versus false position, Newton’s method in one dimension, Newton’s method in N dimensions, convergence of 2-dimensional Newton’s method to various roots, and includes some geometric example problems for Newton’s method. We also compare Newton’s method with the secant method.
- Interpolation of Data. In this lecture, we discuss the surprisingly difficult and subtle problem of estimating the value of a function between given data points. We include a demonstration of interpolating polynomials.
- Error in Interpolation of Functions. We dig deeper into interpolation errors, discussing interpolating polynomials and nodes and the effect of choice of nodes on interpolation error, and linking to a web demo of choice of nodes vs. interpolation error.
- Estimating Derivatives and Richardson Extrapolation. We give a rigorous procedure, called Richardson extrapolation, for bounding approximation errors. We include demonstrations of symbolic Richardson extrapolation, and Richardson extrapolation and derivatives.
- Estimating Derivatives by Polynomial Fitting. We can use interpolating polynomials to estimate derivatives. This is how we derive the approximation for the second derivative.
- There is an optional unit here on numerical differentiation with noise. I don’t think we’re going to cover it this time around, but I’m leaving the resources linked here if you’re interested. The first resource is the paper Numerical Differentiation of Noisy, Nonsmooth Data by Chartrand. The second is a demonstration of Chartrand’s method.
- I don’t think we’re going to cover this either, but I’m leaving the resources in place if we have time. Computing Derivatives of Polylines. This lecture is based on the paper Asymptotic Analysis of Discrete Normals and Curvatures of Polylines. These methods are implemented in curvature and torsion of polylines with the data set trefoil data.dat.
- Numerical Integration and the Trapezoid Rule. We now return to Math 2260 and reintroduce our old friend the Trapezoid rule with a demonstration.
- Advanced Error Analysis of the Trapezoid Rule. This is basically a lecture for the graduate students, but the math is so beautiful and cool that I can’t resist it. We have demonstrations of the Peano kernel error estimate and the Bernoulli polynomials.
- The Romberg integration algorithm is our first really modern integrator. It comes with a demonstration showing how well it works.
- Adaptive Integration and Simpson’s Rule. This doesn’t have a demonstration (yet).
- Newton-Cotes Rules. This lecture is different from the others because the entire lecture is given in Mathematica.
- Gauss Quadrature. Another integration rule, which is based on the Legendre polynomials.
- Minimization of functions of one variable. This is the start of our last big topic: numerical optimization. We have some examples (including Brent’s method).
- Theory of Brent’s method. The source material here is Chapter 5 of Brent’s book, which contains the original algorithm.
- Minimization with derivatives. If you can compute derivatives of your function symbolically (this can be a big if in applications!) you can do somewhat better than just using a straight Brent’s method. See Section 10.3 of Numerical Recipes in C (Press, Flannery, Teukolsky, Vetterling) and our Mathematica implementation of Brent’s method with derivatives.
- We probably won’t cover this (optional) lecture on Inexact Minimization, but I’m leaving it here in case we get there. The source material is Section 4.8 of Practical Optimization by Antoniou and Lu. We give a Mathematica implementation of Fletcher’s method for inexact minimization.
- Function Minimization without Derivatives: The Nelder-Mead Simplex Algorithm. The Nelder-Mead algorithm isn’t particularly fast, but it’s incredibly robust, and it’s easy to code. The paper Convergence Properties of the Nelder-Mead Algorithm in Low Dimensions proves that Nelder-Mead converges for strictly convex functions on the plane. There’s also a good discussion of Nelder-Mead in this section of Numerical Analysis with MATLAB. Here’s a Nelder-Mead Mathematica demonstration.
- The Nelder-Mead Algorithm in N dimensions. We can fairly easily extend Nelder-Mead to arbitrary dimensions. Here’s the Nelder-Mead Mathematica Demonstration (Nd).
- Compass Search and Other 0th order methods. The Nelder-Mead algorithm can be generalized to a family of related algorithms that search along particular directions.
- Simulated Annealing and Markov Chain Methods.
- Minimization of Functions of N Variables I: Introduction to Direction Set Methods. Direction set methods use the equivalent of knowing the derivative of your function for multivariable problems. In one-variable problems, knowing the derivative is a convenience; in some multivariable problems, it can be almost a necessity for making progress. The demonstration gives the basic algorithm; we also link to optimizing the Rosenbrock Banana Function.
- Direction Set Methods for Minimization of Functions of N variables II: Conjugate Directions and the Conjugate Gradient Algorithm. This has source material from Numerical Recipes in C online (you want section 10.6), but I’ll try to replace it with a better reference before we get here.
- This is an extra lecture on constrained optimization. I don’t think we’ll get to it this time around.
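To give a feel for the root-finding lectures, here is a minimal sketch of one-dimensional Newton’s method. It is in Python purely for illustration; the class notebooks in Mathematica remain the versions we use:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x -> x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Find sqrt(2) as the positive root of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```

Notice how few iterations this needs: Newton’s method converges quadratically near a simple root, roughly doubling the number of correct digits each step.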
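The interpolation lectures center on the interpolating polynomial. A minimal Python sketch of evaluating it in Lagrange form (illustrative only; the course demonstrations are Mathematica notebooks):

```python
def lagrange_eval(xs, ys, t):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at t,
    using the Lagrange form: sum_i y_i * prod_{j != i} (t - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (t - xj) / (xi - xj)
        total += term
    return total

# The quadratic through (0,1), (1,2), (2,5) is x^2 + 1, so we get 10 at x = 3.
print(lagrange_eval([0, 1, 2], [1, 2, 5], 3.0))
```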
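The idea behind the Richardson extrapolation lecture can be sketched in a few lines. The central difference D(h) = (f(x+h) − f(x−h))/(2h) has an error series in even powers of h, so the combination (4·D(h/2) − D(h))/3 cancels the h² term. A Python illustration (the course demonstrations are in Mathematica):

```python
import math

def central_diff(f, x, h):
    """Central difference approximation to f'(x); error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_diff(f, x, h):
    """One Richardson step: combine D(h) and D(h/2) so the h^2 error
    terms cancel, leaving an O(h^4) estimate: (4 D(h/2) - D(h)) / 3."""
    return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3

x, h = 1.0, 0.1
exact = math.cos(1.0)
print(abs(central_diff(math.sin, x, h) - exact))     # roughly 1e-3
print(abs(richardson_diff(math.sin, x, h) - exact))  # several orders smaller
```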
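Our old friend the trapezoid rule, from the integration lectures, in a minimal Python sketch (illustrative only). The composite rule’s O(h²) error is easy to see numerically: doubling the number of subintervals cuts the error by roughly a factor of 4.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals; error is O(h^2)."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Integrate sin from 0 to pi (exact value 2) and watch the error
# shrink by about 4x each time n doubles.
for n in (8, 16, 32):
    print(n, abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0))
```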
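The Romberg algorithm combines the last two ideas: compute trapezoid estimates with h, h/2, h/4, … and Richardson-extrapolate them in a triangular table. A compact Python sketch (for illustration; the class demonstration notebook is the reference version):

```python
import math

def romberg(f, a, b, levels):
    """Romberg integration: trapezoid estimates at h, h/2, h/4, ...,
    repeatedly Richardson-extrapolated in a triangular table R[i][j]."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2
        # Refine the trapezoid sum: reuse the previous level, add new midpoints.
        new_points = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new_points
        # Extrapolate: each column cancels the next even power of h.
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# Six levels (at most 32 subintervals) already beat the plain
# trapezoid rule by many orders of magnitude on this integral.
print(abs(romberg(math.sin, 0.0, math.pi, 6) - 2.0))
```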
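Brent’s method itself is intricate, so as a simpler stand-in for the one-variable minimization lectures, here is golden-section search, one of its ingredients. This is a Python sketch for illustration, not the algorithm from Brent’s book:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search: shrink the bracket [a, b] by the golden
    ratio each step, reusing one interior function value per iteration."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc        # minimum is in [a, d]
            c = b - invphi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd        # minimum is in [c, b]
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Minimize (x - 2)^2 + 1 on [0, 5]; the minimum is at x = 2.
print(golden_section_min(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0))
```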
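Finally, the Nelder-Mead lectures: the algorithm really is easy to code, as this stripped-down Python sketch shows. It uses only reflection, expansion, inside contraction, and shrinking (the full algorithm, and the class Mathematica demonstrations, handle more cases):

```python
def nelder_mead(f, simplex, tol=1e-10, max_iter=500):
    """Minimal Nelder-Mead in N dimensions: order the simplex, then
    reflect / expand / contract / shrink based on function values."""
    n = len(simplex) - 1
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # Centroid of every point except the worst.
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        reflect = [centroid[i] + (centroid[i] - worst[i]) for i in range(n)]
        if f(reflect) < f(best):
            expand = [centroid[i] + 2 * (centroid[i] - worst[i]) for i in range(n)]
            simplex[-1] = expand if f(expand) < f(reflect) else reflect
        elif f(reflect) < f(simplex[-2]):
            simplex[-1] = reflect
        else:
            contract = [centroid[i] + 0.5 * (worst[i] - centroid[i]) for i in range(n)]
            if f(contract) < f(worst):
                simplex[-1] = contract
            else:  # shrink the whole simplex toward the best point
                simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# Minimize f(x, y) = (x - 1)^2 + (y + 2)^2 from a small starting simplex.
f = lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2
best = nelder_mead(f, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(best)
```

Note that the method uses only function values, never derivatives, which is what makes it so robust on noisy or non-smooth objectives.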
Homework will be due on Thursdays at more-or-less weekly intervals. Each assignment will contain some regular problems and some challenge problems. The regular problems are required. No individual challenge problem is required, but undergraduates must complete a total of 10 challenge problems during the semester (you pick ’em!) and graduate students must complete a total of 15 challenge problems during the semester (again, you pick ’em!).
- Homework 1. Covers Taylor’s Theorem. Floating Point Representation. Loss of Significance. Due 8/29/2013.
- Homework 2. Covers our notes on Mathematica programming. You’ll need the data files The Adventures of Sherlock Holmes and (if you do the extra credit problem) The Divine Comedy. Due 9/5/2013.
- Homework 3. Covers our notes on Finding Roots of Equations. Due 9/12/2013.
- Homework 4. Covers our notes on Interpolation and Interpolation Errors. Due 9/26/2013.
- Homework 5. Covers our notes on Richardson Extrapolation and Numerical Differentiation. Due 10/10/2013.
- Homework 6. Covers our notes on the Trapezoid Rule and its Advanced Error Analysis. Due 10/17/2013.
- Homework 7. Covers our notes on Romberg Integration and Adaptive Integration. Due 10/24/2013.
- Homework 8. Covers our notes on Newton-Cotes Rules and Gauss Quadrature. Due 11/7/2013.
- Homework 9. Covers our notes on One-variable optimization. Due 11/21/2013.
- Homework 10. Covers our notes on the Nelder-Mead algorithm. Due 12/3/2013.
- Homework 11. Covers our notes on Direction Set Methods and Conjugate Gradient.
- Homework 12. Covers the notes on Simulated Annealing.
Exams and Final Projects
This course has a midterm exam and a final project. The exams from 2009 and 2010 are here. This year’s exam is a practical project to be completed using Mathematica: Midterm exam for the 2013 course. Due Thursday, November 7, 2013, at midnight, by email.
The final project will be a 3-5 page research project leading to a calculation. The purpose of the course has been to show you some of the issues (theoretical and practical) involved in doing these calculations, together with a standard framework for comparing the accuracy of various methods by finding and plotting the number of correct digits. Your final project should draw from all of the class notebooks as needed. In accordance with UGA policy on final exams, this final project will be considered a take-home final exam for the course. However, it is specifically permitted to talk with other students about the project, and to use web resources and textbooks as you work.
The final project changes every year. The 2013 project was the Taco-Related Mountain Navigation challenge. Here is the Project Description: Taco-Related Mountain Navigation Challenge, and TerrainPackage.m along with TerrainPackageTester.nb. In the TRMNC, you are given a function z(x,y) describing the topography of the L.A. basin and are asked to find the shortest overland route between the “El Chato” taco truck and a set of coordinates in the desert. The 2013 winner was Scott Barnes, whose Bezier curve solution is shown below. This solution has length 16.403 km, which is the new number to beat. Congratulations, Scott!