In this text, students of applied mathematics, science, and engineering are introduced to fundamental ways of thinking about the broad context of parallelism. The authors begin by giving the reader a deeper understanding of the issues through a general examination of timing, data dependencies, and communication. These ideas are then made concrete for shared memory multiprocessing, parallel and vector processing, and distributed memory cluster computing. Threads, OpenMP, and MPI are covered, along with code examples in Fortran, C, and Java. The principles of parallel computation are applied throughout as the authors cover traditional topics of a first course in scientific computing. Building on the fundamentals of floating point representation and numerical error, the book gives a thorough treatment of numerical linear algebra and eigenvalue/eigenvector problems. By studying how these algorithms parallelize, the reader learns to recognize the parallelism inherent in other computations, such as Monte Carlo methods.
Ronald W. Shonkwiler is a Professor in the School of Mathematics at the Georgia Institute of Technology. He has authored or co-authored over 50 research papers in the areas of functional analysis, mathematical biology, image processing algorithms, fractal geometry, neural networks, and Monte Carlo optimization methods. His algorithm for monochrome image comparison is part of a US patent for fractal image compression. He has co-authored two other books, An Introduction to the Mathematics of Biology and The Handbook of Stochastic Analysis and Applications. Lew Lefton is the Director of Information Technology at the Georgia Institute of Technology, where he has built and maintained several computing clusters used for parallel computation. Prior to that, he was a tenured faculty member in the Department of Mathematics at the University of New Orleans. His academic interests are in differential equations, applied mathematics, numerical analysis (in particular, finite element methods), and scientific computing.
Part I. Machines and Computation:
1. Introduction - the nature of high performance computation
2. Theoretical considerations - complexity
3. Machine implementations

Part II. Linear Systems:
4. Building blocks - floating point numbers and basic linear algebra
5. Direct methods for linear systems and LU decomposition
6. Direct methods for systems with special structure
7. Error analysis and QR decomposition
8. Iterative methods for linear systems
9. Finding eigenvalues and eigenvectors

Part III. Monte Carlo Methods:
10. Monte Carlo simulation
11. Monte Carlo optimization

Appendix: programming examples