This is an advanced textbook based on lectures given at the Moscow Physico-Technical Institute. The lectures are characterized by brevity, logical organization, and an occasionally lighthearted approach. The text aims to involve the reader by asking questions, giving hints and recommendations, comparing different methods, and discussing "optimistic" and "pessimistic" approaches to numerical analysis. Since matrix analysis underlies numerical analysis, the text emphasizes methods and algorithms of matrix analysis. Function approximation, methods for solving nonlinear equations, and minimization methods are also considered. Alongside classical methods, new methods and approaches are discussed, such as spectral distribution theory and what it offers for the design and proof of modern preconditioning strategies for large-scale linear algebra problems.
Lecture 1: metric space; some useful definitions; nested balls; normed space; popular vector norms; matrix norms; equivalent norms; operator norms.
Lecture 2: scalar product; length of a vector; isometric matrices; preservation of length and unitary matrices; Schur theorem; normal matrices; positive definite matrices; the singular value decomposition; unitarily invariant norms; a short way to the SVD; approximations of a lower rank; smoothness and ranks.
Lecture 3: perturbation theory; condition of a matrix; convergent matrices and series; the simplest iteration method; inverses and series; condition of a linear system; consistency of matrix and right-hand side; eigenvalue perturbations; continuity of the polynomial roots.
Lecture 4: diagonal dominance; Gerschgorin disks; small perturbations of eigenvalues and eigenvectors; condition of a simple eigenvalue; analytic perturbations.
Lecture 5: spectral distances; "symmetric" theorems; Hoffman-Wielandt theorem; permutation vector of a matrix; "unnormal" extension; eigenvalues of Hermitian matrices; interlacing properties; what are clusters?; singular value clusters; eigenvalue clusters.
Lecture 6: floating-point numbers; computer arithmetic axioms; round-off errors for the scalar product; forward and backward analysis; some philosophy; an example of a "bad" operation; one more example; ideal and machine tests; up or down; solving the triangular systems.
Lecture 7: direct methods for linear systems; theory of the LU decomposition; round-off errors for the LU decomposition; growth of matrix entries and pivoting; complete pivoting; the Cholesky method; triangular decompositions and linear systems solution; how to refine the solution.
Lecture 8: the QR decomposition of a square matrix; the QR decomposition of a rectangular matrix; Householder matrices; elimination of elements by reflections; Givens matrices; elimination of elements by rotations; computer realizations of reflections and rotations; orthogonalization method; loss of orthogonality; modified Gram-Schmidt algorithm; bidiagonalization; unitary similarity reduction to the Hessenberg form.
Lecture 9: the eigenvalue problem; the power method; subspace iterations; distances between subspaces; subspaces and orthoprojectors; distances and orthoprojectors; subspaces of equal dimension; the CS decomposition; convergence of subspace iterations for the block diagonal matrix; convergence of subspace iterations in the general case.
Lecture 10: the QR algorithm; generalized QR algorithm; basic formulas; the QR iteration lemma; convergence of the QR iterations; pessimistic and optimistic; Bruhat decomposition; what if the inverse matrix is not strongly regular; the QR iterations and the subspace iterations.
Lecture 11: quadratic convergence; cubic convergence; what makes the QR algorithm efficient; implicit QR iterations; arrangement of computations; how to find the singular value decomposition.
Lecture 12: function approximation; (Part contents)