Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014

Lecturers and Courses

The main lecture courses of the school are listed below. The schedule and course titles may be subject to change. At the moment, the professors who will teach courses in the school are:

S. Goreinov, N. Zamarashkin, E. Tyrtyshnikov, from INM-RAS of Moscow,

D. Vetrov from Lomonosov Moscow State University,

D. Bertaccini, C. Di Fiore, M. Picardello, R. Peirone, P. Zellini, from Tor Vergata University,

D. Fasino, from University of Udine,

M. Botchev, from University of Twente.

Moscow Part

Rome Part

Note: The shuttle bus (bus-navetta) leaves CampusX at 7:45, 8:30, 9:30, 10:45, 13:10 and 17:00 and goes to the Anagnina metro station every working day (Monday-Friday). One of the shuttle stops is the Faculty of Sciences, where the Mathematics Department and the lecture room are located.

Lecture notes of Prof. Di Fiore

8 + 1 lectures in Moscow.

- A glimpse on Perron-Frobenius theory. Lecture Notes (Part 1)

- Eigenvector-based centrality indices. Lecture Notes (Part 2)

- Graph matrices: adjacency, Laplacian, modularity.

- Nodal domains. Lecture Notes (Part 3)
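The first two topics above can be illustrated with a minimal numpy sketch (an illustrative example, not part of the lecture notes): eigenvector centrality computed by power iteration on a small adjacency matrix, relying on the Perron-Frobenius theorem for the existence of a positive dominant eigenvector. The graph is a made-up 4-node example.

```python
import numpy as np

# Toy 4-node undirected graph: node 0 is linked to all the others,
# so it should come out as the most central one.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

def eigenvector_centrality(A, iters=200):
    # Power iteration: by the Perron-Frobenius theorem, the adjacency
    # matrix of a connected graph has a positive dominant eigenvector
    # (unique up to scaling); its entries serve as centrality scores.
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

c = eigenvector_centrality(A)
```

For this toy graph, node 0 indeed receives the largest centrality score, and the vector agrees with the dominant eigenvector obtained from a dense eigensolver.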

3 + 3 lectures in Moscow, 2 + 2 lectures in Rome.

4 lectures in Moscow.

The subject of the course is the constructive approximation of functions.
Unlike the classic approach, in which the central place is occupied by
theorems characterizing constructive approximation devices such as
rational functions or splines, we primarily discuss algorithms useful
for approximation, while theorems (rather beautiful ones, like
Carathéodory-Fejér, Markov \(L_1\) interpolation or Nikolskii duality)
appear as a complement required for a better understanding of the
algorithms. We mostly consider the spaces \(C\) and \(L_1\), where approximation
is more complicated and more algorithms are available, due to Remez,
Takagi, Schur and Lanczos, to name just a few.
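As a small numpy illustration of the constructive, sup-norm point of view (a sketch, not course material): interpolating Runge's function \(f(x)=1/(1+25x^2)\) on \([-1,1]\) with a degree-10 polynomial. The choice of nodes is the whole story: Chebyshev nodes keep the uniform error under control, while equispaced nodes do not.

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's function
deg = 10

# equispaced interpolation nodes vs Chebyshev nodes on [-1, 1]
equi = np.linspace(-1, 1, deg + 1)
cheb = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))

# interpolating polynomials through each node set
p_equi = np.polyfit(equi, f(equi), deg)
p_cheb = np.polyfit(cheb, f(cheb), deg)

# approximate sup-norm errors on a fine grid
xs = np.linspace(-1, 1, 2001)
err_equi = np.max(np.abs(np.polyval(p_equi, xs) - f(xs)))
err_cheb = np.max(np.abs(np.polyval(p_cheb, xs) - f(xs)))
```

The equispaced error grows with the degree (the Runge phenomenon), while the Chebyshev error stays an order of magnitude smaller at degree 10.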

3 lectures in Moscow.

5 lectures in Moscow.

Definition and construction of self-similar fractals. Self-similar energies
on finitely ramified self-similar fractals. Eigenforms.
Existence and, especially, uniqueness of eigenforms.
The uniqueness problem is treated by means of Perron-Frobenius theory.

4 lectures in Rome.

Preliminary Notes on Krylov subspaces

Exercises for Lecture 3

Lecture course notes

**Lecture 1. Introduction to Krylov subspace iterative methods for solving linear systems.**

Krylov subspace iterative methods were the subject of some of the most cited papers in mathematics in the 1980s and 1990s. We will give a short introduction to these methods and discuss some of them, such as GMRES, CG and BiCGSTAB. The emphasis will be on methods for linear systems with nonsymmetric matrices.
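A minimal numpy sketch of full GMRES (illustrative only; practical implementations add restarts and preconditioning): build an orthonormal basis of the Krylov subspace with the Arnoldi process, then minimise the residual over that subspace via a small least-squares problem.

```python
import numpy as np

def gmres(A, b, m):
    # Full GMRES: Arnoldi builds Q with orthonormal columns spanning
    # span{b, Ab, ..., A^{m-1} b} and an (m+1) x m Hessenberg matrix H
    # with A Q_m = Q_{m+1} H; the residual is then minimised by a small
    # least-squares solve in the Krylov subspace.
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:         # "happy breakdown": exact solution found
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

# hypothetical test problem: a mildly nonsymmetric, well-conditioned matrix
rng = np.random.default_rng(1)
A = np.eye(30) + 0.1 * rng.standard_normal((30, 30))
b = rng.standard_normal(30)
x = gmres(A, b, m=30)
```

With m equal to the dimension, GMRES reaches the exact solution in exact arithmetic; in practice one stops much earlier, once the residual is small enough.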

**Lecture 2. Basic preconditioners for Krylov subspace iterative methods.**

To work efficiently, Krylov subspace iterative methods often need an additional technique called preconditioning. The ideas behind this technique and the basic ways to implement it will be discussed. A link to more advanced preconditioners (such as algebraic multigrid, which are beyond the scope of the lecture) will be made. Special attention will be paid to nonsymmetric matrices.
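The simplest preconditioner of all, Jacobi (diagonal) scaling, already shows the idea: replace \(Ax=b\) by \(M^{-1}Ax=M^{-1}b\) with \(M=\mathrm{diag}(A)\), so that the preconditioned matrix has a much smaller condition number, which governs Krylov convergence. A numpy sketch on a made-up badly scaled matrix:

```python
import numpy as np

# Hypothetical badly scaled test matrix: diagonal entries spanning
# six orders of magnitude, plus a small dense perturbation.
rng = np.random.default_rng(0)
n = 50
A = np.diag(np.logspace(0, 6, n)) + 0.01 * rng.standard_normal((n, n))

# Jacobi preconditioner: M = diag(A), applied as M^{-1} A
Minv = np.diag(1.0 / np.diag(A))

cond_plain = np.linalg.cond(A)         # huge, driven by the scaling
cond_prec = np.linalg.cond(Minv @ A)   # close to 1 after scaling
```

In a real code one never forms \(M^{-1}A\) explicitly; the preconditioner is applied to a vector inside each Krylov iteration.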

**Lecture 3. Computing actions of matrix exponentials and other matrix functions for large matrices.**

Solving a linear system \(Ax=b\) is a particular case in which the action \(f(A)b\) of a matrix function is computed, with \(f(x)=\frac 1 x\). Other matrix functions occurring in practice include, among others, the exponential, square root and cosine. Note that, for large \(A\), computing \(f(A)\) explicitly might be too expensive. Therefore, only the action of the matrix function on a given vector \(b\) is computed. Several ways to do this will be discussed, including Chebyshev polynomials and Krylov subspace methods.
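A numpy sketch of the Krylov idea for \(f=\exp\) and symmetric \(A\) (illustrative; the test matrix is made up): project \(A\) onto the Krylov subspace, so the expensive matrix exponential is only ever taken of a small \(m\times m\) matrix, via \(\exp(A)b \approx \|b\|\, Q_m \exp(H_m) e_1\).

```python
import numpy as np

def expm_sym(M):
    # exponential of a symmetric matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def krylov_expv(A, b, m):
    # Arnoldi projection: Q_m spans the Krylov subspace, H_m = Q_m^T A Q_m.
    # For symmetric A, H_m is tridiagonal (Lanczos); eigh reads the lower
    # triangle, which holds exactly that tridiagonal data.
    n = len(b)
    Q = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        if j + 1 < m:
            h = np.linalg.norm(v)
            H[j + 1, j] = h
            Q[:, j + 1] = v / h
    return beta * (Q @ expm_sym(H)[:, 0])

# hypothetical test: a 100 x 100 symmetric matrix of moderate norm
rng = np.random.default_rng(2)
n = 100
B = rng.standard_normal((n, n))
A = (B + B.T) / (2 * np.sqrt(n))
b = rng.standard_normal(n)

x_exact = expm_sym(A) @ b          # dense reference, feasible at n = 100
x_kry = krylov_expv(A, b, m=30)    # only 30 matrix-vector products with A
```

For large sparse \(A\) the dense reference is unavailable, but the Krylov approximation only needs matrix-vector products, which is the whole point.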

**Lecture 4. Inexact Newton-Krylov methods for solving large nonlinear systems.**

Solving a system of nonlinear equations typically requires the solution of a number of linear systems. This is often done within the framework of inexact Newton (and related) methods, where at every Newton iteration a linear system is solved approximately. If this is done by a Krylov subspace method, we get a Newton-Krylov method. Such methods belong to the class of inner-outer iterative methods: the outer Newton iterations deploy inner Krylov iterations for the linear system solution.
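The inner-outer structure can be sketched on a toy 2-variable system (both the system and the inner solver choice, CG on the normal equations, are illustrative assumptions, not the methods of the lecture): the outer loop is Newton's method, and each Jacobian system is solved by a few inner Krylov iterations.

```python
import numpy as np

def F(x):
    # toy nonlinear system: the circle x0^2 + x1^2 = 4 and the line x0 = x1
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] - x[1]])

def J(x):
    # its Jacobian
    return np.array([[2 * x[0], 2 * x[1]],
                     [1.0, -1.0]])

def cg(A, b, iters):
    # a few conjugate-gradient steps: the inner Krylov solver
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for _ in range(iters):
        if np.linalg.norm(r) < 1e-12:   # already converged
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def newton_krylov(x, outer=20, inner=5):
    # outer Newton iterations; each step J(x) s = -F(x) is solved
    # approximately by inner CG on the SPD normal equations J^T J s = -J^T F
    for _ in range(outer):
        Jx = J(x)
        s = cg(Jx.T @ Jx, -Jx.T @ F(x), inner)
        x = x + s
    return x

x = newton_krylov(np.array([1.0, 0.5]))   # converges to (sqrt(2), sqrt(2))
```

In serious codes the inner solver is typically GMRES with a Jacobian-free matrix-vector product, and the inner tolerance is loosened adaptively (the "inexact" in inexact Newton).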


2 lectures in Rome.

Modern machine learning and decision-making frameworks often rely on
complex probabilistic models that use simpler ones as building blocks.
We will review the basics of the Bayesian approach to modelling data
and show how it allows different models to be combined. Then the
apparatus of graphical models will be considered, with many examples of
real problems where such models are applied successfully. We will
consider the most important problems which arise in graphical modelling
and show how recently proposed tensor methods can be used to obtain
efficient approximate solutions.
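The Bayesian approach can be shown in its most elementary form with a numpy sketch (the coin-flip data are a made-up example, not from the lectures): a prior over a parameter is combined with the likelihood of observed data to give a posterior, via Bayes' rule.

```python
import numpy as np

# Discretised inference for a coin's bias theta: posterior ∝ prior x likelihood.
thetas = np.linspace(0.01, 0.99, 99)          # grid over the bias parameter
prior = np.ones_like(thetas) / len(thetas)    # uniform prior

heads, tails = 7, 3                           # hypothetical observed flips
likelihood = thetas**heads * (1 - thetas)**tails

posterior = prior * likelihood
posterior /= posterior.sum()                  # normalise to a distribution
```

With a uniform prior the posterior peaks at the empirical frequency 0.7; a non-uniform prior would pull the peak toward its own mass, which is exactly how simpler models act as building blocks for larger ones.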

Lecture 1 Notes

Lecture 2 Notes


4 lectures in Rome.

Tomography refers to the cross-sectional imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions. The impact of this technique in diagnostic medicine has been revolutionary, since it has enabled doctors to view internal organs with unprecedented precision and safety to the patient. The first medical application utilized x-rays for forming images of tissues based on their x-ray attenuation coefficient. More recently, however, medical imaging has also been successfully accomplished with radioisotopes, ultrasound, and magnetic resonance; the imaged parameter being different in each case.

There are numerous nonmedical imaging applications which lend themselves to the methods of computerized tomography. Researchers have already applied this methodology to the mapping of underground resources via cross-borehole imaging, some specialized cases of cross-sectional imaging for nondestructive testing, the determination of the brightness distribution over a celestial sphere, and three-dimensional imaging with electron microscopy.

Fundamentally, tomographic imaging deals with reconstructing an image from its projections. In the strict sense of the word, a projection at a given angle is the integral of the image in the direction specified by that angle. However, in a loose sense, projection means the information derived from the transmitted energies, when an object is illuminated from a particular angle; the phrase "diffracted projection" may be used when energy sources are diffracting, as is the case with ultrasound and microwaves.
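In the discrete setting the strict sense of a projection is easy to see with a numpy sketch (the image is a made-up example): for a digital image, the parallel projection at angle 0 is the set of column sums (line integrals along vertical rays) and at 90 degrees the row sums, and every projection carries the same total mass, the integral of the image.

```python
import numpy as np

# hypothetical 64 x 64 phantom: a rectangular region of density 1
image = np.zeros((64, 64))
image[20:40, 25:35] = 1.0

proj_0 = image.sum(axis=0)    # projection at 0 degrees: line integrals along columns
proj_90 = image.sum(axis=1)   # projection at 90 degrees: line integrals along rows
```

Reconstructing the image from many such projections at different angles is precisely the tomographic problem treated in the course.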

**For all the students:**

The preliminary contents are contained in Chapter 2 of

Book 1 - Kak, Slaney, "Principles of Computerized Tomographic Imaging", Link

It is expected that all of the students know the contents of Chapter 2, sections 2.1, 2.2.1, 2.2.2 and 2.2.3 of this book, even beyond the level at which they are presented there. The Lecturer will briefly cover the rest of Chapter 2 of Kak-Slaney in class, but the students should cover the material of sections 2.1, 2.2.1, 2.2.2 and 2.2.3 on their own before the beginning of the course.

**For the Italian students or Russian students who read Italian:**

Book 2 (the online book of the Lecturer), Link

is a comprehensive, very detailed source: sections 2.1, 2.2.1 and 2.2.2 there are the most relevant ones, and the students should read them. Moreover, the students should know:

- the definitions of Fourier series and of the Fourier integral (two chapters of book 2 are devoted to these topics; the students must know them in order to be able to attend the course)

- Convolution on \(\mathbb R\) (Sect. 6.1 of book 2)

- Convolution on \(\mathbb Z\) and DFT (Sect. 14.1, 14.2, 14.4 of book 2)

- Shannon sampling theorem (Sect. 13.1, 13.2, 13.3 of book 2, and the necessary prerequisites contained, with a different presentation, in chapter 10 of book 2).
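One of the listed prerequisites, convolution on \(\mathbb Z\) and the DFT, can be checked with a short numpy sketch: the convolution theorem states that circular convolution of two sequences equals the inverse DFT of the pointwise product of their DFTs (the sequences below are arbitrary examples).

```python
import numpy as np

def circular_convolution(x, y):
    # direct definition of circular convolution on Z_n
    n = len(x)
    return np.array([sum(x[k] * y[(j - k) % n] for k in range(n))
                     for j in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, 0.0, -1.0, 2.0])

direct = circular_convolution(x, y)
# convolution theorem: DFT turns circular convolution into pointwise product
via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))
```

The FFT route costs \(O(n\log n)\) instead of \(O(n^2)\), which is what makes DFT-based filtering (and filtered backprojection in tomography) practical.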

**For the Russian students who do not read Italian:**

Certainly all or almost all prerequisites are covered in the book by Briggs and Henson, "The DFT: An Owner's Manual for the Discrete Fourier Transform", published by SIAM.
