Lecturers and Courses
The course schedule of the Moscow part is available here: (DOC) (PDF)
The course schedule of the Rome part is available here: (DOC) (PDF)
Daniele Bertaccini
Iterative Methods and
Preconditioning for Large and
Sparse Linear Systems with Applications
The abstract will be available soon.
2 lectures in Rome
Lieven De Lathauwer
Introduction to tensor methods
Higher-order tensor methods are intensively studied in many disciplines nowadays. The developments gradually allow us to move from classical vector and matrix based methods in applied mathematics and mathematical engineering to methods that involve tensors of arbitrary order. This transition is pivotal for signal processing, data analysis and many related fields. In our introductory lectures, we will focus on the multilinear singular value decomposition and the canonical polyadic decomposition, which are the two most basic decompositions of higher-order tensors.
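As a pointer to the first of the two decompositions, here is a minimal numpy sketch (ours, not course material; the helper names are our own) of the multilinear singular value decomposition, computed from the SVDs of the mode unfoldings:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Mode-n product T x_n M: multiply every mode-n fiber by M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def mlsvd(T):
    """Multilinear SVD: orthogonal factors from the unfoldings, plus the core."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T
    for n, Un in enumerate(U):
        S = mode_product(S, Un.conj().T, n)
    return S, U

# A random 3 x 4 x 5 tensor is reconstructed exactly from its core and factors.
T = np.random.default_rng(0).standard_normal((3, 4, 5))
S, U = mlsvd(T)
R = S
for n, Un in enumerate(U):
    R = mode_product(R, Un, n)
print(np.allclose(R, T))  # True
```

Truncating the factor matrices (and the core accordingly) gives the low multilinear rank approximations discussed in the lectures.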
Exercises:
Analyze the data set
SynCP2. Explain and motivate your approach and findings. Send a short report including figures to Lieven.DeLathauwer@kuleuven.be and Nico.Vervliet@kuleuven.be by September 30 at the latest.
The slides of the lectures will become available in the next few days at
https://homes.esat.kuleuven.be/~sistawww/biomed/biotensorssummerschool18/program.php (Monday program).
A supporting survey paper:
https://ieeexplore.ieee.org/document/7891546/
2 lectures + 1 lab in Rome
He arrives in Rome on September 9; lectures will take place on September 10.
Carmine Di Fiore
Matrix algebras
We investigate the properties that a subspace $\mathcal L$ of matrices in $\mathbb C^{n\times n}$ should have in order to preserve spectral properties of a matrix $A$ when projecting $A$ onto $\mathcal L$.
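As a concrete instance (an illustration of ours, not the lecturer's notes), take $\mathcal L$ to be the algebra of circulant matrices, which are simultaneously diagonalized by the unitary DFT matrix; the Frobenius-optimal projection of $A$ onto $\mathcal L$ keeps only the diagonal of $A$ in the Fourier basis:

```python
import numpy as np

n = 6
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
Fh = F.conj().T

def project_circulant(A):
    """Frobenius-optimal projection of A onto the circulant algebra."""
    return F @ np.diag(np.diag(Fh @ A @ F)) @ Fh

# Circulant matrices are fixed points of the projection ...
c = np.arange(1.0, n + 1.0)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
print(np.allclose(project_circulant(C), C))  # True

# ... and the projection is idempotent on a generic matrix.
A = np.random.default_rng(1).standard_normal((n, n))
P = project_circulant(A)
print(np.allclose(project_circulant(P), P))  # True
```

The eigenvalues of the projection approximate those of $A$; this is the mechanism behind circulant preconditioners for Toeplitz systems.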
Exercises can be found
here and
here (exercises by Stefano Cipolla)
3 lectures in Moscow
1 lecture in Rome
Dario Fasino
Nonnegative and spectral matrix theory with applications to network analysis
Concepts from matrix theory and techniques borrowed from numerical linear algebra have proved to be convenient tools for analyzing the structure of complex real-world systems modeled as graphs or networks. For example, matrix functions and eigenvector computations make it possible to identify roles and quantify the importance of network nodes and links; spectral techniques yield efficient solutions to community detection and partitioning problems; and linear iterations are at the heart of the standard tools for modelling spreading processes in social and biological systems. This lecture series introduces the main results of Perron-Frobenius theory and some of their applications to the analysis of complex networks: eigenvector-based centrality indices, random walks on graphs, nodal domains, graph partitioning, and community detection problems.
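The eigenvector-based centrality indices mentioned above rest on the Perron-Frobenius theorem; a minimal sketch (ours, on a toy graph) computes them by shifted power iteration:

```python
import numpy as np

# Adjacency matrix of a small connected graph: a triangle 0-1-2 with a
# pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Power iteration on A + I; the shift guarantees convergence even for
# bipartite graphs, where eigenvalues +/- lambda_max would cause oscillation.
x = np.ones(4)
for _ in range(500):
    x = (A + np.eye(4)) @ x
    x /= np.linalg.norm(x)

# By the Perron-Frobenius theorem, x is the positive dominant eigenvector of A:
# the eigenvector centrality scores.  Node 2 (highest degree, in the triangle)
# comes out most central.
print(np.round(x, 3))
```

The same iteration, applied to suitably normalized matrices, underlies random-walk and PageRank-style centralities.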
Lecture notes (
PDF)
Final exam (
PDF)
4 lectures in Moscow, August 3 - 12
2 lectures in Rome, September 17 - 21
Alexander Gasnikov
Modern numerical methods of convex and nonconvex optimization
Lecture 1. Foundations of convex analysis
(link)
Lecture 2. Convex optimization and Big Data
applications
(link)
Lecture 3. Complexity of optimization problems and
Optimal methods for convex optimization problems
(link)
Lecture 4. Stochastic optimization. Randomized
methods
(link)
Lecture 5. Primal-duality, regularization, restarts
technique, mini-batch & Inexact oracle. Universal
methods
(link)
Lecture 6. Gradient-free methods. Coordinate descent
(link)
Lecture 7. Distributed optimization
(link)
Lecture 8. Nonconvex optimization
(link)
The slides of all the lectures can be found at
www.mccme.ru/dubna/2017/courses/gasnikov.html
Please find the exercises
here
4 lectures in Moscow
4-5 lectures in Rome, September 8 - 16
Sergei Goreinov
Extremal polynomials in logarithmic potential theory
For didactic material and exercises see this
page
4 lectures in Moscow
Vladimir Lyashev
Application of Matrix Algebra in Wireless Communication Problems
The mini-course consists of 4 lectures and one homework assignment for school participants.
The first part (lectures 1-2) gives a basic introduction to the wireless channel and its
modeling: the matrix description of the channel, its compact representation and the
possibility of using tensor decompositions, the properties of the channel matrix and
their connection with the spatial properties of the environment, and spatial filtering.
The second part (lectures 3-4) covers spatial filter design under limited channel
knowledge, resource management via matrix calculations, and environment clustering.
A small homework assignment is planned between the two parts.
For all didactic material and exercises see this
page
2 lectures + home task setup in Moscow (July 30 - August 2)
2 lectures in Rome (September 16 - 18)
Tom Lyche
Multivariate splines and simplex splines
Splines are functions consisting of pieces of smooth functions glued together in a certain smooth way. Besides their theoretical interest,
they have numerous applications including geometric modeling, signal processing, data analysis, visualization, numerical simulation, and
probability, just to mention a few. In these lectures I will give an introduction to splines, focusing on the polynomial case. In the first part I consider basic properties of B-splines and knot insertion, using a matrix formulation to derive properties. In the second part I treat the true multivariate version of B-splines, known as simplex splines owing to a geometric property of these functions.
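The B-spline basis functions of the first part can be evaluated with the Cox-de Boor recurrence; a minimal sketch of ours (not the lecture notes), checking the partition-of-unity property on uniform knots:

```python
def bspline_basis(i, k, t, x):
    """B-spline basis function B_{i,k} of degree k over knots t (Cox-de Boor)."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

# Degree-2 B-splines on the uniform knots 0..7: on [t_2, t_5] = [2, 5] the five
# basis functions are nonnegative and sum to one.
t = list(range(8))
vals = [bspline_basis(i, 2, t, 3.5) for i in range(5)]
print(abs(sum(vals) - 1.0) < 1e-12)  # True
```

The matrix formulation mentioned in the abstract packages exactly these recurrence coefficients into banded matrices.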
5 lectures in Rome
Nicola Mastronardi
Eigendecomposition of matrices: theory and computation
**Not activated**
3 lectures in Rome, September 9 - 14
Roberto Peirone
Math curiosities
Michela Redivo-Zaglia
Extrapolation methods and their applications
Aim: These lectures are intended for students and researchers in pure and applied mathematics,
numerical analysis, and scientific computing. An introduction to extrapolation methods and
several of their applications will be given. Free numerical software packages will also be
presented, allowing users to try the applications easily.
Outline
- Sequence transformations and convergence acceleration
- What is an extrapolation method?
- Various extrapolation methods
- Vector sequence transformations
- Software
- Applications
More details can be found
here.
On Wednesday, August 1, the Lecturer (michela(AT)math.unipd.it) emailed the students the slides of her first two lectures and a file on the implementation of Wynn's algorithm in diagonal ascending fashion. On Sunday, August 5, she sent the two files corresponding to the third lecture. Contact the Lecturer to receive this didactic material (5 files) covering the three lectures she held in Moscow.
During the lectures she asked the students to implement the Aitken and epsilon algorithms. More precisely, the activities required to pass the exam (which can also be done in groups) will be collected in a brief document that will be ready soon.
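For orientation, Aitken's Delta-squared process (the first of the two requested implementations; this sketch is ours) accelerates a slowly convergent sequence such as the partial sums of the Leibniz series for pi/4:

```python
import math

def aitken(s):
    """Aitken's Delta^2 transformation of a sequence s (returns len(s)-2 terms)."""
    return [s[k] - (s[k + 1] - s[k]) ** 2 / (s[k + 2] - 2 * s[k + 1] + s[k])
            for k in range(len(s) - 2)]

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4.
s, total = [], 0.0
for k in range(12):
    total += (-1) ** k / (2 * k + 1)
    s.append(total)

err_plain = abs(s[-1] - math.pi / 4)
err_acc = abs(aitken(s)[-1] - math.pi / 4)
print(err_acc < err_plain / 10)  # True
```

The epsilon algorithm generalizes this transformation; applied recursively, both give the vector variants covered in the lectures.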
Detailed exercises for this course can be found
here
3 lectures in Moscow (July 28 - August 6)
1-2 lectures in Rome
Eugene E. Tyrtyshnikov
Tensor-Based Algorithms
Abstract
Homework, shared with the seminar course of Sergei Matveev, can be found
here and
here
(Rome, September 18--22)
André Uschmajew
Optimization methods for low-rank approximation
Exercises
3 lectures + 1 exercise in Moscow (August 6--10)
Seminars
Stefano Cipolla
Matrix Algebras
See the
abstract for details
Exercises
here
1 seminar in Moscow
Fabio Durastante
A Numerical Linear Algebra Perspective on Fractional Differential Equations
Slides and Exercises
The origins of fractional calculus date back to the early years of calculus itself.
When Leibniz developed his version of calculus and invented the notations
$$
\frac{d^n f}{dx^n}, \quad \text{and} \quad \left(\frac{d}{dx}\right)^n f,
$$
the question of whether $n$ must be a natural number was raised in the correspondence
between Leibniz and de L'Hospital dated September 30, 1695. In this seminar we are
interested in giving suitable definitions of fractional derivatives, i.e., replacing $n$
with an arbitrary real number $\alpha$, and in building examples of differential
equations in terms of these new entities.
But why, from the point of view of applications, should one bother with fractional
calculus? A first answer is hinted at by the link between second-order elliptic
operators and Gaussian diffusion. Diffusion modeled with fractional operators is
non-universal, i.e., it involves a parameter $\alpha$, and non-local. As one might
expect, nature exhibits several phenomena that violate the universality and locality
mirrored in Gaussian models. Fractional diffusion equations have been developed to
account for these anomalous features.
The (not so short) plan of this seminar is then the following:
- introduce an example of Fractional Differential Equation in space,
- discuss a suitable discretization scheme of finite difference type,
- analyze the properties of the obtained matrix sequences.
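As a taste of the discretization step (a sketch of ours; the seminar's actual scheme may differ), the Grünwald-Letnikov approach replaces the integer-order difference quotient by a series with binomial-type weights:

```python
import math

def gl_weights(alpha, m):
    """Weights (-1)^k binom(alpha, k), via the recurrence w_k = w_{k-1}(1-(alpha+1)/k)."""
    w = [1.0]
    for k in range(1, m + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, x, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha, lower terminal 0."""
    m = int(round(x / h))
    w = gl_weights(alpha, m)
    return sum(w[k] * f(x - k * h) for k in range(m + 1)) / h ** alpha

# Sanity check: for f(x) = x the Riemann-Liouville derivative of order 1/2
# at x = 1 equals x^{1/2} / Gamma(3/2) = 2 / sqrt(pi) ~ 1.1284.
approx = gl_derivative(lambda x: x, 1.0, 0.5, 1.0e-3)
print(abs(approx - 2.0 / math.sqrt(math.pi)) < 1e-2)  # True
```

Assembling these weights along each grid row yields the dense, Toeplitz-structured matrix sequences whose properties the seminar analyzes.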
References:
D. Bertaccini and F. Durastante,
Iterative Methods and Preconditioning for Large and Sparse Linear Systems with Applications, (2018), Chapman and Hall/CRC.
D. Bertaccini and F. Durastante,
Solving mixed classical and fractional partial differential equations using short-memory principle and approximate inverses, Numer. Algorithms 74 (2017), no. 4, 1061--1082.
D. Bertaccini and F. Durastante,
Limited memory block preconditioners for fast solution of fractional partial differential equations, J. Sci. Comput., (2018), 1-21.
1 seminar in Rome
Sergei Matveev
Applications of tensor trains to numerical simulation
These seminars are devoted to a series of practical examples in which low-rank matrix
and tensor decompositions allow users to obtain efficient numerical algorithms.
In particular, we will discuss examples of efficient algorithms for evaluating
bilinear and multilinear convolutions, discuss the matrix cross interpolation
algorithm, and compare the complexities of straightforward algorithms with
tensor-based approaches. The main practical example is the class of
Smoluchowski-type aggregation-fragmentation equations with a number of possible
modifications (e.g., the inclusion of multi-particle collisions).
Plan:
Seminar 1 (Moscow): The matrix cross interpolation algorithm and its complexity,
cross decomposition, low-rank matrix-vector products, and low-rank bilinear
convolution. Application of these algorithms to the solution of aggregation
equations and analysis of their complexity.
Seminar 2 (Moscow): Multi-particle aggregation equations and the exponential growth
of the complexity of straightforward computations. Operations in the pure-tensor (CP)
format: their complexity and the difficulty of computing the decomposition. The same
operations for the right-hand side in the tensor train (TT) format: higher complexity,
but algorithms exist that construct the decomposition of the kernel coefficients.
Small examples of low-rank TT-functions.
Seminar 3 (Rome): Basic operations in the TT format: elementwise sum, product, scalar
product. Complexities of operations in the TT format. Some ideas about the TT-cross
algorithm and its "cost" (without a rigorous description of the algorithm). A "naive"
TT-cross-based algorithm for the multicomponent Smoluchowski coagulation equation.
Seminar 4 (Rome): A more refined algorithm for the multicomponent Smoluchowski
equation and an analysis of its complexity. Examples of low-rank kernel coefficients.
Materials:
1. Kolda, Tamara G., and Brett W. Bader. "Tensor decompositions and
applications." SIAM Review 51.3 (2009): 455-500.
2. Matveev, Sergey A., Alexander P. Smirnov, and E. E. Tyrtyshnikov. "A fast
numerical method for the Cauchy problem for the Smoluchowski equation." Journal
of Computational Physics 282 (2015): 23-32.
3. Matveev, S. A., Zheltkov, D. A., Tyrtyshnikov, E. E., and Smirnov, A. P.
(2016). Tensor train versus Monte Carlo for the multicomponent Smoluchowski
coagulation equation. Journal of Computational Physics, 316, 164-179.
4. Tyrtyshnikov, Eugene. "Incomplete cross approximation in the mosaic-skeleton
method." Computing 64.4 (2000): 367-380.
5. Oseledets, Ivan, and Eugene Tyrtyshnikov. "TT-cross approximation for
multidimensional arrays." Linear Algebra and its Applications 432.1 (2010):
70-88.
Homework, shared with the course of Prof. Tyrtyshnikov, can be found
here and
here
2 seminars in Moscow
2 seminars in Rome, 9--14 September
Yuri Nesterenko
The Laplacian Matrix
Abstract
pdf
In his first lecture in Moscow, the Lecturer asked the students to prove the
Binet-Cauchy theorem and to implement an algorithm solving the minimal cut problem
for a given graph. These two exercises were solved in the second lecture, where he
also asked the students to write an algorithm that computes the number of ways to
tile a 4x14 rectangle with 1x2 dominoes.
Please write to the Lecturer for more precise explanations of the exercises to solve in order to be evaluated for this course.
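The tiling exercise can be attacked with a column-by-column transfer ("broken profile") dynamic program over bitmasks; a sketch of ours, not the solution requested in the lecture:

```python
from collections import defaultdict

def count_domino_tilings(rows, cols):
    """Number of ways to tile a rows x cols board with 1x2 dominoes."""
    def fill(col_mask, row, out_mask, outs):
        # Fill the current column from top to bottom; out_mask records the
        # cells of the NEXT column taken by horizontal dominoes.
        if row == rows:
            outs.append(out_mask)
            return
        if col_mask & (1 << row):                       # cell already covered
            fill(col_mask, row + 1, out_mask, outs)
            return
        # horizontal domino sticking into the next column
        fill(col_mask | (1 << row), row + 1, out_mask | (1 << row), outs)
        # vertical domino covering this cell and the one below
        if row + 1 < rows and not col_mask & (1 << (row + 1)):
            fill(col_mask | (3 << row), row + 2, out_mask, outs)

    dp = {0: 1}                                         # ways per incoming mask
    for _ in range(cols):
        ndp = defaultdict(int)
        for mask, ways in dp.items():
            outs = []
            fill(mask, 0, 0, outs)
            for out in outs:
                ndp[out] += ways
        dp = dict(ndp)
    return dp.get(0, 0)                                 # board ends flush

print(count_domino_tilings(4, 14))  # 1174500
```

The transition table here is exactly a transfer matrix, so the count can also be obtained by powering a 16x16 nonnegative matrix, tying the exercise back to the Laplacian/matrix theme of the seminars.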
A list of exercises can be found
here
2 seminars in Moscow
2 seminars in Rome, 18--23 September
Francesco Tudisco
Nonlinear Perron-Frobenius theorems
The Perron-Frobenius theory addresses the existence, uniqueness, and maximality of positive eigenpairs for matrices with nonnegative (positive) entries.
This theory extends elegantly to nonlinear order-preserving and homogeneous operators, where it is known as nonlinear Perron-Frobenius theory. In these
seminars I survey some results of nonlinear Perron-Frobenius theory and discuss how it can be applied to compute matrix and tensor norms and to define centrality measures for multi-layer networks.
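A minimal illustration (ours, with an ad hoc map, not the seminar's examples): for an entrywise-positive matrix A, the map f(x) = (A x²)^(1/2), with powers taken entrywise, is order-preserving and homogeneous of degree one, so the normalized fixed-point iteration converges to a positive eigenvector in the sense of the nonlinear theory:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.5, 2.0, size=(5, 5))   # entrywise-positive matrix

def f(x):
    """Order-preserving, degree-1 homogeneous map: f(x) = (A x^2)^(1/2), entrywise."""
    return np.sqrt(A @ x**2)

x = np.ones(5)
for _ in range(200):
    y = f(x)
    lam = np.linalg.norm(y)
    x = y / lam

# At convergence f(x) = lam * x with x > 0: a nonlinear Perron eigenpair.
print(np.all(x > 0), np.linalg.norm(f(x) - lam * x) < 1e-10)  # True True
```

The same normalized iteration, with other order-preserving homogeneous maps, is what computes matrix and tensor norms and multi-layer centralities in the seminars.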
2 seminars in Rome - Sept 17-21
Alexei Ustimenko
Learning to rank and information retrieval
**Not activated**
1. General/probabilistic formulation of the machine learning task. Loss/gain functionals
(MSE, accuracy, log-loss) and the overfitting problem. Regression and classification tasks.
Examples of machine learning models (linear models, neural nets, trees). [Possibly:
ensemble methods such as gradient-boosted decision trees.]
2. Information retrieval and the learning-to-rank (LtR) problem formulation.
Typical relevance grades and gain functionals used in information retrieval and LtR
(precision/recall as classification quality measures versus DCG@k as an IR quality measure).
Unsupervised models for IR/LtR (e.g., TF-IDF, BestMatch). Criticism of unsupervised
models.
3. Point-wise learning to rank and criticism of the point-wise method. Definitions of
pair-wise and list-wise learning to rank. Comparison of all LtR methods (I will prepare
a dataset for a presentation comparing ALL the described LtR methods using some ML
library).
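For reference, the DCG@k measure mentioned in point 2 can be sketched as follows (our implementation of the standard formula, not course material):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """DCG normalized by the ideal (sorted) ranking; 1.0 means a perfect ranking."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([3, 2, 1], 3))  # 1.0  (already ideally ordered)
print(ndcg_at_k([0, 0, 3], 3))  # 0.5  (best document ranked last)
```

Unlike precision and recall, the logarithmic discount makes the measure sensitive to the positions of relevant documents, which is the point of the comparison in the outline.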
3 seminars in Rome - Sept 17-21
Student session talks
Sandra Boschert (in Moscow),
Multigrid methods based on high order tensor product
B-Splines for the
valuation of American options with stochastic
volatility and their Greeks
See the
abstract for details
Gianluca Ceruti (in Rome),
Introduction to Dynamical Low Rank Approximation
See the
abstract for details
Stefano Cipolla (in Rome),
Extrapolation methods for multilinear PageRank
See the
abstract for details
Alice Cortinovis (in Moscow),
Minimizing the optimality residual for algebraic Riccati
equations
See the
abstract for details
Giuseppe Alessio D'Inverno (in Rome),
Singular value decomposition
and its applications on 3D models
reconstruction
See the
abstract for details
Dmitrij Emelianov (in Moscow),
Lomov Regularization in Degenerate Differential
Equations
See the
abstract for details
Vitalii Zankin (in Moscow),
Gradient Descent Approach to the Experimental
Design with Application to Multivariate Function Approximation
See the
abstract for details