M26: Interplay Between Machine Learning and Modern Regularization Theory: Multi-Parameter Regularization

Learning theory has a long tradition of using regularization methods to construct and analyze learning algorithms. Close cooperation between researchers in learning theory and regularization theory has produced a series of very interesting and important findings in a wide range of applications. However, new challenges in data-based learning still call for the close attention of both the learning and regularization communities; among these challenges are learning from so-called Big Data, manifold learning, and multitask and multiple kernel learning. Applications in signal and image processing often deal with situations in which the signal or image of interest can be modeled as a combination of several components of different nature that one wishes to identify and separate. In this case, the “reconstruction problem” can be interpreted as an inverse problem of unmixing type. The separation, or unmixing, can for instance be achieved by solving a variational optimization problem with multiple penalty terms, each of which favors a specific component of the solution. As the number of components increases, a proper choice of the parameters affecting the solution of the optimization problem becomes a major challenge. However, this issue has so far been studied and addressed only to a very moderate degree by researchers in the different disciplines involved.
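As a small illustration of the multi-penalty unmixing idea (a sketch for this announcement only, not the method of any particular talk), the following snippet models an observed signal as y = x1 + x2 and minimizes ||x1 + x2 - y||² + α||D x1||² + β||x2||², where the first-difference penalty on x1 favors a smooth component and the plain ℓ² penalty on x2 absorbs the spiky remainder; the two parameters α and β (chosen by hand here, which is exactly the parameter-choice challenge mentioned above) control the trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
smooth = np.sin(2 * np.pi * t)                     # smooth component
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = 3.0      # sparse spiky component
y = smooth + spikes + 0.05 * rng.standard_normal(n)

# First-difference operator: (D x)[i] = x[i+1] - x[i], penalizes roughness
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

def unmix(y, alpha, beta):
    """Minimize ||x1 + x2 - y||^2 + alpha*||D x1||^2 + beta*||x2||^2
    over (x1, x2) by solving one stacked linear least-squares problem."""
    I = np.eye(n)
    M = np.block([
        [I, I],
        [np.sqrt(alpha) * D, np.zeros((n - 1, n))],
        [np.zeros((n, n)), np.sqrt(beta) * I],
    ])
    rhs = np.concatenate([y, np.zeros(n - 1), np.zeros(n)])
    z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return z[:n], z[n:]

# Hand-picked parameters: strong smoothing on x1, light shrinkage on x2
x1, x2 = unmix(y, alpha=10.0, beta=0.1)
```

With these (assumed, hand-tuned) parameter values, x1 recovers the smooth oscillation while x2 picks up the spikes; as more components and penalties are added, such manual tuning quickly becomes infeasible, which motivates the parameter-choice rules discussed in the symposium.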

Inspired by these challenges, potential benefits, and applications, this mini-symposium aims at:

• providing an opportunity for experts and young researchers from the fields of Regularization and Learning to present their achievements, and to discuss and identify "hot" topics for possible future cooperation, and
• discussing recent theoretical and numerical developments and advances in multi-penalty regularization as an important tool for component separation.

Organizers:
Sergei V. Pereverzyev, Johann Radon Institute for Computational and Applied Mathematics (RICAM), Linz, Austria
Ruben D. Spies, Instituto de Matemática Aplicada del Litoral IMAL, CONICET-UNL, Santa Fe, Argentina

Invited Speakers (in alphabetical order):
Uno Hämarik, University of Tartu, Estonia
On solution of ill-posed problems combining different regularization methods

Bangti Jin, University College London, UK
Stochastic gradient descent for linear inverse problems

Johannes Maly, Technical University of Munich
Matrix sensing using combined sparsity and low-rank constraints

Sergiy Pereverzyev Jr., Applied Mathematics Group, Department of Mathematics, University of Innsbruck, Innsbruck, Austria
Regularized integral operators in two-sample problem

Ruben D. Spies, Instituto de Matemática Aplicada del Litoral IMAL, CONICET-UNL, Santa Fe, Argentina
Mixed penalization for enhancing class separability of event-related potentials in Brain-Computer Interfaces