Tuesday, June 25 |
07:00 - 09:00 |
Breakfast (Vistas Dining Room) |
08:30 - 09:15 |
Gabriele Steidl: Regularization of Inverse Problems via Time Discrete Geodesics in Image Spaces ↓ This talk addresses the solution of inverse problems in imaging given an additional reference image. We combine a modification of the discrete geodesic path model of Berkels, Effland and Rumpf with a variational model, namely the L2-TV model, for image restoration. We prove that the space-continuous model has a minimizer and propose a minimization procedure which alternates over the involved sequences of deformations and images. The minimization with respect to the image sequence exploits recent algorithms from convex analysis to minimize the L2-TV functional. For the numerical computation we apply a finite difference approach on staggered grids together with a multilevel strategy. We present proof-of-concept numerical results for sparse and limited-angle computerized tomography as well as for super-resolution, demonstrating the power of the method. Further, we apply the morphing approach to image colorization.
This is joint work with Sebastian Neumayer and Johannes Persch (TU Kaiserslautern). (TCPL 201) |
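The L2-TV functional at the heart of the restoration step can be illustrated in one dimension. The sketch below uses plain smoothed-gradient descent rather than the convex-analysis solvers referenced in the abstract; the function name, smoothing parameter, and step sizes are illustrative choices, not from the paper.

```python
import numpy as np

def l2_tv_denoise(f, lam=0.1, eps=1e-2, step=0.1, iters=1000):
    """Gradient descent on the smoothed 1D L2-TV functional
        0.5 * ||u - f||^2 + lam * sum_i sqrt((u_{i+1} - u_i)^2 + eps).
    The parameter eps makes the TV term differentiable."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)  # derivative of the smoothed |.|
        # Gradient of the TV term: a discrete negative divergence of w.
        tv_grad = np.concatenate(([-w[0]], -np.diff(w), [w[-1]]))
        u = u - step * ((u - f) + lam * tv_grad)
    return u

# Noisy step signal: TV regularization flattens the noise while
# largely preserving the jump.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = l2_tv_denoise(noisy)
```

The edge-preserving behaviour of the TV term, as opposed to quadratic smoothing, is what makes this model attractive for the tomography and super-resolution experiments mentioned above.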
09:15 - 10:00 |
Uri Ascher: Discrete processes and their continuous limits ↓ The possibility that a discrete process can be closely approximated by a continuous one, with the latter involving a differential system, is fascinating. Important theoretical insights, as well as significant computational efficiency gains, may lie in store. A great success story in this regard is that of the Navier-Stokes equations, which model many phenomena in fluid flow rather well. Recent years have seen many attempts to formulate more such continuous limits, and thus harvest theoretical and practical advantages, in diverse areas including mathematical biology, image processing, game theory, computational optimization, and machine learning.
Caution must be applied as well, however. In fact, it is often the case that the given discrete process is richer in possibilities than its continuous differential system limit, and that a further study of the discrete process is practically rewarding. I will show two simple examples of this. Furthermore, there are situations where the continuous limit process may provide important qualitative, but not quantitative, information about the actual discrete process. I will demonstrate this as well and discuss consequences. (TCPL 201) |
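The phenomenon of a discrete process being richer than its continuous limit already appears in gradient descent versus the gradient flow it discretizes. The example below is purely illustrative and not necessarily one of the examples from the talk.

```python
import math

def gradient_descent(x0, h, steps):
    """Discrete process for f(x) = x^2 / 2: x_{k+1} = x_k - h * x_k."""
    x = x0
    for _ in range(steps):
        x -= h * x
    return x

def gradient_flow(x0, t):
    """Continuous limit x'(t) = -x(t), solved exactly: x(t) = x0 * exp(-t)."""
    return x0 * math.exp(-t)

# For small steps the discrete process tracks the flow toward 0 ...
small = gradient_descent(1.0, 0.01, 1000)   # close to gradient_flow(1.0, 10.0)
# ... but for h > 2 it diverges, a behaviour the continuous limit
# (which always converges to 0) cannot exhibit.
large = abs(gradient_descent(1.0, 2.5, 20))  # grows like 1.5**20
```

The divergence for large steps is information the ODE limit simply cannot encode: the discrete iteration has a stability restriction that vanishes in the limit h → 0.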
10:00 - 10:30 |
Coffee Break (TCPL Foyer) |
10:45 - 11:30 |
Markus Grasmair: Total variation based Lavrentiev regularisation ↓ In this talk we will discuss a non-linear variant of Lavrentiev regularisation, where the sub-differential of the total variation replaces the identity operator as regularisation term. The advantage of this approach over Tikhonov-based total variation regularisation is that it avoids the evaluation of the adjoint operator on the data. As a consequence, it can be used, for instance, for the solution of Volterra integral equations of the first kind, where the adjoint would require an integration forward in time, without the need of accessing future data points. We will first discuss the theoretical properties of this method, and then propose a taut-string-based numerical method for the solution of one-dimensional problems. (TCPL 201) |
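In the linear, accretive case the contrast between the two families is easy to see: Lavrentiev regularization solves (A + αI)x = y, whereas Tikhonov solves (AᵀA + αI)x = Aᵀy. A small sketch with a discretized Volterra (integration) operator, where the matrix, noise level, and α are illustrative choices and not from the talk, shows that only the Tikhonov variant ever touches the adjoint:

```python
import numpy as np

n = 50
h = 1.0 / n
# Discretized first-kind Volterra operator (A x)(t) = int_0^t x(s) ds.
# Lower triangular = causal: row i uses only data up to time t_i.
A = h * np.tril(np.ones((n, n)))

t = np.linspace(h, 1.0, n)
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
y = A @ x_true + 1e-3 * rng.standard_normal(n)

alpha = 1e-2

# Lavrentiev: (A + alpha I) x = y.  Only the causal operator appears,
# so no "integration forward in time" (no future data) is required.
x_lav = np.linalg.solve(A + alpha * np.eye(n), y)

# Tikhonov for comparison: (A^T A + alpha I) x = A^T y.  The adjoint
# A^T is upper triangular, i.e. anticausal: evaluating it needs future data.
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

The talk's method replaces the identity in (A + αI) by the subdifferential of the total variation, which is what makes the taut-string machinery relevant for the one-dimensional solver.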
11:45 - 12:30 |
Andrea Aspri: Analysis of a model of elastic dislocation in geophysics ↓ In this talk we will discuss a model for elastic dislocations describing faults in the Earth's crust. We will show how to obtain well-posedness of the direct problem, which consists in solving a boundary-value/transmission problem in a half-space for isotropic, inhomogeneous linear elasticity with Lipschitz Lamé parameters. We will mostly focus on the uniqueness result for the non-linear inverse problem, which consists in determining the fault and the slip vector from displacement measurements made on the boundary of the half-space. Uniqueness for the inverse problem follows by means of a unique continuation result for systems and under some geometrical constraints on the fault. This is joint work with Elena Beretta (Politecnico di Milano & NYU – Abu Dhabi), Anna Mazzucato (Penn State University) and Maarten de Hoop (Rice University). (TCPL 201) |
12:30 - 14:00 |
Lunch (Vistas Dining Room) |
14:15 - 15:00 |
Barbara Kaltenbacher: Regularization of backwards diffusion by fractional time derivatives ↓ The backwards heat equation is one of the classical inverse problems; it is related to a wide range of applications and is exponentially ill-posed. One of the first and maybe most intuitive approaches to its stable numerical solution was that of quasireversibility, whereby the parabolic operator is replaced by a differential operator for which the backwards problem in time is well posed. After a short overview of approaches in this vein, we will dwell on a new one that relies on replacing the first time derivative in the PDE by a fractional differential operator, which, due to the asymptotic properties of the Mittag-Leffler function as compared to the exponential function, leads to a problem that is only moderately ill-posed. Thus the order alpha of (fractional) differentiation acts as a regularization parameter, and convergence takes place in the limit as alpha tends to one. We study the regularizing properties of this approach and a regularization parameter choice by the discrepancy principle. Additionally, a substantial numerical improvement can be achieved by exploiting the linearity of the problem, breaking the inversion into distinct frequency bands and using a different fractional order for each.
This is joint work with William Rundell. (TCPL 201) |
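The asymptotic contrast underlying this approach can be checked numerically: E_alpha(-t) with alpha = 1 is exactly exp(-t), while for alpha < 1 it decays only algebraically, roughly like 1/(t·Γ(1-alpha)). A rough truncated-series sketch (adequate for moderate arguments; production codes use asymptotic expansions for large |z|):

```python
import math

def mittag_leffler(alpha, z, terms=60):
    """Truncated power series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).
    Suffers from cancellation for large |z|; keep |z| moderate."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 recovers the exponential: E_1(-2) = exp(-2) ~ 0.1353.
e1 = mittag_leffler(1.0, -2.0)

# alpha < 1 decays much more slowly than exp(-t): this slower decay of
# the solution operator is the source of the merely moderate (rather
# than exponential) ill-posedness of the backwards problem.
e08 = mittag_leffler(0.8, -5.0)
```

Inverting a factor that decays algebraically amplifies noise polynomially rather than exponentially, which is exactly the regularizing effect described in the abstract.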
15:00 - 15:30 |
Coffee Break (TCPL Foyer) |
15:30 - 16:15 |
Bernd Hofmann: The impact of conditional stability estimates on variational regularization and the distinguished case of oversmoothing penalties ↓ Conditional stability estimates require additional regularization for obtaining stable approximate solutions if the validity area of such estimates is not completely known. The focus of this talk is on Tikhonov regularization under conditional stability estimates for non-linear ill-posed problems in Hilbert scales, where the case of an oversmoothing penalty plays a prominent role. This oversmoothing problem was studied early on for linear forward operators, most notably in the seminal paper by Natterer (1984). The a priori parameter choice used there, which provides order-optimal convergence rates, has in the oversmoothing case the unexpected property that the quotient of the squared noise level and the regularization parameter tends to infinity as the noise level tends to zero. We provide in this talk some new convergence rate results for nonlinear problems, and moreover case studies that illuminate the interplay of conditional stability and regularization. In particular, pitfalls occur for oversmoothing penalties: convergence can fail completely, and the stabilizing effect of conditional stability may be lost. (TCPL 201) |
16:15 - 17:00 |
Antonio Leitao: A convex analysis approach to iterative regularization methods ↓ We address two well-known iterative regularization methods for ill-posed problems (the Landweber and iterated Tikhonov methods) and discuss how to improve the performance of these classical methods by using convex analysis tools. The talk is based on two recent articles (2018): "Range-relaxed criteria for choosing the Lagrange multipliers in nonstationary iterated Tikhonov method" (with R. Boiger and B. F. Svaiter), and "On a family of gradient type projection methods for nonlinear ill-posed problems" (with B. F. Svaiter). (TCPL 201) |
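For background, the classical Landweber iteration that these works refine can be stated in a few lines. This is the textbook version, not the range-relaxed or projected variants from the articles, and the diagonal operator is an illustrative toy:

```python
import numpy as np

def landweber(A, y, omega, iters):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k).
    Converges for 0 < omega < 2 / ||A||^2; with noisy data the number
    of iterations plays the role of the regularization parameter."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + omega * (A.T @ (y - A @ x))
    return x

# Toy ill-conditioned operator: the small singular value 1e-2 makes the
# second solution component converge very slowly, which is why
# acceleration of such classical schemes is worthwhile.
A = np.diag([1.0, 1e-2])
y = A @ np.array([1.0, 2.0])
x = landweber(A, y, omega=1.0, iters=50000)
```

The slow progress along small singular directions is precisely the performance bottleneck that the convex-analysis tools (e.g. better choices of the Lagrange multipliers in iterated Tikhonov) aim to relieve.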
17:00 - 17:45 |
Lars Ruthotto: Deep Neural Networks motivated by PDEs ↓ One of the most promising areas in artificial intelligence is deep learning, a form of machine learning that uses neural networks containing many hidden layers. Recent success has led to breakthroughs in applications such as speech and image recognition. However, more theoretical insight is needed to create a rigorous scientific basis for designing and training deep neural networks, increasing their scalability, and providing insight into their reasoning.
This talk bridges the gap between partial differential equations (PDEs) and neural networks and presents a new mathematical paradigm that simplifies designing, training, and analyzing deep neural networks. It shows that training deep neural networks can be cast as a dynamic optimal control problem similar to path-planning and optimal mass transport. The talk outlines how this interpretation can improve the effectiveness of deep neural networks. First, the talk introduces new types of neural networks inspired by parabolic, hyperbolic, and reaction-diffusion PDEs. Second, the talk outlines how to accelerate training by exploiting multi-scale structures or reversibility properties of the underlying PDEs. Finally, recent advances on efficient parametrizations and derivative-free training algorithms will be presented. (TCPL 201) |
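The core observation, that a residual layer is a forward Euler step of a differential equation, can be sketched in a few lines. The layer form, widths, and the antisymmetric-weight trick below illustrate the general idea only; they are not the specific architectures from the talk.

```python
import numpy as np

def resnet_forward(x, weights, biases, h=0.1):
    """Residual network read as forward Euler for x'(t) = tanh(K(t) x + b(t)):
    each layer performs one step  x <- x + h * tanh(K_j x + b_j)."""
    for K, b in zip(weights, biases):
        x = x + h * np.tanh(K @ x + b)
    return x

rng = np.random.default_rng(0)
d, layers = 4, 8
# Antisymmetric K (K = -K^T) has a purely imaginary spectrum, a simple
# way to mimic stable, hyperbolic-like forward dynamics.
Ks = []
for _ in range(layers):
    M = rng.standard_normal((d, d))
    Ks.append(M - M.T)
bs = [0.1 * rng.standard_normal(d) for _ in range(layers)]

out = resnet_forward(rng.standard_normal(d), Ks, bs)
```

Once layers are time steps, step size h, network depth, and stability of the underlying dynamics become discretization questions, which is what opens the door to the multi-scale and reversibility ideas mentioned above.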
18:00 - 20:00 |
Dinner (Vistas Dining Room) |
20:00 - 22:00 |
Poster session (TCPL 201) |