
Program





Monday, August 8
10:00-10:15   Presentation
10:15-11:45   Prof. Yoav Freund
11:45-12:00   Break
12:00-13:30   Prof. Lionel Moisan
13:30-15:30   Lunch
15:30-16:30   Prof. Justin Romberg
16:30-17:30   Prof. Julie Delon

Tuesday, August 9
9:30-11:30    Prof. Yoav Freund
11:30-12:00   Break
12:00-13:30   Prof. Vladimir Kolmogorov
13:30-15:30   Lunch
15:30-16:30   Prof. Lionel Moisan
16:30-17:30   Invited Talk

Wednesday, August 10
9:30-11:30    Prof. Vladimir Kolmogorov
11:30-12:00   Break
12:00-13:30   Prof. Justin Romberg
13:30-15:30   Lunch
15:30-16:30   Prof. Yoav Freund
16:30-17:30   Invited Talk

Thursday, August 11
9:30-11:30    Prof. Lionel Moisan
11:30-12:00   Break
12:00-13:30   Prof. Julie Delon
13:30-15:30   Lunch
15:30-16:30   Prof. Vladimir Kolmogorov
16:30-17:30   Invited Talk

Friday, August 12
9:30-11:30    Prof. Justin Romberg
11:30-12:00   Break
12:00-12:45   Invited Talk
12:45-13:30   Invited Talk

A pdf version of the schedule can be downloaded here.

 

Abstracts of the courses

 

Julie Delon: Optimal transportation: some results and applications in image processing and computer vision

The theory of optimal transportation was born in the late 18th century with the work of Monge [3], and has attracted the keen interest of many mathematicians since the work of Kantorovich [2] in the late 1930s. Its use in computer vision was popularized ten years ago by Rubner et al. [6] for image retrieval and texture classification, with the introduction of the so-called Earth Mover's Distance (EMD). Nowadays, this distance is used for applications as varied as object recognition and image registration [1]. Another interesting aspect of the theory lies in the transportation map itself, which allows many image modifications, such as color transfer or texture mixing [5].

The aim of this course is to present some recent results and applications of this theory in image processing and computer vision. After a brief review of optimal transportation, we will focus on optimal transportation on the line and on the circle [4], with applications to the comparison of local features between images and to contrast equalization of image sequences. In a second part, we will turn to higher-dimensional applications, in particular color transfer and mixing between images. To this end, we will introduce a sliced approximation of the classical Monge-Kantorovich distances [5].
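
To make the one-dimensional and sliced cases concrete, here is a minimal numpy sketch (an illustration, not taken from the course material): optimal transport between two samples on the line reduces to matching sorted values, and the sliced approximation averages this one-dimensional cost over random projection directions. The point clouds and the number of projections below are arbitrary placeholders.

```python
import numpy as np

def ot_1d_cost(x, y):
    """Quadratic-cost optimal transport between two equal-size 1D samples:
    on the line, the optimal plan simply matches sorted values."""
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def sliced_ot(X, Y, n_proj=200, seed=0):
    """Sliced approximation of the Monge-Kantorovich (W2) distance between
    two point clouds in R^d: average the 1D cost over random directions."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        total += ot_1d_cost(X @ theta, Y @ theta)
    return np.sqrt(total / n_proj)

# toy example: two 3D "color clouds", e.g. the RGB pixels of two images
rng = np.random.default_rng(1)
X = rng.normal(0.3, 0.10, size=(1000, 3))
Y = rng.normal(0.6, 0.20, size=(1000, 3))
print(sliced_ot(X, Y))
```

Roughly speaking, a color transfer in the same spirit replaces each projected pixel value by its sorted counterpart in the target cloud and iterates over directions.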

Some references

[1] S. Haker, L. Zhu, A. Tannenbaum, and S. Angenent. Optimal mass transport for registration and warping. International Journal of Computer Vision, 60(3):225-240, 2004.
[2] L. Kantorovich. On the translocation of masses. English translation in Journal of Mathematical Sciences, 133(4):1381-1382, 2006.
[3] G. Monge. Mémoire sur la théorie des déblais et des remblais. De l'Imprimerie Royale, 1781.
[4] J. Rabin, J. Delon, and Y. Gousseau. Transportation distances on the circle. arXiv preprint arXiv:0906.5499, 2009. http://hal.archives-ouvertes.fr/hal-00399832/fr/.
[5] J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. Technical report, Université Paris-Dauphine, 2010. http://hal.archives-ouvertes.fr/hal-00476064/fr/.
[6] Y. Rubner, C. Tomasi, and L. Guibas. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99-121, 2000.

 

Yoav Freund: Quantifying and Utilizing prediction confidence

The success of Support Vector Machines and AdaBoost demonstrates that classification rules can be learned effectively even when the number of parameters is much larger than the size of the training set. This phenomenon is explained by margin analysis. Intuitively, examples with large margins are associated with a confident prediction, because perturbations of the training set are unlikely to change their predicted label.
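
As a toy numerical illustration of the margin (mine, not from the lectures), the normalized margin of a weighted voting classifier such as AdaBoost can be read directly off the ±1 votes of the base classifiers; the labels, votes and weights below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_examples = 50, 200

# simulated labels and base-classifier votes: each weak learner agrees
# with the true label y on about 70% of the examples
y = rng.choice([-1, 1], size=n_examples)
votes = np.where(rng.random((n_classifiers, n_examples)) < 0.7, y, -y)
alpha = rng.random(n_classifiers)               # weights of the base classifiers

# normalized margin: y * sum_t alpha_t h_t(x) / sum_t alpha_t, a value in [-1, 1]
margin = y * (votes.T @ alpha) / alpha.sum()

# examples with a large margin keep their predicted label under small
# perturbations of the weights, so their prediction is "confident"
print("median margin:", np.median(margin))
print("misclassified fraction (margin < 0):", np.mean(margin < 0))
```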

There are other methods for evaluating prediction confidence: bootstrap-based methods such as bagging and random forests, and online-learning-based methods such as exponential weighting. The quantification of prediction confidence is useful for semi-supervised learning methods.
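
A bootstrap-style confidence score of the kind mentioned above can be sketched with scikit-learn's random forest, using the averaged vote of the trees as a confidence and abstaining when it is low; the synthetic data and the 0.9 threshold are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = forest.predict_proba(X_te)    # averaged class probabilities of the individual trees
confidence = proba.max(axis=1)        # agreement of the ensemble, in [0.5, 1] for two classes
pred = forest.predict(X_te)

confident = confidence > 0.9          # abstain below this threshold
print("coverage:", confident.mean())
print("accuracy on confident predictions:",
      (pred[confident] == y_te[confident]).mean())
print("overall accuracy:", (pred == y_te).mean())
```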

In this series of lectures I will present the theories behind the quantification of prediction confidence, some ways in which these have been used in real-world applications, and some open problems.

 

Vladimir Kolmogorov: The MAP-MRF approach in computer vision

Many problems in computer vision are formulated in terms of a random field over discrete variables, such as a Markov or Conditional Random Field (MRF/CRF). Examples range from low-level vision, such as image segmentation, optical flow and stereo reconstruction, to high-level vision, such as object recognition. The goal is typically to infer the most probable values of the random variables, known as Maximum a Posteriori (MAP) estimation. This problem has been widely studied in several areas of computer science (e.g. computer vision, machine learning, theory), and the resulting algorithms have greatly helped in obtaining accurate and reliable solutions to many problems. In many important cases these algorithms are very efficient and can find the globally optimal solution (or a provably good approximate solution). Hence, they have led to a significant increase in the use of random field models in computer vision and other fields.

This tutorial is aimed at researchers who wish to use and understand these algorithms for solving new problems in computer vision and information engineering. No prior knowledge of probabilistic models or discrete optimization will be assumed. The tutorial will cover the following topics:

(a) How to formalize and solve some known vision problems using MAP inference of a random field?
 - binary image segmentation
 - stereo reconstruction
 - image restoration
 - texture synthesis
(b) MAP-MRF inference algorithms:
 - different flavors of graph cuts (a.k.a. the maximum-flow algorithm)
 - belief propagation (a minimal chain example is sketched after this list)
 - tree-reweighted message passing
 - dual decomposition
(c) Which algorithm is suitable for which problem?
 - submodular problems
 - metric interactions
(d) What are the recent developments and open questions in this field?
 - how to solve the LP relaxation?
 - tighter LP relaxations
 - higher-order functions
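
To give a feel for the message-passing algorithms of topic (b), here is a minimal sketch (an illustration, not course material) of max-product belief propagation written in min-sum form on a chain MRF, where it reduces to dynamic programming and returns the exact MAP labeling; the potentials below are invented for a toy 1D segmentation.

```python
import numpy as np

def chain_map(unary, pairwise):
    """Exact MAP labeling of a chain MRF by max-product message passing,
    written here in min-sum (energy) form.
    unary:    (n, K) array, unary[i, k]    = cost of assigning label k to node i
    pairwise: (K, K) array, pairwise[k, l] = cost of labels (k, l) on an edge
    """
    n, K = unary.shape
    msg = np.zeros((n, K))        # msg[i, k] = best cost of the prefix ending with label k at node i
    back = np.zeros((n, K), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):
        cand = msg[i - 1][:, None] + pairwise   # previous label (rows) x current label (cols)
        back[i] = cand.argmin(axis=0)
        msg[i] = cand.min(axis=0) + unary[i]
    labels = np.empty(n, dtype=int)             # backtrack the optimal assignment
    labels[-1] = msg[-1].argmin()
    for i in range(n - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels

# toy 1D "segmentation": noisy scanline, two labels, Potts smoothness term
rng = np.random.default_rng(0)
signal = np.r_[np.zeros(20), np.ones(20)] + 0.4 * rng.normal(size=40)
unary = np.stack([(signal - 0.0) ** 2, (signal - 1.0) ** 2], axis=1)
pairwise = 0.5 * (1.0 - np.eye(2))              # pay 0.5 whenever the label changes
print(chain_map(unary, pairwise))
```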

 

Lionel Moisan: Image Processing using Fourier Representation

The Shannon sampling theorem is a well-known foundation of digital signal and image processing, but it is often regarded as a purely theoretical result of little practical use, owing to the slow decay of the cardinal sine function involved in the reconstruction formula. In this course, we will show that the representation of images in the Fourier domain (as suggested by the Shannon theorem) can be very efficient and useful, and we will present several mathematical developments in which it plays a central role: sub-pixel interpolation and geometric transformation of images, the elimination of edge effects resulting from the Discrete Fourier Transform, the assessment of image sharpness through the coherence of the phase function in the Fourier domain, sub-pixel total variation restoration of images, and the representation of Gaussian and random-phase textures. This will also lead us to interpret and discuss four fundamental distortions of digital images: blur, aliasing, ringing, and noise.
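
As a small companion example (mine, not from the course), a sub-pixel translation in the spirit of Shannon interpolation amounts to multiplying the DFT of the image by a linear phase ramp; the test image and the half-pixel shift below are arbitrary.

```python
import numpy as np

def subpixel_shift(img, dy, dx):
    """Translate a (periodic) image by a possibly non-integer offset
    by applying a linear phase in the Fourier domain."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]   # cycles per pixel, vertical
    fx = np.fft.fftfreq(W)[None, :]   # cycles per pixel, horizontal
    ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

# toy example: shift a smooth periodic test image by half a pixel in each direction
y, x = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * x / 16) * np.cos(2 * np.pi * y / 16)
shifted = subpixel_shift(img, 0.5, 0.5)
print(np.abs(shifted).max())
```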

 

Justin Romberg: Sparsity in Image Processing and Compressive Sampling

The general theme of this lecture series will be sparsity and its applications in image processing, with a special emphasis on coded imaging and compressive sampling.  The five hours will be broken into five segments of (roughly) the same size:

I. Sparse representations for images.  We will review classical representations for images (including the discrete cosine transform) and the more modern wavelet and curvelet transforms.  We will also discuss representations that adapt to the geometrical structure often found in natural images.
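
As a quick numerical companion to this segment (not part of the lectures), the classical DCT representation is already quite sparse for smooth images: keeping only the largest few percent of coefficients reconstructs the image with small error. The test image and the retained fraction are arbitrary choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

# smooth toy image
y, x = np.mgrid[0:128, 0:128]
img = np.cos(2 * np.pi * x / 32) * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 2000)

coeffs = dctn(img, norm='ortho')

# keep the 5% largest-magnitude DCT coefficients, zero out the rest
k = int(0.05 * coeffs.size)
thresh = np.sort(np.abs(coeffs).ravel())[-k]
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

approx = idctn(sparse, norm='ortho')
print("relative L2 error:", np.linalg.norm(img - approx) / np.linalg.norm(img))
```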

II. The role of sparsity in image processing.  This segment will focus on the advantages of having a sparse representation when performing concrete processing tasks.  In particular, we will focus on the theoretical guarantees directly tied to  sparsity for problems including noise removal, restoration/deblurring, and compression.  We will also discuss how sparsity plays a critical role in practical algorithms for solving these problems.
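
The role of sparsity in noise removal can be sketched in a few lines (an illustrative example of mine): transform, soft-threshold the coefficients, transform back. The DCT stands in for whatever sparsifying transform one prefers, and the threshold is a hand-picked placeholder.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
clean = np.cos(2 * np.pi * x / 32) * np.cos(2 * np.pi * y / 32)
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# denoise by shrinking small transform coefficients
denoised = idctn(soft(dctn(noisy, norm='ortho'), 2.0), norm='ortho')
print("noisy error:   ", np.linalg.norm(noisy - clean))
print("denoised error:", np.linalg.norm(denoised - clean))
```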

III. Compressive sampling and sparse solutions to underdetermined systems of equations. We will show how sparse representations can be used to very effectively regularize the solution of underdetermined systems of equations. This has direct implications for reducing the cost of sampling an image using a coded acquisition device. We will cover the basics of the theory of compressive sampling, deriving the essential conditions on the acquisition system (i.e. the matrix in the inverse problem) needed to efficiently sample an image.
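
A minimal sketch of l1-regularized recovery from an underdetermined system (my own illustration, using plain iterative soft-thresholding rather than any specific solver discussed in the course); the problem sizes and the regularization weight are placeholders.

```python
import numpy as np

def ista(A, b, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage step
    return x

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)     # random Gaussian measurement matrix
b = A @ x_true                               # underdetermined: m << n

x_hat = ista(A, b)
print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```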

IV. Randomness in compressive sampling.  We will show how universally efficient compressive sampling systems can be designed by injecting randomness into the acquisition process.  We will cover the very basics of random matrix theory needed to understand why compressive sampling works from "random measurements".
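
The concentration behind "random measurements" can be checked empirically (an illustrative experiment, not from the lecture): a suitably normalized random Gaussian matrix approximately preserves the norm of sparse vectors, which is the kind of behavior the recovery guarantees rely on.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)     # so that E[||A x||^2] = ||x||^2

ratios = []
for _ in range(2000):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

# the ratio concentrates around 1 over random sparse vectors (RIP-style behavior)
print("min/max norm ratio:", min(ratios), max(ratios))
```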

V. Applications of compressive imaging.  In the final segment, we will discuss a multitude of applications where the theory of the last two lectures has been put into practice.  Examples will include magnetic resonance imaging, super-resolved imaging using coded apertures, seismic exploration, ground-penetrating radar, hyperspectral imaging, and radar pulse detection.