Learning to Reuse Visual Knowledge

Event date: 
Monday, 19 December, 2016 - 14:30
Location: 
B101
Speaker: 
Thomas Mensink
Abstract
In this presentation I focus on two lines of research that share the aim of reusing visual knowledge. First, I will discuss zero-example classification.
While large (annotated) image and video datasets currently exist, they cannot guarantee sufficient annotations for all possible concepts. For more exotic concepts (e.g., lagerphone) or composite concepts (e.g., wooden saxophone; a sunny day on the mountain), annotations are harder to obtain, can possibly be obtained only by experts, and the number of (combinations of) concepts is (almost) unbounded. In the absence of object-specific annotations, one solution is zero-shot learning, where the combination of (a) existing classifiers and (b) semantic, cross-concept mappings between these classifiers allows building novel classifiers without requiring any visual examples. In particular, I will focus on zero-shot learning for video retrieval and as a prior for
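The idea of building a novel classifier from existing classifiers and a semantic cross-concept mapping can be sketched as follows. This is a minimal, hypothetical toy example (the classifier weights, similarity vector, and dimensions are all assumptions, not the speaker's actual method): the classifier for an unseen concept is formed as a similarity-weighted combination of the classifiers for known concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 5, 16                      # number of known concepts, feature dimension
W = rng.normal(size=(K, D))       # linear classifiers for the K known concepts

# Semantic cross-concept mapping: similarity of a novel concept
# (e.g. "lagerphone") to each known concept, e.g. from text embeddings.
sim = np.array([0.70, 0.20, 0.05, 0.03, 0.02])

# Zero-shot classifier: weighted combination of existing classifiers,
# built without a single visual example of the novel concept.
w_novel = sim @ W                 # shape (D,)

x = rng.normal(size=D)            # feature vector of a test image
score = float(w_novel @ x)        # zero-shot score for the novel concept
```

The same scoring rule applies unchanged to video retrieval: rank videos by the zero-shot score of their features under `w_novel`.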
Second, I will discuss distance-based classification with metric learning, where the learned metric encapsulates our visual knowledge. The advantage of metric learning is that it is trivial to add new classes, new concepts, or new training data. In particular, I will focus on metric learning for online classification of data streams.
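Why adding classes is trivial in distance-based classification can be illustrated with a nearest-class-mean sketch. This is a hypothetical toy example (the projection matrix `L`, class names, and dimensions are assumptions; here the metric is taken as given rather than learned): classification compares a sample to stored class means under the metric, so a new class only requires storing its mean.

```python
import numpy as np

rng = np.random.default_rng(1)

D, d = 16, 8
L = rng.normal(size=(d, D))       # learned linear metric (assumed given here)

# Class means in the original feature space.
means = {"cat": rng.normal(size=D), "dog": rng.normal(size=D)}

def predict(x):
    # Classify x by its nearest class mean under the metric:
    # d(x, mu) = ||L x - L mu||^2
    return min(means, key=lambda c: np.sum((L @ x - L @ means[c]) ** 2))

# Adding a new class at near-zero cost: store its mean, no retraining of L.
means["lagerphone"] = rng.normal(size=D)
```

For an online data stream, the stored means can likewise be updated incrementally as new samples of each class arrive, while the metric `L` stays fixed.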
Contact: 
Tatiana Tommasi, Prof. Barbara Caputo