Yana Hasson

PhD student, INRIA

I am a PhD student in the computer vision and machine learning research laboratory (WILLOW project team) in the Department of Computer Science of École Normale Supérieure (ENS) and at Inria Paris. I work on understanding first-person videos under the supervision of Ivan Laptev and Cordelia Schmid. I received an MS degree in Applied Mathematics from École Centrale Paris and an MS degree in Mathematics, Vision and Learning from ENS Paris-Saclay.


02 / 2019
Our CVPR'19 paper on hand-object reconstruction has been accepted! More details coming soon.
09 / 2018
I co-organized the 5th WiCV workshop, which took place in conjunction with ECCV'18 in Munich.
04 / 2018
I visited the Perceiving Systems team at MPI for a month.
11 / 2017
I started my PhD at WILLOW.
05 / 2017
I joined the WILLOW project team as a research intern!


Learning joint reconstruction of hands and manipulated objects
Yana Hasson, Gül Varol, Dimitrios Tzionas, Igor Kalevatykh, Michael J. Black, Ivan Laptev, and Cordelia Schmid
CVPR, 2019.
@inproceedings{hasson2019learning,
  title     = {Learning joint reconstruction of hands and manipulated objects},
  author    = {Hasson, Yana and Varol, G{\"u}l and Tzionas, Dimitrios and Kalevatykh, Igor and Black, Michael J. and Laptev, Ivan and Schmid, Cordelia},
  booktitle = {CVPR},
  year      = {2019}
}
Estimating hand-object manipulations is essential for interpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challenging task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact restricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work we regularize the joint reconstruction of hands and objects with manipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. Our approach significantly improves grasp quality metrics over baselines on synthetic and real datasets, using RGB images as input.
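The contact constraint described in the abstract can be illustrated with a minimal sketch, assuming a simplified setting: hand vertices that penetrate the object are pushed out (repulsion), and vertices just outside the surface are pulled toward contact (attraction). The `object_sdf` signed-distance helper and the thresholds are hypothetical illustrations, not the paper's actual loss.

```python
import torch

def contact_loss(hand_verts, object_sdf, attract_thresh=0.01):
    """Hedged sketch of a contact loss for hand-object reconstruction.

    hand_verts: (V, 3) tensor of hand mesh vertex positions.
    object_sdf: callable mapping (V, 3) points to signed distances to the
                object surface (negative inside) -- a hypothetical helper.
    """
    d = object_sdf(hand_verts)                 # (V,) signed distances
    # Repulsion: penalize vertices inside the object (interpenetration).
    repulsion = torch.relu(-d).mean()
    # Attraction: pull vertices that are almost touching onto the surface.
    near = d[(d > 0) & (d < attract_thresh)]
    attraction = near.mean() if near.numel() > 0 else d.new_zeros(())
    return repulsion + attraction
```

With a unit-sphere SDF, a vertex inside the sphere yields a positive loss, while a vertex far away contributes nothing.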


PyTorch port of the MANO hand model as a differentiable layer.
Port of the I3D network for action recognition to PyTorch, with weights transferred from the model trained on the Kinetics dataset.
Utilities for video data-augmentation.
Inflation of ResNets and DenseNets from image inputs to video inputs, with weights initialized from ImageNet-pretrained models.
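The inflation above can be sketched with the standard I3D recipe (a hedged illustration, not the repository's actual code): each pretrained 2D kernel is repeated along a new temporal axis and divided by the temporal extent, so that on a video of identical frames the inflated 3D convolution reproduces the 2D activations.

```python
import torch
import torch.nn as nn

def inflate_conv2d(conv2d, time_dim=3):
    """Inflate a pretrained Conv2d into a Conv3d, I3D-style."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(time_dim, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_dim // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    # Repeat the 2D kernel along time; divide by time_dim so a constant
    # temporal input produces the same response as the 2D convolution.
    w2d = conv2d.weight.data  # (out, in, kH, kW)
    conv3d.weight.data = w2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) / time_dim
    if conv2d.bias is not None:
        conv3d.bias.data = conv2d.bias.data.clone()
    return conv3d
```

For a video made of one repeated frame, the output at an interior time step matches the 2D convolution of that frame (temporal boundary frames differ because of zero padding in time).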
Some useful tips and resources for PhD students in computer vision.
Demo, evaluation and training code for Hand-Object reconstruction