HOBI – IEEE RO-MAN Workshop

Workshop on Hand-OBject Interaction: From human demonstrations to robot manipulation.

29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020)

Date: September 7, 2020
Address: Online
Zoom ID: 921 3075 2959
Zoom link: https://zoom.us/j/92130752959

The deadline for submitting contributions has been extended to the 7th of August.

NOTE: Because of the COVID-19 outbreak, the workshop will take place entirely online.

Humans use their hands to interact with the environment, and they do so for a broad spectrum of activities, from the physical manipulation of objects to non-verbal communication. Examples range from the simple grasping of everyday objects and the use of tools to deictic gestures and communication via sign language.

It is therefore of the utmost importance to study how humans use their hands, with the aim of developing novel robot capabilities for tasks usually considered a human prerogative and, more generally, for interacting, collaborating, and communicating with humans in a socially acceptable and safe manner. For example, a robot should be able to use tools dexterously, to synchronize its movements with the human it collaborates with, whether for joint work or turn-taking, and to manipulate objects in a way that enhances humans' sense of trust.

These examples require robot hands that coordinate with human motions, as well as advanced capabilities to infer objects' affordances and intrinsic characteristics and to understand how objects can be manipulated according to the social context.

The Workshop aims to gather new approaches and experiences from different fields to discuss which conceptual and engineering tools are best suited to sense human hand motions, to recognize objects and their physical characteristics, and to model and encode this knowledge for developing new robot behaviors.

The Workshop also aims to be a starting point for further activities:

  1. We will set up a mailing list including all participants, aiming to build a scientific community interested in the Workshop topics. The mailing list will be used for the Workshop organization, to foster debate after the Workshop, and to share the community's results.
  2. We aim to write a paper reviewing the ideas that emerged during the Workshop, inviting contributions from all participants. The spirit will be to synthesize a contribution that puts forth a research agenda for future activities, identifies common challenges to address, and sustains reproducible research.
  3. A journal special issue on the Workshop topics will be proposed, open to all Workshop participants. Possible target journals are Robotics and Autonomous Systems and IEEE Transactions on Human-Machine Systems.

List of topics:

  • Data extraction from human handling tasks
  • Datasets of in-hand manipulation
  • Hand pose estimation and tracking
  • Gesture, action, and intent recognition
  • Learning from demonstration
  • Imitation learning
  • Transfer learning
  • Object modelling, recognition, pose estimation and tracking
  • Object grasping
  • Control of anthropomorphic hands

Workshop Submission

We invite extended abstracts (max 2 pages), followed by camera-ready submission of accepted papers. Submissions can be original research or late-breaking results that fall under the scope of the workshop. Authors of all accepted submissions will give an oral presentation of their work.

Papers should be submitted through the EasyChair page and should use the IEEE template.

  • Deadline: July 31st (old) – August 7th (new)
  • Notification of acceptance: August 15th (old) – August 21st (new)
  • Camera-ready: August 21st (old) – August 29th (new)

Workshop Program

All times are given in the CEST time zone.

Invited Talks

Lorenzo Natale and Elisa Maiettini

What is this object? On-the-fly learning from a human demonstrator: experiments with a humanoid robot

Abstract: We have proposed a human-robot interaction scenario in which a human teaches a robot to recognize new objects. In this scenario, image views and labels are given by the user during natural interaction with the robot. We have implemented a system for extracting training images from the user's demonstrations and released iCubWorld, a dataset containing images acquired in this setting. We have also proposed an object detection architecture derived from Faster R-CNN that allows the robot to learn online in a few seconds of interaction with the user. Yet, we also realized that because objects are acquired in a very specific setting, performance drops when the robot observes the scene in a different context. To address this issue, we have proposed an active strategy that allows the robot to autonomously acquire new examples and request human intervention only when strictly required, thus reducing the need for external supervision.

Guillem Alenyà

Learning grasping for the manipulation of rigid objects and clothing

Abstract: Inspired by observing manipulations performed by people, we have developed methods to learn grasping actions. For deformables, we propose a taxonomy that helps express both the grasp and, at the same time, the state of the clothing. This has inspired the design of new grippers with novel capabilities, as well as tools for the explainability of manipulation sequences.

Aude Billard and Baptiste Busch

Towards more fluid in-hand manipulation

Abstract: This talk gives an overview of what we have achieved to enable robots to safely hold objects even when subjected to various disturbances, by re-balancing the weight and re-grasping objects in hand. We will see applications of this to the control of bimanual grasps in humanoid robots and to robust shared manipulation with application to prosthesis control. I will close by showing how humans can acquire very fine manipulation skills, such as when manipulating tiny screws in watchmaking, and discuss the remaining challenges in both hardware and software for robots to achieve these skills.

Organizers

Alessandro Carfì, University of Genoa, alessandro.carfi@dibris.unige.it
Timothy Patten, TU Wien, patten@acin.tuwien.ac.at
Abraham Itzhak Weinberg, Aston University, a.weinberg@aston.ac.uk
Ali Hammoud, Sorbonne Université, alihammoudd4@gmail.com
Fulvio Mastrogiovanni, University of Genoa, fulvio.mastrogiovanni@unige.it
Markus Vincze, TU Wien, vincze@acin.tuwien.ac.at
Diego Faria, Aston University, d.faria@aston.ac.uk
Véronique Perdereau, Sorbonne Université, veronique.perdereau@sorbonne-universite.fr