Workshop on Hand-Object Interaction: From human demonstrations to robot manipulation
29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020)
Date: September 7, 2020
Zoom ID: 921 3075 2959
Zoom link: https://zoom.us/j/92130752959
The deadline for submitting contributions has been extended to August 7th.
NOTE: Because of the COVID-19 outbreak, the workshop will take place entirely online.
Humans use their hands to interact with the environment, and they do so for a broad spectrum of activities, from the physical manipulation of objects to non-verbal communication. Examples range from the simple grasping of everyday objects and the use of tools to deictic gestures and communication via sign language.
It is therefore of the utmost importance to focus on how humans use their hands, with the aim of developing novel robot capabilities to deal with tasks usually considered a human prerogative and, more generally, to enable robots to interact, collaborate, and communicate with humans in a socially acceptable and safe manner. For example, a robot should be able to use tools dexterously, to synchronize its movements with the human it is collaborating with, whether for joint work or turn-taking, or to manipulate objects in a way that enhances human trust.
These examples require robot hands to coordinate with human motions, as well as advanced capabilities to infer objects' affordances and intrinsic characteristics, and to understand how objects can be manipulated according to the social context.
The Workshop aims at gathering new approaches and experience from different fields to discuss which conceptual and engineering tools are best suited to sense human hand motions, to recognize objects and their physical characteristics, and to model and encode this knowledge to develop new robot behaviors.
The Workshop also aims to be a starting point for further activities:
- We will set up a mailing list including all participants, with the aim of building a scientific community interested in the Workshop topics. The mailing list will be used for the Workshop organization, to foster debate after the Workshop, and to share the community's results.
- We aim to write a paper reviewing the ideas that emerged during the Workshop, inviting contributions from all participants. The spirit will be that of synthesizing a contribution that puts forth a research agenda for future activities, identifies common challenges to address, and sustains reproducible research.
- A journal special issue will be proposed, open to all Workshop participants, related to the Workshop topics. Possible target journals are Robotics and Autonomous Systems and IEEE Transactions on Human-Machine Systems.
List of topics:
- Data extraction of human handling tasks
- Datasets of in-hand manipulation
- Hand pose estimation and tracking
- Gesture, action, and intent recognition
- Learning from demonstration
- Imitation learning
- Transfer learning
- Object modelling, recognition, pose estimation and tracking
- Object grasping
- Control of anthropomorphic hands
We invite extended abstracts (max 2 pages), followed by camera-ready submission of accepted papers. Submissions can be original research or late-breaking results that fall under the scope of the workshop. Authors of all accepted submissions will give an oral presentation of their work.
Papers should be submitted through the EasyChair page and should use the IEEE conference template.
- Deadline: July 31st (old) – August 7th (new)
- Notification of acceptance: August 15th (old) – August 21st (new)
- Camera-ready: August 21st (old) – August 29th (new)
All times are given in the CEST time zone.
- 09.00 – 09.15 Greetings and Opening
- 09.15 – 09.35 1st Oral Session
- 09.35 – 09.55 1st Invited Talk: Towards more fluid in-hand manipulation – Baptiste Busch, EPFL, Switzerland
- 09.55 – 10.10 Coffee break
- 10.10 – 10.40 2nd Oral Session
- Improving Out-of-distribution Distractor Handling through Data Augmentation – Lukas Flatz, Stefan Thalhammer, Timothy Patten and Markus Vincze
- Grasp-type Recognition Leveraging Object Affordance – Naoki Wake, Kazuhiro Sasabuchi and Katsushi Ikeuchi
- Leveraging Touch Sensors to Improve Mobile Manipulation – Luca Lach, Robert Haschke, Francesco Ferro and Jordi Pagès
- 10.40 – 11.40 2nd Invited Talk: Learning grasping for manipulation of rigids and clothing – Guillem Alenyà, IRI, Spain
- 11.40 – 14.00 Lunch
- 14.00 – 14.30 3rd Oral Session
- Grasping with Chopsticks: Fine Grained Manipulation using Inexpensive Hardware by Imitation Learning – Liyiming Ke, Jingqiang Wang, Tapomayukh Bhattacharjee, Byron Boots and Siddhartha Srinivasa
- Teleoperation System for Teaching Dexterous Manipulation – Stefan Zahlner, Matthias Hirschmanner, Timothy Patten and Markus Vincze
- Improving exploration efficiency of single-goal in-hand manipulation reinforcement learning by Progressive Goal Generation – Yingyi Kuang, George Vogiatzis and Diego Faria
- 14.30 – 14.50 3rd Invited Talk: What is this object? On-the-fly learning from a human demonstrator: experiments with a humanoid robot – Elisa Maiettini, IIT, Italy
- 14.50 – 15.00 Coffee break
- 15.00 – 16.00 Discussion
- 16.00 – 16.15 Closing
Lorenzo Natale and Elisa Maiettini
What is this object? On-the-fly learning from a human demonstrator: experiments with a humanoid robot
Abstract: We have proposed a human-robot interaction scenario in which a human teaches a robot to recognize new objects. In this scenario, image views and labels are given by the user during natural interaction with the robot. We have implemented a system for extracting training images from the user demonstrations and released iCubWorld, a dataset which contains images acquired in this setting. We have also proposed an object detection architecture derived from Faster R-CNN that allows the robot to learn online within a few seconds of interaction with the user. Yet, we also realized that, because objects are acquired in a very specific setting, there is a drop in performance when the robot observes the scene in a different context. To address this issue, we have proposed an active strategy which allows the robot to autonomously acquire new examples and request human intervention only when strictly required, thus reducing the need for external supervision.
Learning grasping for manipulation of rigids and clothing
Abstract: Inspired by the observation of manipulations performed by people, we developed methods to learn grasping actions. For deformables, we propose a taxonomy that helps to express both the grasp and, at the same time, the state of the clothing. This has inspired the design of new grippers with advanced capabilities, as well as tools for the explainability of manipulation sequences.
Aude Billard and Baptiste Busch
Towards more fluid in-hand manipulation
Abstract: This talk gives an overview of what we have achieved to enable robots to safely hold objects even when subjected to various disturbances, by re-balancing the weight and re-grasping objects in hand. We will see applications of this to the control of bimanual grasps in humanoid robots and to robust shared manipulation, with application to prosthesis control. I will close by showing how humans acquire very fine manipulation skills, such as when manipulating tiny screws in watchmaking, and discuss the remaining challenges in both hardware and software for robots to achieve these skills.
Alessandro Carfì, University of Genoa, firstname.lastname@example.org
Timothy Patten, TU Wien, email@example.com
Abraham Itzhak Weinberg, Aston University, firstname.lastname@example.org
Ali Hammoud, Sorbonne Université, email@example.com
Fulvio Mastrogiovanni, University of Genoa, firstname.lastname@example.org
Markus Vincze, TU Wien, email@example.com
Diego Faria, Aston University, firstname.lastname@example.org
Véronique Perdereau, Sorbonne Université, email@example.com