TheEngineRoom (TER) is a research group of the Department of Informatics, Bioengineering, Robotics and Systems Engineering at the University of Genoa, led by Prof. Fulvio Mastrogiovanni. TER advances research in Human-Robot Interaction and Human-Robot Collaboration by developing and integrating cutting-edge artificial intelligence methods with emerging technologies such as wearable devices, augmented reality and virtual reality.

TheEngineRoom is colocated with the EMARO Lab at the University of Genoa, Italy.


Humans engaged in collaborative activities are naturally able to convey their intentions to teammates through multi-modal communication, which is made up of explicit and implicit cues. Similarly, a more natural form of human-robot collaboration may be achieved by enabling robots to convey their intentions to human teammates via multiple communication channels. In this paper, we postulate that better communication may take place should collaborative robots be able to anticipate their movements to human teammates in an intuitive way. To support this claim, we propose a robot system architecture through which robots can communicate planned motions to human teammates via a Mixed Reality interface powered by modern head-mounted displays. Specifically, the robot's hologram, which is superimposed on the real robot in the human teammate's point of view, shows the robot's future movements, allowing the human to understand them in advance, and possibly react to them in an appropriate way. We conduct a preliminary user study to evaluate the effectiveness of the proposed anticipatory visualization during a complex collaborative task. The experimental results suggest that an improved and more natural collaboration can be achieved by employing this anticipatory communication mode.

In this work, we present a framework for human-robot collaboration that allows the human operator to alter the robot's plan execution online. To achieve this goal, we introduce Branched AND/OR graphs, an extension of AND/OR graphs, to manage flexible and adaptable human-robot collaboration. In our study, the operator can alter the plan execution using two implementations of Branched AND/OR graphs: one for learning by demonstration via kinesthetic teaching, and one for task repetition. Finally, we demonstrate the effectiveness of our framework in a defect-spotting scenario where the operator supervises robot operations and modifies the plan online when necessary.

In this article, we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely perception, representation, and action. Building on previous work, here we focus on an in-the-loop decision-making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and on the representation level, which integrates a hierarchical AND/OR graph whose online behavior is formally specified using first-order logic. The architecture is accompanied by experiments including collaborative furniture assembly and object positioning tasks.
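The AND/OR graph formalism used in the two works above can be illustrated with a minimal sketch: an OR node is solved when any one of its children is solved (alternative task decompositions), while an AND node is solved only when all of its children are solved (a conjunction of subtasks). The class names and example task below are illustrative assumptions, not the actual FlexHRC+ implementation.

```python
# Minimal sketch of an AND/OR graph for task representation.
# OR node: solved when ANY child is solved (alternative decompositions).
# AND node: solved only when ALL children are solved (subtask conjunction).
# Names and task structure are illustrative, not the papers' implementation.

class Node:
    def __init__(self, name, kind="LEAF", children=None):
        assert kind in ("AND", "OR", "LEAF")
        self.name = name
        self.kind = kind
        self.children = children or []
        self.solved = False  # leaves are marked solved as actions complete

    def is_solved(self):
        if self.kind == "LEAF":
            return self.solved
        results = [c.is_solved() for c in self.children]
        return all(results) if self.kind == "AND" else any(results)

# Example: a table can be assembled either cooperatively (both subtasks
# must be completed) or by the robot alone (a single alternative branch).
legs = Node("attach_legs")
top = Node("place_tabletop")
coop = Node("cooperative_assembly", "AND", [legs, top])
solo = Node("robot_only_assembly")
root = Node("assemble_table", "OR", [coop, solo])

legs.solved = True
print(root.is_solved())  # False: AND branch incomplete, solo branch not done
top.solved = True
print(root.is_solved())  # True: the cooperative AND branch is fully solved
```

Online plan adaptation, as in the Branched AND/OR graph work, can then be pictured as grafting new subgraphs (e.g. an operator-demonstrated branch) onto an OR node while execution is in progress.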

Selected Publications

Gesture-based human-machine interaction: taxonomy, problem definition, and analysis
A. Carfì, F. Mastrogiovanni
IEEE Transactions on Cybernetics, 2022

Exact and bounded collision probability for motion planning under Gaussian uncertainty
A. Thomas, F. Mastrogiovanni, M. Baglietto
IEEE Robotics and Automation Letters, 7(1), 167-174, 2022

OWLOOP: a modular API to describe OWL axioms in OOP objects hierarchies
S. Y. Kareem, L. Buoncompagni, F. Mastrogiovanni
SoftwareX, 2021

Hand-object interaction: from human demonstrations to robot manipulation
A. Carfì, T. Patten, Y. Kuang, A. Hammoud, M. Alameh, E. Maiettini, A. I. Weinberg, D. Faria, F. Mastrogiovanni, G. Alenyà, L. Natale, V. Perdereau, M. Vincze, A. Billard
Frontiers in Robotics and AI, 2021

MPTP: Motion-Planning-aware Task Planning for navigation in belief space
A. Thomas, F. Mastrogiovanni, M. Baglietto
Robotics and Autonomous Systems, 141, 2021

Human activity recognition models in ontology networks
L. Buoncompagni, K. Y. Syed, F. Mastrogiovanni
IEEE Transactions on Cybernetics, 2021

A hierarchical architecture for human-robot cooperation processes
K. Darvish, E. Simetti, F. Mastrogiovanni, G. Casalino
IEEE Transactions on Robotics, 37(2), 567-586, 2021

An integrated localisation, motion planning and obstacle avoidance algorithm in belief space
A. Thomas, F. Mastrogiovanni, M. Baglietto
Intelligent Service Robotics, 14, 235-250, 2021


Dynamics and timescales of volcanic plumbing systems: a multidisciplinary approach to a multifaceted problem (2021)

Objective: Develop more reliable conceptual models and modelling tools to understand the dynamics of volcanic plumbing systems, from magma storage conditions to eruption; integrating innovative experiments, state-of-the-art analytical protocols, numerical modelling, and artificial intelligence algorithms.

Partners: University of Perugia, University of Camerino and University of Genoa.

InDex (2019)

Objective: Observe human dexterous in-hand manipulation to extrapolate and learn new skills for robotic platforms.

Partners: Aston University, Technical University Wien, University of Tartu, Sorbonne Université, and University of Genoa.

Artouch-Lab (2019)

Objective: The Artificial Somatosensation for Humans and Humanoids Lab will advance groundbreaking research towards a future in which humans and robots will benefit from synergetic integration between natural and synthetic somatosensation for perception and control.

Partners: Ben-Gurion University of the Negev and University of Genoa.

TeamUP (2019)

Objective: Design and implement novel Artificial Intelligence models, methods and technologies to improve human-robot collaboration in real-world industrial settings, integrating advanced complementary technologies, including artificial vision, tactile sensing, wearable devices, virtual reality (VR) and augmented reality (AR).



Baxter is an anthropomorphic humanoid robot from Rethink Robotics. It is equipped with two seven-degree-of-freedom arms and has been designed for close interaction with human co-workers.


TIAGo is a mobile manipulator from PAL Robotics, combining perception, navigation, manipulation and human-robot interaction skills. It is equipped with two seven-degree-of-freedom arms, an RGB-D camera, a LIDAR, speakers and microphones.


MiRO is a pet-like robot from Consequential Robotics. It can autonomously navigate and interact with humans thanks to a sensor suite comprising cameras, microphones and tactile sensors distributed over its body. MiRO can also communicate with humans using its speakers and lights.

Husqvarna Automower

This automower from Husqvarna can be controlled through ROS and has been integrated with a Kinect for autonomous navigation. The robot is a flexible mobile platform useful for education and research.