Humans engaged in collaborative activities naturally convey their intentions to teammates through multi-modal communication made up of explicit and implicit cues. Similarly, a more natural form of human-robot collaboration may be achieved by enabling robots to convey their intentions to human teammates via multiple communication channels. In this paper, we postulate that better communication can take place if collaborative robots are able to anticipate their movements to human teammates in an intuitive way. To support this claim, we propose a robot system architecture through which robots can communicate planned motions to human teammates via a Mixed Reality interface powered by modern head-mounted displays. Specifically, a hologram of the robot, superimposed on the real robot in the human teammate's field of view, shows the robot's future movements, allowing the human to understand them in advance and, possibly, react to them appropriately. We conduct a preliminary user study to evaluate the effectiveness of the proposed anticipatory visualization during a complex collaborative task. The experimental results suggest that employing this anticipatory communication mode yields improved and more natural collaboration.
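
As a rough illustration of how such an anticipatory visualization could be wired up, the sketch below (in Python; the message format, channel, and robot interfaces are hypothetical assumptions, not the paper's implementation) streams a planned joint-space trajectory to the head-mounted display so the hologram can run ahead of the real robot:

    import json
    import time

    def preview_then_execute(trajectory, hmd_channel, robot, lead_time=2.0):
        """Show the planned motion on the HMD hologram, then execute it.

        trajectory  -- list of (timestamp, joint_positions) waypoints
        hmd_channel -- any object with a send(bytes) method (e.g. a socket)
        robot       -- any object with an execute(trajectory) method
        lead_time   -- seconds the hologram runs ahead of the real robot
        """
        # Serialize the waypoints so the display can animate the hologram.
        message = json.dumps({
            "type": "motion_preview",
            "waypoints": [{"t": t, "q": list(q)} for t, q in trajectory],
        })
        hmd_channel.send(message.encode("utf-8"))

        # Give the human teammate time to see the anticipated motion
        # before the real robot starts moving.
        time.sleep(lead_time)
        robot.execute(trajectory)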

In this work, we present a framework for human-robot collaboration that allows the human operator to alter the robot's plan execution online. To achieve this goal, we introduce Branched AND/OR graphs, an extension of AND/OR graphs designed to manage flexible and adaptable human-robot collaboration. In our study, the operator can alter plan execution through two implementations of Branched AND/OR graphs: one for learning by demonstration via kinesthetic teaching, and one for task repetition. Finally, we demonstrate the effectiveness of our framework in a defect-spotting scenario in which the operator supervises robot operations and modifies the plan online when necessary.
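
To make the idea concrete, the following minimal sketch (in Python; all class and function names are illustrative assumptions, not the paper's implementation) shows one possible encoding of an AND/OR graph and the branching operation that splices a demonstrated sub-task into the plan online:

    class Node:
        """A task-graph node: an AND node requires all children,
        an OR node requires any single child."""
        def __init__(self, name, node_type="OR"):
            self.name = name
            self.node_type = node_type
            self.children = []
            self.solved = False

    def add_branch(parent, demonstrated_subtask):
        """Splice a new alternative (e.g. learned by kinesthetic teaching)
        under an OR node, altering the plan during execution."""
        assert parent.node_type == "OR", "branches extend OR nodes"
        parent.children.append(demonstrated_subtask)

    def is_solved(node):
        """An AND node needs all children solved; an OR node at least one."""
        if not node.children:
            return node.solved
        results = [is_solved(child) for child in node.children]
        return all(results) if node.node_type == "AND" else any(results)

    # Example: the operator demonstrates an alternative way to fasten a part.
    fasten = Node("fasten_part", "OR")
    fasten.children.append(Node("use_screwdriver"))
    add_branch(fasten, Node("demonstrated_snap_fit"))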

In this article, we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely perception, representation, and action. Building on previous work, here we focus on (i) an in-the-loop decision-making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and (ii) the representation level, which integrates a hierarchical AND/OR graph whose online behavior is formally specified using first-order logic. We validate the architecture in experiments involving collaborative furniture assembly and object positioning tasks.
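
The in-the-loop decision step can be pictured as repeatedly selecting the cheapest pending action whose logical precondition holds in the currently perceived world state; the sketch below (in Python; the predicates, actions, and costs are illustrative assumptions, not the paper's formalization) captures this pattern:

    def next_action(pending_actions, world_state):
        """pending_actions: list of (name, precondition, cost), where
        precondition is a callable over the perceived world_state."""
        feasible = [(name, cost) for name, pre, cost in pending_actions
                    if pre(world_state)]
        if not feasible:
            return None  # nothing applicable: wait for the human or re-plan
        return min(feasible, key=lambda a: a[1])[0]

    # Example: placing a part requires holding it with the target clear,
    # a precondition in the spirit of a first-order logic specification.
    actions = [
        ("place_part", lambda s: s["holding_part"] and s["target_clear"], 1.0),
        ("grasp_part", lambda s: not s["holding_part"], 2.0),
    ]
    state = {"holding_part": False, "target_clear": True}
    print(next_action(actions, state))  # -> "grasp_part"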