The development of trustworthy human-assistive robots is a challenge that goes beyond the traditional boundaries of engineering. Safety, predictability, and usefulness are essential components of trustworthiness. In this paper we demonstrate that integrating joint action understanding from human-human interaction into the human-robot context can significantly improve the success rate of robot-to-human object handover tasks. We take a two-layer approach. The first layer handles the physical aspects of the handover: the robot’s decision to release the object is informed by a Hidden Markov Model that estimates the state of the handover. We then introduce a higher-level cognitive layer that models the behaviour expected of the human user during a handover, inspired by observations of human-human handovers. In particular, we focus on incorporating eye gaze and head orientation into the robot’s decision making. Our results demonstrate that by integrating these non-verbal cues, the success rate of robot-to-human handovers can be significantly improved, resulting in a more robust and therefore safer system.
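The two-layer decision described above can be illustrated with a minimal sketch: an HMM forward (filtering) recursion estimates the handover state from force-related observations, and a gaze gate from the cognitive layer must also be satisfied before release. All states, observation symbols, matrices, and the threshold below are illustrative assumptions for exposition, not the paper's fitted model.

```python
import numpy as np

# Hypothetical 3-state handover HMM: 0 = approach, 1 = contact, 2 = ready-to-release.
# Transition (A) and emission (B) probabilities are made up for illustration.
A = np.array([[0.90, 0.10, 0.00],   # approach may progress to contact
              [0.00, 0.80, 0.20],   # contact may progress to ready
              [0.00, 0.00, 1.00]])  # ready is absorbing
# Observations: 0 = no pulling force sensed, 1 = human pulling on the object.
B = np.array([[0.95, 0.05],
              [0.40, 0.60],
              [0.10, 0.90]])

def filter_step(belief, obs):
    """One step of the HMM forward recursion: predict, then update."""
    belief = belief @ A           # predict next-state distribution
    belief = belief * B[:, obs]   # weight by observation likelihood
    return belief / belief.sum() # renormalise to a probability vector

def should_release(belief, gaze_on_object, p_release=0.5):
    """Release only if 'ready' is likely (physical layer) AND the human's
    gaze/head orientation is directed at the object (cognitive layer)."""
    return belief[2] > p_release and gaze_on_object

# A toy observation sequence: pulling force builds up, gaze arrives late.
belief = np.array([1.0, 0.0, 0.0])  # start in 'approach'
released = False
for obs, gaze in [(0, False), (1, False), (1, True), (1, True)]:
    belief = filter_step(belief, obs)
    if should_release(belief, gaze):
        released = True
        break
```

In this sketch the gaze cue acts as a hard gate on top of the probabilistic state estimate; pulling force alone never triggers release, which mirrors the paper's point that adding the non-verbal cue makes the release decision more conservative and hence safer.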