Rehabilitation robotics and physical therapy could benefit greatly from engaging, motivating robotic caregivers that respond to patients’ emotional and social cues. Recent studies indicate that human-machine interactions are more believable and memorable when a physical entity is present, provided that the machine behaves realistically. Face-to-face communication is desirable because it is the most natural and efficient way of exchanging information and does not require users to alter their habits. Towards this end, we describe a process for animating a robot head based on video input of a human head. We map the 2D coordinates of tracked facial feature points into the robot’s servo space using Partial Least Squares (PLS). The mapping is learned from a small set of keyframes manually created by an animator. The method is efficient, robust to tracking errors, and independent of the scale of the face being tracked.
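
To make the mapping step concrete, the sketch below shows one plausible realization using scikit-learn’s PLSRegression: stacked, scale-normalized 2D landmark coordinates are regressed onto servo positions from a handful of keyframes, then new frames are projected directly into servo space. The landmark normalization, keyframe counts, and servo dimensionality here are illustrative assumptions, not the paper’s exact pipeline.

```python
# A minimal sketch of keyframe-trained PLS mapping, assuming
# scikit-learn; the normalization scheme and all dimensions
# (N points, M servos, K keyframes) are illustrative stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def normalize_landmarks(points):
    """Make 2D feature points translation- and scale-invariant by
    centering on the centroid and dividing by the RMS radius, so the
    regression does not depend on the size of the tracked face."""
    pts = points - points.mean(axis=0)
    scale = np.sqrt((pts ** 2).sum(axis=1).mean())
    return (pts / scale).ravel()  # flatten to a single feature vector

# Hypothetical training data: K keyframes pairing tracked 2D landmarks
# (N points each) with animator-authored servo positions (M servos).
K, N, M = 12, 20, 8
rng = np.random.default_rng(0)
keyframe_landmarks = rng.random((K, N, 2))     # stand-in for tracked points
keyframe_servos = rng.uniform(0, 180, (K, M))  # stand-in for servo angles

X = np.stack([normalize_landmarks(f) for f in keyframe_landmarks])
Y = keyframe_servos

# Fit PLS with few latent components; with only a handful of keyframes
# this low-dimensional projection also acts as regularization.
pls = PLSRegression(n_components=4)
pls.fit(X, Y)

# At runtime, each tracked video frame is normalized the same way and
# mapped straight into servo space.
frame = rng.random((N, 2))
servo_command = pls.predict(normalize_landmarks(frame)[None, :])[0]
print(servo_command.shape)  # (M,) servo targets for this frame
```

Because PLS projects both inputs and outputs onto a few shared latent components before regressing, occasional outliers in individual tracked points tend to be averaged out, which is consistent with the robustness to tracking errors claimed above.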