Attention-sharing is matching one’s focus of attention with another person’s, deliberately and for purposes of shared interest, experience, or learning. Although everyone in a theater might be watching the screen, this is a trivial example: researchers are interested in situations where individuals attempt to recruit another person’s attention (for example, by pointing), and situations where an individual shifts their attention in order to see what another person is attending to.
Infants begin sharing attention around the second half of the first year, and get pretty good at it by around their first birthday. How do they learn to share attention? What does infant attention-sharing look like?
Infants share attention through gaze-following, point-following, and a more recently studied behavior called action-monitoring. In action-monitoring, parents hold up or handle objects for infants, and infants reliably look. Infants also produce actions that recruit parents’ attention, such as vocalizing, smiling, and (after 12-18 months of age) showing them objects.
More about how infants start to follow adults’ gaze and points…

Multiple factors affect infants’ attention-following.
Our results also showed that infants follow: (1) points more than head turns or gaze shifts; (2) large actions (e.g., big, sweeping head turns) more than small ones; and (3) looks and points to interesting targets more than to boring targets.
A follow-up (Flom, Deák, Phill & Pick, 2004) found that 9-month-olds show many of the same effects, but only follow an adult’s gaze and pointing cues to targets in their peripheral visual field, not targets behind them. Thus, infants’ gaze-following ‘range’ increases with age.
Another factor that matters is distraction.
A theory of how infants learn attention-sharing...
Our simulations suggest that the theory can account for a wide range of behaviors seen in infants. For example, Hector Jasso’s thesis showed that the model could reproduce phenomena documented in real infants, including their tendencies to:
- follow gaze but stop and fixate on a distracting target near their midline (“premature capture”),
- alternate gaze between the parent’s face and an object,
- look at a parent to gauge their emotional response to an ambiguous event (“social referencing”), and
- follow a parent’s head direction when younger, and only use eye direction when older.
Josh Lewis, Hector Jasso, and colleagues (2010) also developed a platform in which a virtual infant uses computer vision in a 3D environment (like first-person shooter games) to watch what a virtual anthropomorphic ‘caregiver’ does. The infant does not “know” anything about the caregiver’s gaze direction or manual actions, or the meanings of its behaviors. It has only the simple perceptual and learning abilities stipulated in the theory, plus a preference to look at high-contrast, colorful, moving sights, including toys and faces. The caregiver, however, not only looked like a person but acted like one: specifically, it reproduced the patterns of behavior we had coded from parents in our naturalistic ethnographic study. If a ‘dumb’ virtual infant could learn to follow the virtual parent’s gaze direction from natural action patterns, using only very simple learning and perceptual routines, that would suggest gaze-following is a fairly easy behavior to learn, one that does not require any high-level, specially evolved capabilities. This is, in fact, just what we found.
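The core intuition behind this kind of learning account can be caricatured in a few lines of code. The sketch below is purely illustrative (it is not the actual model from Jasso’s thesis or the Lewis et al. platform): it assumes a bandit-style learner whose only “drive” is that interesting sights are rewarding, and whose only cue is the caregiver’s head direction. All state names, actions, and parameters here are invented for the example.

```python
import random

# Hypothetical toy sketch of reward-driven gaze-following (NOT the authors'
# actual model). States = which way the caregiver's head is turned; actions =
# where the infant looks. The rewarding sight (a toy) appears on whichever
# side the caregiver is looking, so following the head direction pays off.

random.seed(0)
STATES = ["head_left", "head_right"]
ACTIONS = ["look_left", "look_right"]
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate (arbitrary)

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # The interesting toy sits on the side the caregiver's head is turned to.
    target = "look_left" if state == "head_left" else "look_right"
    return 1.0 if action == target else 0.0

for _ in range(2000):
    state = random.choice(STATES)
    if random.random() < EPSILON:  # occasional exploratory look
        action = random.choice(ACTIONS)
    else:  # otherwise look where past experience says sights tend to be
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    # Nudge the value estimate toward the reward actually received.
    q[(state, action)] += ALPHA * (reward(state, action) - q[(state, action)])

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

After training, the learner’s greedy policy matches the caregiver’s head direction, even though nothing in the code “knows” what gaze or heads are. The point of the sketch mirrors the point of the simulations: gaze-following can emerge from generic reward learning plus a preference for interesting sights, without any dedicated gaze-reading machinery.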