Attention Sharing in Infants

Attention-sharing is deliberately matching one’s focus of attention with another person’s, for purposes of shared interest, experience, or learning. Although everyone in a theater might be watching the screen, that is a trivial example: researchers are interested in situations where individuals actively recruit another person’s attention (for example, by pointing), and situations where an individual shifts their attention in order to see what another person is attending to.

Infants begin sharing attention around the second half of the first year, and get pretty good at it by around their first birthday. How do they learn to share attention? What does infant attention-sharing look like?

Infants share attention through gaze-following and point-following, and through a newly studied behavior called action-monitoring: parents hold up or handle objects for infants, and infants reliably look. Infants also produce actions that get parents’ attention, like vocalizing and smiling and (after 12-18 months of age) showing them objects.

More about how infants start to follow adults’ gaze and points…

Multiple factors affect infants' attention-following.
By 9 to 12 months infants can follow adults’ gaze and points, but “where” matters. Deák, Flom, and Pick (2000) showed that 12-month-olds can follow an adult’s gaze to targets behind them (that is, out of sight). This ability was thought to emerge later because it seemingly requires infants to know that another person might see something that they can’t – that is, perspective-taking (but see below). Even so, 12- and even 18-month-olds are more likely to follow gaze to a target that is already within their visual field.

Our results also showed that infants follow: (1) points more than head-turns/gaze shifts; (2) large actions (e.g., big, sweeping head turns) more than small ones; and (3) looks/points to interesting targets more than boring ones.

A follow-up (Flom, Deák, Phill, & Pick, 2004) found that 9-month-olds show many of the same effects, but follow an adult’s gaze and pointing cues only to targets in their peripheral visual field, not to targets behind them. Thus, infants’ gaze-following ‘range’ increases with age.


One factor that matters is distraction.
We found that 1-year-olds in a setting with lots to look at (like a playroom at home or day care) almost never followed parents’ gaze, even when they saw their parent turn towards some toy on a nearby shelf (Deák, Walden, Yale, & Lewis, 2008). It’s not that infants were indifferent to the toys — they followed about half of parents’ pointing gestures, and about half of parents’ verbal requests to look. The results suggest that 1-year-olds are becoming selective about which adult actions to follow, especially in settings with lots to look at, where following gaze alone is not very rewarding.


A theory of how infants learn attention-sharing...
has been outlined in several papers (Deák & Triesch, 2006; Deák et al., 2013), and tested in several computer simulations (e.g., Jasso, Triesch, Deák, & Lewis, 2012; Triesch, Teuscher, Deák, & Carlson, 2006) as part of a long-term collaboration with Prof. Jochen Triesch of FIAS. The theory proposes a set of perceptual traits, affective dispositions, and learning processes that, given structured social input from adults, are sufficient for infants to learn attention-following behaviors.
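The flavor of such a learning account can be conveyed with a toy reinforcement-learning sketch (our own illustration under simplified assumptions, not the published model; the number of locations, learning rate, and exploration rate are arbitrary). An agent is rewarded only for seeing an interesting object, can observe which way a ‘caregiver’ is oriented, and starts with no knowledge that the cue means anything. Simple reward-driven updates are enough for it to learn a policy of looking where the caregiver looks:

```python
import random

N_LOCATIONS = 4          # possible places a toy can appear
EPISODES = 5000
ALPHA, EPSILON = 0.1, 0.1  # learning rate, exploration rate

# Q[cue][action]: learned value of looking at location `action`
# after seeing the caregiver oriented toward location `cue`.
Q = [[0.0] * N_LOCATIONS for _ in range(N_LOCATIONS)]

random.seed(0)
for _ in range(EPISODES):
    target = random.randrange(N_LOCATIONS)   # where the toy is
    cue = target                             # caregiver looks at the toy
    if random.random() < EPSILON:            # occasional random exploration
        look = random.randrange(N_LOCATIONS)
    else:                                    # otherwise act greedily
        look = max(range(N_LOCATIONS), key=lambda a: Q[cue][a])
    reward = 1.0 if look == target else 0.0  # seeing the toy is rewarding
    Q[cue][look] += ALPHA * (reward - Q[cue][look])

# The greedy policy ends up following the caregiver's orientation.
policy = [max(range(N_LOCATIONS), key=lambda a: Q[c][a])
          for c in range(N_LOCATIONS)]
print(policy)
```

In the published simulations the problem is much harder (the agent must also learn when to check the caregiver’s face, and its percepts are far noisier), but the core idea is the same kind of reward-driven association.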

Our simulations suggest that the theory can account for a wide range of behaviors seen in infants. For example, Hector Jasso’s thesis showed that the model could explain phenomena including (real) infants’ tendencies to:

  • follow gaze but stop and fixate on a distracting target near their midline (“premature capture”);
  • alternate gaze between the parent’s face and an object;
  • look at a parent to encode their emotional response to an ambiguous event (i.e., social referencing);
  • follow a parent’s head direction when younger, and use eye direction only when older.

Also, Josh Lewis, Hector, and colleagues (2010) developed a platform in which a virtual infant uses computer vision in a 3D environment (like first-person shooter games) to watch what a virtual, anthropomorphic ‘caregiver’ does. The infant does not “know” anything about the caregiver’s gaze direction or manual actions, or the meanings of its behaviors. It has only the simple perceptual and learning abilities stipulated in the theory, and a preference for high-contrast, colorful, moving sights, including toys and faces. The caregiver, however, not only looked like a person but also acted like one: it duplicated the patterns of behavior we had coded from parents in our naturalistic ethnographic study. If a ‘dumb’ virtual infant could learn to follow the virtual parent’s gaze direction from natural action patterns, using only very simple learning and perceptual routines, it would suggest that gaze-following is a fairly easy behavior to learn, requiring no high-level, specially evolved capabilities. This is, in fact, just what we found.

