A window to the soul and psyche?

Szonya Durant with a primer on eye tracking.

12 October 2016

Back in 1960s Moscow, the pioneering Alfred Yarbus co-opted colleagues and students into wearing uncomfortable contact-lens-like suction caps to record their eye movements. Yarbus (1967) described phenomena that have been consistently replicated with successively more sophisticated set-ups. You may be familiar with the extremely stereotyped patterns of eye movements when scanning faces, focusing our attention mostly on the mouth and eyes, where socially relevant information is held. The other most famous example from Yarbus’s work is the marked difference observed in scanning a painting under different instructions. This revealed one of the most promising aspects of eye-movement research – our eye movements are driven by the information we are trying to retrieve. We are able to use information from the edge of our (peripheral) vision to determine where it is most useful to direct our attention next and plan our eye movements in an efficient, focused way. So, eye movements can tell us what visual information is needed to solve a given task; but the information searched for can also tell us what the task was. Eye tracking is a putative ‘mind-reading’ tool.

Today, eye-tracking and other desktop and wearable technologies increasingly promise to monitor not only our every move, but also our every thought. The media show colourful ‘heatmaps’ produced by market research companies that claim to reveal which parts of an advert captured our attention. A vast area of research lies behind the production and interpretation of such data, with implications across wide-ranging areas of psychology.

Why do we move our eyes?

One of the main reasons we move our eyes is simply to keep images steady on the back of the eye. Consider footage as a person with a video camera walks down a corridor: it appears jerky, yet the world seems stable when we ourselves are walking. It is thought that this need for stabilisation is why eye movements first evolved (Land, 2011). A pigeon makes bobbing head movements to counteract its body motion, but thankfully we make small stabilising eye movements instead.

However, there is another reason we need to move our eyes, and it is this one that makes eye tracking useful for observing what we are paying attention to. Only light falling on the fovea, a small central area of the retina where visual receptors are most densely packed, is seen in fine detail. This is why, if we want to see something clearly, we need to look at it directly, not simply face towards it. We do not have a uniformly high-resolution image on the back of the eye; instead we constantly move our eyes around, sequentially imaging small parts of the scene and piecing them together. This is why we keep our eyes in ‘smooth pursuit’ of a moving object of interest. It is also why we make large sweeping movements called ‘saccades’ across the scene, stopping our eyes at informative points to take in information. It is typically this ‘scan-fixate’ behaviour that researchers use to test theories of eye movements and attention, or that companies use to infer workload or attention to design (Holmqvist et al., 2011).

What we can measure and how

Screen-based (or remote) eye tracking maps the rotations of our eyeballs onto 2D trajectories on the screen, showing where the eyes were directed at each moment. From these trajectories we can measure the tracking of objects, record the order in which things were looked at, or extract when a saccade was initiated, how fast it was and what shape it took over space and time. The counterpart to extracting saccades is extracting fixations: when they happened, how long each one lasted, how many fell within a certain area of the scene, and the total duration of fixations in that area (these latter two measures are what the colourful heatmaps in the media represent). Usually, to summarise what captured our attention and where, we need to define Areas of Interest (AOIs); for example, the eyes and mouth versus the rest of a face (Holmqvist et al., 2011).
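For readers who handle raw gaze data, the step from a stream of (time, x, y) samples to a list of fixations can be sketched in a few lines. The fragment below uses a simple dispersion-threshold approach, one of several standard algorithms described by Holmqvist et al. (2011); the threshold values and the rectangular AOI test are illustrative assumptions, not parameters from any particular study.

    # Minimal dispersion-threshold fixation detection sketch, in Python.
    # Threshold values here are illustrative assumptions only.

    def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
        """samples: sequence of (t, x, y) gaze samples (seconds, pixels).
        Returns fixations as (start_t, end_t, mean_x, mean_y)."""
        fixations, window = [], []

        def close(win):
            # Record the window as a fixation if it lasted long enough.
            if len(win) > 1 and win[-1][0] - win[0][0] >= min_duration:
                fixations.append((win[0][0], win[-1][0],
                                  sum(p[1] for p in win) / len(win),
                                  sum(p[2] for p in win) / len(win)))

        for t, x, y in samples:
            window.append((t, x, y))
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            # Dispersion = horizontal extent + vertical extent of the window.
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                close(window[:-1])     # the eye has moved on: close the window
                window = [window[-1]]  # start a fresh window at the new sample
        close(window)
        return fixations

    def in_aoi(fixation, aoi):
        """aoi: (left, top, right, bottom) rectangle in pixels."""
        _, _, x, y = fixation
        left, top, right, bottom = aoi
        return left <= x <= right and top <= y <= bottom

Counting the fixations that fall within each AOI, and summing their durations, gives exactly the two quantities that the colourful heatmaps visualise.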

On top of these measures, most eye trackers can also record pupil size. Pupil diameter changes not only with light levels but also with internal states associated with different levels and types of arousal, and so can provide another measure of emotional, attentional and cognitive processes (Laeng et al., 2012).
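Because light level and arousal both move the pupil, analyses usually compare pupil diameter against a pre-stimulus baseline recorded under the same lighting. A minimal sketch of subtractive baseline correction follows, assuming a fixed sampling rate and that the first samples of each trial precede stimulus onset (both assumptions for illustration).

    # Sketch of subtractive baseline correction for a pupil trace.
    # Assumes fixed-rate sampling and that the first `baseline_n`
    # samples were recorded before stimulus onset.

    def baseline_correct(pupil_trace, baseline_n=50):
        baseline = sum(pupil_trace[:baseline_n]) / baseline_n
        # Express each sample as change from the pre-stimulus baseline,
        # so slow drifts and individual differences in pupil size drop out.
        return [d - baseline for d in pupil_trace]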

But eye-tracking research is not just about participants sitting in front of a screen. Many interesting psychological processes occur when we are mobile, interacting with the scene in front of us. The seminal work of Mike Land and colleagues (e.g. Land et al., 1999) opened the gates to mobile eye tracking. Head-mounted eye trackers paired with forward-pointing scene cameras bring the method into more realistic, real-life environments. Here our AOIs become objects whose appearance changes with our viewpoint, and our head movements and stabilising eye movements begin to play a significant role in the measurements we are making.

What have we learnt?

Where our eyes are looking is normally where we are paying attention, and as we prepare to shift our attention we prepare to shift our eyes. Eye movements tell us both about exogenous attention (i.e. what objects capture our attention) and endogenous attention (how we guide our attentional window according to internal motivations) (Carrasco, 2011). Understanding the cognitive processes behind these internal and external biases is crucial to many areas of psychology: how we build up our visual perception of a scene, how we plan our movements, how we encode semantic information, what social cues influence our behaviour and what emotional biases guide us in our everyday decisions.

We know that the eyes are partly drawn to objects that are salient in terms of low-level visual properties: high-contrast parts of the scene, distinctive colours, or moving objects (Parkhurst & Niebur, 2003). Often, when first presented with a scene, these are the areas we fixate and that drive the initial scanning movements of short fixations – seemingly a first-pass sweep to decide where to look further. Before any semantic content is known, these are the most informative areas of the scene, where objects of interest are likely to occur (Itti & Koch, 2001). Additionally, there is a bias towards fixating the centre of a screen, which is important to take into account when designing stimuli (Tatler, 2007).
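The low-level salience idea can be caricatured computationally: score each image region by how much its local contrast stands out from the scene average. The toy sketch below is only a stand-in for the full Itti and Koch (2001) model, which combines colour, intensity and orientation features across multiple spatial scales; the patch size is an arbitrary assumption.

    # Toy 'salience' map: local contrast as a crude stand-in for the
    # multi-feature, multi-scale model of Itti & Koch (2001).
    import numpy as np

    def local_contrast_salience(image, patch=16):
        """image: 2D array of grey levels. Returns one score per patch,
        measuring how far its contrast sits above the scene average."""
        rows, cols = image.shape[0] // patch, image.shape[1] // patch
        contrast = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                block = image[r * patch:(r + 1) * patch,
                              c * patch:(c + 1) * patch]
                contrast[r, c] = block.std()   # local contrast
        return contrast - contrast.mean()      # stand-out relative to scene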

However, in free-viewing conditions without a clear task, after these initial scanning movements the eyes are drawn to objects that are salient given the semantic context of the scene; for example, faces that hold important social information (Risko et al., 2012). Unexpected stimuli hold information that could be useful, and this semantic information drives our later eye movements, which typically involve longer fixations – although there is some debate about whether our earliest fixations can also be driven by this more complex semantic context (Henderson & Hollingworth, 1999). The increase in fixation durations over viewing time implies that we move on from scanning the scene to analysing relevant parts in more detail (Antes, 1974).

Given a task, the eyes are drawn to objects relevant to that task, and evidence has grown for how we process static scenes and interact with the world (Castelhano et al., 2009; Hayhoe & Ballard, 2005). Eye movements suggest which objects were attended to and in what order, and the length of fixations reveals how much processing each object needs. Hence eye movements are used to gain insight into cognitive processes such as reading, learning, memory and decision making. In reading, gaze-contingent paradigms have been used to change how much information is shown to the eye depending on its position. This has revealed how far out in peripheral vision words need to be presented for us to read fluently (Reingold & Stampe, 2000): even as we read the current words, the next set of letters is beginning to move into our attentional focus. Our eye movements also change with the difficulty of the text (Rayner et al., 2006), further showing how they can be connected with cognitive load – important for determining usability. This difference in looking times comes into play in learning and memory as well: the amount we look at items reflects our ability to remember them. Interestingly, when trying to recognise a scene, our eyes move in the same way as when we first observed it (Peterson & Beck, 2011). We can visualise some aspects of a decision-making process using eye movements as we flick from one choice to another, and we can see that our eyes land on our final choice just before we make the call (Horstmann et al., 2009).
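The logic of the gaze-contingent ‘moving window’ used in reading research can be conveyed in a few lines. In a real experiment the masking is done by dedicated tracker and display software on every screen refresh; in the sketch below, get_gaze_position and draw_text are hypothetical stand-ins for that machinery, and the window size is an arbitrary assumption.

    # Logic of a gaze-contingent 'moving window' for reading research.
    # `get_gaze_position` and `draw_text` are hypothetical stand-ins
    # for a real tracker/display API.

    def moving_window(text, gaze_char_index, window_chars=7):
        """Mask every character outside a window around the fixated one."""
        lo = max(0, gaze_char_index - window_chars)
        hi = min(len(text), gaze_char_index + window_chars + 1)
        return ''.join(ch if lo <= i < hi else 'x'
                       for i, ch in enumerate(text))

    # On each display refresh one would do, in outline:
    #   index = character_under(get_gaze_position())
    #   draw_text(moving_window(sentence, index))

Shrinking the window until reading slows down is what reveals how far into peripheral vision useful letter information extends.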

Eye movements can also give insight into the sequence of motor planning and how we use visual information to guide our own movements (Hayhoe & Ballard, 2005). We can see how eye positions map onto motor learning: at first the eyes scan in detail for visual feedback, while later they act more like visual signposts to which we attach our movements (Sailer et al., 2005). We have also discovered that experts use their eyes differently: in sports science it has been found that it is not simply motor performance that differs with expertise, but also how we search out information and what information we use to guide our movements (North et al., 2009). These expertise differences extend from sport to other professional domains where efficient processing of visual information is crucial, such as aviation and the arts (Gegenfurtner et al., 2011).

Differences in eye-movement patterns can also be found between mental health conditions, and this has been a rich area of research. In clinical psychology, for example, the distinction between orienting and maintenance of attention can be mapped onto eye-tracking terms: anxiety disorders involve biases in what draws our attention, whilst depressive disorders involve maintained attention on dysphoric stimuli (Armstrong & Olatunji, 2012). The difference in the way people on the autism spectrum move their eyes when observing social scenes suggests that they may not simply lack social-processing ability; the difference may lie in how they allocate attention (Boraston & Blakemore, 2007). Eye tracking is also being developed as a diagnostic tool; it is particularly useful in schizophrenia, where characteristic eye-movement patterns are often associated with patient groups (Levy et al., 1994).

Eye movements in free-viewing conditions may reveal what our everyday ‘task’ is (i.e. what needs to be detected for survival) or what our inbuilt ‘adaptive’ biases are. For example, (emotional) faces draw attention (Judd et al., 2009; Mogg et al., 2007); others’ gaze cues our own eye movements (Freeth et al., 2010); we seek out stimuli we find beautiful or otherwise prefer (Holmes & Zanker, 2012); and objects we fear, or hold some other pre-disposed attentional bias towards, are fixated more (Rinck & Becker, 2006). The idea that from the outset of perception the odds may be stacked against us has led to work in diverse areas, such as the report that obese people’s attention is triggered more by high-sugar foods, revealing differences in reward-system function (Castellanos et al., 2009). In this way the free-viewing paradigm can give us insight into what our everyday tasks are, but research into such in-built biases and individual differences, with no immediate clearly defined task, is in its early days.

Current applications

The links described above between eye movements and attention, together with the increasing ease of use of eye trackers, have led to a wide range of applications beyond the basic questions of psychology (Duchowski, 2002). In driving research, eye tracking has told us about the effects of processing demand and how the eye movements used to search for hazards change with expertise (Crundall & Underwood, 1998). It has become a useful tool in the market research and user-experience sectors, for example in maximising an advert’s effectiveness in directing where people look and what information they take in (Wedel & Pieters, 2008), or in using fixations as an additional measure of screen interaction in mobile phone use (Al-Showarah et al., 2014). Eye tracking has been useful for measuring task complexity and workload, for example in aircraft cockpits; and the method has improved the accessibility of computer use by providing alternative modes of interaction (Majaranta & Bulling, 2014). Finally, clinical use in diagnosing ocular and vestibular disorders is well established, although reliability for diagnosing schizophrenia or Parkinson’s still needs further development (Bedell & Stevenson, 2013).

The future

Eye tracking may become integral to further new technologies. It is useful for virtual reality: if we know exactly where someone is looking, only that part of the image needs to be rendered in high resolution, limiting the detail required elsewhere and saving computational power (Bektaş et al., 2015).
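The saving can be sketched directly: composite a cheap low-resolution render with full-resolution pixels only near the gaze point. In a real system the periphery would simply be rendered at lower detail rather than composited afterwards; the fragment below, with its arbitrary radius, just shows the gaze-dependent selection.

    # Sketch of foveation: keep full resolution only near the gaze point.
    # Assumes `high_res` and `low_res` are same-sized 2D arrays, with
    # `low_res` standing in for a cheap, coarsely rendered image.
    import numpy as np

    def foveate(high_res, low_res, gaze_xy, radius=100):
        h, w = high_res.shape
        ys, xs = np.mgrid[0:h, 0:w]
        gx, gy = gaze_xy
        # Use the expensive high-res pixels only within `radius` of gaze.
        near_gaze = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
        return np.where(near_gaze, high_res, low_res)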

Mobile eye tracking is becoming less time-consuming, and object tracking has improved, making it easier to automatically identify which object someone is looking at on a frame-by-frame basis (De Beugher et al., 2013). This will give insight into attention allocation in natural everyday activities and social interactions. It may lead to eye trackers becoming more widely used in social psychology and, as battery life improves, to studying patterns over a whole day.
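Once an object detector has labelled each frame of the scene-camera video, assigning gaze to objects reduces to testing which detected bounding box contains the gaze point. The sketch below assumes such per-frame detections are already available (the detector itself, as in De Beugher et al., 2013, is the hard part); detect_objects is a hypothetical stand-in.

    # Sketch: assign each frame's gaze point to a detected object.
    # Assumes a detector has produced, per frame, a list of boxes as
    # (label, left, top, right, bottom) in scene-camera coordinates.

    def label_gaze(gaze_xy, boxes):
        gx, gy = gaze_xy
        for label, left, top, right, bottom in boxes:
            if left <= gx <= right and top <= gy <= bottom:
                return label
        return None  # gaze fell on the background

    # Per-frame usage, in outline (detect_objects is hypothetical):
    #   labels = [label_gaze(g, detect_objects(f))
    #             for g, f in zip(gaze_points, frames)]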

Eye trackers can now be easily attached to any computer and are falling in cost, making large-scale eye-tracking studies more feasible and allowing their use in real-world settings and in measuring individual differences. Software-based solutions for eye tracking on mobile devices are in the early stages of development, although this is proving a difficult problem to solve (Bleicher, 2013).

There are other limitations. We are far from being able to predict where someone will look at any given moment, even on a screen. Existing models struggle to incorporate all the competing demands on eye movements, just as it is hard to disentangle all the influences on our thought processes. It follows that we are still some way from ‘reading someone’s mind’ from their eye-tracking patterns.

A better model of the mechanisms driving eye movements requires a better model of cognition; so, as we build up a full picture of what causes eye movements and what can be inferred from them, we are in turn building a picture of attention and cognition itself.

However, careful design using well-controlled experimental variables has yielded useful confirmation of models based on eye movements – teasing apart processes such as orienting versus maintenance (Armstrong & Olatunji, 2012) and tracking the time course of decision making. This has allowed researchers to operationalise concepts such as ‘intuition versus deliberation’ and to evaluate their contributions to decision making (Horstmann et al., 2009).

Measuring eye movements allows psychological research to move away from reaction times and observe unconscious behaviour in more naturalistic tasks. One of the strengths of eye tracking lies in the fact that behaviour can be observed and quantified in the absence of a clearly defined task, providing a measure of unconscious processing. There are many exciting questions that can exploit this strength. I firmly believe that eye tracking will become an ever more useful tool, as progress is made in associating patterns of eye movements with thought processes, such as aspects of processing information, decision making and forming intentions.

Meet the author

‘I want to know how the brain pieces together our visual world – so I need to know exactly what people are looking at. Eye tracking allowed me to do that, moving out of a dark lab and into the real world. I have also loved organising and teaching on the annual Real World Eye Tracking course at Royal Holloway. For scientists the appeal of measuring our simplest actions is clear: a bridge between physiology and behaviour. However, eye tracking is not just spreading into the social sciences, but is about to become part of everyday life.’

- Szonya Durant is at Royal Holloway, University of London

References

Al-Showarah, S., Naseer, A.J. & Sellahewa, H. (2014). Effects of user age on smartphone and tablet use, measured with an eye-tracker via fixation duration, scan-path duration, and saccades proportion. International Conference on Universal Access in Human–Computer Interaction (pp.3–14). Springer International.
Antes, J.R. (1974). The time course of picture viewing. Journal of Experimental Psychology, 103, 62–70.
Armstrong, T. & Olatunji, B.O. (2012). Eye tracking of attention in the affective disorders. Clinical Psychology Review, 32, 704–723.
Bedell, H.E. & Stevenson, S.B. (2013). Eye movement testing in clinical examination. Vision Research, 90, 32–37.
Bektaş, K. et al. (2015, May). A testbed combining visual perception models for geographic gaze contingent displays. Paper presented at the Eurographics Conference on Visualization, Cagliari, Italy.
Bleicher, A. (2013). Rise of the eye phones. IEEE Spectrum, 50(5), 9–10.
Boraston, Z. & Blakemore, S-J. (2007). The application of eye-tracking technology in the study of autism. Journal of Physiology, 581, 893–898.
Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research, 51, 1484–1525.
Castelhano, M.S., Mack, M.L. & Henderson, J.M. (2009). Viewing task influences eye movement control during active scene perception. Journal of Vision, 9(3):6, 1–15.
Castellanos, E.H. et al. (2009). Obese adults have visual attention bias for food cue images. International Journal of Obesity, 33, 1063–1073.
Crundall, D.E. & Underwood, G. (1998). Effects of experience and processing demands on visual information acquisition in drivers. Ergonomics, 41, 448–458.
De Beugher, S., Brône, G. & Goedemé, T. (2013). Object recognition and person detection for mobile eye-tracking research. Proceedings of the First International Workshop on Solutions for Automatic Gaze Data Analysis 2013 (SAGA 2013) (pp.24–26). CITEC.
Duchowski, A.T. (2002). A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers, 34, 455–470.
Freeth, M. et al. (2010). Do gaze cues in complex scenes capture and direct the attention of high functioning adolescents with ASD? Journal of Autism and Developmental Disorders, 40, 534–547.
Gegenfurtner, A., Lehtinen, E. & Säljö, R. (2011). Expertise differences in the comprehension of visualizations. Educational Psychology Review, 23, 523–552.
Hayhoe, M. & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9, 188–194.
Henderson, J.M. & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50, 243–271.
Holmes, T. & Zanker, J.M. (2012). Using an oculomotor signature as an indicator of aesthetic preference. i-Perception, 3, 426–439.
Holmqvist, K. et al. (2011). Eye tracking: A comprehensive guide to methods and measures. Oxford: OUP.
Horstmann, N., Ahlgrimm, A. & Glöckner, A. (2009). How distinct are intuition and deliberation? An eye-tracking analysis of instruction-induced decision modes. Max Planck Institute for Research on Collective Goods Preprint No. 2009/10.
Itti, L. & Koch, C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2, 194–203.
Judd, T. et al. (2009). Learning to predict where humans look. In 2009 IEEE 12th International Conference on Computer Vision (pp.2106–2113). IEEE.
Laeng, B., Sirois, S. & Gredebäck, G. (2012). Pupillometry: A window to the preconscious? Perspectives on Psychological Science, 7, 18–27.
Land, M.F. (2011). Oculomotor behaviour in vertebrates and invertebrates. In S.P. Liversedge et al. (Eds.) The Oxford Handbook of Eye Movements (pp.3–15). Oxford: OUP.
Land, M.F., Mennie, N. & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 1311–1328.
Levy, D.L. et al. (1994). Eye tracking and schizophrenia: A selective review. Schizophrenia Bulletin, 20(1), 47–62.
Majaranta, P. & Bulling, A. (2014). Eye tracking and eye-based human–computer interaction. In S. Fairclough & K. Gilleade (Eds.) Advances in physiological computing (pp.39–65). London: Springer.
Mogg, K., Garner, M. & Bradley, B.P. (2007). Anxiety and orienting of gaze to angry and fearful faces. Biological Psychology, 76, 163–169.
North, J.S. et al. (2009). Perceiving patterns in dynamic action sequences: Investigating the processes underpinning stimulus recognition and anticipation skill. Applied Cognitive Psychology, 23, 878–894.
Parkhurst, D.J. & Niebur, E. (2003). Scene content selected by active vision. Spatial Vision, 16, 125–154.
Peterson, M.S. & Beck, M.R. (2011). Eye movements and memory. In S.P. Liversedge et al. (Eds.) The Oxford Handbook of Eye Movements (pp.579–592). Oxford: OUP.
Rayner, K. et al. (2006). Eye movements as reflections of comprehension processes in reading. Scientific Studies of Reading, 10, 241–255.
Reingold, E.M. & Stampe, D.M. (2000). Saccadic inhibition and gaze contingent research paradigms. In A. Kennedy et al. (Eds.) Reading as a perceptual process (pp.119–145). Amsterdam: Elsevier.
Rinck, M. & Becker, E.S. (2006). Spider fearful individuals attend to threat, then quickly avoid it: Evidence from eye movements. Journal of Abnormal Psychology, 115, 231.
Risko, E.F. et al. (2012). Social attention with real versus reel stimuli: Toward an empirical approach to concerns about ecological validity. Frontiers in Human Neuroscience. doi:10.3389/fnhum.2012.00143.
Sailer, U., Flanagan, J.R. & Johansson, R.S. (2005). Eye–hand coordination during learning of a novel visuomotor task. Journal of Neuroscience, 25, 8833–8842.
Tatler, B.W. (2007). The central fixation bias in scene viewing. Journal of Vision. doi:10.1167/7.14.4
Wedel, M. & Pieters, R. (2008). A review of eye-tracking research in marketing. Review of Marketing Research, 4, 123–147.
Yarbus, A.L. (1967). Eye movements and vision. New York: Plenum Press.