At the 2000 London Conference Glyn Humphreys gave his Presidents’ Award Lecture on the cognitive neuroscience of action selection.
In everyday life we carry out many hundreds of visually guided actions on the objects that surround us. We may reach and grasp a kettle and pour boiling water from it into a teapot; we may pour from a jug of milk and from a teapot into a cup; and we may raise the cup to our lips to drink. Although each of these actions seems simple enough, the processes involved are complex. We need to use visual information to guide the reach-and-grasp actions. Having grasped an object, we must then effect the appropriate category of action – we need to pour from the jug but drink from the cup.

Over the past 20 years a considerable amount of research has been carried out into understanding the first part of the process, how visual information is used in reach-and-grasp actions (see Milner & Goodale, 1995, for one review). However, much less work has been conducted into the factors that determine the categories of action we perform with objects, once the objects have been grasped.

In this article I will discuss research within my laboratory that has focused on this last question – how categories of action are selected from visually presented objects. The research uses converging evidence from experimental psychology, cognitive neuropsychology and functional brain imaging to reveal both the underlying cognitive architecture and the neural substrate of action selection. It then uses computational modelling to provide an explicit account of both normal performance and its breakdown following brain damage. The work suggests that, in addition to being based on contextual and associative knowledge about objects, action selection is influenced by ‘affordances’ derived from the visual properties of objects.