‘Your brain is revealing the building blocks of everyday experience’

Ella Rhodes collates a series of contributions from those researching vision and perception.

Many take visual perception for granted…

…I remember being told as an undergraduate that ‘we just open our eyes and see’. I didn’t agree with it then, and I don’t now.

In my psychobiology and perception courses, I was taught about the anatomy of the visual system, from the retina to early cortical areas. At the retinal level, signals from the three cone photoreceptor types are transformed into cone-opponent pathways as they propagate towards the cortex. There, the visual scene appears to be projected across the cortical surfaces in retinotopic maps, and segmented into elements such as orientated lines, motion with a particular direction and speed, colour (or wavelength) and depth. All of that is somehow recombined at a later, elusive stage to form the percepts we take for granted.

Colour caught my attention, particularly the hiatus between the early cone-opponent dimensions, evident in the neurophysiology, and the later colour-opponent dimensions that are intuitively related to our experiences of colour. The two cone-opponent pathways are defined by activity in the long- vs middle-wavelength sensitive cones (colours that appear, very loosely, reddish-greenish; more accurately, pink-cyan) and the short- vs the sum of the long- and middle-wavelength sensitive cones (colours that appear, very loosely, blueish-yellowish). The colour-opponent dimensions, on the other hand, are anchored by the unique hues (red vs green, blue vs yellow) and defined by our experiences: can you see a reddish-green, or a blueish-yellow? No, but you can imagine, and set quite precisely, unique hues: a yellow that is neither orange (reddish) nor greenish, a green that is neither blueish nor yellowish… Both cone-opponent and colour-opponent dimensions are important ways to classify colours and to categorise the stages involved in understanding how colour is processed in the visual system, but how do you get from the one to the other?
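
For readers who like to tinker, the subtractive structure of those two cone-opponent channels can be sketched in a few lines of Python. This is a toy illustration only – real pathways apply unequal weights and nonlinearities, and the numbers here are made up:

```python
def cone_opponent(L, M, S):
    """Return the two cone-opponent signals from cone activations.

    A toy sketch: it shows only the subtractive structure of the
    two channels, not calibrated physiological weightings.
    """
    lm = L - M             # long- vs middle-wavelength channel ('pink-cyan')
    s_vs_lm = S - (L + M)  # short- vs (long + middle) channel ('blueish-yellowish')
    return lm, s_vs_lm

# An input biased towards long wavelengths gives a positive L-M signal
# and a negative S-(L+M) signal:
lm, s_vs_lm = cone_opponent(L=0.8, M=0.4, S=0.2)
```

The open question in the text – how the visual system gets from these subtractive channels to the experiential red/green and blue/yellow axes – is precisely what such simple formulations cannot answer.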

It was imperfect colour constancy that cemented my research interests in colour perception. How is it that my jumper looks much the same whether I am indoors or out, and, if indoors, whether I am viewing it under incandescent, fluorescent, or LED lighting, when the wavelengths reflected around each environment differ so widely? And why is it not perfect, so that the jumper, or carpet, I buy in a shop sometimes does look different at home? Perhaps the best-known example is that dress, images of which went viral in 2015 (#TheDress): why do some people see it as blue and black and others as white and gold? It all has to do with the lighting and our expectations. My interests still centre on basic issues of visual perception, but now include applied vision research that may have practical benefit. I have studied brain plasticity and visual adaptation, as well as sensory thresholds and cognitive effects on perception, including attention and arousal. Over the last 20 years, my research has focused on migraine and, more recently, photosensitivity. My principal interest remains in visual cognition, how it may become altered in any central nervous system disorder, and the implications that has for the design of our visual environments.

My research in migraine covers all three of these areas. Take the second aspect: understanding how visual cognition can be altered in neurological conditions. Some find it strange to study perception for a condition that involves head pain, but it is not strange at all. Many researchers have assessed visual processing in migraine due to the intense sensitivity to light that patients experience (photophobia), the fact that visual stimuli can trigger attacks in up to 60 per cent of patients (typically, stripes, flicker and glare), and the visual disturbances that may precede an attack (visual aura).

The classic visual aura [Image above: Aura number 10 by Simon Furze] has been called a fortification spectrum (because it resembles the boundary walls of ancient forts or castles, viewed from above) or a scintillating scotoma (because a zig-zag crescent of scintillating activity grows from central vision to the periphery over half an hour or so, leaving a blind region in its wake). Others see the world as if through running water, or see simpler stars and phosphenes, or have pockets of blindness. It’s your visual system showing you the architecture of your early visual cortex as a wave of activity moves across the cortical surface, leaving little neuronal activity behind. Depending on the symptoms, your brain is revealing orientation columns, colour centres, or motion areas – the building blocks of the everyday experiences we all take for granted.

A long-standing debate is whether the cortex is hyper- or hypo-excitable in migraine and how much of the cortex may be affected: early accounts considered the primary visual cortex alone to differ. In the absence of animal models – we can’t know if an animal has a headache – my approach is to use visual tests as non-invasive tools to test models of the pathophysiology. I have devised threshold tests involving motion, orientation, colour perception and masking and have found that one general model is unlikely to describe neural function in migraine – different circuits in various cortical areas can be affected in distinct ways. My research has shown distributed processing differences, between attacks, throughout the visual pathways, starting as early as the retina and extending to the primary visual cortex and extrastriate cortical areas such as MT/V5. Certainly, the increasingly old-fashioned general cortical hyperexcitability model is incorrect, as is any model of a general alteration in neural function. This is important as treatment should be dictated by our understanding of the underlying pathophysiology: otherwise, treatment becomes pot-luck, or borrowed from other medical conditions such as epilepsy. This explains why medication for migraine includes anticonvulsants, antidepressants and anxiolytics, none of which are effective for the majority.

I am particularly interested in visually induced migraine and in the causes of visual discomfort. Why is there aversion to certain visual patterns that can trigger migraine attacks? Why do we see the illusions that we do? If I look at a high contrast striped or dotty pattern, and know it is black and white, why do I see washes of colour, or depth, or undulating motion as if the pattern is breathing? Why are these ‘illusions’ heightened in migraine: what can that tell us about the condition? Taking the example of striped images: the perceptual distortions have been attributed to fixation instability and accommodative changes and also to the massive cortical excitation generated by the patterns, due to the organisation of the early visual cortex, which leads to a spread of excitation to neighbouring cells. If a neuron, ordinarily tuned to colour, or motion, or depth, is recruited to fire inappropriately by its overactive neighbours, you will see that aspect (colour, or motion, or depth).

Raising awareness that the visual environment can elicit discomfort or aggravate existing medical conditions is important. Glare can occur from interior lighting, daylight, or vehicle and road lighting at night. Computer screens, television and interior lighting can flicker. We often encounter high contrast, repetitive striped patterns from arrays of fluorescent lights, escalator treads, window blinds, building decorations, fashion, and art in public spaces. Many of these problematic features are relatively easy to eliminate.

- Dr Alex Shepherd is a Reader in Psychology at Birkbeck, University of London

 

Making AI more human

I’ve always been interested in engineering and understanding how machines work. I was a teenager in the 1980s, around when the first home computers came out. It was a great time to be a geek! I wasn’t interested in playing games that other people had written – I wanted to make this new thing do something useful. That’s how I got interested in computer programming and electronics. When I left school I was pretty sure that I wanted to be an electronics design engineer, and even worked for a local company designing digital control equipment during my Electronics degree. You might have thought that this was my dream job, but in the last year of my degree I discovered Computer Vision and Artificial Intelligence.

The idea that we could build machines that could see and think like us really appealed to me, but I decided that I should first understand how human visual perception worked. I went off to do a PhD in Neuroscience. I quickly realised two things: that visual perception is far more complex than most people realise, and that the brain is, at one level, an electronic system, and so can be understood using the same mathematics that describe electronic systems. One of the most striking things about the human visual system is that many of the detailed operations it undertakes can be re-created in computer software and then applied to do useful things like recognising number plates, or even faces.

My main research focus is on how we use the material properties of surfaces to tell whether a change in luminance (the amount of light entering the eye) is due to a change in illumination, such as a shadow, or a change in reflectance, such as an object boundary. This is interesting because computer vision systems have great difficulty dealing with shadows, whereas humans seem to have no problem with them. Images are made up of patches of different luminances (shades) and colours, but what we see is the product of the amount of light falling onto objects and the amount of light they reflect. The problem is that the value of any given pixel changes when either illumination or reflectance changes, and it’s very difficult to work out what the true cause is. In one recent project we studied luminance changes falling across textured surfaces and discovered that certain combinations of luminance and texture look like shadows, whereas others look like material changes. We then built a computer vision system for removing shadows from images based on the same principles.
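
The heart of that problem can be captured in a toy calculation (the numbers here are purely illustrative): luminance at the eye is the product of illumination and reflectance, so very different physical situations can produce identical pixel values.

```python
def luminance(illumination, reflectance):
    """Light reaching the eye = light falling on the surface x fraction reflected."""
    return illumination * reflectance

# A dark surface in full light...
dark_in_light = luminance(illumination=1.0, reflectance=0.3)
# ...produces the same pixel value as a lighter surface in half shadow.
light_in_shadow = luminance(illumination=0.5, reflectance=0.6)

assert dark_in_light == light_in_shadow  # the image alone cannot tell them apart
```

Disentangling the two factors requires extra information – such as the texture cues described above – which is exactly what makes shadow removal hard for machines.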

In recent years a particular kind of artificial neural network has come to the fore in Computer Vision. ‘Deep Networks’ are very powerful and can achieve performance as good as, if not better than, humans on many tasks. However, they can be tricked by images that humans have no problem with, such as so-called ‘adversarial stickers’ [see tinyurl.com/ybx8gkxg]. Intelligent tech is here to stay, so it’s important we are able to interact with these systems in a way that feels natural to us. That will require that they behave like us and fail in predictable ways. I’m currently researching how Deep Networks can be made to see more like humans do.

- Dr Andrew Schofield is a Reader in Psychology at Aston University

 

Motion processing in autistic children

Vision is perhaps not the most obvious research area within the field of autism. Yet differences in the functioning of the visual system and other sensory systems have been noted from the earliest descriptions of autism; such ‘sensory symptoms’ are now recognised as an important part of the condition.

I came to this research area with an interest in how the visual system changes with age in typically developing children. We rely on visual information for almost all of our daily activities. Seeing the world in a different way, as in the case of developmental conditions like autism, could have a huge impact on a child’s everyday life.

One of my most interesting and unexpected findings is about the way that autistic children integrate motion information. According to a popular theory, autistic individuals focus on small details in the visual scene, and have difficulties integrating things together to see the overall ‘whole’. This might mean that autistic individuals have difficulties in perceiving the overall movement of a shoal of fish, for example.

Previous studies have reported that autistic individuals have difficulties in seeing the overall motion of dots in ‘motion coherence’ tasks. In these tasks, a proportion of ‘signal’ dots move coherently in a given direction, while the remaining ‘noise’ dots move in random directions. Autistic individuals tend to need a greater proportion of signal dots to perceive the motion, suggesting difficulties with integrating motion signals.
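
The logic of such a stimulus is easy to sketch in Python. The function and parameter names below are illustrative, not those of any published experiment:

```python
import random

def dot_directions(n_dots, coherence, signal_direction=90.0, rng=None):
    """Assign one direction (in degrees) to each dot in the display.

    A proportion `coherence` of dots move in `signal_direction`
    ('signal' dots); the rest move in random directions ('noise' dots).
    """
    rng = rng or random.Random()
    n_signal = round(n_dots * coherence)
    signal = [signal_direction] * n_signal
    noise = [rng.uniform(0.0, 360.0) for _ in range(n_dots - n_signal)]
    dirs = signal + noise
    rng.shuffle(dirs)  # signal and noise dots are interleaved on screen
    return dirs

# e.g. 100 dots at 40 per cent coherence: 40 move upwards, 60 at random
dirs = dot_directions(100, coherence=0.4, rng=random.Random(0))
```

The observer’s threshold is the lowest `coherence` at which the overall direction can still be reported – the proportion that, per the studies above, tends to be higher in autistic individuals.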

However, when we asked autistic children and non-autistic children to judge the direction of signal dots from a given distribution (without any random noise dots), we found that autistic children could actually integrate more information than children without autism. Alongside this enhanced averaging ability, we suggest that autistic children have difficulties with filtering out task-irrelevant information (‘noise’), providing a new interpretation of previously reported motion processing difficulties in autism. These differences could explain feelings of sensory overload in some cases, but more work is needed to establish this link.

Three questions still puzzle me. First, there are big individual differences in motion processing abilities – not only for autistic children, but also for typically developing children and adults – and we don’t know why. Individual differences are often disregarded in visual perception research, but constitute an important area. Second, there are areas of overlap and difference between neurodevelopmental conditions. Motion processing is affected in almost all conditions, but this may be for a range of reasons. Finally, I’d like to know which stages of perceptual processing are affected in autism and other developmental conditions. It is often assumed that differences originate in early sensory processing, but later decision-making processes may also be affected. I am now using EEG and computational modelling to tease these different processes apart.

- Dr Catherine Manning is Sir Henry Wellcome Postdoctoral Fellow at the University of Oxford

 

A brain working blind

Why does the world stay still when we move our eyes around? Such deceptively simple questions remind us how much processing of visual information must be performed in the brain to allow us to perceive our world. The retina at the back of the eye has millions of cells that capture photons of light entering the eye through the pupil. As we move our eyes around, the pattern of light on the retina changes, so there is no reason for the image of the world to stay still. Rather, this stability must be a function of the brain, which takes the retinal images and uses knowledge of the position of our eyes and head to determine whether movement is due to a change in our bodies or in our environment.

While I’m interested in how the visual system guides our behaviour when it is healthy, my current research is designed to guide rehabilitation for people who have suffered a stroke that affects their visual system. A stroke occurs when the oxygen supply to a particular part of the brain is interrupted, usually due to a blockage or rupture of an artery. Different functions in our brain, such as language, movement and vision, are located in different areas, each with a specific artery supplying the blood. If there is damage to the artery supplying blood to the main visual area at the back of the brain, the primary visual cortex, the person is no longer able to see one side of the world.

My research focuses on a phenomenon known as ‘blindsight’ – the finding that although people are unable to see after their stroke, when forced to guess visual information such as the direction of moving dots, they do much better than expected by chance. This suggests that some areas of the brain can still receive visual information, even though the person is not aware of this information.

Over the past decade we have used magnetic resonance imaging to look inside the brains of people with damage to the visual system to see how this happens. We have found, firstly, that when visual stimuli are presented in the ‘blind’ field this leads to activity in an area known as ‘MT’ which processes visual motion. Secondly, we have used a technique known as diffusion imaging to show that there is a pathway to area MT that is healthy in people with blindsight, but not in those without blindsight. This suggests that if information can get from the eyes to area MT, people can detect visual information, and show blindsight. If this pathway is not healthy then no visual information can get through to the brain.

Understanding the brain areas and pathways that underlie blindsight gives us information about how we might design rehabilitation programmes to strengthen these pathways and improve visual function after stroke. More broadly, understanding the role of the brain in visual function is important because research is rapidly developing different types of gene and stem cell therapies for diseases of the retina. To ensure that these treatments are effective, the brain will need to adapt to its new input from the eyes.

- Dr Holly Bridge is Associate Professor at Nuffield Department of Clinical Neurosciences, University of Oxford

 

Learning to see deeper

I study perceptual development from the point of view of the developing brain. The eyes and brain work together to give us our perceptual experience of the world. When reading these words, catching a Frisbee, or crossing a road, it is the brain that has the difficult job of interpreting what is in front of us and guiding our actions. This problem is made more interesting when you consider that most of our visual abilities are very immature at birth – so, the brain has to ‘learn to see’ during infancy and childhood.

I first became interested in vision and development during my undergraduate degree. I was particularly interested in modules on perception and on Artificial Intelligence (AI). An AI perspective reminds us that vision is an information-processing problem and can be studied in those terms. My PhD and post-doc work focused on aspects of visual and cognitive development including spatial recall and the combination of vision and other senses to perceive and act effectively in the complex 3D world.

One problem I have been interested in is how the developing brain learns to correctly combine different sensory signals. Depth perception is a classic case in which we combine different ‘cues’ to see – including those well known to painters (perspective, shading, occlusion) as well as those to do with movement (motion parallax) and differences across the two eyes (stereopsis). I have studied how this develops, by asking children of different ages and adults to make judgments about 3D stimuli using different combinations of cues. How they perform with combined vs single cues is compared quantitatively with the predictions of different information processing models, particularly those in which information from cues is combined, and those in which it is kept separate.
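
One such model – the standard ‘optimal combination’ benchmark against which judgments are often compared – can be written in a few lines. In this sketch (with made-up numbers), two cue estimates are averaged with weights set by their reliabilities, i.e. their inverse variances:

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Reliability-weighted average of two cue estimates.

    Returns the combined estimate and its variance, which is lower
    than either cue's variance alone - the hallmark of combination.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# e.g. a stereo depth estimate of 10 cm and a perspective estimate of
# 14 cm, each with variance 4, combine to 12 cm with variance 2:
est, var = combine_cues(10.0, 4.0, 14.0, 4.0)
```

Observers whose single-cue and combined-cue precision fits this pattern are behaving as if they combine the cues; those who match only the better single cue are behaving as if the cues are kept separate.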

One striking finding has been that the computations used by young children are different from those used by adults. It takes until 10-12 years of age to effectively combine these cues, leading to big improvements in the precision of perceptual judgments. In a neuroimaging study, we were able to show that the emergence of this combination takes place at the level of early sensory areas in visual cortex. This shows that some of the fundamental computations and mechanisms supporting how we see are still being reshaped long into childhood.

More recently I have been interested in the prospects for people of different ages to ‘learn to see’ using new senses provided by technology – for example, navigating using devices that translate distance into sound or vibration. Augmenting our existing senses has crucial applications for people whose vision is impaired. It also has broader implications for extending our sensory repertoire to tap into signals that humans do not normally have, such as sensing magnetic North. Key questions in this research parallel those I have asked in my research on learning to use standard visual cues: which computations are being used to make use of the signals, at what level in the brain, and how do these change with age and experience?

- Dr Marko Nardini is Professor at the University of Durham

 

When the visual experience breaks down

The visual world around us is typically perceived as stable and coherent. However, this experience can break down, resulting in striking distortions and hallucinations. Certain visual stimuli like bright lights, striped and flickering patterns can be aversive to look at and in susceptible observers can induce phantom visual / somatic experiences.

It is thought that these aberrant visual experiences (termed ‘pattern-glare’) reflect an increased degree of underlying cortical hyperexcitability in the visual cortex. This hyperexcitability may come about because certain patterns of visual information overwhelm the inhibitory regulation of localised neural assemblies in the visual cortex. The visual phenomena experienced by the observer are then a phenomenological consequence of the visual system becoming over-stimulated. Some individuals’ visual cortices appear more susceptible than others.

Increased levels of cortical excitability are now known to be present in a number of neurological groups, including people with migraine, photosensitive epilepsy, and stroke. Additional work suggests such factors may also be implicated in autistic spectrum disorder, dyslexia, and multiple sclerosis. However, what is particularly striking is that such factors have also been observed (albeit in attenuated form) in neurotypical groups – consistent with the notion of a continuum of aberrant neural processing in visual cortex.

Over recent years my laboratory has been developing research tools like computerised tasks and validated screening measures designed to help quantify cortical hyperexcitability more accurately and determine its role across a host of hallucinatory experiences. For example, to bridge the explanatory gap between neurological studies and neurotypical groups, I recently published the first empirical investigation showing that neurotypical observers predisposed to hallucinatory out-of-body experiences and other forms of anomalous body experience do indeed display signs of elevated cortical hyperexcitability, as measured by a computerised ‘pattern-glare’ test.

Pattern-glare tasks require observers to view a series of striped discs (or gratings) on the computer screen that vary in terms of the density of the stripes. Those gratings with a medium spatial frequency can induce a host of visual distortions and illusions in observers. These illusions include the appearance of phantom colours and distortions of shape, structure and of motion. An increase in the number or the intensity of these perceptions indicates an increased degree of cortical excitability – where neurons in the visual system become overstimulated and display a failure in inhibitory regulation.
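
The stimulus itself is simple to construct. As a rough sketch (all parameters are illustrative, not those of any validated test), one row of such a black-and-white square-wave grating can be generated like this:

```python
import math

def grating_row(width_px, cycles_across):
    """One row of a vertical square-wave grating: 1.0 = white, 0.0 = black.

    `cycles_across` sets the stripe density (spatial frequency): the
    medium-density gratings are the ones that tend to induce distortions.
    """
    return [
        1.0 if math.sin(2 * math.pi * cycles_across * x / width_px) >= 0 else 0.0
        for x in range(width_px)
    ]

row = grating_row(width_px=200, cycles_across=10)  # 10 light/dark cycles
```

In an actual task, gratings of low, medium and high stripe density would be shown in turn, and the observer would report the number and intensity of illusions each one induces.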

In addition, my work has shown that the visual cortex of neurotypical observers predisposed to aberrant perceptions / hallucinations reacts more strongly to ‘excitatory’ brain stimulation and is harder to suppress with ‘inhibitory’ configurations of brain stimulation – both congruent with the notion of a more excitable cortex. This work was done using transcranial direct-current stimulation (tDCS) montages over the primary visual cortex. More recently I have been developing screening measures which appear to show that distinct types of anomalous visual experience ‘cluster’ together and may well reflect diverse neurocognitive underpinnings. This more fine-grained view of hyperexcitability may have considerable utility for scientific, clinical and translational fields.

For future research, I am excited about exploring the role of cortical hyperexcitability, and how it varies temporally, in other areas such as anxiety and stress, sleep disorders, and conditions like depersonalisation disorder. Hyperexcitability appears to have both a constant trait-based and a variant state-based component, which, as yet, remains largely unexplored. Understanding these factors and how they interact will have important implications for neuroscience, neuropsychiatry and philosophy, perhaps even illuminating why some individuals transition to disorder while others remain resilient to it.

- Dr Jason J. Braithwaite is Reader in Brain Science, Lancaster University

 

Processing faces and social situations

Faces convey rich visual information about emotions and intentions, essential for navigating our social world. Psychologists are fascinated by them. As an undergraduate psychology student, I got swept up in the hype surrounding face perception research, and undertook a PhD on face processing difficulties in adults with autism and developmental prosopagnosia.

Autism is characterised by wide-ranging social-communicative difficulties, while developmental prosopagnosia (sometimes called ‘face blindness’) is associated with a more specific problem in recognising faces and other social stimuli (e.g. bodies). Both conditions are thought to be ‘neurodevelopmental’ in origin and there is a wealth of literature on face processing in these conditions. Within the large (and messy!) literature, there is widespread debate regarding if or how face processing is impaired in autism and prosopagnosia and what can or should be done about it.

Newborn babies and even non-human animals instinctively look at face-like stimuli. Therefore, if autistic adults showed impairment in this instinctive behaviour, it might explain why they struggle to process faces and interact with others in social situations. So I conducted the first study using face-like stimuli to test basic face processing ‘instincts’ in autistic adults (Shah et al., 2013). We found that autistic adults showed intact reflexes towards face-like stimuli, suggesting that the most basic and important face processing mechanism is not impaired in autism. My ongoing research, now led by my student Emily Taylor, aims to continue to better understand intact and enhanced mental processes in autism.

As for prosopagnosia, I helped to develop the first self-report questionnaire measuring face processing difficulties. This was an exciting but equally contentious development, as there are debates over whether people have enough insight into their face processing ability for such a questionnaire to be useful. Nonetheless, such research has raised awareness of developmental prosopagnosia so that people with the condition are more involved in psychological research and have a better understanding of their condition. This research has also raised broader questions about how much insight we have into our mental processes when viewing social stimuli. My ongoing research, now led by my student Rachel Clutterbuck, aims to understand this.

Psychologists will always be fascinated by faces, and although debate can be fruitful it can also make it difficult to publish findings which contradict longstanding opinions. My hope for the future of face processing research is a more collaborative and inclusive enterprise, such that undergraduate and postgraduate students are inspired to pursue tricky but important research in the field.

- Dr Punit Shah is at the University of Bath

 

High-level questions

Our visual sense of the world is incredibly rich – somehow the patterns of light falling on our retinae ultimately give rise to the perception of a world full of people, places and things that guide our social behaviour. Not only can we recognise and identify thousands of stimuli, but we can also infer other attributes, such as the emotional state of a person or what they are attending to, and navigate our way through novel environments. Vision can also lead to profound aesthetic experiences – my love of cinema has also drawn me to understanding visual perception, and perhaps pointed to an alternative career path I never pursued.

Understanding the neural representations that support such ‘high-level vision’ has been at the core of my research since my PhD. Among many different approaches, I have conducted electrophysiological studies of neurones in rhesus macaques selectively responsive to the sight of bodies and faces, functional brain imaging studies of people viewing photographs of real world scenes, eye tracking studies of people viewing faces, and more recently computational studies of visual processing in deep neural networks. The core question I am interested in is how the brain processes the visual information coming from the retina by way of the thalamus to produce representations that are useful in guiding complex behaviour.

In the primary visual cortex (V1), neurones are responsive to simple visual properties such as the orientation and location of edges in space. In later visual areas, neurones appear to be selectively responsive to categories of visual stimuli such as faces, bodies or scenes, and functional brain imaging studies reveal localised clusters of such selectivity, suggesting specialised regions for processing different types of visual stimuli. There is even evidence for a word-selective region, which suggests that category-selectivity can be created through visual experience without a strong genetic predisposition for that specific selectivity.

But how does the brain produce category-selectivity from the low-level properties represented in V1, and what exactly is represented in these category-selective areas? Do they represent more abstract or conceptual properties of visual stimuli or are they driven by simple differences in visual properties? Recently, I have been interested in face pareidolia, the experience of seeing a face in an inanimate object, such as the image of a person on the surface of the moon. This is an interesting phenomenon because the perception of a face occurs in the absence of the typical low-level visual features, and we persist in seeing the face even when we know that it is not actually a face. Understanding how such stimuli are processed in the brain will provide insight into our subjective visual experience.

Our perception not only depends on the stimuli entering the eyes, but also on our prior visual experience and memories. So, I have also become interested in trying to establish the nature of our internal visual representations. In a recent study we asked people to remember photographs of 30 real world scenes and tested their memory by asking them to draw each of the scenes. Using online crowdsourcing we were able to score the drawings for their content and show that our visual memories contain an impressive amount of visual detail and spatial information.  

Ultimately, I hope that by understanding the processes through which visual perception arises we can gain insight into the disturbances of perception that occur in many disorders. There are so many fascinating questions about high-level vision… I’ll be busy for a long time to come!

- Dr Christopher Baker is Chief of the Unit on Learning and Plasticity in the Laboratory of Brain and Cognition at the National Institute of Mental Health
