Question fest

Psychology undergraduate Madeleine Pownall (University of Lincoln) reports from BRAINFest in Cambridge.

Can virtual reality technology be used in dementia treatments? How can we communicate with those who are unconscious? Why do some children take longer to acquire language than others? How do we predict our future? More than 140 neuroscientists joined forces at Cambridge BRAINFest to both ask and answer some of these questions. In the interactive thematic showcase of the festival, Cambridge’s Corn Exchange played host to 29 exhibits, all exploring how neuroscience has changed, and can and will change, the way we do things.

Making sense of our (virtual) reality

I am standing in a park, watching the trees slowly turn from bright green to a grey, lifeless hue. The people around me start walking more slowly, and everything in the scene becomes a little less exciting. I take off the virtual reality headset and learn from the researchers that I have just experienced two minutes of living with anhedonia, a condition which affects an individual’s ability to experience pleasure. Characteristic of, though not entirely synonymous with, depression, it is described by one contributor to an accompanying case study video as ‘that feeling that things lose their significance’. The pleasure pathways of the brain are altered, affecting our sense of reward and motivation – as Anhedonia Support puts it, ‘emotional flatlining’.

Walking around the various exhibits, I began to see a theme emerging. Virtual reality is a technological breakthrough, and several of the research groups have adopted it in an attempt to treat, diagnose and relate to a range of mental health conditions. The technology allows wearers to experience a highly realistic three-dimensional world within the comfort and security of a supervised therapy session. A recent meta-analysis has also concluded that VR therapy leads to prominent behavioural changes in real-world settings outside the therapy room. The implications for therapy are vast, but its uses do not stop there. A 2008 paper by Daniel Freeman explained how VR can be used to help diminish the stigma around mental illness, particularly for those who hear voices. Freeman writes that ‘understanding schizophrenia requires consideration of patients' interactions in the social world’.

Arguably, we can never really understand something unless we have experienced it ourselves. Fittingly, one of the most effective VR experiences I took part in at BRAINFest was a two-minute simulation of living with psychosis. I found myself in a courtyard with voices telling me to ‘get out’, growing louder and more panicked as the experience went on. Surreal and highly disorientating, it succeeded in giving me a realistic taster of a typical case of psychosis. ‘A Walk through Dementia’, funded by Alzheimer’s Research UK, was similarly striking. The purpose of this Android-compatible VR app is to offer ‘a glimpse into how the condition can impact a person’s everyday life’. You find yourself being accompanied on a walk along a street, and before long you are lost, despite initially being sure you knew the way back. The app subtly alters the faces in the scene, making it difficult for the user to work out who anybody is. It was a powerful experience, and one which I hope inspires others to relate to those living with dementia.

Questions of consciousness

‘Are you a lark or are you an owl?’ asked Dr Michael Hastings, from the MRC Laboratory of Molecular Biology. We all know about the existence of differing phenotypes and genotypes, but Dr Hastings explains a characteristic which dictates how we all function differently in relation to time: our chronotype.

Larks (like myself) are early to bed and early to rise, and have a very different chronotype from owls. Our sleep-wake cycles are unique, despite being partially synchronised with light. The suprachiasmatic nucleus (SCN), a region of the hypothalamus responsible for controlling our circadian clock, has – perhaps unsurprisingly – connections to the retina. Despite this connection, however, light is not strictly necessary to keep the body clock ticking. If the SCN is taken out of the brain and put into a culture dish it will continue to, metaphorically, ‘tick in isolation’. Mouse brains are used to demonstrate this: if an electrode is inserted into the isolated SCN of a mouse to record electrical activity, the tissue maintains a 24-hour cycle, visibly ‘turning on’ and ‘turning off’ just as it would as part of a body. The same principle applied to Michel Siffre in his Texas cave experiment – he found that his body kept a roughly 24-hour sleep cycle despite living underground with no cues from light or time.

So we know how the body clock works, but why does it operate like this? To answer this, fruit flies were genetically engineered to have either a 20- or 28-hour sleep cycle (or, in some cases, no cycle at all). After studying the genes of these flies, researchers concluded that a single body-clock gene was responsible: the period gene. Tweak that, and everything else follows.

As psychologists, we must ask how the body clock affects our behaviour. Dr Hastings demonstrates this using the results of a week-long actogram, which shows how much time a person spends asleep or awake. When the subject went to bed late, the graph generally shows that they rose late too. This creates a delay in the 24-hour cycle or, in Dr Hastings’s words, ‘social jetlag’. This also helps us understand real jetlag. Perhaps the most artificial change to our body clock, jetlag is a perfect example of the body attempting to maintain a circadian rhythm even when our SCN and retinal information are completely mismatched. Taking a more societal approach, Dr Hastings also explains that lifestyle patterns such as working night shifts make us more prone to accidents: we are artificially working against our body clock, going against what our SCN is ‘molecularly geared up to predict’. So, the take-home message is this: sleep when you’re tired and wake when you’re not. Seems simple enough.

After this explanation of our sleep-wake cycles, I was particularly interested in the consciousness section of the exhibition. Cambridge researchers are taking dream research to new heights, a step forward from the Freudian psychodynamic dream analysis that governed psychology for so long. Technological advances in biology, maths, chemistry and engineering allow researchers to ask more conceptual questions about dream content. The analysis of what we think about when we sleep is – according to the research group – moving away from theoretical, discursive, interpretative accounts and is now grounded in advanced algorithms. My enthusiasm was soon noticed, and it wasn’t long before I was their newest recruited participant. The study involved a questionnaire on how I feel when I’m dreaming: am I in control? Am I an active participant or an observer? Do I know I’m dreaming? According to my results, I’m a ‘bad dreamer’: I show little insight and little capacity to control my environment – the two prerequisites of being a ‘good dreamer’, according to the team. Unusually, no questions in the study asked me anything about the content of my dreams. The team’s analysis is done entirely in terms of a dream’s ‘structure and syntax’, rather than more subjective traditional measures. I asked excitedly whether this means we are now able to decode thoughts. Short answer – no, but the team is learning how to decode thought content in terms of structure.

Going hand in hand with this research, the consciousness and cognition imaging group – led by Dr Emmanuel Stamatakis – aims to answer questions about the ‘complex interactions between brain regions’ related to consciousness. The team uses anaesthetic drugs during MRI scanning to investigate how consciousness (or the lack of it) affects our brain. After speaking to the group I learn that states of consciousness lie on a continuum, rather than in two distinct categories of being ‘in’ or ‘out’. Interestingly, the ability to communicate at all levels of that continuum is currently being investigated. The group has been developing communication strategies with those who have disorders of consciousness (in an extreme case, people currently in a coma). This is done by giving participants carefully measured doses of sedative drugs and testing their ability to react to external stimuli. The researchers hope to train participants to signal with their eyes whilst sedated. So, I enquire, how far can this research take us? It would seem that the possibilities are endless. Communicating with the unconscious allows us to make more informed and ethical medical decisions, teaches us more about brain activity, and opens doors to more in-depth dream research.

The language of learning

Zoe Kourtzi, Professor of Experimental Psychology, uses neuroscience to explain how we predict behaviour. ‘Try to predict what I’m going to say… next’, she starts.

The ability to judge, plan and predict is a key survival mechanism in humans. We use cues such as facial expression and body language to predict whether a stranger is friend or foe, in the same way that we use previous knowledge of our environment to predict whether we need an umbrella when we leave the house in the morning. Our predictions, if grounded in considered reasoning and past information, are generally reliable (although admittedly we sometimes get them wrong… cue Theresa May-related political satire).

The concept seems relatively straightforward: we use our history to predict our future. This is rooted in the age-old psychological concept of behavioural conditioning, but this time there’s a twist: neuroscience has joined the discussion. Professor Kourtzi explains that a link between our hippocampus and visual cortex allows us to use memories to make sense of what we see. Interestingly, this link does not only apply to explicit predictions. When people are put into an MRI scanner and shown a static picture of an athlete on the start line of a race, the part of the brain associated with motion lights up. We expect motion, and this expectation is translated in the brain: we neurologically ‘see’ it.

We have the capacity to infer stories from static pictures, using knowledge we have about the world. However, the strength of this skill differs from person to person. Taking the learning of a new language as an example, Professor Kourtzi explains that although we all have the capacity for new learning, our strategies differ. Her research involves creating an artificial ‘alien’ language and asking participants to predict which ‘letters’ come next in a sequence. The symbol selection relies on judgements of probability, a hallmark of learning. Those who attempt to memorise the symbol order perform significantly worse than those who focus on the rules and structure of the new language.

The Centre for Neuroscience in Education’s Baby Lab is also attempting to discover how language is learnt. According to the researchers, ‘language lies at the heart of our experience’. The team are using EEG, eye-tracking and motion-capture cameras to assess the ‘brain wave rhythms’ which align with speech development. The prediction is that infants who show greater rhythmic ability as babies will find it easier to learn language as they grow older. The link between the musicality of language and its acquisition has previously yielded interesting findings, and this large-scale study hopes to further demystify children’s acquisition of language. When children have delays in language acquisition they may experience ‘severe developmental costs’, say the researchers.

With this in mind, parents of a child with a diagnosis of intellectual disability often ask: ‘What does this mean for my child?’ The IMAGINE ID team are on hand to answer this. IMAGINE ID, standing for Intellectual Disability and Mental Health: Assessing Genomic Impact on Neurodevelopment, aims to understand the impact of genetic changes on children’s behaviour. The study collects genomic information from children who have had a molecular investigation into the cause of their intellectual disability. Pathogenic copy number variants (CNVs) and single nucleotide variants (SNVs) are considered to be at the root of these conditions, and children carrying them are eligible to participate. A series of online questionnaires (including the Developmental and Wellbeing Assessment) are filled in by the parents, followed by online puzzles for the children, including the ‘draw a person’ test. Currently at 1,500 participants, the team hope to eventually recruit 5,000 children with genetic conditions.

Questioning the future of neuroscience

By 2050, neurodegenerative disorders are predicted to be the second most common cause of death. To tackle this, £250 million has been invested in the Dementia Research Institute, headed up by Professor Giovanna Mallucci.

Professor Mallucci explains the basic neurology behind dementia: if we lose 60–70 per cent of our brain synapses we experience memory loss; lose an additional 10 per cent and we start to lose brain cells. Dead brain cells cause dementia. The institute’s current research focuses on understanding the toxic mechanisms that cause this brain cell death, and on targeting them to repair cells. She explains that proteins in the brain act like deckchairs: correctly folded, they are useful; misfolded, they completely lose their function. In a 2014 paper, Professor Mallucci explained that misfolded proteins have been found in Alzheimer’s, prion and Parkinson’s diseases, and that the associated cellular response has been found to be ‘activated in mouse models of neurodegeneration and in various in vitro models’.

So far it appears that we understand the neurological mechanisms well, but we must now ask: how do these findings translate to real-world dementia treatments? Professor Mallucci explains that in mouse trials certain drugs have been found to ‘fix the faulty response’ associated with misfolded proteins. So why aren’t we using these drugs in humans? Simple answer: they’re toxic to the human pancreas. However, another drug, trazodone (commonly used to treat anxiety and depression), has recently been found to reduce signs of neurodegeneration too. So far discoveries have been rapid and the future is looking good. Trazodone ‘won’t be the end, but it’s a start’, concludes Professor Mallucci.

The festival considered the interplay between neurology and psychology across the full span of human life. I left feeling excited about our scientific future: will we find a dementia treatment? What discoveries will be made about genetic conditions? And it didn’t end there: the MRC Cambridge Stem Cell Institute spoke to me about establishing the ‘true medical potential’ of stem cells, the Oliver Zangwill Centre discussed advances in brain injury rehabilitation, and researchers explained how they are investigating capsaicin cream, derived from chillies, as pain relief. As Professor Bill Harris put it in his talk, our brain is a ‘sophisticated, computational device’. My head was spinning with questions and a newfound enthusiasm for neuroscience.

- Read more about the event, and Madeleine Pownall's second report.
