Supporting ‘team science’

Katherine Button argues that we need to build a diversity of roles into the fabric of the psychology department, as well as thinking about our role in bigger interdisciplinary projects.

There is growing awareness that the challenges of the 21st century will be best met by interdisciplinary collaborative efforts. The main UK funding bodies have been integrated into UK Research and Innovation, and we are increasingly seeing cross-council funding calls to encourage truly interdisciplinary and team science approaches. Human health and behaviour are high on the agenda, and psychology is key. But are we ready to be active team players? 

Team science has been defined as output-focused research involving two or more research groups (AMS, 2016) and it is on the rise. Authorship lists are getting longer, and the proportion of papers involving multiple disciplines and international collaborations is increasing year on year (Adams, 2013). And for good reason. Team science works.

We all know the classic examples: CERN, the EU Graphene initiative and the Human Genome Project have all yielded game-changing results, and all involved large international teams and extensive funding. There are also the longitudinal cohort studies such as UK Biobank; international clinical trials, such as those testing statins for heart disease; and, more recently, the international efforts between academic researchers and industry in the hunt for a Covid-19 vaccine.

Smaller-scale team science approaches are also increasing as groups seek to collaborate with others with specialist expertise in advanced skills such as imaging, mathematical modelling or statistics, or by researching the interface between unusual combinations of conventional subjects, such as economics and psychology.

It is at these interfaces where science has advanced the most. In a 2013 Science article, Brian Uzzi and colleagues analysed 17.9 million academic articles. They found that the highest-impact science was grounded in highly conventional combinations of prior work but introduced an unusual combination of disciplines, something most often achieved in team-authored papers. Papers high in novelty but low in conventionality, or vice versa, fared less well.

This is important, given that we often prize novelty above all else in our grading criteria for student assignments and research grants alike. Yet it seems the most productive sort of novelty emerges from the building of a diverse team where members are steeped in expertise and the conventions of their disciplines, and the novelty arises from the meeting of these different perspectives.

Indeed, one of the benefits of collaboration is that it exposes dogma within our fields and encourages the transfer and sharing of best practice. It was working on clinical trials, where sample size justification is vital, that exposed how relatively small the samples used in neuroscience and psychology studies tended to be (Button et al., 2013).

What makes a good team?
After graduating from Cambridge with a BA in Neuroscience, I moved to Bristol to undertake an MEd in the Psychology of Education as a BPS accredited conversion course in preparation for applying for a doctorate in clinical psychology. I got waylaid working as a research assistant on a large multicentre clinical trial of online CBT for depression in what is now Bristol Medical School, and ended up doing a PhD in Psychiatry (investigating information processing biases in social anxiety) instead.  

Working in an epidemiology department and being involved with multi-centre clinical trials hugely influenced my view of psychology research. I didn’t think of it as ‘team science’ back then, but the multi-centre RCTs were the archetypal team science effort. Meetings involved input from statisticians, clinicians, methodologists, qualitative researchers, research assistants, therapists, lay representatives, from multiple sites, each bringing their own specialist expertise and having their own area of responsibility.

I benefited enormously from the availability of these varied experts, and in particular from being able to walk down the corridor and knock on multiple doors for statistical advice. So what makes for a good team? Diversity and complementarity in skills, effective leadership and organisation, and a mix of breadth (i.e. big-picture thinkers) and depth in knowledge and expertise. We need those who can sit at the interface of disciplines, those who provide deep topic knowledge, and those with the technical skills to realise the work.

The disconnect
Team science is beneficial for scientific progress and we should be facilitating it at every opportunity, rewarding and recognising it through career pathways and training our students in team science approaches. But this is where we hit a snag. Our current structures for recognition and reward in academia are based on an old model: small research groups led by a principal investigator, the typical teaching-research academic track, and assessment based on individual work.

The metrics of success by which we currently recognise and reward academic researchers are publications and grant funding. As we all know, there are only two authorship positions worth having: first or last, indicating you wrote the paper or were the senior investigator respectively. Relying on these 'key' authorship positions becomes problematic as authorship lists lengthen with team science. It may even disincentivise collaboration, or lead to friction and power struggles within the team as the 'big beasts' fight it out for prominent authorship positions.

The metric of publication and the previous conventions for authorship are simply too narrow to encompass the many and varied contributions to team science. Statistical methods, data visualisation and curation, resources and software development are all increasingly important contributions to modern publications, and a middle authorship position may simply not reflect the significance of the work involved.

Similarly, successful funding bids often involve large teams – yet for hiring and promotion decisions it is PI funding that is most important. Again, this is problematic, particularly in cross-discipline grants where the two groups might be contributing equally to the project but a co-PI model isn’t supported. This model also leaves skills specialists in a tricky spot, as their role is vital but seen as supporting and thus rarely suited to being the lead applicant.   

So how can we change to recognise and reward team scientists, and to train for the future of team science?

In 2016 the Academy of Medical Sciences published a working group report on 'Improving recognition of team science contributions in biomedical research careers' (tinyurl.com/acmedteamsci). The report highlighted key areas where the current conventions for reward and recognition have failed, and made several recommendations for how these could be improved. The three I find particularly relevant are the need for career paths for skills specialists, the CRediT taxonomy for authorship contributions, and training for team science.

Career paths for skills specialists
In psychology we are very good at training a particular type of student: one who writes exceptionally strong, well-argued and constructed essays, has an excellent grasp of the relevant psychological theories, enjoys grappling with the societal issues and the psychological applications. A student who also excels in statistical and technical skills is less common. Given the central importance of the methods and analysis to the soundness of psychological research, this asymmetry has always intrigued me.

I see this reflected in the composition of our academic staff – many senior psychologists are topic experts who feel uncomfortable with numbers, and this discomfort may be socially transmitted to our students. It is also reflected in our academic articles. Lengthy introductions set up the theoretical framework, followed by lengthy discussions full of interesting speculations, while the strengths and limitations of the study design – crucial for evaluating the veracity of the findings – are often buried at the end like the small print on a loan agreement. (Worse still, some journals do actually relegate the methods to small print.) At the student level, a terrible piece of research with a convincing write-up that reflects at length on the shortcomings of the design can still earn a first-class mark for an empirical dissertation. In contrast, an excellent piece of research poorly written up will always be heavily penalised.

The asymmetry is also reflected in those we promote and those we don't. Post-docs are often the most productive, talented and enabling people in a department – they have the specialist skills that are the engines of papers (they analyse, build experiments, process data). But to progress they must find an academic post, where this specialist skill set quickly becomes all but obsolete in the face of teaching, admin and team management. We tend to promote the 'big-picture' thinkers, while the skills specialists run the risk of becoming perpetual post-docs. The few mathematically minded staff in the department are often inundated with requests for statistical support from colleagues and students alike.

The idea that methods and statistics are just some sort of inconvenience to be got through before we get to the good bits may also be reflected in authorship positions. Writing the paper can earn a first authorship position, while the statistical analysis, which can take considerably longer, is often rewarded with second authorship. Yet the strength of findings is intrinsically linked to the rigour of the methods and the integrity of the statistical analyses. If we routinely gave these the consideration they deserve, perhaps we would see as many psychology professors specialised in methods and statistics as we do in topic areas.

As our methods and techniques become increasingly sophisticated, skills specialists will become increasingly important. We need career paths and development opportunities for skills specialists and the other researchers and support staff who play key roles in team science but fall outside the traditional 'PI' track. Psychology is a broad discipline encompassing many new and advancing fields and techniques, from mathematical modelling of decision making to brain imaging and changing research cultures. We need to broaden the 'PI' track itself to reflect this growing diversity.

We also need to recognise excellence in teaching and ensure career progression for those whose teaching is the main focus of their role. In recent years we have started to see this reflected in different types of career track: 'research only' and 'teaching only' have been added alongside the conventional teaching-and-research track. However, we have yet to achieve true parity in career advancement across these tracks. Research-only posts tend to be post-doc positions or other short-term, grant-dependent posts where steady career progression is challenging. The more secure teaching positions are often seen as less prestigious, and the average time to progress to professor is slower than on the research-teaching track.

CRediT taxonomy for authorship contributions
The reliance on first and last authorship positions no longer reflects the diverse and complex inputs into modern scientific papers. While some people advocate removing authorship lists altogether, most acknowledge that changing cultural conventions takes time. Widespread adoption of the Contributor Roles Taxonomy (CRediT) is an excellent first step towards acknowledging the diversity of contributions to a paper. At present it includes 14 roles, from conceptualisation, writing the original draft, formal analysis, funding acquisition and supervision through to providing resources, data curation, visualisation and project administration. Each author can be assigned multiple roles, making each author's contribution clear.
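To illustrate, a CRediT contribution statement as it might appear at the end of a paper (the author initials here are hypothetical):

```
Conceptualisation: A.B.; Methodology: A.B., C.D.; Formal analysis: C.D.;
Software: E.F.; Data curation: C.D., E.F.; Visualisation: E.F.;
Funding acquisition: A.B., G.H.; Supervision: G.H.;
Writing – original draft: A.B.; Writing – review & editing: all authors.
```

A statement like this makes clear that C.D. and E.F. carried the analysis and software work, even though neither holds a first or last authorship position.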

Applicants for jobs could be encouraged to list their CRediT statements alongside their publication list to make their contributions to papers explicit. The range of evidence used in deciding grant funding and career progression could be broadened to include published study protocols, published datasets, pre-prints, patents, software and code.  

Training for the future of team science
We need diversity within and across departments, and the leadership skills to get us to that position. Interestingly, the current focus on reproducibility and open science, as well as the era of big data, may already be changing things. They highlight the need for skills and technical specialists, as well as the need to train our students in transferable skills such as coding and data curation. We may also find we need to employ more specialised support staff, such as data librarians, to help us navigate the legal requirements as open data becomes the expectation.

Many psychology departments are grappling with how to move from traditional statistics teaching based on point-and-click in SPSS to teaching analysis through scripts and coding, using open-source software like R. It's tough, as many of us academics feel outdated and need to learn to code ourselves before we can support our students. But it seems vital for our students to gain a basic grounding in a transferable skill like coding, to increase their employability both within and outside academia.
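As a minimal sketch of what 'analysis through scripts' means in practice, here is a complete R analysis of the built-in sleep dataset. Every step, from loading the data to the final test, is recorded and can be re-run by anyone:

```r
# Load a built-in dataset: extra hours of sleep under two drugs.
data(sleep)

# Descriptive statistics by group.
aggregate(extra ~ group, data = sleep, FUN = mean)

# Welch two-sample t-test comparing the two drug groups.
result <- t.test(extra ~ group, data = sleep)
result$p.value
```

Unlike a sequence of menu clicks, the script itself documents the analysis – which is exactly the reproducibility habit we want students to build.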

There are universities leading the way. Glasgow has switched all of its quantitative methods teaching to the programming language R, and is already reaping the benefits: its skilled undergraduates are able to take on challenging analyses in their final-year projects. The move was enabled by hiring new staff tasked with developing and delivering the world-leading PsyTeachR course. This may also pave the way for a new generation of psychologists, well versed in coding and data science, who can bring these new skills to the psychology department team.

We also need to ensure we train our students in the soft skills of teamwork, collaboration and leadership. An ambitious model we have adopted is an undergraduate consortium for the final year empirical dissertation project, which we wrote about in the March 2016 issue of The Psychologist (and see also Button et al., 2019). Students collaborate with other students, PhD students, post-docs and academics across multiple psychology departments on a large research project designed to train students in reproducible team science whilst also meeting the current convention for individual student assessment (BPS, 2016: see tinyurl.com/y2nj3ndt).

Working in teams has many benefits – by pooling their data collection these projects achieve reasonable sample sizes (usually in the 200-400 range), and the students benefit from the networking and sharing of ideas and practices. The projects are led by a PhD student, who gains experience in 'PI' leadership skills whilst under close academic supervision. The model is flexible and would work equally well as a simpler team of students working within a single department.
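The statistical payoff of pooling can be sketched with base R's power.t.test function. This is only an illustration, assuming a two-group comparison and a small-to-medium standardised effect of d = 0.3:

```r
# Power of a two-sample t-test at a small-to-medium effect (d = 0.3).
# n is the sample size per group.
power.t.test(n = 20,  delta = 0.3, sd = 1, sig.level = 0.05)$power
power.t.test(n = 200, delta = 0.3, sd = 1, sig.level = 0.05)$power
```

With 20 participants per group – a plausible single-student project – power is well under 50 per cent; with 200 per group (400 in total, the upper end of the consortium projects) it comfortably clears the conventional 80 per cent threshold.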

The consortium approach has many pedagogical benefits, equipping students with the transferable skills required for effective teamwork. However, for these skills to be fully realised they must be recognised and rewarded in assessment, alongside the current criteria focused on individual work.

You are not alone
Academia is a weird system. We spend years honing very specialised skills as PhD students and post-docs, then move on to an academic post which often requires minimal use of those skills. Instead we are required to rapidly learn how to teach, apply for grants, manage research groups and perform admin roles. This may lead to selective career advancement, where the topic experts and big-picture thinkers are promoted over skills specialists, who may end up leaving academia.

On top of this we now have emerging trends of big data and open research, and as techniques and methods advance it becomes increasingly challenging for a single group to stay cutting-edge. Likewise, as the teaching excellence framework and the Covid-initiated need to move teaching online require a shift in how we teach large swathes of the syllabus, it becomes increasingly challenging for a single academic to excel in both research and teaching.  

But embracing a team science ethos can be liberating – no longer do we need to be able to do all of these things alone, we just need to work as part of a team or a department where between us we have the diversity of expertise covered.

For this to work we need to ensure we reward all types of contribution to the team effort, and train the next generation in the skill sets required to support team science. By building career paths that reward diversity into our own departments, we will create the environment for effective team science both within and beyond our field.

The green shoots are there – exciting new teaching initiatives and a bright and inspiring generation of early career researchers working towards the future of reproducible team science. We must play our part and ensure outdated conventions for reward and recognition don’t stifle their growth.

- Dr Katherine Button is a Senior Lecturer in the Department of Psychology at the University of Bath. [email protected]

Key sources
Adams, J. (2013). The fourth age of research. Nature, 497, 557–560.
Button, K.S., Ioannidis, J.P.A., Mokrysz, C. et al. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.
Button, K.S., Chambers, C.D., Lawrence, N. & Munafò, M.R. (2019). Grassroots training for reproducible science: A consortium-based approach to the empirical dissertation. Psychology Learning and Teaching, 19(1), 77–90.
Uzzi, B., Mukherjee, S., Stringer, M. & Jones, B. (2013). Atypical combinations and scientific impact. Science, 342(6157), 468-472. 