By Stephane Farenga, FACE Executive Member, Deputy Head of Widening Access and Student Success, University of Hertfordshire

Why Evaluate? was a joint event between OFFA and the Widening Participation Research and Evaluation Unit (WPREU) at the University of Sheffield. It sought to encourage learning and debate about evaluation in widening participation (WP) and to share expertise and experiences, nurturing creative and innovative changes in practice and research and, hopefully, better equipping us all to meet the diverse needs of students, communities and the country.

In this post, I’d like to share a few talking points and themes from the day:

  1. The growing trend to adopt randomised controlled trials (RCTs) for WP evaluation
  2. Tension between local evaluation of ‘what works’ and the wider sector evidence base
  3. Inward critique of the widening participation sector

Do you speak RCT?

Are RCTs coming to a WP team near you? That is certainly the feeling within the sector as the case was made for their use. The ensuing scepticism, both on the day and online (see Twitter), suggests many do not share the same enthusiasm for porting a traditionally positivist, quantitative method into WP. But do RCTs have a role to play? Perhaps spurred on by government calls for national evaluations of programmes like NCOP, there is growing momentum to establish causal relationships in the quest for increased participation. However, their value to WP research was certainly called into question. For starters, is the method appropriate for a sector in which we generally seek to understand the experience of participants over their quantitative outcomes (hard participation and success data aside)? Is it ethical to withhold an intervention from a group of students? Do we even have large enough sample groups to make an RCT work? It is generally accepted that the bigger the population groups, the more robust the RCT, with a 2,000-strong sample a loose standard. That target is going to be nigh on impossible to reach for all but those of us involved in consortia-powered activity. What the debate around RCTs does show is an anxiety around methodology and a real need to improve the knowledge stock of the evaluation sub-field. Maybe in doing so, we’ll become more comfortable speaking RCT.
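As an aside on that sample-size question: a quick back-of-the-envelope power calculation (my own illustration, not something presented at the event) shows why a figure like 2,000 is plausible. Assuming a standard two-arm trial, a conventional 5% significance level, 80% power and a ‘small’ standardised effect, the normal approximation lands at roughly 1,000 participants per arm:

```python
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Approximate participants needed per arm in a two-arm RCT,
    using the normal approximation: n = 2 * (z_{alpha/2} + z_beta)^2 / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power requirement
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

# An illustrative 'small' effect of d = 0.125 (my assumption) needs
# about 1,005 participants per arm, i.e. around 2,000 in total.
print(round(n_per_arm(0.125)))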

Locals vs nationals?

The RCT dilemma is a microcosm of some of the issues surrounding the local and national evaluation spaces. A harder, quantitative evidence base is being sought at a national level. Take NCOP: consortia are being encouraged to engage with RCTs and other quasi-experimental methods, and CFE Research are overseeing a large-scale, national evaluation heavily predicated on crunching data. Meanwhile, local evaluations can rely on much more varied methodology, but are not intended to be widely disseminated.

Another angle in this debate is around expectations, specifically those set by national bodies, organisations and Government on the type of activity to be evaluated and the method by which this is achieved. At Why Evaluate?, Colin McCaig (Sheffield Hallam University) touched on the sector’s anxiety when it comes to fulfilling these expectations. Often, practitioners have an innate sense of ‘what works’ in WP, but properly evaluating activity to reveal this is not straightforward; there is a sense that successful practice may not translate well into evaluation findings. This dovetails with the perception in the sector that specialism is required to properly evidence activity. Anyone receiving regular job announcements will have noticed the increase in research/evaluation posts being advertised, surely a positive trend. However, truly embedding rigorous evaluation practice should reflect a bottom-up approach: it should not rest in the domain of one “specialist”; rather, all practitioners should be given the space to develop evaluation skills and the ability to take ownership of the whole activity planning/delivery/reflection cycle.

Lookin’ in the mirror

Lastly, a quick glance in the mirror, courtesy of Femi Bola, formerly of the University of East London. She challenged delegates to think deeply about the current state of equality and diversity in HE. While, in her words, “we’re all nice people”, the lack of progress in many areas of the WP agenda reflects on all of us. She believes one of the reasons for this is our perception of what students need: “We assume a lot about students’ motivations and what they want […] We start at the middle of the sentence instead of beginning”. Her honesty exposes how closely we skirt the deficit-model line in what we believe students should aspire to and what success should look like. Maybe a little soul-searching in WP would go a long way.
