Posts

Co-production and Reconsidering Knowledge Hierarchies: how can we enable meaningful collaborations?

If we define academia as an institutionalised pursuit of producing and propagating knowledge, how do we set standards for its practical end results? How does our research become useful, and what does this utility represent in the first place? Should we strive to inspire policy-making, ameliorate social conflict, or develop next-generation technology? When justifying the need for their research, academics often argue for exploring a perceived gap in the literature, or for the necessity of looking at certain topics from a different perspective, using new methods and up-to-date information. But what happens if we step back from our scholastic perspective and seek insights from local stakeholders? Sarah Banks et al., in their 2013 article Everyday ethics in community-based participatory research, propose an interesting paradigm shift towards collaboration and cooperation between people with varying degrees of education and experience, between laypeople and researchers…

NCRM: Delivering a national research methods training programme in a pandemic

You’ll likely be sick of reading it, but this truly is an exceptional period. Those of us who work or study in universities have been on a rollercoaster as students were sent off-campus and most classroom teaching pivoted rapidly to online delivery, followed by numerous false dawns promising a full return to campus-based learning. Education is rarely created in the moment: it is a complex product of design, timetabling, content preparation, delivery systems, evaluation and reflection – all of which have been extensively disrupted. The National Centre for Research Methods commenced a new five-year ESRC funding award in January 2020 with an ambitious programme of face-to-face training events at locations around the UK, with speakers, venues and, in many cases, course delegates already booked when the first lockdown struck. Our vision for this phase of the Centre included a steady increase in online training provision, alongside our face-to-face courses, allowing trainers a…

Moving Between Disciplines

This week’s Research Methods Café discussion took up the theme of what it means to move between disciplines. The conversation covered the skills and experience that can be gained by embracing interdisciplinary perspectives, and strategies for exploring several frontiers simultaneously. Several challenges to interdisciplinary work were highlighted early on. First, existing funding structures make it difficult to progress interdisciplinary work: some funders see it as insufficiently specialised to be of value to the kind of work they are trying to foster. Similarly, reward structures often assume patterns of work that are not immediately aligned with interdisciplinary methods. Finally, it was noted that the number of papers a department publishes carries significant weight within that department – a feature of academic life that complicates cases where a PI straddles several disparate research fields. How can departments accrue…

Keeping "Generalization" Relevant

What are the implications of research for our wider understanding of the world, especially regarding causes and effects, and for “best practices” within professional and personal realms? While not fully addressing these questions, this week’s Research Methods Café conversation took up the issue of generalizability in qualitative research. Concerns were raised that qualitative research is often labelled “ungeneralizable,” even by its own authors, and that this is presented as a common limitation of such research. Overlooking generalization may prevent valuable research from having its intended impact, and several possible reasons for this trend were discussed in the conversation. Generalizability may be seen as bound to the statistical logic of quantitative designs and therefore impossible to demonstrate in qualitative research. Another possibility mentioned is that authors simply would not or could not do the intellectual labor to show their research…

What do Research Software Engineers (RSEs) do, and how can they help me?

Research Software Engineers have been working at Advanced Research Computing (ARC), Durham University for almost two years now. Sitting within the research division of the university, the team works alongside academic colleagues on long-term software projects as well as providing immediate technical support – for example, code refactoring, debugging, performance engineering and more. We aim to advance sustainable software practices in research environments across the university through regular training sessions, discussions and technical collaborations. We began our presentation at the Research Methods Café with an introduction to how we work as RSEs. Structurally, the RSE team sits alongside the Research Computing Platforms team at ARC. To work with us, we ask you to fill out a project application form. From there we set up a meeting to chat about your project. Up to 5 days of support are free at the point of use, and up to 40% FTE of an RSE's time can be costed into grants (as na…

Interrogating sociotechnical systems through interdisciplinary research

At the beginning of March, the Research Methods Café hosted Javier S. Monedero, a Distinguished Researcher “Beatriz Galindo” in the AYRNA research group in the Dept. of Computer Science at the University of Córdoba, who joined the conversation to share his experience of working in a research environment that is unusual for his background. As a former research associate and affiliate researcher with the Data Justice Lab (DJL; Cardiff), he bridges the social and communication sciences with computer science, working in the knowledge gaps separating them. Such an endeavour involves many obstacles that must be overcome to ensure the integrity of information in the presence of diversity. Intrigued to find out more, a large group of students and scholars joined the conversation from many departments, including Anthropology, Sociology, Sport & Exercise, Education, Government and International Affairs, University Library and Collections, the Business School, and Ph…

Introducing the eefAnalytics Package

The Education Endowment Foundation (EEF) has paved the way for research-informed interventions that aim to reduce educational inequality among deprived students in schools today. To support the EEF's efforts, we developed a statistical package that allows researchers to conduct their investigations using state-of-the-art methods for analysing data from randomised controlled trials (RCTs). One vehicle for achieving this goal is the eefAnalytics package: a set of user-friendly commands, developed for both R and Stata, that allows researchers in education to use an optimal model to quantify causal effects in RCTs. These commands can also be used in other disciplines, as long as the RCTs have continuous outcomes and do not exceed a two-level structure (e.g. participants nested within schools). Moreover, advanced knowledge of the specific calculations used by the package…
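To make the two-level structure concrete, here is a minimal R sketch of the kind of model involved. It deliberately uses the widely known lme4 package as a stand-in rather than the eefAnalytics commands themselves (whose exact function names and arguments are not shown in this excerpt), and the data frame and variable names are hypothetical.

```r
# Minimal sketch of a two-level RCT analysis (participants nested within schools),
# using lme4 as a stand-in; eefAnalytics wraps models of this kind and adds
# trial-specific effect size and uncertainty output.
library(lme4)

# Hypothetical trial data, one row per pupil:
# score        - continuous outcome (e.g. post-test score)
# intervention - 1 = intervention group, 0 = comparison group
# pretest      - baseline covariate
# school       - clustering variable (level 2)
fit <- lmer(score ~ intervention + pretest + (1 | school), data = trial_data)

summary(fit)  # the fixed 'intervention' term estimates the treatment effect
```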

Understanding Uncertainty in Effect Size for Multisite Educational Trials

Multisite randomised controlled trials are routinely used in health and education to evaluate the benefit of an intervention on study outcomes. A multisite trial involves two or more sites with a common intervention and data collection protocol. An important characteristic of a multisite trial is the randomisation of participants to intervention and comparison groups within sites. This approach offers several advantages over single-site trials, such as enhanced internal validity and greater statistical power when studying outcomes with large variance (e.g. academic scores). Multisite trials have long been used in health research and have gained popularity in education studies in recent years. In education, multisite trials involve randomisation of pupils (students) into intervention and comparison groups within each school. This design makes it possible to rigorously study the cross-school distribution of intervention effects, facilitating estimation of both the overall…
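The following R sketch (again using lme4 as a stand-in, with hypothetical variable names) shows one common way to express the multisite design: the intervention effect is allowed to vary from school to school, which is what lets the trial estimate both an average effect and its between-school distribution.

```r
# Minimal sketch of a multisite (within-school randomisation) model:
# a random intercept per school plus a random slope on the intervention,
# so the treatment effect is allowed to differ across schools.
library(lme4)

fit_mst <- lmer(score ~ intervention + (1 + intervention | school),
                data = trial_data)

summary(fit_mst)   # fixed 'intervention' term: average effect across schools
VarCorr(fit_mst)   # intervention slope variance: between-school variation in the effect
```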

The Value of Posterior Probabilistic Inference in Educational Trials – An Intuitive and Useful Concept

Educational trials commonly aim, directly or indirectly, at improving the educational attainment of pupils. To this end, the p-value is commonly used in frequentist inference to judge whether an intervention truly works or has merely appeared to work by chance, based on a significance level. However, the p-value has been blamed for the lack of reproducibility of research findings and for the misuse of statistics. It is a typical case of Goodhart’s law, which states that when a measure becomes a target, it ceases to be a good measure. To work around this, some education researchers advocate the use of the effect size (ES), which can be thought of as the strength of the effect of the intervention, together with its confidence interval (CI; the range of plausible values of the ES), to assess the effectiveness of interventions instead of relying on a p-value. Even if the ES and its CI are useful metrics, there is empirical evidence that teachers and other education stakeholders will not easily understand…
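To illustrate what a posterior probabilistic statement looks like, the sketch below uses purely hypothetical numbers: if the posterior distribution of the effect size were approximately normal with mean 0.15 and standard deviation 0.08, we could report directly the probability that the effect is positive, or that it exceeds some minimum threshold of interest.

```r
# Minimal sketch of posterior probabilistic inference on an effect size (ES).
# The normal approximation below is hypothetical; in practice the posterior
# would come from a Bayesian model fitted to the trial data.
post_mean <- 0.15   # hypothetical posterior mean of the ES
post_sd   <- 0.08   # hypothetical posterior standard deviation

# Probability that the intervention has any positive effect
p_positive <- 1 - pnorm(0, mean = post_mean, sd = post_sd)

# Probability that the ES exceeds a minimum worthwhile threshold (e.g. 0.10)
p_worthwhile <- 1 - pnorm(0.10, mean = post_mean, sd = post_sd)

round(c(P_ES_gt_0 = p_positive, P_ES_gt_0.1 = p_worthwhile), 2)
```

With these hypothetical numbers, a statement such as “there is roughly a 97% probability that the effect is positive” is arguably easier for teachers and other stakeholders to interpret than a p-value or a confidence interval.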