Thursday, 16 November 2017

The school research lead, improvement research and implementation science



This week saw the welcome announcement of the appointment of Dr Becky Allen as the director of the UCL IOE's Centre for Education Improvement Science. On appointment, Dr Allen said she wishes to help develop “a firmer scientific basis for education policy and practice”, drawing on methods such as laboratory experiments and classroom observation.

Now regular readers of this blog will know that I have often expressed concern over how educational researchers misuse terms associated with evidence-based practice. So, given this new initiative in improvement science, it seems sensible to look at a definition of improvement science/research, and to do this I'll use the work of LeMahieu et al. (2017).

Improvement Research: a definition (LeMahieu et al., 2017)

Improvement research is … about making social systems work better. Improvement research closely inspects what is already in place in social organizations – how people, roles, materials, norms and processes interact. It looks for places where performance is less than desired and brings tools of empirical inquiry to bear to produce new knowledge about how to remediate the undesirable performance. Put simply, improvement research is not principally about developing more “new parts” such as add-on programs, innovative instructional artifacts or technology; rather, it is about making the many different parts that comprise an educational organization mesh better to produce quality outcomes more reliably, day in and day out, for every child and across the diverse contexts in which they are educated.

Examples of Improvement Research/Science

  1. Networked Improvement Communities;
  2. Design-Based Implementation Research;
  3. Deliverology;
  4. Implementation Science;
  5. Lean for Education;
  6. Six Sigma;
  7. Positive Deviance.

As such, LeMahieu et al. (2017) state that “All seven of the approaches … share a strong ‘common core’. All are in a fundamental sense ‘scientific’ in their orientation. All involve explicating hypotheses about change and testing these improvement hypotheses against empirical evidence. Each subsumes a specific set of inquiry methods and each aspires to transparency through the application of carefully articulated and commonly understood methods – allowing others to examine, critique and even replicate these inquiry processes and improvement learning. In the best of cases, these improvement approaches are genuinely scientific undertakings.”

In other words, improvement research is a form of ‘disciplined inquiry’ (Cronbach and Suppes, 1969).

What Improvement Science Is Not

However, as LeMahieu et al. (2017) note, a major distinguishing feature of improvement research is what it does not attempt to do. Improvement research is not about creating new theories or undertaking research and development. Nor is it about seeking to evaluate existing teaching strategies or interventions in field-based trials. Rather, improvement science is about doing more of what works, stopping what doesn't, and making sure everything is joined up in ways which bring about improvements in a particular setting.

Given this stance, statements about the Centre for Education Improvement Science (CEIS) being about ‘laboratory experiments and classroom observations’ seem a little incongruent with the existing work in the field.

My confusion about the work of the CEIS is further compounded by a report in Schools Week describing Improvement Science London, which is also based at UCL. According to that report, improvement science involves the recognition of “the gap between what we know and what we put into practice” and using the “practical application of scientific knowledge” to identify what needs to be done differently. However, that could probably more accurately be described as ‘implementation science’ (admittedly a subset of improvement science). So, let's delve into a little more detail about what is meant by ‘implementation science’.

What is implementation science?

Barwick (2017) defines implementation science as “the scientific study of methods that support the adoption of evidence based interventions into a particular setting (e.g., health, mental health, community, education, global development). Implementation methods take the form of strategies and processes that are designed to facilitate the uptake, use, and ultimately the sustainability – or what I like to call the ‘evolvability’ – of empirically-supported interventions, services, and policies into a practice setting (Palinkas & Soydan, 2012; Proctor et al., 2009); referred to herein as evidence-based practices (EBPs).”

Barwick goes on to state that implementation “focuses on taking interventions that have been found to be effective using methodologically rigorous designs (e.g., randomized controlled trials, quasi-experimental designs, hybrid designs) under real-world conditions, and integrating them into practice settings (not only in the health sector) using deliberate strategies and processes (Powell et al., 2012; Proctor et al., 2009; Cabassa, 2016). Hybrid designs have emerged relatively recently to help us explore implementation effectiveness alongside intervention effectiveness to different degrees (Curran et al., 2012).”

As a consequence, implementation science sits on the right-hand side of the figure below, taken from Barwick (2017).




So where does this leave us?

Well, on the one hand, I am really excited that educational researchers are beginning to pay attention to the work being done in fields such as improvement and implementation science. On the other hand, I'm a bit disappointed that we are likely to make the same mistakes as we have with evidence-based practice, and not fully understand the terms we have borrowed.

Finally – this post may be completely wrong as I have relied on press releases and press reports to capture the views of the major protagonists – as such I may be relying on ‘fake news.’

References

BARWICK, M. 2017. Fundamental Considerations for the Implementation of Evidence in Practice. MelanieBarwickJourneysInImplementation [Online]. Available from: https://melaniebarwick.wordpress.com/ [Accessed 15 November 2017].

LEMAHIEU, P., BRYK, A., GRUNOW, A. & GOMEZ, L. 2017. Working to improve: seven approaches to improvement science in education. Quality Assurance in Education, 25, 2-4.

Saturday, 11 November 2017

The effectiveness of lesson study has been called into question, after a £543,000 study involving 181 schools and 12,200 pupils found it made no difference to Y6 pupils' attainment in reading and mathematics

The £543,000 EEF study, involving 181 schools and 12,200 pupils, found that lesson study made no difference to Y6 pupils' attainment in reading and mathematics.

Lesson Study is a collaborative CPD approach to professional learning which originated in Japan and has become more popular in England in recent years. Simply put, lesson study is a joint practice development approach to teacher learning, in which teachers collaboratively plan a lesson, observe it being taught and then discuss what they have learnt about teaching and learning.

The project found no evidence that this particular version of Lesson Study improves maths and reading attainment at KS2. However, there is evidence that some control schools implemented approaches similar to Lesson Study, such as teacher observation. The trial might, therefore, underestimate the impact of Lesson Study when introduced in schools with no similar activity.

So does this EEF report sound the ‘death-knell’ for Lesson Study in England? David Weston, Chief Executive of the Teacher Development Trust, states in the TDT blog:

There are some possible options.

1. If we decided to ignore the above and assume that the pedagogical content was effective, then either:
a. Lesson Study is an ineffective mechanism in all cases, or
b. it was an ineffective mechanism in this particular case
2. If we were determined to conclude that Lesson Study is always effective (which is also not plausible), then we would conclude:
a. This implementation is flawed, or
b. This pedagogical content is definitely bad.

My suggestion would be that none of the above conclusions are supported, in my view, by any reasonable reading of this study and the wider evidence base. We also need to question the extent to which we can draw any strong conclusions from a study where so many in the control group appeared to be engaging in similar practice.

However, a report on peer lesson observation, published by the EEF at the same time, indicated that peer observation led to no overall improvement in combined maths and English GCSE scores for the pupils of the teachers involved. This would suggest that concerns that the control group in the Lesson Study evaluation was enjoying improvements in pupil outcomes, thereby offsetting the impact of Lesson Study, are possibly not warranted.

So what are school leaders and research leads to do? First, if you are thinking about implementing Lesson Study, it is worth remembering that there is more than one variety of Lesson Study. In particular, I would recommend that you have a look at the work of Sarah Seleznyov of the UCL IOE, who identifies seven components of Japanese Lesson Study, as this will allow you to make comparisons between, for want of a better phrase, ‘the original and cheap imports’.

Second, and this is more generic advice, it is worth turning to the work of Miller et al. (2004), who suggest that when critically examining whether to implement changes which appear to be fashionable, school leaders and school research leads could usefully ask themselves the following questions (a rough checklist sketch in code follows the list):

  • What evidence is there that the new approach can provide productive results? Are arguments based on solid evidence from lots of schools followed over time?
  • Has the approach worked in schools similar to our own that face similar challenges?
  • Is the approach relevant to the priorities and strategies of our school?
  • Is the advice specific enough to be implemented? Do we have enough information about implementation challenges and how to meet them?
  • Is the advice practical for our school given our capabilities and resources?
  • Can we reasonably assess the costs and prospective rewards? (Amended from Miller et al. (2004), pp. 14-15)
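To show how this checklist might be used in practice, here is a minimal sketch in Python. The questions are paraphrased from the list above; the yes/no scoring rule and every name in the code are my own illustrative inventions, not anything Miller et al. propose.

```python
# Illustrative sketch only: the questions paraphrase Miller et al. (2004);
# the scoring thresholds and all names below are my own invention.

FAD_QUESTIONS = [
    "Is there solid evidence of productive results from lots of schools over time?",
    "Has the approach worked in schools similar to our own, facing similar challenges?",
    "Is the approach relevant to our school's priorities and strategies?",
    "Is the advice specific enough to implement, with known implementation challenges?",
    "Is the advice practical given our capabilities and resources?",
    "Can we reasonably assess the costs and prospective rewards?",
]


def appraise(answers):
    """Turn six yes/no answers (True/False) into a rough verdict."""
    if len(answers) != len(FAD_QUESTIONS):
        raise ValueError("Expected one answer per question")
    yeses = sum(answers)
    if yeses == len(FAD_QUESTIONS):
        return "Promising: the change may have 'legs'"
    if yeses >= 4:
        return "Proceed cautiously: revisit the questions answered 'no' first"
    return "Caution: this has the hallmarks of a management fad"


# Example: good evidence and relevance, but thin implementation detail.
print(appraise([True, True, True, False, True, False]))
```

The thresholds are deliberately crude; the point is simply that recording an explicit answer to each question makes the appraisal transparent and open to challenge by colleagues.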

If the answers to these questions suggest positive outcomes, it may well be that the school has identified a change which has ‘legs’.

And finally, if there is one lesson to come out of this discussion, it is that school leaders need to actively engage in evidence-based school leadership. Failure to do so will lead to resources being misused, time being wasted, workloads increasing and pupils not making the progress they deserve.

Reference
MILLER, D., HARTWICK, J. & LE BRETON-MILLER, I. 2004. How to detect a management fad—and distinguish it from a classic. Business Horizons, 47, 7-16.

Friday, 3 November 2017

School Research Lead - From evidence to implementation

During this Thursday's #UKEdReschat there was a lively discussion, hosted by @StuartKime, which focussed on the implementation of research and the ingredients associated with successful implementation.

However, for me this cycle of implementation misses out a fundamental step: how to bring together all the evidence and then make a decision about how to proceed. For as Alonso-Coello et al. (2016) note:
Often the process that decision-makers used, the criteria that they consider and the evidence that they used to reach their judgments are unclear. They may omit important criteria, give undue weight to some criteria, or not use the best available evidence. Systematic and transparent systems for decision-making can help to ensure that all important criteria are considered and that the best available research evidence informs decisions. (p. 1)

Adopting some form of ‘evidence’ framework can have a number of benefits for school leaders. Alonso-Coello et al. (2016) identify a number of such benefits, which I have adapted for use in a school setting:
  • Giving you and your fellow decision-makers an improved understanding of the advantages and disadvantages of the various actions being proposed
  • Helping ensure that you include all important criteria in the decision-making process
  • Providing you with a concise summary of all the best available evidence – be it research evidence, school data, stakeholder views or practitioner expertise
  • Helping colleagues be in a better position to understand decisions made by senior leadership teams and the evidence supporting those decisions

As mentioned in a previous post, evidence-based school leaders make explicit the criteria they use to make a decision. In the context of your school, these criteria may well change depending on which domain and sub-domain of school leadership and management you are concerned with (Neeleman, 2017). The criteria for making decisions about teaching and learning may well be different to the criteria you apply to making financial decisions. In addition, you may want to take into account whether criteria are adjusted for different parts of the organisation: the criteria being applied at, say, the level of a multi-academy trust board may well be different to how the criteria are applied at Key Stage 1 in a primary school. Using Alonso-Coello et al. (2016) as a starting point, let's look at some of the criteria that could be applied to decision-making (see Figure 1).

Figure 1: Evidence to Decision Template

Priority of the problem: Is the issue an important problem for which a remedy is sought and that can be locally implemented?

Benefits: How substantial are the desirable anticipated effects?

Costs: How substantial are the undesirable anticipated effects?

Certainty of the evidence: How robust and secure are the different sources of evidence – research, practitioner expertise, stakeholder views and school data?

Balance: Does the balance of the desirable and undesirable effects favour the intervention or the comparator?

Resource use: How large are the resource requirements – attention, time, money, professional learning? Does the balance of costs and benefits favour the intervention or the comparator?

Equity: What impact does the decision have on educational equity? Will it help close gaps in attainment? Are there important ethical issues which need to be taken into account?

Acceptability: Are there key stakeholders – teachers, parents, trustees – who would not accept the distribution of the benefits, harms and costs? Would the intervention adversely affect the autonomy of a teacher, department, school or MAT?

Feasibility: Are there important barriers that are likely to limit the feasibility of implementing the intervention (option) or require consideration when implementing it? Is the intervention or strategy sustainable?

Additional comments and recommendation



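For research leads who prefer to work with the template electronically, here is a minimal sketch of how the elements of Figure 1 might be captured as a structured record. It is written in Python, and the class design, field names and completeness check are my own illustration, not part of Alonso-Coello et al.'s framework.

```python
from dataclasses import dataclass

# Sketch of the Figure 1 template as a structured record. The elements mirror
# the table above; everything else (names, design) is my own illustration.


@dataclass
class EvidenceToDecision:
    priority_of_problem: str = ""    # Important, locally remediable problem?
    benefits: str = ""               # How substantial are the desirable effects?
    costs: str = ""                  # How substantial are the undesirable effects?
    certainty_of_evidence: str = ""  # Robustness of research, expertise, views, data
    balance: str = ""                # Do desirable effects outweigh undesirable ones?
    resource_use: str = ""           # Attention, time, money, professional learning
    equity: str = ""                 # Impact on attainment gaps; ethical issues
    acceptability: str = ""          # Will stakeholders accept benefits/harms/costs?
    feasibility: str = ""            # Barriers to implementation; sustainability
    comments: str = ""
    recommendation: str = ""

    def unanswered(self):
        """Name any criteria still blank, so no element is skipped before deciding."""
        optional = {"comments", "recommendation"}
        return [name for name, value in vars(self).items()
                if name not in optional and not value.strip()]


# Usage: start a record for a proposed intervention and see what remains blank.
etd = EvidenceToDecision(priority_of_problem="High: a persistent KS2 reading gap")
print(etd.unanswered())
```

The value of such a record is not the code itself but the discipline it enforces: a recommendation is only credible once every criterion has been explicitly considered.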

Furthermore, application of the above framework needs to be seen in the context of the strength or otherwise of the evidence - be it research, practitioner expertise, school data or stakeholder views.  However, that is another discussion which I will explore in a future post.

And finally

Having reflected on the EEF school improvement cycle, it seems to me that insufficient attention is being paid to how you turn evidence into a decision, and to the processes necessary to support evidence-based decision-making – and this represents a fundamental flaw in the five-stage process put forward. Evidence-based decision-making is so much more sophisticated than a simple model of priorities, external research, implementation, evaluation and mobilisation: it involves critical appraisal of multiple sources of evidence, aggregation of that evidence, and subsequent integration of the evidence into the decision-making process.


References


ALONSO-COELLO, P., OXMAN, A. D., MOBERG, J., BRIGNARDELLO-PETERSEN, R., AKL, E. A., DAVOLI, M., TREWEEK, S., MUSTAFA, R. A., VANDVIK, P. O., MEERPOHL, J., GUYATT, G. H. & SCHÜNEMANN, H. J. 2016. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 2: Clinical practice guidelines. BMJ, 353.


NEELEMAN, A.-M. 2017. Grasping the scope of school autonomy: a classification scheme for school policy practice. BELMAS. Stratford-upon-Avon, England.