Purely Scientific Versus Innovation and Pragmatism: The Way Forward for Evaluating Innovations in Integrated Care

By Christina Theodore

I recently attended the 16th International Conference on Integrated Care in Barcelona (#integratedcare) and was excited by the different ways health and social care economies are striving to integrate care to improve patient outcomes. However, as an evaluator of such models of care, I was disappointed at the lack of innovative and pragmatic approaches to evaluation.

At the conference, my colleague Niamh Lennox-Chhugani was on the discussion panel of a session led by Chafea and DG Santé entitled ‘Building the evidence base: how can integrated care improve the sustainability and resilience of health systems?’. It became clear through the Q&A session that evaluators and researchers still place great emphasis on purely ‘scientific’* approaches and methods rather than on pragmatic, innovative, formative approaches which would help with successful implementation. There appears to be a strong perception that ‘scientific’ equals ‘robust’, and that taking a pragmatic approach, or developing different ways to evaluate such innovations, would result in less ‘credible’ findings.

Although traditional approaches and methods have a place in the evaluation of integrated care, evaluators need to consider how best to ‘blend’ these with approaches which support stakeholders through their journey of transformational change.

There is value in evaluations which are purely ‘summative’: assessing ‘outcomes and impact’ and ‘value for money / cost-effectiveness’ is of great relevance and importance to designers of health systems. These can achieve internal validity and can provide useful information for policy. However, there is also value in more pragmatic, ‘formative’ approaches which help with policy implementation. These support the workforce to improve continuously and inform the scale-up, spread and adoption of innovation. These points are addressed in detail in a paper co-authored by Gloria Laycock and Jacque Mallender entitled ‘Right method, right price: the economic value and associated risks of experimentation’.

These types of evaluation take a more realist approach and are able to answer the question, “What works, for whom, in what respects, to what extent, in what contexts, and how?” To answer this complex question, evaluators need to:

  • Identify and understand the contextual, behavioural and environmental factors at play and how these may: impact on the innovation’s design and implementation; lead to unintended consequences; and influence the key enablers of success associated with people, process, finance and technology. This requires a clear understanding and articulation of the ‘Theory of Change’ associated with the model of care, which should itself drive the design, methods and lines of enquiry of the evaluation.
  • Develop a minimum data set (so as not to burden stakeholders) which is driven by the Theory of Change and is meaningful to key stakeholder groups. The data should be presented in a way which draws together multiple sources of information engagingly and transparently (preferably through data visualisation methods). This can then support shared learning and joint problem-solving, with a view to enabling effective decision-making for improvement, ultimately challenging traditional values, beliefs and attitudes and enabling the cultural and behavioural changes that are required.
  • Adopt an improvement-science strategy, whereby data and emerging findings are fed back to key stakeholders through ‘plan-do-study-act’ methods at key points in stakeholders’ decision-making processes. This should enable the innovation to develop in an evolutionary and iterative way. This feature of an evaluation should be designed to enable ‘rapid-cycle testing’ in order to accelerate stakeholder learning and decision-making.
  • Consider how to adopt and develop co-design methods effectively, so that stakeholders can engage in understanding and acting on the data and emerging findings of the evaluation. This needs to be done with a view to achieving better integrated care as well as supporting spread and adoption.

There is a need for commissioners of evaluations to see the role of evaluators and the purpose of evaluations in a ‘new light’. Traditionally, evaluations have been viewed as an activity which happens after the design, or even after the implementation, of an initiative, to support investment / disinvestment decisions. Commissioners should instead use evaluations and the expert knowledge of evaluators to contribute to improvements in the planning and design phase of innovations and to ongoing improvements in implementation.

The points I have raised above should be reflected in the specification of any integrated care evaluation, whether local or national. Commissioners should seek out evaluators who can demonstrate a proposed approach that balances the scientific with the pragmatic, who have assessed the challenges that current evaluation methods are unable to address, and who have developed innovative methods which better support stakeholders on their journey of transformational change. This is the mindset required to ensure that the evaluation design is robust and that the findings are relevant and credible.


Christina Theodore is a Principal Consultant and leads Optimity Advisors’ integrated care evaluation work. She is an experienced health and social care professional and health care researcher. She has led national evaluations for a range of UK national health organisations, including the third sector, and has undertaken many local formative and improvement-based evaluations.


* Note: I use the term “scientific” as shorthand for the body of more traditional evaluation approaches and methods, including but not limited to randomised controlled studies. These approaches are considered the gold standard in many disciplines, including medicine.