Screening for important unwarranted variation in clinical practice: a triple-test of processes of care, costs and patient outcomes

At the Association's symposium and AGM on 1 December the following awards were presented: the Best HSR Papers (Quantitative, Qualitative, Emerging Researcher and New Zealand Researcher); the HSR Impact Award; and the Best PhD Student Prize.

The HSRAANZ’s awards and prizes program recognises individuals who have made significant contributions to the fields of health services research and health policy in Australia and New Zealand.

Over the next few weeks we will be reviewing the winning research.

Below we hear from Andrew Partington, from the School of Public Health, University of Adelaide, and winner of the award for the Best Quantitative Paper, "Screening for important unwarranted variation in clinical practice: a triple-test of processes of care, costs and patient outcomes".

Partington et al.

A question all Government Health Departments invariably ask themselves is “how can we identify meaningful opportunities for quality (including cost) improvement, in a systemic and systematic way?”

When presented with evidence of variations in population health, overspends (including length of stay, LOS) or critical incidents, groups like the Australian Commission on Safety & Quality in Health Care (ACSQHC) suggest improvement processes should be undertaken, comprising multiple stages and detailed investigations of data sources, casemix, hospital structures, and resources.

With the introduction of National Activity Based Funding, State and Territory governments have become increasingly engaged in the assessment of variation. With information from organisations such as the Health Round Table, states regularly engage clinical leaders on LOS and relative utilisation benchmarks.

These activities are helpful – but they are not trivial pursuits. They require resources and sustained engagement with clinicians and front-line staff, which makes them hard to do regularly and to keep up over time.

Policy levers are needed to prioritise initiatives with the greatest ‘bang for buck’ to reduce the risk of seemingly ‘inefficient policy’ being scrapped.

On the clinical front, high-level, KPI-type insights are often stymied by uncertainty and disagreement surrounding the age-old arguments “… but my patients are sicker” (which they genuinely may be) and “… but I have better outcomes” (which they legitimately might achieve).

So we need better ways to identify priority areas of clinical activity in which formal processes to improve performance are likely to provide the greatest value.

In a paper recently published in the Australian Health Review, we propose a way forward with joint, comparative analyses of processes of care, costs and outcomes.

In much the same way as a clinical ‘screening tool’, we provide a case study of a method that could be applied to a wide range of clinical areas to justify further efforts to first diagnose (confirm) and then treat (improve) areas of clinical activity where an increased risk of under-performance is suspected.

Using linked, routinely collected data, we fitted multiple regression models to data from 7,950 patients presenting at the EDs of four hospitals with symptoms suggestive of an acute coronary syndrome.

Through this, we identified statistically significant casemix-adjusted differences in:

  • Mean inpatient costs (up to $669 per presenting patient);
  • 30 day and 12 month outcomes (up to twice as many related readmissions or deaths); and
  • Clinical management strategies (up to 41% higher inpatient admission rates, 66% higher use of invasive diagnostic interventions and significant differences in length of stay).
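The casemix-adjustment idea behind these comparisons can be sketched as a regression that includes hospital-site indicators alongside patient covariates: the site coefficients then estimate between-hospital differences after controlling for patient mix. A minimal illustration on synthetic data follows – the covariates, effect sizes, and model form here are all hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical casemix covariates: age and a comorbidity score
age = rng.normal(65, 12, n)
comorbidity = rng.poisson(2, n)
site = rng.integers(0, 4, n)  # four hospital sites, as in the study

# Simulate inpatient costs with assumed site effects of $0, 300, 600, -200
true_site_effect = np.array([0.0, 300.0, 600.0, -200.0])
cost = (5000 + 40 * age + 250 * comorbidity
        + true_site_effect[site] + rng.normal(0, 500, n))

# Design matrix: intercept, casemix covariates, and dummies for sites 1-3
# (site 0 is the reference hospital)
X = np.column_stack([
    np.ones(n), age, comorbidity,
    site == 1, site == 2, site == 3,
]).astype(float)

# Ordinary least squares: the site-dummy coefficients are the
# casemix-adjusted mean cost differences versus the reference site
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
adjusted_diffs = beta[3:]
print(adjusted_diffs)
```

A naive comparison of raw mean costs per site would confound hospital performance with patient mix; conditioning on the covariates is what lets the site coefficients answer the “but my patients are sicker” objection directly.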

We further explored differences within clinical sub-groups, and illustrated how, depending on the appetite for investing in quality improvement, different hospital sites may serve as the best-practice benchmark.

No universal facts are discovered through this process; rather, it offers a rigorous decision aid that provides statistically adjusted insights to inform collaborative clinical and operational planning.

We conclude the paper by discussing how, in the future, with the inclusion of capacity, staffing, and Patient Reported Outcome Measures (PROMs) data, such analyses will further contribute to robust health-service business cases.

For now, though, this case study illustrates a practicable approach to using routinely collected data to compare risk-adjusted costs, outcomes and processes across providers and patient sub-groups. Such analyses improve on the use of singular, non-adjusted measures of performance to identify priority areas for non-trivial investments of time and financial resources to improve service quality.


Andrew Partington, Principal Project Officer, Activity Modelling and Purchasing, System Performance and Service Delivery, SA Health. Having worked as a Research Associate within the Adelaide Health Economics Group at the University of Adelaide, Andrew spent the last couple of years as a strategy consultant within the UK National Health Service. Most recently, he joined the South Australian Department for Health & Ageing, where he helps to lead state-wide commissioning initiatives. While his research focus includes unwarranted variations and health state valuation, Andrew is most interested in improving the way health economics is used to engage diverse decision-makers in service quality and financial sustainability initiatives. When not nerding-out on models and flow-diagrams, you’ll find him banging on pots and pans in blues bars.