Informaticopia

Monday, November 30, 2009

NHS hospital performance debates and league tables

A report in yesterday's Observer, "Eleven more NHS hospitals at centre of safety scandal", based on research by the Dr Foster organisation, has escalated the controversy about how hospitals should be judged, with underlying debates about monitoring techniques and the quality of the data supporting the indicators on which these judgements are based.

I have been critical of the relationship between Dr Foster and the NHS Information Centre, and the culture of secrecy which surrounds it, for many years (see:
* CfH Learning to Manage conference March 2009
* NHS-HE Forum Meeting May 2008
* Dr Foster & the Information Centre July 2007
* NAO report on Dr Foster Feb 2007
* Is NHS data there for any company - or just one? April 2006
* Paying for data your taxes paid for March 2006)
However, I think the main current issue is the selection of indicators and the way in which they are collected and published.

Last week the press picked up on the perceived discrepancy between the Care Quality Commission's demand for improvements at Basildon and Thurrock University Hospitals NHS Foundation Trust, following unannounced inspections, and league tables showing the trust performing well - tables based on self-reporting mechanisms completed by the trusts.

The Patients Association, amongst others, has called for a more rigorous inspection regime rather than allowing organisations to rank themselves against published criteria - and this is probably a good thing. The Dr Foster 2009 hospital guide highlighted in the Observer, however, used a different set of criteria, focusing specifically on "avoidable deaths" and "untoward occurrences"; it shows wide variations in the hospital standardised mortality ratio (HSMR) but does not take into account wider measures such as patient satisfaction.
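
For readers unfamiliar with the measure, the HSMR is simply the ratio of the deaths actually observed at a hospital to the deaths expected given its case mix, scaled so that 100 means "as expected". A minimal sketch in Python, with invented figures purely for illustration (not taken from the Dr Foster guide):

    # Hospital standardised mortality ratio (HSMR) - illustrative only.
    observed_deaths = 480   # deaths actually recorded at the trust
    expected_deaths = 400   # deaths predicted from the case mix
                            # (age, diagnosis, admission type, etc.)

    hsmr = 100 * observed_deaths / expected_deaths
    print(hsmr)  # 120.0 - a fifth more deaths than the case mix predicts

The controversy is less about this arithmetic than about the model producing the expected-deaths figure, and the quality of the coded data feeding it.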

Errors such as swabs being left in body cavities during surgery are serious, and a terrible tragedy for the individuals involved, but they are rare when you take into account the number of operations performed. Measures which improve the quality of care and reduce avoidable problems are obviously to be welcomed; however, the latest Dr Foster report has been described as "alarmist".

I think a wider debate on the appropriate indicators of performance and an openness about the quality of data which supports them is overdue and to be welcomed.


Cerner attempted "suppression" of a critical report

A story entitled "Claim of censorship over Cerner system" in Computer Weekly has alerted me (and probably many others) to a report by Professor Jon Patrick from Australia - The Story of the Deployment of an ED Clinical Information System6.0.pdf - which is critical of Cerner FirstNet.

The paper presents an international and historical overview of comments about Cerner software implementations in a variety of hospital and healthcare systems, before focussing on the specific implementation in New South Wales. A variety of issues are highlighted, perhaps best summarised by the statement "The problem of designing a satisfactory Clinical Information System is that it is non-deterministic, that is, it is not possible to know all requirements in advance, because of the variety of the users it has to satisfy is effectively a Complex System."

Issues around contracts, the relationship between the healthcare organisations and the company, and the company's responsiveness are accompanied by the finding that "the user interface software .... is organised in a rigidly hierarchical manner that does not fit the Australian hospital workflow and which makes it highly inefficient for data retrieval."

The essay has some interesting things to say, based on the comments of clinicians and managers using the system, although I'm sure its rigour will be criticised. However, I suspect it would have languished largely unread if a representative of Cerner had not contacted the university department publishing it, leading to it being removed and then reinstated - which has brought it to a much wider audience.

Perhaps this in itself is indicative of a cultural clash between a large commercial sector organisation and public sector and academic traditions.

Declaration:
I am also party to a contract with Cerner which, at present, impinges on what I can say about the particular project I manage. I intend to publish an evaluation at a later date to help others.


Tuesday, November 24, 2009

Research and publication sharing networks

A colleague today invited me to join Mendeley, a web and desktop application which helps to organise, share, and discover research papers.

I tried it out (as I often do when told about these sorts of applications) and I particularly liked the easy way to import references and whole papers from Google Scholar, publishers' sites and electronic journals with one click. This makes building an attractive CV much easier. I didn't like the fact that "Medicine" was the closest discipline I could find to "health", and that it didn't include a sub-category for Health Informatics - but perhaps this will change.

The site also claims to help explore research trends and connect to other academics in your discipline, although as I only signed up this morning it is too early to see how effective this will be. So far I have found it the easiest to use of all the sites of this type which I have tried. My own profile is at http://www.mendeley.com/profiles/rod-ward/ to give an idea of what it looks like (though I haven't uploaded all my publications yet).


Evaluating eHealth: How to make evaluation more methodologically robust

Evaluating eHealth: How to make evaluation more methodologically robust is a new paper just published by the open-access Public Library of Science.

The authors point out the necessity for evaluation of eHealth computer systems and the difficulties such evaluation will encounter. "There is a consensus about the evaluation of clinical treatments, such as drugs, in which randomized control trials are state of the art," they say. "No such consensus exists yet for the evaluation of highly complex service interventions such as computer systems." The authors conclude that "multiple methods research" is necessary for eHealth systems evaluation: "Research commissioners and research teams need to recognize the importance of undertaking combined quantitative and qualitative work when evaluating IT systems."

It fits beautifully with the paper I'm currently writing for my DPhil and contains clear and logical arguments about some of the socio-technical approaches needed within NPfIT.
