Are hospitals fiddling the waiting times?

Deep in the bowels of every hospital are teams of administrative clerks whose job it is to record how long patients are waiting.

This is done because patients have the right to have their treatment started within 18 weeks - and hospitals are under pressure to meet that target.

The administrators start the clock ticking as soon as they get a referral from a GP or, in some cases, another health professional.

But importantly - and this is what the National Audit Office has been looking at - hospitals have the power to pause the clock or even restart it.

The watchdog's report found this was not always done correctly or consistently.

This can happen in a number of ways.

For example, the first key stage after referral is the first consultant appointment. This is where diagnosis and/or treatment options are discussed.

If a patient doesn't respond to an invitation to a consultation, some hospitals will make contact in case they have innocently missed the correspondence.

But some trusts don't give the patient a second chance: instead they are sent back to the GP and the clock is restarted when a new referral is made.

Another loophole relates to the way pausing the clock is interpreted.

Hospitals are entitled to do this if a patient chooses to wait longer for personal or social reasons - a holiday or work commitments perhaps.

The clock is meant to be restarted when the patient says they are ready again, but sometimes it stays paused until the hospital is able to arrange the next appointment, buying itself valuable days, even weeks.

What is more, some hospitals allow the clock to be paused for as little as two weeks before sending the patient back to their GP.

'Fudge the figures'

The big question being posed now is whether this is being done intentionally.

The Patients Association believes it is, suggesting managers are trying to "fudge" the figures.

Publicly, the NAO says it is not possible to confirm this.

It could be argued that much of what has been found is not outright falsification.

The rules governing waiting time recording run to over 100 pages and, as a result, there is a fair degree of ambiguity and contradiction.

So instead managers could just be pushing at the slightly fuzzy boundaries to gain an advantage - wrong maybe, but some way short of malpractice.

However, I am told that behind the scenes the watchdog has made it clear to NHS England and ministers there is a case to answer.

That is understandable. Looking at the errors in more detail, there is a clear trend towards mistakes that made the recorded waits shorter than they otherwise would have been.

Of the 167 cases with errors, 129 led to an under-recording of the wait by an average of 40 days.

Impact

So why would they do this?

The 18-week target is one of the most important and visible in the NHS.

Performance data is published monthly, so NHS chiefs can keep an eye on what is happening and patients can use the data when choosing where to be treated.

Any breaches can lead to fines. But even more worrying for managers is the prospect of ministers and NHS England bosses breathing down their necks.

Even gaining just a few days here and there can make all the difference.

With the exception of the first two weeks, when the most urgent or straightforward cases are dealt with, a patient is more likely to have an operation in the final week before the target deadline than in any of the previous ones.

In fact, you are between 30% and 60% more likely to have your treatment started in weeks 17 to 18 than at any point from week five onwards.

It is therefore hardly surprising that some of the errors identified actually led to the patient being recorded as seen within 18 weeks when they weren't.

There were 26 cases where this happened, although there were 11 where the effect was the reverse - people were recorded as being seen outside the deadline when they should have been within it.

The overall net effect was that the number recorded as seen within 18 weeks was 15 higher than it should have been - the 26 cases flipped one way minus the 11 flipped the other.

Let's have a look at what would happen if that were repeated on a national scale - although at this point it is worth noting there is an argument that it shouldn't be extrapolated like this, because the sample size was small and limited to one speciality.

Nonetheless, the impact is revealing.

The 15 cases represent 2.3% of the review group. That may sound small, but it is not insignificant.

Currently, the NHS is meeting the target by a margin of only 1%.
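To make the arithmetic concrete, here is a minimal sketch in Python of the sums implied above. The 26, 11 and 2.3% figures come from the NAO sample as reported; the size of the review group is an assumption inferred back from the article's own numbers, and the final comparison simply illustrates why a 2.3% overstatement matters when the national margin is only 1%.

```python
# A minimal sketch of the arithmetic in the article, not the NAO's own method.

flipped_inside = 26   # wrongly recorded as within 18 weeks (from the NAO sample)
flipped_outside = 11  # wrongly recorded as outside 18 weeks (from the NAO sample)

# Net number of cases recorded inside the target that shouldn't have been
net_extra_within = flipped_inside - flipped_outside
print(net_extra_within)  # 15

# Assumption: the review-group size is inferred from 15 being 2.3% of it
implied_sample = round(net_extra_within / 0.023)
print(implied_sample)  # roughly 650 records

# If a similar 2.3-point overstatement applied nationally, a headline
# position of "meeting the target by 1%" would flip to a miss
reported_margin = 1.0  # percentage points above the target threshold
adjusted_margin = reported_margin - 2.3
print(adjusted_margin)  # -1.3, i.e. below the target
```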