CMI & CDI: To Link Or Not To Link
Not too long ago, the effectiveness and results of a CDI program were tightly aligned with CMI increases. This is not surprising when you consider a few points:
- An effective CDI program actually does ultimately increase CMI.
- CMI is a metric that resonates well with any hospital CFO, as it corresponds to MS-DRG based reimbursement.
- CMI reflects the average severity of illness of all patients treated at an organization, and improved documentation more accurately captures that severity, more than likely increasing CMI.
So why do so many "new age" CDI subject matter experts start to get heartburn when CDI performance is measured primarily by CMI? I can only speak for myself, but here are a few of my hypotheses as to why this metric/measure causes frustration with so many:
- As quality metrics become more prevalent in healthcare, there is an increased need to capture the true severity of illness of a patient population. Sometimes this means generating a query even when it does not change the MS-DRG assignment (and thus has no effect on reimbursement), as when APR-DRG groupings are used, where every diagnosis is assigned an SOI and ROM category or weight.
- CMI can be affected by many other variables besides improved documentation alone, like:
- The patient mix that walks through the hospital doors in a given month.
- Service line expansion, such as a neurosurgery program that brings in more high relative-weight patients.
- The number of tracheostomies, ventilated patients, transplants, and other surgical procedures.
- The implementation of new technology, such as computer-assisted coding (CAC) or an EMR system.
- Coding accuracy, productivity, and adherence to coding guidelines: increased productivity demands on coders can compromise coding accuracy and efficiency.
- Changes in the relative weights assigned to DRGs from year to year: when the relative weights for some of a hospital's highest-volume DRGs go down, its CMI falls, which makes year-over-year CMI comparisons a misleading indicator of CDI program impact.
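The patient-mix point above can be made concrete with a toy calculation. CMI is simply the average MS-DRG relative weight of all discharges in a period, so a shift in case mix moves it even when documentation quality is unchanged. A minimal sketch (the relative weights and volumes below are made up for illustration; real weights come from the annual CMS IPPS tables):

```python
# Illustrative sketch: CMI = average MS-DRG relative weight of all discharges.
# Weights and volumes below are hypothetical, not actual CMS values.

def case_mix_index(discharges):
    """Average relative weight across a list of (drg, weight) discharges."""
    return sum(weight for _, weight in discharges) / len(discharges)

# Month 1: mostly medical discharges.
month1 = [("medical", 0.8)] * 90 + [("neurosurgery", 4.5)] * 10
# Month 2: identical documentation quality, but an expanded neurosurgery
# service line shifts the mix toward high relative-weight DRGs.
month2 = [("medical", 0.8)] * 80 + [("neurosurgery", 4.5)] * 20

print(round(case_mix_index(month1), 3))  # 1.17
print(round(case_mix_index(month2), 3))  # 1.54
```

Nothing about the documentation changed between the two months, yet CMI rose by more than 0.3 points — exactly the kind of fluctuation that makes CMI alone a noisy proxy for CDI performance.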
CMI alone, I think we can all agree, will not be a very good indicator of the performance of your CDI program at your hospital, nor of actual documentation performance.
So how do we get down to actual performance then?
Many suggest carving out the factors that influence fluctuations in CMI unrelated to improved documentation, and then monitoring other metrics as well: review rates, query rates, physician response rates, and physician agreement rates. These are good suggestions, but they can become complicated. It takes an astute CDI manager to correlate actual changes in documentation performance with the metrics listed above, and then to present a solid case based on the collected data versus the actual point increase or decrease in CMI.
To add to the confusion, I will play devil's advocate and speak on behalf of the physicians at the bedside. Which of those metrics is an actual representation of my documentation performance, according to industry guidelines and regulations? Does a low query rate for hard-to-deal-with orthopedic surgeons mean that they have better documentation than a responsive hospitalist? A lower review rate may simply reflect that a CDS was out for a couple of weeks. It is important to define what constitutes high-quality documentation for your program, and which component of that definition is the most impactful one to which a metric can be applied.
We at ClinIntell believe your best bet is the physician's ability to capture severity by documenting diagnoses, based on standard definitions, whenever their patients meet the clinical criteria. CMI alone won't resonate with physicians, at least not as well as it resonates with your CFO. I cannot tell you how many CMOs and even physician advisors I have had to educate on the concept of CMI and its relevance to accurately portraying severity in the chart. This is no fault of theirs, as CMI 101 is not part of the medical school curriculum.
We have to make a clear distinction between what we are trying to measure. Is it:
- CDS effectiveness and productivity?
- Overall effectiveness of the CDI program alone?
- Actual physician documentation performance?
- Physicians’ participation in the program as a whole?
From a CFO’s point of view, I believe the emphasis will be on overall CMI, and rightfully so. Any fluctuation that requires further attention should trigger an efficient root cause analysis, at which point the metrics that speak specifically to the efficiency of the CDI program should be factored in. From a CDI manager’s point of view, it is necessary to monitor metrics that show the program is running efficiently and effectively. The physician advisor should be concerned with physician participation in the program and with overall documentation performance. The individual physician should be concerned with his or her actual documentation performance, which should include striving to reduce the number of queries received over time. Of course, all these areas are interconnected.
In summary, CMI should not be completely "unlinked" from documentation performance, because an unexpected decrease in CMI could prompt leadership to analyze the CDI program's metrics. Those metrics may very well tell a story of underperformance, whether by CDSs, physicians, or both.
- Steer away from linking program efficiency to CMI alone.
- Do not use CMI to measure physician documentation performance (there are patient-mix factors out of a physician’s control that can cause significant month-to-month fluctuation).
- Clearly define, measure, and familiarize yourself with the metrics necessary to support the CDI efforts at your organization, from the CDI manager to the physician at the bedside.
In any case, regardless of the degree to which you link CMI to CDI, I think we can all agree that a CDI program is only as effective as its weakest link.