CDI comes with its own set of challenges, some unique to our field and others shared with other healthcare functions. These include articulating the value proposition in a way that resonates with providers, identifying true documentation opportunities for the health system or facility, engaging providers, and defining a program scope that strikes a reasonable balance between adequate coverage and the surplus resources that contribute to waste in healthcare.

Another challenge has traditionally contributed to many of those above, forcing even the most sophisticated programs to grow accustomed to making crucial decisions without the insight to fully support them: the lack of access to “expected” levels of CDI performance that are specific to your organization.

For CDI professionals, whether seasoned or novice, it is not difficult to trace the root cause of many high-priority initiatives back to clear, concise, and accurate clinical documentation. Why is it, then, that more than a decade into our profession, there are still organizations whose CDI functions struggle to gain traction, not only with healthcare executives, but more importantly, with providers? A closer look frequently reveals that, ironically enough, the same individuals we struggle to engage exhibit a keen interest in, and sense of responsibility toward, other initiatives for which clinical documentation quality is an underlying driver of success.

Two very popular initiatives immediately come to mind: mortality and length of stay. While we can tightly align the goals of CDI with a direct positive impact on the metrics that determine improved versus declining performance for these initiatives, in many instances we are still met with resistance when pursuing accurate and compliant clinical documentation.

There are many potential reasons for this, ranging from initial CDI approaches that required minimal provider participation and engagement, to an organizational culture that does not support and promote provider accountability in CDI. An additional point to consider, though, is the availability of expected levels of performance for those initiatives, based on your unique and ever-changing patient population.

The Value Of “Expected”

Having access to expected metrics, even beyond healthcare, presents significant value: it allows one to make informed decisions and take calculated risks. This is not surprising, since the expected value of a random variable is one of the most important concepts in probability theory. Making decisions in terms of expected values simplifies the process and helps you decide whether there is adequate reason to engage in an activity, allowing you to model risk and incorporate it into your decision making.
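As a toy illustration of the concept (the numbers below are invented, not from this article), the expected value of a discrete random variable is simply its probability-weighted average of outcomes, which is what makes it useful for weighing a decision:

```python
# Toy illustration of expected value; all figures are hypothetical.

def expected_value(outcomes):
    """E[X] = sum(p_i * x_i) over (outcome, probability) pairs."""
    return sum(p * x for x, p in outcomes)

# Hypothetical decision: an action gains 3 with probability 0.6
# and loses 1 with probability 0.4.
ev = expected_value([(3, 0.6), (-1, 0.4)])
print(round(ev, 2))  # a positive expected value supports taking the action
```

Because the expected value here is positive, the decision maker has a quantified reason to proceed, which is the kind of insight the article argues CDI programs lack.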

Consider the following example: many programs rely on and monitor CC/MCC capture rates, whether for a service line or for specific high-volume DRG groups. Decisions are often made based on observed trends in these rates, or on how they compare to national averages or certain benchmarks. These decisions may include staffing, education, increasing the scope of the program, and holding one-on-ones with specific providers or groups of providers. Without expected levels of performance for your distinct facility and patient mix, such decisions are uninformed: you cannot tell whether you are actually performing poorly, or whether you are maintaining or improving performance relative to your fluctuating patient mix.
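A minimal sketch of the idea, assuming invented DRG groups, volumes, and per-group rates (none of these figures come from the article): an "expected" aggregate capture rate can be derived as a volume-weighted average over a facility's own patient mix, and then compared with the observed rate.

```python
# Hypothetical sketch: deriving an "expected" CC/MCC capture rate from a
# facility's own patient mix instead of a flat national benchmark.
# DRG groups, discharge volumes, and per-group rates are all invented.

def expected_capture_rate(patient_mix):
    """Volume-weighted average: sum(vol_i * rate_i) / sum(vol_i)."""
    total = sum(vol for vol, _ in patient_mix.values())
    return sum(vol * rate for vol, rate in patient_mix.values()) / total

# group -> (discharge volume, expected capture rate for that group)
mix = {
    "heart failure": (120, 0.78),
    "sepsis":        (90,  0.85),
    "pneumonia":     (60,  0.62),
}

expected = expected_capture_rate(mix)
observed = 0.70  # the facility's observed aggregate capture rate
print(f"expected {expected:.1%}, observed {observed:.1%}")
# A gap below "expected" flags a real opportunity, even if the observed
# rate happens to beat a potentially irrelevant national average.
```

As the patient mix shifts (say, sepsis volume grows), the expected rate moves with it, so a stable observed rate can represent either improvement or decline depending on the mix.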

These choices can be costly, and when providers are involved, we risk losing credibility. In a scenario where expected levels of performance (not derived from benchmarking) are known, it becomes clear that more informed and beneficial decisions can be made. In addition, monitoring aggregated data and trends in the absence of expected values can lead to a false sense of security: a significant gap from the expected level may persist despite improved performance against potentially irrelevant national averages.

The limitations of lacking an “expected” go beyond uninformed decision making; your success in sharing performance data with providers can be limited too. Engaging a provider flagged as a low performer by descriptive statistics alone may not be compelling or sufficiently actionable for those who simply want to know: “How do I improve?”

With regard to mortality and length of stay initiatives, leadership has “guiding stars” in the form of expected levels of performance; they can easily monitor performance relative to those values on corporate dashboards and act when performance declines (moves away from “expected”). I would argue this is one reason executive “buy-in” for CDI can sometimes be challenging: monitoring severity reporting performance at the corporate level requires interpreting several metrics (CMI, CC/MCC capture rates, query response rates) with no clear insight into how far the organization is from expected levels of performance.

Anecdote From an Industry Expert

A colleague recently took a new position as president of a short-term acute care facility. When he wanted to take a temperature check on severity reporting at his new facility, along with potential opportunities to improve, he was presented with data spanning several metrics: CMI (compared to baseline), capture rates (down to specific DRG groups), provider response rates, provider query rates, CDI productivity metrics, and CDI program ROI in the form of CDI Net Incremental Revenue. He did not devalue these metrics, and rightfully so: they enable effective management of the program and are pertinent to specific roles in the organization. He was frustrated, however, by not knowing where they stood, where they should be, and what improvement goals could be established.

How the organization achieves these goals, while important, was of little interest to him. Knowing the expected levels of performance for CDI would allow him to monitor performance efficiently at his level, much as he does with length of stay and mortality. He believes, like most successful executives, that the key to success is putting competent leaders and managers in place and then getting out of their way. His participation is most valuable in his team's awareness that he is watching whether they are moving toward, or away from, reliable expected levels of performance for specific initiatives. I happen to agree, and historically, CDI has fallen short in this respect.

Conclusion

Our industry is evolving, becoming more sophisticated, and attracting more attention from executive leadership than ever before. It is incumbent upon us to be curious, ask the right questions, and ensure that our seat at the corporate table is not only deserved, but also crucial to the overall success of the organization. We cannot win favor and interest from our leaders if we are not even in the race. What better way to do so than by gaining insight from other initiatives and establishing expected levels of performance in CDI?

“The goal is to turn data into information, and information into insight.” -Carly Fiorina

ClinIntell

Redefining Severity Reporting

ClinIntell is the only CDI data analytics firm in the industry able to assess documentation quality at the health system, hospital, specialty, and provider levels over time. ClinIntell’s clinical condition analytics assists its clients in identifying gaps in the documentation of high-severity diagnoses specific to their patient mix, ensuring the breadth and depth of severity reporting beyond Stage 1. Accountability and an ownership mentality are promoted by the ability to share peer-to-peer documentation performance comparisons and physician-specific areas of improvement.

Connect with us on LinkedIn to stay up to date on insights, events and more!