Demonstrating Clinical Outcomes as a Care Delivery Company
Co-authored by Morgan Cheatham and Suhas Gondi
Demonstrating clinical outcomes as an early-stage care delivery company is both an art and a science, requiring a mix of cross-functional collaboration, creativity, and rigor. For the second event in an ongoing series of conversations I’ve been hosting among startup CXOs at healthcare and life sciences companies and industry leaders, we focused on “Demonstrating Clinical Outcomes as Care Delivery Companies.”
We were fortunate to gather some of the brightest minds on this topic: Dr. Will Shrank, Chief Medical Officer of Humana; Deana Bell, Principal & Consulting Actuary at Milliman; Kate Fitch, Principal & Healthcare Consultant at Milliman; and (soon-to-be Dr.) Suhas Gondi, incoming Internal Medicine Resident at Brigham & Women’s and health policy researcher at Harvard. As a fun fact, Suhas and I went to high school together!
Thirty startup CXOs and employees joined us for a tactical conversation about best practices for demonstrating clinical outcomes as an early-stage care delivery company, with a focus on payer and employer audiences. Attendees hailed from pre-seed to growth-stage companies representing >$1b of capital raised and serving tens of thousands of patients nationally, spanning oncology, serious mental illness, neurology, dermatology, and chronic pain, with a focus on underserved populations including LGBTQ+ patients and patients experiencing housing instability.
Before we dive into the minutiae, let’s align on the basics. Demonstrating clinical outcomes is an ongoing process, not a “check the box” exercise. How your company measures clinical outcomes will likely change over time, and as your organization grows, investments in outcomes validation should too. Your methodology as an early-stage business will likely differ from the gold standards utilized in academic medicine, and it’s important to be mindful of the limitations of your research approach. That being said, focusing on outcomes must begin at company inception and should be a core value of any healthcare delivery organization. Building the infrastructure necessary to collect data that informs outcomes research and validation is a pre-seed/seed stage initiative.
What’s the Goal of Demonstrating Clinical Outcomes?
To start, what are we trying to show with clinical outcomes data, and who cares about it? For many care delivery companies, there are two customers: the patient and the payer, the latter of which typically takes the form of either a health plan or an employer. In theory, patients should care about the clinical outcomes of their providers; however, gaps in health literacy and poor transparency around outcomes prevent patients from shopping for care based on outcomes alone (probably worth a separate post on this!). Employers and payers, however, wield a special power as it relates to curating the provider networks and point solutions that members can access. This piece will focus on how to articulate clinical outcomes data for employer and payer audiences. For more insight into how to contract with payers, check out this article based on a prior event focused on that topic.
So, the goal of demonstrating robust clinical outcomes is to secure commercial relationships with payers and employers. When working with payers, it’s highly unlikely you will begin any sort of engagement at scale. The early studies you conduct to demonstrate clinical outcomes can help you land a pilot – i.e., an opportunity to serve a small segment of a payer’s membership, oftentimes ranging from 50 to a few hundred members depending on the population.
Once engaged in a payer pilot, your company will collaborate with the payer to study the impact of your solution or intervention more rigorously. For any collaboration with a payer to achieve meaningful scale or success, the payer will have to invest real resources, oftentimes far more than the startup will invest, and in some cases, more than the startup has raised from venture capitalists! Keep this context in mind as you’re entering engagements with payers.
Defining Return on Investment (ROI)
Unsurprisingly, as you embark on discussions with prospective clients, payers care about your solution’s ability to demonstrate clinical or financial return on investment (ROI). Before getting there, however, most payers are actually more interested in seeing that your team has thought critically about how to measure your solution’s performance. If your proposed study design for measuring clinical outcomes during a pilot is robust, many payers will not be terribly interested in the expected ROI math (i.e., whatever model your team has developed to show that it’s going to work based on your assumptions). In fact, many payers just want to see that you’ve developed a framework to study something thoughtfully, and/or that you’ve collected leading indicators (e.g., engagement or usage rates) to inform the study.
What does ROI mean for a payer? We often focus our attention on the bottom line of the plan, alluding to Financial ROI. Financial ROI typically comes from your solution’s ability to reduce the total cost of care (TCC) and can be articulated as: for $x invested in your solution, the payer or employer saves $y. Best-in-class Financial ROI is 3:1 or greater, and is often underpinned by adjacent forms of value, such as Clinical ROI and Member Experience ROI.
Clinical ROI refers to returns generated in the form of improving a key clinical metric specific to the disease or population you are treating (e.g., a reduction in A1c levels). Though Clinical and Financial ROI often go hand in hand, some solutions will deliver value by appropriately increasing utilization and costs, for example, by closing gaps in preventive care. It’s important to remember that ROI metrics are not always driven by a reduction in care. Lastly, Member Experience ROI speaks to improvements in member engagement and experience based on your solution’s ability to provide a better service than the status quo (e.g., improvements in NPS, increases in member touchpoints, or higher member retention). Though Member Experience ROI may not bend the cost curve, payers are often interested in solutions that will serve members better overall.
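To make the Financial ROI framing concrete, here is a minimal sketch in Python. All dollar figures are hypothetical; a real analysis would derive savings from claims-based total cost of care relative to an expected baseline.

```python
# A minimal sketch of the Financial ROI framing described above.
# All figures are hypothetical; real analyses use claims-derived TCC.

def financial_roi(program_cost: float, tcc_savings: float) -> float:
    """Return dollars saved per dollar invested in the solution."""
    return tcc_savings / program_cost

# Example: a payer spends $500k on the solution, and the intervention
# population's total cost of care comes in $1.6M below the expected baseline.
roi = financial_roi(program_cost=500_000, tcc_savings=1_600_000)
print(f"Financial ROI: {roi:.1f}:1")  # 3.2:1 clears the 3:1 best-in-class bar
```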
Here are some additional thoughts on ROI in my Bessemer 10 Laws of Healthcare piece.
Things to Know Before the Payer Conversation
Payers want to partner with companies that have clinical traction, but they don’t expect your company to have everything figured out, especially at the early stage. It’s important to enter payer conversations with empathy, a beginner’s mindset, and an orientation toward execution. Whatever you do, do not oversell your clinical outcomes data, as this may shut down the conversation altogether.
During the section of the pitch where you speak to clinical outcomes, it’s essential to:
Offer a clear definition of the population your solution targets and develop a sense of what percentage of a payer’s membership this subpopulation represents (i.e., what is the prevalence of the condition(s) you serve in the commercial, Medicare, or Medicaid populations?). If your company is focused on a broad specialty area (e.g., inflammatory diseases), it’s very challenging to articulate the benefit of an intervention for an entire specialty category, and efforts to do so may be met with skepticism. Be sure to focus on populations for which your solution will have the biggest impact and go deep there first. For example, anxiety and depression may impact 30% of a population, but what percent of patients with anxiety and depression can your solution actually engage, and why? (A back-of-the-envelope sizing sketch follows this list.) If possible, it’s helpful to utilize claims data to support your assertions around prevalence and incidence. That being said, it’s often difficult as an early-stage company to access claims, so we’ll touch on how to work around this chicken-and-egg problem later in the article.
Highlight leading indicators for clinical outcomes that address member engagement or behavior change (e.g., clinical scores such as PHQ-9 or GAD-7, member enrollment and retention, sessions with the care team per week or month, etc.). This strategy can be especially helpful if you do not yet have access to claims data as you can speak to the leading indicators that you expect to manifest a specific outcome in the claims. It’s important to develop in-house infrastructure to track any leading indicators that you cite in order to maintain data provenance and lineage.
Speak to budget impact. Assume the payer’s initial reaction will be something along the lines of, “I don’t believe that your solution will save me money.” The (potentially) good news? Even if your solution doesn’t demonstrate a Financial ROI, the payer may still be interested in covering it for the reasons mentioned above such as improving the overall member experience. How much would it cost the payer to do so? For this exercise, it’s helpful to know what fee-for-service codes cover your solution and whether any bundled models exist.
Understand the status quo care pathway for the plan. What is the current member experience for the patient population covered by the plan today? Has the plan worked with any other third-party vendors in the past or developed internal solutions? How will your solution integrate into existing efforts, and how will you demonstrate that any improvements in cost, outcomes, or quality are attributable directly to your solution?
Acknowledge other organizations in the space or adjacent spaces that are similar in value proposition. Articulate your solution’s differentiation clearly, but do not oversell.
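To make the population-sizing exercise from the first point concrete, here is a back-of-the-envelope funnel in Python. Every rate below is a hypothetical assumption; in practice, you would replace the prevalence figure with claims-derived estimates and the engagement rates with your own observed data.

```python
# A back-of-the-envelope sizing funnel for the anxiety/depression example
# above. Every rate here is a hypothetical assumption to be replaced with
# claims-based prevalence and your own observed engagement data.

plan_membership = 100_000   # payer's covered lives
prevalence = 0.30           # share of members with anxiety/depression
reachable = 0.40            # share you can realistically reach (benefit design, geography)
engage_rate = 0.25          # share of reached members who enroll and stay engaged

eligible = plan_membership * prevalence
addressable = eligible * reachable
engaged = addressable * engage_rate

print(f"Eligible members:    {eligible:,.0f}")     # 30,000
print(f"Addressable members: {addressable:,.0f}")  # 12,000
print(f"Engaged members:     {engaged:,.0f}")      # 3,000
```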
Designing your Pilot Study
To address the points highlighted above, early-stage companies looking to test their product or service in a pilot study should pay careful attention to study design. At their best, pilot studies can be incredibly useful, producing results that meaningfully inform product development and help convince potential customers that the product or service might actually achieve its intended goals. Too often, however, these efforts produce studies from which there is little to learn and little reliable evidence to support sales.
Making thoughtful decisions about study design can make all the difference - for your company and for the payer’s evaluation. Here are a few important questions to ask to help guide this process:
1. What’s the goal of this pilot study?
If there is a risk that a product or service can cause harm, then the first priority of any study is to demonstrate its safety. Most care delivery companies pose little risk of harming patients (e.g., providing tailored nutrition to a specific population would be considered low risk of harm). So long as there is little risk of harm, the focus of these studies is to demonstrate early signs of efficacy of new technology-enabled services, care models, mobile applications, etc.
As we discussed in the Payer Contracting Best Practices article, there is a difference between efficacy and effectiveness: effectiveness refers to a result achieved in an average clinical or real-world environment, whereas efficacy refers to a result achieved under ideal or controlled conditions. For more in-depth reading on efficacy in digital health, check out this short paper published in February 2022, “Evidence and Efficacy in the Era of Digital Care,” authored by Dr. Will Shrank and Suhas Gondi. In addition to efficacy, some pilot studies may simply seek to show that patients or clinicians (or whoever the intended user is) actually use the product or service; in these cases, demonstrating engagement may be a primary goal. Clarifying the goals of the study is step one.
2. What is the right study design?
Your priority is to choose a study design that will best enable you to answer the specific question(s) you are asking (e.g., does my care model lower hospitalizations?) within the constraints you’re facing.
There is often a false dichotomy drawn between 1) conducting a randomized controlled trial (RCT) with a small sample size and a rigorous design, and 2) utilizing a real-world evidence approach with larger sample sizes and a longer follow-up period. This is the wrong distinction – it’s more important to delineate between causal and non-causal study types, as this speaks to the fundamental question of attribution (i.e., was your solution responsible for delivering the outcomes you’re claiming, or was it something else?).
Unfortunately, many commonly employed study designs don’t help answer the important questions that pilots should start to shed light on. For example, one of the most commonly used pilot study designs in digital health is a pre-post analysis, where a population receives an intervention and the study looks at how certain metrics (e.g., spending, outcomes, utilization) changed in the post-intervention period relative to the pre-intervention period. The challenge with this design is that it’s often impossible to know if the changes you’re observing would have happened regardless of the intervention, meaning you might erroneously attribute some change in spending to the intervention, when in reality that change would have occurred anyway.
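To see how a pre-post design can mislead, here is a toy simulation in Python. The numbers are entirely synthetic: per-member spending drifts down 10% on its own (think regression to the mean after selecting members in a high-cost year), so the pre-post comparison “finds” savings even though no intervention did anything.

```python
# A toy simulation of the pre-post pitfall: spending declines on its own
# (a secular trend / regression to the mean), so a pre-post comparison
# "finds" savings attributable to nothing. All numbers are synthetic.
import random

random.seed(42)

def annual_spend(n: int, mean: float) -> list[float]:
    return [random.gauss(mean, 1_000) for _ in range(n)]

# Both periods are drawn from a population whose spend drifts down 10%
# regardless of any intervention (e.g., members selected in a high-cost year).
pre = annual_spend(500, mean=10_000)
post = annual_spend(500, mean=9_000)  # same decline occurs with no intervention

pre_post_effect = sum(pre) / len(pre) - sum(post) / len(post)
print(f"'Savings' from pre-post design: ${pre_post_effect:,.0f} per member")
# ~$1,000/member of apparent savings, none of it caused by the solution.
```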
Another commonly deployed yet flawed study design in digital health is the Focus Group Trial (FGT). In an FGT, a panel of clinicians retrospectively reviews a series of cases to identify patients who received a certain intervention, then judges whether each patient would have experienced a given outcome (e.g., a hospitalization or specialist visit) had they not received the intervention. The FGT is not a standardized methodology (partly due to the subjectivity of the judgments the panel must make), would likely never be accepted by a peer-reviewed journal, and would not pass the sniff test with most payers.
TLDR - it’s highly unlikely that payers will underwrite a big investment of time and resources based on Focus Group Trial data or pre-post analyses.
For these reasons, it’s critical to have a control group – a population that is as similar as possible to the group receiving the intervention – so you can compare metrics in the intervention group to a reasonable “counterfactual” scenario. A well-matched control group helps ensure you’re not mistaking secular trends for effects produced by the intervention. However, it’s essential to construct the control group properly. Randomization is the best way to create one, because it ensures the control and intervention groups are, on average, comparable. If you can’t randomize for whatever reason, you can build a “matched control” group, using claims analytics to construct a group of individuals similar to the intervention group on relevant characteristics (ideally, other members who were never offered the intervention). Given these nuances, close partnership with payers is important for robust evaluations.
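As an illustration of the matched-control idea, here is a minimal greedy nearest-neighbor match on a single risk score. A real evaluation would match on many covariates from claims (often via propensity scores), but the sketch shows the mechanic; all data below are synthetic.

```python
# A minimal sketch of building a "matched control" group when randomization
# isn't possible: for each intervention member, greedily pick the most
# similar unmatched member from the never-offered pool by risk score.
# Real matching would use many claims-derived covariates (e.g., propensity scores).

def greedy_match(treated: dict[str, float], pool: dict[str, float]) -> dict[str, str]:
    """Map each treated member to the closest unmatched control by risk score."""
    matches, available = {}, dict(pool)
    for member, score in treated.items():
        best = min(available, key=lambda c: abs(available[c] - score))
        matches[member] = best
        del available[best]  # match without replacement
    return matches

treated = {"T1": 0.82, "T2": 0.40, "T3": 0.65}           # synthetic risk scores
pool = {"C1": 0.79, "C2": 0.41, "C3": 0.90, "C4": 0.63}  # never-offered members

print(greedy_match(treated, pool))  # {'T1': 'C1', 'T2': 'C2', 'T3': 'C4'}
```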
Another common mistake is to compare individuals who engaged with the service to those who were offered it but did not engage, treating the non-engagers as the control group. This creates bias because engagers and non-engagers are fundamentally different. Patients who engage with a new mobile application to help them manage their blood pressure, for instance, tend to be more invested in their health and more likely to adopt other healthy behaviors, like exercise, than those who are offered the same app but never download or use it. This makes for an unfair comparison that favors finding a positive impact of the app when one might not actually exist, and many customers are increasingly wise to it.
As mentioned, ultimately, the goal of study design is to choose something that allows you to conclude a causal relationship between an intervention and an outcome. Randomized Controlled Trials (RCTs) aren’t the only way to demonstrate causality, but they are the gold standard. Study designs exist on a spectrum, and some (such as studies with control groups) are more likely to facilitate causal inferences than others (like pre-post studies without control groups). Which design is best will depend on your product/service, the target population, the amount of time you can follow people after the intervention, and many other factors.
3. Do I need an academic or another third-party partner?
If you can partner with an academic group or another third-party partner to help with your pilot study, they can help you think through some of these choices around study design. In some cases, they can also spearhead the analysis with appropriate funding, and help publish the results. As the bar for evidence in digital health rises, peer-reviewed publications can help a startup stand out in the increasingly competitive marketplace. Most payers won’t expect this at the early stage, but they will down the line.
4. So what about RCTs?
Yes, RCTs are considered the gold standard study design in medicine and the benefits of RCTs for digital health companies are extensive:
They allow for causal inference.
They have a higher probability of being accepted by a peer-reviewed journal (vs. the quasi-experimental approaches described above).
They are more likely to help a payer establish conviction in your solution’s outcomes.
However, the cons are many as well: RCTs are expensive and extremely time-consuming for early-stage companies. Ultimately, it’s extremely rare for payers to see startup care delivery companies with RCTs. When they do, those are generally companies that have been operating for a while and have raised significant capital, and the discussion is usually about a much larger engagement, not a pilot. For earlier-stage companies, RCTs are the exception, not the expectation.
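One concrete reason RCTs strain early-stage budgets is sample size. Here is a rough calculation using the standard two-arm formula (alpha = 0.05 two-sided, 80% power): detecting a small standardized effect requires far more members than the 50-to-few-hundred-member pilots described earlier.

```python
# A rough sample-size sketch showing why RCTs strain early-stage budgets:
# detecting a modest effect takes far more members than a typical pilot enrolls.
# Standard two-arm formula, alpha = 0.05 (two-sided), power = 0.80.

Z_ALPHA, Z_BETA = 1.96, 0.84  # standard normal quantiles for alpha/2 and power

def n_per_arm(effect_size: float) -> int:
    """Members needed per arm to detect a standardized effect (Cohen's d)."""
    return round(2 * (Z_ALPHA + Z_BETA) ** 2 / effect_size ** 2)

for d in (0.5, 0.3, 0.2):  # moderate to small effects
    print(f"d = {d}: ~{n_per_arm(d)} members per arm")
# d = 0.5: ~63 | d = 0.3: ~174 | d = 0.2: ~392 -- before any attrition
```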
The Chicken and Egg Problem with Running Studies on Claims Data
As described, running analyses with third-party claims data can be an excellent way to articulate your solution’s proposed clinical outcomes story; however, it can be extraordinarily difficult for early-stage companies that are not yet working with payers to garner access to claims – hence, we’ve found ourselves with a bit of a chicken-and-egg problem.
So what do you do if you have an unproven model, no data, and no strong sense of the cost drivers within your population? Unfortunately, there’s no simple solution, but a good place to start is establishing a strong relationship with a payer so that they feel invested in your success. The best way to do this is to make your success synonymous with theirs. Ultimately, you’re going to have to ask someone to go out on a limb – the payer has to believe that simply learning more about what you’re building will be helpful for them. In our Payer Contracting Best Practices post, we talk a bit more about how to build genuine relationships with payers to find your champion and make the most of the relationship.
Even still, let’s hammer one point home: you don’t need a payer contract to gain access to claims. Other audiences that may be more intrinsically motivated to share data with you include Accountable Care Organizations as well as integrated delivery networks (IDNs), which are health systems that operate their own health plan. Some actuaries may also have sample assets you can utilize.
Once you do have access to claims data, your goal is to establish the status quo for the population you’re serving without your solution. Identify your target population, use coding algorithms to parse the claims, and observe patterns of care, cost, and utilization to understand where your solution fits in. It is essential to cleanly define your target population and its total cost of care (TCC), and to account for any variables that could impact your results.
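Here is a minimal sketch of that baseline exercise in Python: flag the target population by diagnosis code, then compute per-member-per-month (PMPM) total cost of care. The field names, codes, and figures are illustrative; real claims layouts vary by source and require far more careful population and cost definitions.

```python
# A minimal sketch of establishing the status-quo baseline from claims:
# flag the target population by diagnosis code, then compute per-member-
# per-month (PMPM) total cost of care. Field names, codes, and amounts
# are illustrative; real claims extracts vary widely by source.
from collections import defaultdict

TARGET_DX = {"F41.1", "F33.1"}  # e.g., generalized anxiety, recurrent depression

claims = [  # (member_id, dx_code, allowed_amount)
    ("M1", "F41.1", 220.0), ("M1", "Z00.0", 95.0),
    ("M2", "F33.1", 480.0), ("M3", "I10", 130.0),
]
member_months = {"M1": 12, "M2": 12, "M3": 12}  # eligibility months per member

spend = defaultdict(float)
for member, dx, amount in claims:
    spend[member] += amount

target = {m for m, dx, _ in claims if dx in TARGET_DX}
pmpm = sum(spend[m] for m in target) / sum(member_months[m] for m in target)
print(f"Target population: {sorted(target)}, baseline PMPM: ${pmpm:.2f}")
```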
Leveraging Patient Stories
When it comes to matters of clinical outcomes, let the data tell the story, and not the other way around. But the story is also important, especially when it centers on the voice of the patient. Collect patient narratives and include these stories in your presentation to add a richness that complements robust outcomes data. In the midst of all the great work we do, we must not forget that medicine is a science that begins with a story.
If you’ve made it this far, thanks for reading! We hope you’ve learned something new about demonstrating clinical outcomes.
The next session of this recurring, yet-to-be-named healthcare and life sciences virtual discussion series will cover “The Evolving Healthcare Product Landscape: Lessons from Healthcare’s Product Leaders.” The event will be held on Thursday, June 16, 2022, from 2:30 to 4:00pm ET, co-led by Gina Kim, Chief Product Officer at Cohere Health; Ayo Omojola, SVP Product at Carbon Health; and Dhruv Vasishtha, VP Product at Firsthand. We’ll be doing a deep dive on the rapidly evolving healthcare product tech stack, sharing best practices for “buy vs. build” decisions, and talking through team building on healthcare product teams. And as always, we’ll be keeping it real. Apply to attend here!