This is a continuation of our special AUA: IME Supporter Edition from earlier this week. Click here to view Part 1.
As a reminder, if you have an issue (professional or personal) you would like help with, click here to submit your question(s). We like offering advice and people seem to enjoy hearing our opinions (we won’t comment as to whether our advice is any good).
Now, on to the questions!
Q: In terms of participation, are you seeking the highest number of participants or evidence that the program reached the right target audience?
Each organization views this differently; however, I would venture that the majority of us are seeking evidence that the right audience participated in the education. High numbers can catch one’s attention, but our medical teams are getting more savvy. Depending on the disease state, a high number of promised learners will instantly draw skepticism and dilute the value of what we are trying to convey. When medical starts asking questions about who participated in the program, and the only information we can share is a large number rather than evidence that it was our target audience, we’ve lost their interest and our chance at earning their trust in the value of the data we’re sharing.
Q: Trying to determine what each grantor requests for outcomes/impact is challenging. Whatever happened to the Outcomes Standardization Project?
The OSP has been effective in establishing consistent definitions for terms commonly used across our industry. From what I have observed, many providers have adopted these OSP definitions, and when reviewing grant submissions, I do appreciate when groups apply them. If a provider chooses not to adopt OSP definitions, I would expect them to clearly explain how they are defining clinicians and the phases of engagement within their program.
All that said, expecting full alignment in outcomes reporting across companies is likely unrealistic (note from Derek: Yup). Supporters have emphasized for several years that outcomes reports are critical for IME teams, but there is also increasing diversity in how programs are designed and evaluated. As technology continues to influence both society and education, CME programs have evolved and innovated in ways that generate more varied and unique datasets. Because of this, providers should move beyond focusing primarily on large participation numbers or basic pre- and post-test metrics and instead think more strategically about how to demonstrate program impact. The goal should be to communicate outcomes in a way that clearly conveys the value of the program to stakeholders who may not have a background in CME, while highlighting insights that will resonate with industry colleagues.
Q: Regarding multi-support, if it takes more than a year to get sufficient funding even to meet the contingency plan, do you prefer the provider keep seeking funding or would you like the funding returned?
Communication is vital in multi-funder situations. First, I think the contingency plan should always include what can be accomplished with the amount of funding you requested from a single supporter. Then, once you receive a funding approval from one supporter, regular monthly updates on other funders’ decisions are helpful. As the proposed start date approaches, there should be a discussion of whether the committed supporter wishes to move forward with just their support, wait for further decisions, or request that the funding be returned. There are very few programs that I would be willing to wait a year for. And to my fellow supporter colleagues, I’d be interested in hearing why it is taking more than a year to make a decision.
Q: Where do the number of learners and the cost per learner rank on the list of things to look for when reviewing a proposal?
Cost per learner is not something I prioritize. What matters more is reaching the right audience and the audience-generation methodology used to do it. If the provider is leveraging lists or distribution partners, I will dig into that. Can they deliver the right audience? What mix of disciplines should I expect (e.g., if I’m expecting physicians, is the program going to give me more nurses or pharmacists)? When I see large numbers, I question their authenticity, and I try to dig further into the demographics. If the report touts large numbers, but drilling down into my specific audience reveals that only 10% of participants were my target learners, I will get frustrated. The CME provider can redeem itself, though, by performing a deeper analysis or segmentation of that small target audience. What is most important is seeing the data and the impact on the audience the supporter is interested in reaching.
Q: Is there ever any concern (Legal? Internal?) that RFPs might be seen as guiding content because of the detail provided?
This is absolutely something that our Ethics & Compliance and Legal departments are concerned about, and it is why they are involved in the review of all RFPs before they can be posted. Some companies are more conservative than others, and like most things in IME, “guidance” and “influence” are open to interpretation. This is why providers may feel that some RFPs don’t really say anything about what the supporter wants to see. In these cases, the internal compliance team likely takes a broad interpretation of what constitutes influence on content, which keeps the information in those RFPs at a very high level.
