The CMEpalooza STEPtacular Challenge Starts Today!

In his essay The Etiquette of Freedom, Gary Snyder provides the following life advice:

Practically speaking, a life that is vowed to simplicity, appropriate boldness, good humor, gratitude, unstinting work and play, and lots of walking brings us close to the actually existing world and its wholeness.

Snyder–who was friends with Wendell Berry, climbing partners with Allen Ginsberg, and the inspiration for Jack Kerouac’s The Dharma Bums–famously practiced what he preached. And while we like to think that CMEpalooza embraces a number of these principles, I’m specifically writing today to help you with the last one: lots of walking.

Today is the start of the CMEpalooza STEPtacular Challenge, sponsored once again by Talem Health. Whether you’re looking to get closer to the wholeness of the existing world by walking a lot or just need to get away from your friggin’ computer screen for 10 minutes, all are welcome to join and participate. Here are all the details you need to know.

  • The challenge begins today (Monday, April 20) and will go until the end of the day on Sunday, April 26
  • The challenge is to record at least 10,000 steps in one day. For each day during the challenge period that you record 10,000 steps, please send in a screenshot of your step-counting device with the number of recorded steps, either by email (thecmeguy@gmail.com) or text (267-666-0CME [0263]). Please include the following information:
    • Screenshot of number of steps
    • Date of steps
    • Name
    • Email
    • Physical mailing address (we need this to send you merch)
  • For each day you achieve 10,000 steps and send in a screenshot, you will be entered into a random drawing for a $50 Amazon gift card (there are five gift cards; you cannot win more than one). Winners will be drawn during the week of CMEpalooza Spring. You can send in your screenshots individually each day, or all at once at the end. It doesn’t matter to me.
  • The person who records and submits the most steps in one day wins a $250 Amazon gift card
  • Please send in your steps by 5 pm ET on Monday, April 27
  • Everyone who enters even one day will receive this sweet Yeti mug courtesy of Talem Health.

Thanks everyone and good luck!

CMEpalooza STEPtacular Challenge sponsored by:

This is Your Final Warning: CMEpalooza Pursuit Entries Due Tonight

For all of you procrastinators in our audience (you know who you are), it’s time to set aside whatever you are doing to hammer out an entry or two (or four) for our sponsor-themed CMEpalooza Pursuit (the ’80s edition) event.

Which of our sponsors has something in common with former NFL quarterback Dan Marino? Which one likes to use Top 10 lists in their activities? Which one invites you to “Just Say Yes”?

No clues from us, but just remember that the submission portal will close at 11:59 p.m. ET this evening. You can get all of the details you’ll need to enter by clicking here.

Ask Us Anything: April Edition

After we reached the bottom of the Ask Us Anything mailbag in March, we set it aside for a few weeks to see what the postman would fill it up with. Thanks to our community of contributors and their professional woes, our mailbox is overflowing again (OK, overflowing is probably an exaggeration, but it is definitely flowing). Consequently, we’re back with another installment of our advice, with some help from our friends.

Remember, if you have an issue (professional or personal) you want us to help with, click here to submit your question(s). Our advice will be worth its weight in gold, whatever that means.

Dear Derek and Scott,

Has anyone explored AI tools (e.g., Microsoft Copilot) to assist with conflict‑of‑interest mitigation—whether through drafting mitigation language, standardizing decisions, or tracking faculty disclosures longitudinally across CME activities?

Regards,

The CME Countess

SCOTT: As with all things related to accreditation, Derek and I exchanged quizzical looks before deciding to throw this out to our crackerjack team of specialists in this area. Even some of them were a bit stumped, but here are a few highlights that they shared:

I saw a spreadsheet that helped track and manage faculty disclosures longitudinally across many activities, but haven’t yet gotten the scoop on what “artifacts” were put into AI to create it. Yes, that’s the magic sauce, so this probably won’t be very satisfying!

Here are the reports:

  1. Report showing all master disclosure info.
  2. Report showing a schedule of all sessions (with event name/info), speakers, and their disclosures. Somehow, this report included “Relevant Relationships” and “Resolution Needed?”…but I don’t know what was used to determine this!
  3. The last report was a dashboard by activity showing how many relationships needed to be mitigated vs. which were already mitigated vs. which were not relevant.

And then there is this:

  1. Identifying Therapeutic Areas for Ineligible Companies: I use it to figure out therapeutic areas and drugs/devices available from known ineligible companies – fairly reliable (caveat: not great at identifying drugs/devices developed in partnership with other companies)
  2. Identifying relevant financial relationships for known ineligible companies based on course description and therapeutic areas. Will flag what “it” thinks is relevant. I go through this with a fine-tooth comb based on spot checking therapeutic areas/knowing what the areas are. IMO – this isn’t really worth doing in AI
  3. Identifying ineligible companies – not good at identifying ineligible companies. It does not look at where they are in the regulatory process. If you need a confidence boost – it’s easy to prove AI wrong here so you can celebrate outsmarting it! 😊

Or this:

The quality of AI‑generated output depends heavily on well‑designed prompts and the information provided. Even with strong prompts and reliable output, human review is required for confirmation. We never use AI as a final answer or decision‑making tool. We have several tiers of human review that we implement to ensure we get to the correct (and compliant) decisions each time.

As a related note, AI can also be useful as a drafting support tool when preparing communications about compliance decisions. Running a rough draft through AI can help clarify and appropriately frame compliance concepts for stakeholders (such as planners and project managers) who may be less familiar with accreditation requirements. This is especially helpful in situations where we anticipate questions or resistance.

DEREK: I have not explored this, so I wrote a poem instead. I think it’s pretty good.

Roses are red
Violets get sappy
I used to manage accreditation
But now I don’t and that makes me happy

Dear Derek and Scott,

When is it, if ever, appropriate to follow up with a supporter about when a grant decision may be rendered?

Hey There,

Decision Seeking Delilah

DEREK: You are being more kind with the wording of this question than is necessary. You ask a valid and appropriate question. The reason why it is a valid question–and why it is so frequently asked–is that supporters have been notoriously (how can I put this nicely…terrible? Awful? Horrendous?) unreliable in rendering grant decisions within a specified timeframe. There are sometimes valid reasons for that and some are better than others, but that’s not the intent of this question and can be addressed at another time.

As a general rule of thumb, if it is beyond the review timeframe outlined in the grants portal or in the RFP guidelines and you are still waiting for a decision on your grant application, then it is appropriate to follow up with the supporter. Supporters will not (or should not) get annoyed at you for doing so. If the decision due date has not passed, then I would advise you to wait before following up. It’s as simple as that.

A major pet peeve for supporters is when someone submits a grant application and then follows up the next week asking when a decision will be made. You would be amazed how often grant requests are submitted with a start date of less than 30 days out. Super annoying. If you have a tight turnaround time, then it’s better to first email the supporter to ask if they can accommodate a quick review, rather than spend the time submitting an application that is automatically rejected for noncompliance with time parameters.

SCOTT: My favorite is when you submit for support of a live activity (i.e., a satellite symposium) attached to a conference well within the proper time window for potential review. This may even be in response to an RFP with a hard submission and expected response deadline. But then the grant sits…and sits…and sits. It sits for so long that the conference has already happened and you forgot you ever submitted the grant proposal in the first place.

Out of the blue–let’s say a month after the conference has ended–you get a “sorry, but your grant proposal has been declined” email. You don’t say….????

(And yes, this just happened to me this week)

Dear Derek and Scott,

We recently integrated PARS into our LMS (we use OASIS). I’m wondering what operational gains (or challenges) people have experienced beyond the first year? Has the integration meaningfully reduced the burden of annual PARS reporting? Are there areas where ACCME automation is helping, or where further automation would be most valuable (e.g., learner numbers, closing expired activities)?

Me Again,

The CME Countess

SCOTT: The Countess contributed quite a few thought-provoking questions, which we mostly understood (I don’t think PARS is a golf term when used in this context, though I could be mistaken). I picked the best two to knock out in this edition of our mailbag. Of course, this forced us to lean on our crackerjack team of accreditation specialists once again (we have them on a regular retainer). Here is what they said:

My experience here is probably unique given the integration with our own suite of databases and systems, so this may or may not be useful. We were probably able to automate more items, but more complicated rules were needed. Simply building the PARS integration for both activity reporting and learner reporting is an incredible time-saver! I don’t know what OASIS offers to know the scope of what was done in the first year, but any and all automation that can be done seems valuable (literally every data point that PARS collects for activities). We are reporting learner numbers on a regular cadence, just as part of our regular data feed. When year-end comes, this data point should be updated and final. I do NOT want to automate closing of expired activities, as I do like reviewing all activity data at year-end for QC purposes and then closing everything. But maybe I’ll be ready in a year or two more for that.

I have limited experience with Path LMS (by Momentive), but their integration is still somewhat limited from what I know. I suspect all of ACCME’s technology partners are managing things differently and on different timelines (with OASIS likely being fairly advanced).

I’ll summarize the results of the many other people I approached as follows: 🤷 🤷 🤷 🤷 🤷 🤷 🤷

DEREK: Hey, did you all see that Oasis was elected to the Rock & Roll Hall of Fame on Monday? It’s true. That’s all I have to contribute to this discussion.