Hello everyone, and hello summer!
We’re back (OK, I’m back – Derek is in Alaska for two weeks) from our usual midyear hiatus and are in the beginning stages of planning for CMEpalooza Fall — that’s Wednesday, October 19 if you are scoring at home. We’ll have our Fall agenda soonish. That means sometime before the last fresh corn of the season leaves your local supermarket. We like to give ourselves plenty of wiggle room with these things.
Anyway, I’m here today to write a little bit about data, or more specifically, data that makes you go, “Hmmmm.” Since the dawn of CMEpalooza, we’ve tracked traffic to our various delivery mechanisms to get a sense of what’s popular, what’s not, how our audience is growing, etc. The usual metrics. And truthfully, we’ve grown about 10-15% in general traffic to just about everything year over year. It’s been a nice, steady climb that we’ve always felt good about.
And then came this Spring, and well, something weird happened. Our YouTube viewership for a handful of sessions went crazy. Prior to this Spring’s event, our most viewed session had accumulated somewhere in the neighborhood of 2,500 views according to YouTube analytics. Derek probably has the actual number somewhere, but I’m too lazy to do research (hello, it is summertime!).
As we do following every live event, Derek and I went in to see what our YouTube numbers looked like this Spring. We can see, in real time, how many people are watching each session — that’s one of the best things about Streamyard, the platform we use for our broadcasts — so we knew our live event was well attended but in line with previous iterations. So we were quite surprised a few days later to see what our YouTube metrics looked like for two of our sessions.
Every other session from the Spring had relatively normal traffic, but these two significantly outperformed any expected metrics. I joked to some of the presenters of these sessions that perhaps their legions of college exes had been stalking them to see what they were up to. We dug a little deeper into YouTube Analytics to try to figure out what had happened. And while we found some answers there, a lot of questions remain.
From what we could tell, these two specific videos somehow became popular “recommended” suggestions for people watching other YouTube videos. For the more popular of these two CMEpalooza Spring sessions, the top linked videos were the Optimist Bahamas Live Stream; Data Exchange Podcast (Episode 123): Jack Clark; and Day 1 Conference: “The Geopolitical Impact on Talent Acquisition” (Anke Strauss, IOM). All very interesting I’m sure, but I have no idea what any of them have to do with our sessions, or what about the title, content, or audience may have triggered our inclusion in those videos’ “recommended” suggestions. Not surprisingly, the number of viewers of these two sessions who watched more than the first 30-60 seconds was quite low, a significantly lower percentage than our typical sessions.
For those of us in the CME planning world, we see these sorts of statistical anomalies from time to time. Maybe it’s pre-test data that looks a bit squirrely or something in the evaluation that has us scratching our heads. It’s often tempting to overlook the potential drivers of these data deceivers because they look so good. I mean, who wouldn’t want to be able to report that 5,732 learners accessed one of our online educational activities or that only 12.3% of learners answered a pre-test question correctly about a key variable tied to a learning objective? But usually, there is enough that looks suspicious (and sometimes, you can figure out the issue) to require the outlier data to be cast aside.
So no, in our next CMEpalooza sponsor prospectus, you won’t see us crowing that our overall audience for the Spring 2022 event was 400% greater than any other iteration. But say we did. Would that be a bald-faced lie? Technically, maybe not – I mean, the YouTube data shows what it shows. But in an industry where we rely so heavily on data to tell our outcomes stories, it’s the interpretation of the data that often matters most. So, no, we won’t pretend that thousands of people are suddenly interested in Derek’s new haircut or the insightful question from our audience at 34:52 of one of our recent sessions. We’re good, but we’re not that good.