5 Key Metrics to Elevate Your Course Effectiveness

laura@learningforgrowth.ie

A Data-Driven Approach to Creating Successful Learning Programs


1. Conversion from registration to engagement with the course

Engagement starts when the learner first becomes aware of the course, whether that is hearing about it from a friend, seeing an ad on social media, or finding it through a Google search. This metric, and the related data around it, tells you a few key things.


The first is whether your messaging about the course sets the right expectations for the learners you want to attract. In other words, does the messaging, in whatever format it reaches the potential learner, accurately capture the challenges that the training solves?


Secondly, where are the learners who become active learners on the course coming from? Usually, you’ll ask something along the lines of ‘Where did you hear about this course?’ in a pre- or post-course survey. Going deeper and understanding where the learners who stayed on the course came from is valuable. Maybe you can redirect resources from one communication channel to another. For example, if word of mouth is your best channel for active learners, consider moving a portion of the marketing budget from a less successful channel to rewards for successful referrals by past learners.
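As a rough illustration, that channel breakdown can be sketched in a few lines of Python. The survey field names ('channel', 'engaged') and the records below are invented for the example, not taken from any particular survey tool:

```python
# Illustrative sketch: conversion from registration to active engagement,
# broken down by the channel each learner reported in the survey.
registrations = [
    {"channel": "word of mouth", "engaged": True},
    {"channel": "word of mouth", "engaged": True},
    {"channel": "social media ad", "engaged": False},
    {"channel": "social media ad", "engaged": True},
    {"channel": "search", "engaged": False},
]

def conversion_by_channel(records):
    # Count registrations and engaged learners per channel,
    # then return the engaged/registered ratio for each channel.
    totals, engaged = {}, {}
    for r in records:
        ch = r["channel"]
        totals[ch] = totals.get(ch, 0) + 1
        engaged[ch] = engaged.get(ch, 0) + (1 if r["engaged"] else 0)
    return {ch: engaged[ch] / totals[ch] for ch in totals}

print(conversion_by_channel(registrations))
```

With the made-up records above, word of mouth converts every registrant while search converts none, which is exactly the kind of gap that would justify shifting budget between channels.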


2. Break down drop-off rates into ‘withdrawn’ vs ‘did not start’

While completion rates and drop-off rates are often the main indicators we use to tell whether a course is running well, understanding at what point someone left the course is crucial.


‘Did not start’ learners’ behaviour can be analysed alongside the marketing channels mentioned in the first point. Withdrawn learners, who showed up at some point to start the course and later decided to leave, are learners you had another engagement point with, and that engagement should be analysed. You could go even further and break the ‘withdrawn’ category down by percentage of the course completed, depending on the duration of your overall course. This would be overkill for a short course, but for a 9-month course, for example, it would be a good idea.


You’re looking for trends in when people broke off from the course to understand if there is a barrier of some description. These can be difficult to spot in one group or cohort but can appear clearly over time.
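A minimal sketch of that breakdown, assuming each learner record carries a progress fraction between 0 and 1 (the records and the early/late split at 50% are invented for illustration):

```python
# Illustrative sketch: split drop-offs into 'did not start' vs 'withdrawn',
# and bucket withdrawn learners by how far through the course they got.
learners = [
    {"progress": 0.0},   # never started
    {"progress": 0.1},   # withdrew early
    {"progress": 0.6},   # withdrew mid-course
    {"progress": 1.0},   # completed
]

def drop_off_breakdown(learners):
    summary = {"did_not_start": 0, "withdrawn_early": 0,
               "withdrawn_late": 0, "completed": 0}
    for learner in learners:
        p = learner["progress"]
        if p == 0.0:
            summary["did_not_start"] += 1
        elif p >= 1.0:
            summary["completed"] += 1
        elif p < 0.5:     # withdrew in the first half of the course
            summary["withdrawn_early"] += 1
        else:             # withdrew in the second half
            summary["withdrawn_late"] += 1
    return summary

print(drop_off_breakdown(learners))
```

For a long course you might use more buckets than two; for a short course, the simple did-not-start/withdrawn split is usually enough.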


3. Module-by-module engagement

Leading on from looking at withdrawn students and trends in when they leave the course, we can also look at the course module by module. It may also be the case that learners attend and engage with some modules very well but other modules have a lower engagement rate, even if the learner does come back to their usual engagement level after that point. 


Again, this can be difficult to see with smaller numbers of learners, but gathering data over time can show areas where small improvements can be made that, cumulatively, make a big difference. Each data point, and each subsequent change, flows into the next: the more people that complete the course and are happy with it, the more likely they are to tell someone else about it and encourage them to take it. Similarly, the fewer barriers there are throughout the course, however small, the fewer chances we give learners to second-guess their participation and decide to withdraw.
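To make the idea concrete, here is a small sketch that flags modules whose engagement sits noticeably below the course average. The attendance numbers and the 10-percentage-point threshold are invented for the example:

```python
# Illustrative sketch: per-module engagement rates, flagging modules that
# fall noticeably below the course average.
attendance = {
    "Module 1": 48, "Module 2": 45, "Module 3": 30,  # a possible barrier
    "Module 4": 44, "Module 5": 43,
}
enrolled = 50

rates = {m: n / enrolled for m, n in attendance.items()}
avg = sum(rates.values()) / len(rates)
# Flag modules more than 10 percentage points below the average.
flagged = [m for m, r in rates.items() if r < avg - 0.10]

print(f"average engagement: {avg:.0%}")
print("worth a closer look:", flagged)
```

A single low cohort might be noise; the same module dipping across several cohorts is the pattern worth investigating.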


4. Course completion

This is a metric that will be on everyone’s mind. But it is important to remember that it is a result of cumulative efforts from the above measures (and others) and it cannot be changed by focusing on it alone. 


As you increase the number of metrics that you are measuring and as you make subsequent changes, make sure that you change one thing at a time and monitor your course completion rate during the period of change. The duration of trialling a change to the course depends on the duration of your course and the number of learners that take part. I would aim for at least 100 learners being exposed to the change to provide you with data that you can act on.
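One way to check whether a change genuinely moved the completion rate, rather than the numbers just wobbling, is a simple two-proportion z-test. The counts below are invented, and this is a rough significance check rather than a full experimental design:

```python
import math

def completion_change_z(completed_a, total_a, completed_b, total_b):
    # Two-proportion z-test: how many standard errors apart are the
    # completion rates before (a) and after (b) the change?
    p_a, p_b = completed_a / total_a, completed_b / total_b
    pooled = (completed_a + completed_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se

# e.g. 60/100 completions before the change, 75/100 after
z = completion_change_z(60, 100, 75, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at ~95% confidence
```

This also shows why a minimum number of learners matters: with only a handful of learners on each side, even a large swing in the rate will not clear the significance bar.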


5. 3-, 6- and 12-month impact

It doesn’t have to be at exactly these intervals; the right timeline depends on your course and the change you want to see. For example, if your course is intended to help someone land a job, external factors can contribute, and tracking over 3, 6 and 12 months can be useful to see whether market factors contributed to landing (or not landing) a job. However, if your course is intended to have learners implement a new process, you should see the results much more quickly, and could look at the number of errors or the time taken to complete the process over 1, 2 or 3 weeks, for example, again depending on what the process is and how often it is completed.

The point is that training impact can be difficult to measure, especially the true impact that lasts over time and creates a real difference in people’s lives, but that doesn’t mean we shouldn’t try. The important thing is to measure over a reasonable time frame for the impact you are looking for, and to apply context to the results so that you can separate the impact of the training from what is more likely caused by other factors.
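As a simple sketch, tracking a yes/no outcome (such as "landed a job") across follow-up intervals could look like this. The follow-up responses are invented for the example:

```python
# Illustrative sketch: outcome rate for one cohort at each follow-up interval.
# Each list is the same five learners' yes/no answers at that check-in.
follow_ups = {
    "3 months":  [True, False, False, True, False],
    "6 months":  [True, True, False, True, False],
    "12 months": [True, True, True, True, False],
}

for interval, outcomes in follow_ups.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{interval}: {rate:.0%} report the outcome")
```

A rising rate across intervals is encouraging, but as the text above notes, it still needs context: the later check-ins also give external factors more time to contribute.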