
No Excuse for Not Measuring Training’s Impact

Once upon a time, it was assumed that training was simply a good thing to do, because it made sense to do it.  You did not need to measure it or weigh it against other business decisions.  The idea that it needed to be financially justified struck too many people in HR, and even budget-wielding executives, as just plain wrong.  However, that is no longer the case.  In fact, because training and trainers have failed to quantify their efforts in any business context, training is now considered a luxury rather than a necessity, and is often among the first things reduced or removed when it comes time to tighten belts.

There is no excuse for not measuring training's impact.


The Excuses

The excuses commonly heard whenever trainers have to answer questions about Return on Investment (ROI) fall along the following lines:

  1. Training intuitively makes sense.  You can’t deny that.
  2. Training is hard to measure because (it is about behavior, it takes a long time to see results, there are other factors that influence results, etc.).
  3. Training is not like choosing machinery or making a purchase.  These are our employees – we HAVE to train them or they won’t feel (loved, respected, valued, needed, etc.).
  4. I am a people person, not a finance person.  I don’t understand all that talk about costs, assets, per capita, ROI, etc.

Stop the Excuse Making

And you know what – there are elements of truth in each of these excuses, which makes measuring and evaluating training’s impact more complex than it might otherwise be.  So what?

Rather than offering excuses, we need to just start measuring.  We may not get it totally right.  We may not even be completely accurate.  We will learn as we go and we will improve.  In fact, we will train ourselves on how to do better training that impacts the organization and we will see better measures, better results, and better outcomes if we would just stop chasing “THE” answer, and settle for “AN” answer that we will refine over time.

The Basics

Any discussion on training measurement has to include the work of Professor Donald Kirkpatrick.  He outlined four levels of evaluation for training:

Level 1 – is Reaction.  What did participants or trainees FEEL about the training?  Did they like it?  Did they see how it could be used on the job?  Are they happy they attended?  Etc.

Level 2 – is Learning.  How well did trainees comprehend or master the content of the training?  Can they answer questions about the material?  Are they able to demonstrate the skills, knowledge, or abilities needed to perform the task(s) as they are trained?

Level 3 – is Behavior (transference).  Do the trainees use the skills on the job?  Is the training being applied on the job, or are trainees defaulting to a prior approach, process, or technique?  Is there uniformity and consistency across trainees, or are all “doing their own thing?”

Level 4 – is Results.  Is the impact on the business being seen?  As a result of the training, is the business more effective, efficient, profitable, or productive?

Admittedly, it is far easier to ask a participant whether they liked a training event, and to feel satisfied if the answer is positive, than it is to link training to the outputs or outcomes of the business.  However, if all we measure is the satisfaction of the trainee during the training, we run the risk of creating training events that are more entertainment than education, and less relevant to the goals of the business.

If we think that measuring the number of events, the hours spent in training, or the number of trainees who participated is meaningful, then we will receive trainings that meet those standards but may do little to advance the company’s bottom line.  We have to stop counting the efforts of the trainers and start measuring results for our customers, organizations, and trainees.

Our pursuit of the best measures has to be viewed as a goal, not a requirement.  Rather than postpone measuring until we are certain we have it all figured out, we need to start with:

  1. What are the key organizational goals, and how does training contribute to their success (or failure)?
  2. What are the relevant metrics or measures that can be used to determine success (errors/accuracy, speed, sales, etc.)?
  3. Who and what are required for training to succeed (management endorsement or sponsorship, Subject Matter Experts providing content reviews, training resources, technology, processes, etc.)?
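Even a rough, first-pass number beats excuse-making.  As a minimal sketch of the kind of “AN answer” calculation described above – all figures, names, and the benefit estimate are hypothetical, not from any real program – a basic training ROI works out like this:

```python
# Minimal sketch of a first-pass training ROI estimate.
# ROI (%) = (benefit - cost) / cost * 100
# All numbers below are hypothetical placeholders.

def training_roi(benefit: float, cost: float) -> float:
    """Return ROI as a percentage of the training cost."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return (benefit - cost) / cost * 100

# Hypothetical example: a sales course costs $20,000 to design and
# deliver, and is credited with $50,000 in incremental sales margin
# (a Level 4, Results-style measure).
cost = 20_000
benefit = 50_000

print(f"ROI: {training_roi(benefit, cost):.0f}%")  # prints "ROI: 150%"
```

The hard part, of course, is not the arithmetic but estimating the benefit figure – which is exactly why starting with the organizational goals and agreed-upon metrics listed above matters.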

Measuring training is not an exact science, and it has some constraints.  However, it is inexcusable to avoid measuring and evaluating training altogether, or to simply assume it is worthwhile (or worthless).

David Zahn