Note: The Connecticut Media Group is not responsible for posts and comments written by non-staff members.

Assessing Assessments

Do your assessments measure performance accurately?


One common step taken by executives and well-intentioned consultants attempting to help a business meet current or future training needs is to conduct a skills assessment of those presently doing the job (job incumbents) and compare those skills against the skills the job requires.  Any of the following triggers can create the desire to conduct a skills assessment:

  1. Introduction of new technology that necessitates new skills
  2. Introduction of a new process for completing the job tasks
  3. Creation of a new job

How it is Done

Often there is an executive with some expertise in the “old” way of doing things, in the new way the job is to be completed, or in both; alternatively, a consultant offers to help the company transition from the current way of doing things to the new one.  Either way, it is left to that person to share recommendations based on experience.  Sadly, this is not often a recipe for success.  It relies too heavily on subjective perspective and too narrow a focus: the insights of a single executive or consultant, complete with that person’s biases, assumptions, and beliefs, can taint the assessment.

In some instances, as a way to overcome the personal bias of the “assessor,” surveys, interviews, or even observations of the job incumbents doing the actual work are used so that the final assessment can rest on some kind of objective analysis.


There are three primary concerns that any performance assessment should address:

  • Measurements and Standards
  • Reliability
  • Validity

Measurements and Standards

When looking at performance or behavior, it is essential to establish standards (what does “good” performance look like?  How do I distinguish “In Need of Development” from “Effective”?).  Where and when possible, the standards should be objective, quantifiable, and calibrated to identify the breaks between levels (where does “good” end and “excellent” begin?).
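One way to make the breaks between levels explicit is to write them down as hard cut points that every assessor uses. The sketch below assumes a 0–100 scoring scale, and the cut points and level names are hypothetical, not drawn from any particular assessment:

```python
# Hypothetical calibration: each tuple is (minimum score, level name).
# Ordered from highest break to lowest so the first match wins.
LEVELS = [
    (90, "Excellent"),
    (75, "Effective"),
    (0, "In Need of Development"),
]

def rate(score):
    """Map a numeric score to a rating level using explicit, calibrated breaks."""
    for cutoff, label in LEVELS:
        if score >= cutoff:
            return label

print(rate(92))  # prints Excellent
print(rate(80))  # prints Effective
```

Writing the breaks down this way forces the question "where does 'good' end?" to be answered once, up front, rather than differently by each rater.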


Reliability

Another key factor to consider is whether performance would be judged or assessed the same way (or even similarly) by multiple raters or assessors.  Even if only one rater/assessor performs the assessment, it is still critical to determine whether another assessor would give the same rating.  Otherwise, the rating may be called into question on the basis of rater bias (the rater provides a skewed assessment that would not be confirmed by others).

Reliability also refers to the SAME rater providing the same rating or assessment at different times (would the score change if the same behavior were observed a year apart?).  Behavior or performance should be judged the same no matter when the assessment occurs.
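Inter-rater agreement can be quantified. One common statistic is Cohen's kappa, which measures how often two raters agree beyond what chance alone would produce (1.0 is perfect agreement, 0 is chance-level). A minimal sketch, with hypothetical ratings and level names for illustration only:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two raters on the same subjects, corrected for chance."""
    n = len(ratings_a)
    # Proportion of subjects on which the raters actually agreed.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of five job incumbents by two assessors.
rater_a = ["Effective", "Effective", "Needs Development", "Effective", "Needs Development"]
rater_b = ["Effective", "Needs Development", "Needs Development", "Effective", "Needs Development"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # prints 0.62
```

A low kappa is a signal that the standards are not calibrated tightly enough for different assessors to apply them the same way.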


Validity

Lastly, the assessment’s results must be relevant to the business outcomes being sought.  For instance, assessing a Manager’s ability to provide feedback by noting how many times they offer a compliment versus a criticism seems valid and passes the test of relatedness.  However, if the Manager’s feedback proficiency is judged by the number of song titles they know, that is clearly not a valid measure of the skill allegedly being assessed.  Making the components of the assessment directly related to the job being performed (or to be performed) is necessary for the assessment to be a valid, useful tool for determining training needs.

Assessments need not be complex to be effective.  However, they do need to be consistent with certain best practices for them to offer the value one expects from an assessment.


David Zahn