The 2015 CIPD learning and development survey highlighted that one in seven organisations do not evaluate the majority of their L&D initiatives, while over a third limit their evaluations to the satisfaction of those who take part. One in five assess whether learning transfers into the workplace, and a small minority evaluate the wider impact on the business or on society. And at a recent conference, one expert was overheard saying that evaluation was just “too difficult, time-consuming and complicated, so we shouldn’t bother”.

So why does evaluation so often sit in the ‘too difficult’ pile?

 1. Measuring behaviour change is time-consuming

Typically, what most clients want from learning and development is a change in behaviour. This could be a manager who needs to shift their leadership style from overly directive to more engaging, and so get greater performance from their team; a junior employee who doesn’t understand customer service; a director who squashes innovation and great ideas; or a whole company that needs to change its culture and become more customer-focused.

Gathering information and measuring behaviour change is both time-consuming and subjective. Do you send out a questionnaire to colleagues three months after the learning event and ask whether they have seen changes? Or ask the individuals who took part to evaluate their own behaviour?

There is a skill in clearly identifying the original issues and the required outcomes: ‘doesn’t understand customer service’ is vague. What does this really mean? What would great customer service look like? Who is going to pull all this together? And is it then worth the time and effort?

  2. Was it the L&D that produced change?

Whilst behaviour change may lead to tangible business improvements, how much of any improved profit or increased sales is down to the training and coaching? Or could it be attributed to a sales drive in a new market or a cost-cutting exercise? Many are quick to attribute business success to every factor other than learning and development.

  3. L&D professionals are data-phobic

L&D professionals are perhaps rather unfairly referred to as data-phobic. Do they really turn away from statistics and IT-based solutions?

  4. The sponsors move on

Too often a learning need is identified, but by the time the learning has been agreed and carried out, the sponsor has moved on physically or mentally – or the company itself has changed direction – and no one presses for an evaluation, or would be interested in one if it appeared.

As a team, we’ve spent years running learning and development activities and trying out numerous approaches to evaluation. We have come up with four fundamental steps to ensure that evaluation is carried out every time, regardless of the barriers, and that it is always measured against business objectives.

Here I share these key steps, but I would love to hear what others think is the most effective approach. Of course, there are always more specific details to evaluate, but we see these as the overarching framework.

  • Clarity of objectives for each learning and development activity: this goes back to the example above. ‘Understanding customer service’ is too vague an outcome for any learning and development. What do we really mean by it? Being clear and specific about the behaviour changes and the impact on the business will ensure we evaluate the right things later on
  • Immediate response: completion of evaluation questionnaires that are tailored to the learning outcomes – a generic questionnaire across all programmes will not give the detail required.  In-room reviews are also conducted to capture what has resonated and what commitments are being taken away
  • 3 months on: communication with delegates to remind them of their commitments. Webinars and action learning sets to review progress. With the support of a coach, any barriers to applying the learning are identified and overcome. At this stage, a summary of the learning evaluation will typically be sent to clients. We then evaluate this against the original learning objectives and business impact
  • 1 year on: interviews with delegates, their line managers and other colleagues to collect qualitative and quantitative data. Often line managers will make behavioural observations, but only the delegates can say how much of this can be attributed to the learning intervention. We then pull this into an overall summary for the initiative, with a bottom-line outcome and recommendations to improve our own training and develop delegates still further

For anyone in learning and development, evaluation is a two-way process. Of course we are evaluating the impact on individuals, but we also have to evaluate what is and isn’t working – and courses that work one year may not work the next. Why? It could be anything from the profile of delegates changing – in language, culture, age or experience – to the business changing direction or delegate expectations shifting.

Evaluation has numerous benefits, and if we can demonstrate a return on investment (RoI) on one activity, it is easier to secure further investment from a board with multiple priorities.
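To take a purely illustrative example (the figures here are invented, not drawn from a real programme): if a leadership programme costs £20,000 in total and the improvements that can reasonably be attributed to it are worth £30,000, the RoI is (30,000 − 20,000) ÷ 20,000, or 50%, a figure a board can weigh directly against its other investment options.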

Which parts of evaluation have you found hardest – and what has been your most successful way of evaluating?