Why is Learning and Development so hard to evaluate?

The 2015 CIPD learning and development survey highlighted that one in seven organisations do not evaluate the majority of their L&D initiatives – and over a third limit their evaluations to the satisfaction of those who take part. Only one in five assess the transfer of learning into the workplace, and a small minority evaluate the wider impact on the business or society. And at a recent conference, one expert was overheard saying that evaluation was just “too difficult, time-consuming and complicated, so we shouldn’t bother”.

So why does evaluation often sit in the too difficult pile? 

1. Measuring behaviour change is time-consuming

Typically, what most clients want from learning and development is a change in behaviour. This could be a manager who needs to shift their leadership style from overly directive to more engaging, and so get greater performance from their team; a junior employee who doesn’t understand customer service; a director who squashes innovation and great ideas; or a whole company that needs to change its culture and become more customer-focused.

Gathering information and measuring behaviour change is both time-consuming and subjective. Do you send a questionnaire to colleagues three months after the learning event and ask whether they have seen changes? Or ask the individuals who were developed to evaluate their own behaviours?

There is a skill in clearly identifying the original issues and required outcomes – ‘doesn’t understand customer service’ is vague. What does this really mean? What would great customer service look like? Who is going to pull all this together? And was it worth the time and effort?

2. Was it the L&D that produced change?

Whilst behaviour change may lead to tangible business improvements, how much of any improved profit or increased sales is down to the training and coaching? Or could it be attributed to a sales drive in a new market or a cost-cutting exercise? Many are quick to attribute business success to every factor other than learning and development.

3. L&D professionals are data-phobic

L&D professionals are sometimes – perhaps rather unfairly – referred to as data-phobic. Do they turn away from statistics and IT-based solutions?

4. The sponsors move on

Too often a learning need is identified, but by the time the learning has been agreed and carried out, the sponsor has moved on physically or mentally – or the company itself has changed direction – and no-one presses for the evaluation or would be interested if it appeared.

As a team, we’ve spent years running learning and development activities and trying out numerous ways of evaluating them. We have come up with four fundamental steps to ensure that evaluation is carried out every time, regardless of the barriers, and that it is always measured against business objectives.

Here I share these key steps, but I would love to hear what others think is most effective. Of course, there are always more specific details to evaluate, but we see these as the over-arching framework.

For anyone in learning and development, evaluation is a two-way process. Of course we are evaluating the impact on individuals, but we also have to evaluate what is and isn’t working – and courses that work one year may not work the next. Why? It could be anything from a change in the profile of delegates – their language, culture, age or experience – to the business changing direction or delegate expectations shifting.

Evaluation has numerous benefits: if we can demonstrate return on investment (RoI) on one activity, it is easier to secure further investment from a board with multiple priorities.

Which parts of evaluation have you found hardest – and what has been your most successful way of evaluating?