The training world adopted this over-engineered model as a rod for its own back. Senior managers don't want all of this superfluous data; they want more convincing business arguments. It's the trainers who tell senior management that they need Kirkpatrick, not the other way round.
All the evidence points towards Levels 3 and 4 being rarely attempted, as almost all of the resource goes into Levels 1 and 2. Yet it is not necessary to do all four levels. Given the time and resources evaluation demands, it is better to go straight to Level 4.
Level 1 - keep 'em happy
Favourable reactions on happy sheets do not guarantee that the learners have learnt anything, so one has to be careful with these results. This data merely measures opinion. Learners can express satisfaction with a learning experience yet might still have failed to learn. For example, they may have enjoyed the experience just because the trainer told good jokes and kept them amused. Conversely, learning can occur and job performance improve even though the participants thought the training was a waste of time! Learners often learn under duress, or through experiences which, although difficult at the time, prove to be useful later. This is especially true of learning through mistakes and failure.
Evaluation is too often applied after the damage has been done: the data is gathered, but by then the cost has already been incurred. More focus on evaluation before delivery, during analysis and design, is far more likely to eliminate inefficiencies in learning.
I went to lots of brilliant comedy shows at the Edinburgh Festival this year, and was as happy as I've been all year, but can't remember a single damn joke.
Level 2 - Testing, testing
Level 2 recommends measuring the difference between pre- and post-test results, but pre-tests are often absent. End-point testing is often crude, typically testing little more than the learner's short-term memory. With no adequate reinforcement to push the knowledge into long-term memory, most of it will be forgotten, even if the learner did pass the post-test.
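Where pre-tests do exist, a raw post-test score still hides how much was actually learnt. A minimal sketch in Python of one common approach, the normalised gain (my own illustration, not anything Kirkpatrick prescribes):

def normalised_gain(pre, post, max_score=100.0):
    # Fraction of the available improvement actually achieved:
    # g = (post - pre) / (max_score - pre)
    if pre >= max_score:
        raise ValueError("pre-test score must be below the maximum")
    return (post - pre) / (max_score - pre)

print(normalised_gain(20, 80))  # 0.75 - most of the possible gain
print(normalised_gain(70, 80))  # ~0.33 - same post-test score, far less learning

Two learners with identical post-test scores can represent very different amounts of learning - exactly the information that vanishes when the pre-test is skipped.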
Level 3 - behave yourself
At this level the transfer of learning to actual performance is measured. This is complicated, time consuming and expensive and often requires the buy-in of line managers with no training background, as well as their time and effort.
Many people can speak languages and perform tasks without being able to articulate the rules they follow. Conversely, many people can articulate a set of rules well, but perform poorly at putting them into practice. This suggests that ultimately, Level three data should take precedence over Level two data.
Level 4 - does the business
This level has fewer shortcomings. The ultimate justification for spending money on training should be its impact on the business. Measuring training in relation to business outcomes is exceedingly difficult, but the difficulty of the task should not discourage efforts in this direction.
What to do?
Should you evaluate at all? Of course. But it is one thing to critique the Kirkpatrick model, quite another to come up with a credible alternative. I'd apply Occam's Razor: minimise the number of entities you need to reach your goal. Put the over-engineered, four-level Kirkpatrick model to one side, as it is costly, disruptive and statistically weak, and focus on one final quantitative and qualitative analysis.
I liked the view of Stephen Kerr, CLO at GE and then Goldman Sachs: Kirkpatrick asks all the wrong questions. The task is to create the motivation and context for good learning and knowledge sharing, not to treat learning as an auditable commodity. He would like to see Kirkpatrick consigned to the bin.
12 comments:
Even better than the rant on Gagne, Don! And arguably a tougher topic as well.
I think the reason training evaluation remains elusive is that there are perhaps too many considerations that are being combined: relevance of the training program, appropriateness of the instructional methodology and the diligence of the learner, to name just three.
I don't know whether I would really question Kirkpatrick's model (though I am not too sure that Level 1 is all that important - it harks back to training being a perk and an entertainment). However, it may be worth considering training evaluation in two parts, à la the Balanced Scorecard methodology that companies tend to use to monitor organizational performance: PROCESS MEASURES and OUTCOME MEASURES. Process Measures are early indicators of whether the training is moving in the right direction - Kirkpatrick's Levels 1 and 2 fit in reasonably well here. Outcome Measures indicate whether the training has led to business results - Levels 3 and 4.
Having said that, I think the key in training evaluation (especially in the Outcome Measures) is not so much the "what" as the "how". How do we actually measure the business impact of training? Or can we actually? Reminds me of another industry's measurement problem: "How do I know whether my advertising has worked?" But that's a different story.
Your first point about 'too many considerations' is right. Kirkpatrick (plus Phillips) is too complicated, time-consuming and costly to implement. We know this, as year after year we get data showing that the 4-level evaluations hardly ever happen. We also know that the system is awash with useless happy-sheet data. It's time to sweep this stuff aside and start anew.
I think there are three reasons trainers concentrate on the lower levels: firstly, they are much easier to measure, without 'bothering' the rest of the business; secondly, the results are easy to fix (by handing out happy sheets when everyone's in a rush to go home, when the trainer's in the room and no one wants to upset them, and by using invalid, unreliable and typically easy-to-pass assessments); and thirdly, because they are terrified of what they would discover if they measured transfer of learning or, God forbid, value for money.
Unlike you I believe that measurement at all four of these levels is likely to be useful, but only if the results are valid and reliable. As long as trainers are responsible for assessing their own results, you are unlikely to see any objective analysis, for obvious reasons - jobs are on the line.
You really like pushing buttons, Don.
I do have to disagree with this one statement however:
It's the trainers who tell senior management that they need Kirkpatrick, not the other way round.
On every project I work on, we constantly get requests for learning and effectiveness evaluations, and because of past history, even the highest-level execs look for L1, L2, and now even L3 analysis.
Matthew Nehrling
http://mlearningworld.blogspot.com/
You're right, but at the highest levels - CEO and board level - they will never have heard of Kirkpatrick. At the next level down, Kirkpatrick has simply been fossilised into the system.
HR and training lack innovation, so these things hang around for decades. They are theoretical 'fossils', representing ideas that should have become extinct long ago.
We can continue down the Kirkpatrick route and mostly fail to deliver, or we can refresh our theory and become more productive and relevant.
Other areas of business evolve and move on with their underlying theory and measurements. Sales, marketing, finance and production all work in an evolving landscape - training simply accepts what it has without reflection.
Pretty much agree, Donald. We said pretty much the same thing in some research and presentations a couple of years ago. The trouble is that the KP levels and thinking have become the lingua franca of the training community, and whilst individuals may question them, collectively the community just accepts them and continues to promote them.
I also believe this has been further magnified by the complicity of ASTD with Kirkpatrick and Phillips in the US. Even "ROI = level 5" now seems to be taken for granted, even though ROI is just one format for presenting financial results. The biggest crime of all may be perpetuating a model in which the training industry thinks that "evaluation" is something that happens after the event rather than before and during it. The effect of training's adherence to KP thinking, and to systems for L1-4 evaluation, is that it actually obscures rather than highlights the real issues of impact.
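On the ROI point, a quick sketch with invented figures (mine, purely for illustration) shows the same financial result presented three ways; calling one of them "level 5" adds nothing:

benefit = 500_000  # assumed annual benefit attributed to the programme
cost = 200_000     # assumed fully loaded cost of the programme

roi_pct = (benefit - cost) / cost * 100  # 150.0 - ROI as a percentage
bcr = benefit / cost                     # 2.5  - benefit-cost ratio
payback_months = cost / (benefit / 12)   # 4.8  - months to break even
print(roi_pct, bcr, payback_months)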
I would much rather trainers focused on understanding clearly what impact they are trying to achieve, whether it is a "learning" issue at all (and if so, in what way), how to design and construct solutions that make that impact, and how to deliver them so that they do make that impact. If you do this simply and clearly, how to measure whether you are successful becomes much more obvious.
David
P.S. One other quick thought on L1. Why is the training industry the only one in the world that thinks it needs a detailed customer service questionnaire filled out by every customer on every transaction? I do see the need for business functions to understand their customer satisfaction and to encourage feedback, particularly on critical or challenging things. I do not understand why training thinks it is valuable to force it for every transaction. Ever hear of "sampling", guys!
Very succinct critique. Your final point about the madness of gathering data on EVERY transaction from EVERY person shows the poverty of the KP approach (to be fair he never personally advocated this).
If this were any other area of business we'd be ridiculed for 'statistical lunacy'.
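The sampling arithmetic is neither new nor difficult. A rough sketch in Python using the standard survey sample-size formula (ordinary statistics, nothing to do with Kirkpatrick; the 10,000-learner population is an invented example):

import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    # Responses needed to estimate a satisfaction proportion to within
    # the given margin at 95% confidence, with finite-population correction.
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(10_000))  # 370

Around 370 responses from 10,000 learners gives a satisfaction estimate to within +/-5%. A form on every seat after every course adds cost, not insight.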
Come on, the reason why training departments collect information on every event has nothing to do with evaluation and everything to do with checking the reaction your trainers are creating. In other words, it's a management tool.
How effective is this? I'm not sure. When I was running a 17-room training centre in London, I noted the 3L effect (lunch-loos-liking). If there was something wrong with the lunch or the loos, the scores for the trainer were marked down, even if it was the same trainer, on the same course, in the same room. And no, it wasn't just that trainer having a bad day - all classroom scores were marked down.
So - collect 100% evaluations as a management tool if you like, but take the results with a pinch of salt.
I couldn't agree more!
The reason evaluation efforts get stuck at level 1 or level 2 is that the real objectives of learning haven't been considered at the outset.
Then trainers scrabble about needing to prove the worth of their training.
By focussing on inputs rather than outcomes, evaluation as it is done in practice adds very little.
Let me offer another view: the reason training never gets to Level 4 is that trainers don't own the yardstick. If a training manager claimed to have achieved demonstrable business results, no senior manager would take him at his word. The four levels go from the unimportant to the impossible.
jay
Say what you like about the Four Levels, but everybody knows 'em. In that light, I believe they provide a context - even if it's just to have a rant at them! You can of course see the attraction of happy sheets and summative assessments: for training professionals with high demands on their time and skills, they provide vaguely meaningful data for sponsors or managers who need to enhance their team's or group's productivity and are pressuring us to show them "results" (whatever that means).
Have you ever tried to explain social-constructivist learning paradigms, or mind tools, or the aggregated benefits of social learning to an ignorant Regional Sales Manager who in all probability cares more about how shiny their wristwatch is, and has a narcissistic obsession with their next bonus, than about the organisational development potential of a long-term staff learning plan?
Levels 1 & 2 are for playing these idio... er, I mean positive and engaged co-workers at their own game: they load their unimaginatively tedious PPT presentations with pie charts, so send them an L1 set of metrics and they're in their element.
Do Levels 1 & 2 have any great utility in achieving long-, medium- or even short-term learning objectives and organisational goals? No. Feck all, in fact (to use my local vernacular).
What's to be done? Has anyone originated a better methodology in the last 50-odd years to evaluate the "intangible", "more than dollars and cents" outcomes alluded to in Levels 3 and 4? If they had, we'd be using it now, in much the same way that valves (vacuum tubes, for those of you in North America) were superseded by transistors and PCBs.
Don,
Several years ago I wrote an article for my Creative Training Techniques newsletter suggesting that we needed to turn Kirkpatrick upside down.
I served on ASTD's Board of Directors when Don was president in the early 80s and I count him as a friend.
Instead of eliminating his four levels, which have a lot of traction, my suggestion is to acknowledge them -- but start and focus on level four with your client by having a pain conversation.
Most of the difficulty with level four is that we try to do it academically. As training and performance professionals we are not doing research for a dissertation. We are trying to get results from the interventions we implement - the results that our clients are looking for.
The problem is that our clients often don't know what to ask for or to look for. Thus the need for a pain conversation.
What a pain conversation basically does is to help our client focus not on a solution, but on identifying the problem and the cost or consequence of the problem.
I'm running out of space, but I'll give a quick example:
A potential client called asking for a training program to improve their new teller orientation and training. Instead of providing that we asked, "Why?"
The response was that half the tellers quit after the training but before ever working as a teller.
Instead of simply responding with the requested training we helped the client dig a little deeper and we found:
1. The cost of hiring and training a teller was $15,000.
2. There were 40 tellers in a class, three times a year.
So, with half of each class of 40 quitting, the cost (or pain) from this was $15K x 20 x 3 = $900,000 a year.
3. Transaction errors among 240 tellers in 13 branches totaled 1,200 per month. No one had ever quantified what this cost. The senior team came up with a cost of $50 per error x 1,200 per month x 12 months = $720,000 per year.
We now have a $1.6 million per year pain. We had asked the CEO to be part of the one-day consult for the first hour -- he stayed all day.
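(For anyone who wants the arithmetic laid out, here it is as a quick sketch, using only the figures above:)

cost_per_hire = 15_000   # hiring and training one teller
quitters = 20            # half of each 40-teller class
classes_per_year = 3
turnover_pain = cost_per_hire * quitters * classes_per_year  # $900,000

errors_per_month = 1_200
cost_per_error = 50
error_pain = cost_per_error * errors_per_month * 12  # $720,000

print(turnover_pain + error_pain)  # 1,620,000 - the $1.6 million pain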
We ended up with a different project than simply delivering a two day training program.
Within a month we discovered that over half the people starting the training never intended to be tellers. They were only there until they found something better. So we had a selection problem, not a training problem.
Within a month we uncovered the job factors of being a teller that caused the other half of the turnover during the training. We moved those things to the first week of the training so that the turnover accelerated -- they quit at the end of the first week of training -- not week five.
Within six months we helped redesign the program using our participant-centered techniques to reduce training time from five weeks to three.
Within six months we identified what was causing the transaction errors on the job and put in place team strategies and accountability strategies that reduced transaction errors from 1,200 per month to 240 per month.
Within 18 months the bank was receiving a $1.2 million annual return on a one time investment of $350K.
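(Again as a quick sketch, same figures as above - simple payback arithmetic, nothing more:)

annual_return = 1_200_000  # recurring annual benefit
investment = 350_000       # one-time cost
payback_months = investment / (annual_return / 12)                # 3.5 months to break even
first_year_roi = (annual_return - investment) / investment * 100  # ~243% in year one
print(payback_months, first_year_roi)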
The CEO was willing to spend $20K on training, but willingly spent $350K on solutions. Can we prove that the things I just described were what made the difference in the metrics? No, not at an academic level. But from the viewpoint of the CEO - because we started with a pain conversation in which he clearly identified the costs associated with the pain, not just the pain itself - he would say without reservation that it was our recommendations, and our help in implementing those recommendations, that caused the dramatic shifts in the metrics. And in the world of business, that's what counts.
Kirkpatrick has always said of Level Four -- look for evidence, not proof. He just never said "how". The pain conversation is at least one "how to" that works well for us.
I hope your readers find this contribution to your blog useful.