Some weeks ago (I don’t remember exactly when) I responded to a statement made on Twitter by Sukh Pabial. He said something along the lines of ‘we need to stop collecting scores when evaluating training’ and I disagreed with him. Some tweets later I retired to bed thinking that was that. I was wrong.
The discussion continued the next day, and last week Sukh suggested we put our respective viewpoints in blog posts to try to stir up some debate and take it from there… so here goes.
This house believes that learning evaluation should utilise both quantitative and qualitative measurement…
I am tempted to use my Mother’s reasoning – “because I said so” or use a Toby quote from ‘The West Wing’ which goes, “But not because I’m right and you’re wrong. Although I am and you are. But because I am better at this than you are.” But that’s not true either (I am not often right) so I’ve actually had to think through why this house believes…well what it says up there.
1. Blanket Statements
I have an aversion to blanket statements – especially in the context of something as difficult to do well as evaluating learning. Look at the theory (Kirkpatrick, Phillips, Brinkerhoff etc.) and you will find very little agreement on the best way to achieve this, so a blanket statement that says we should remove quant doesn't work for me.
2. What’s the need?
If you start any evaluation design with questions along the lines of 1) what do we need to know? 2) who wants to know it? and 3) how will the evaluation information be used? then I believe it will prove testing to design an evaluation that answers those three questions without any quantitative data gathering – unless the answers are 1) the thoughts of the participants, 2) me, and 3) to improve the learning experience.
3. Different strokes
People express themselves differently – thank goodness. In this instance you may have some people who are very comfortable expressing their thoughts and feelings about a particular learning experience through description and in response to open questions. The flip side, however, is those people who aren't that comfortable, or that able, to express their sentiments that way and prefer to rate their experience through numbers.
I believe that quant gives qual context. For example, imagine a qualitative response that says "It was a very interesting workshop" – what does that mean? Interesting good or interesting bad? Now imagine that combined with a Likert scale, from strongly disagree through to strongly agree, in response to "I believe this learning event will allow me to safely complete <Insert task>." Context.
I know there are some very clever methods and tools for analysing sentiment in qualitative responses. I know there are means through which you can compare answers given at different times. I also know that the day-to-day reality involves limited resources and a general absence of statistical brilliance. You want to understand how learner reactions to workshops change over time? Numbers. You want to understand how different functional populations respond to identical material? Numbers. You want to understand how different cultures respond to similar learning? Numbers.
Imagine you have an identical piece of training. This training is delivered by multiple trainers, in multiple locations and to diverse populations. Whilst qualitative data will give you insight into the impact, how the learning can be developed/improved and the activity that can be included to help embed that learning, managing the consistent delivery of an identical product will be achieved far more efficiently quantitatively.
We live in the real world. In that world we have to sing for our suppers, our budgets and the licence to commit organisational resource (yes, people) to learning. That requires demonstrating some form of success, and whilst I acknowledge there is significant power in a summary of positive quotes taken from evaluations, I am also acutely aware that numbers win arguments. Reporting and tracking numbers is far easier to do, and I would rather focus my time, energy and team on improvement than make reporting and gaining/maintaining organisational licence even more difficult than it already is.
I had planned to research this post, be able to cite journals, quote the finest minds of the day and bring their insights to bear in adding weight to this side of the debate. But instead I met some friends for beers and am writing this at 11.30pm, so you just get me and my personal insight.
Don’t misunderstand me: I have at no point dismissed qualitative measures – I love them. They provide a richness of data you will never achieve using quantitative measures alone, but for me, in my work, I can’t see a time when a learning evaluation will contain no quantitative questions.
I urge you to consider both sides of this debate (you can find the other house here) and then realise that of course this house (i.e. me) is right!!