Wednesday, May 30, 2012

Raspberry Pi: 7 reasons why it won’t work

I can remember that era, when there was a buzz around the Spectrum, BBC Micro and C64. You were faced with nothing more than a command prompt and off you went. That was then, this is now. What strikes me about the Raspberry Pi initiative is the fact that it is a very British idea - good name but a bit rubbish. It seems to be premised on the idea of the nostalgic amateur tinkering about in his shed with an invention that serves no real purpose. Whenever you ask hard questions of the project, things get vague. Why do you actually need a new piece of hardware when we’ve all got computers? Why hardware and not software? How do you actually learn with this thing?
To my mind there are lots of reasons why the Raspberry Pi looks worthy but is wrong-headed, and why it is likely to fail in its goal of reinvigorating interest in coding.
1. Amateurishness. It became apparent at the launch that this was more Sinclair C5 than Sinclair Spectrum. The website crashed, the hardware was faulty and orders were delayed. The deliberate anti-design ethos is carried to ridiculous extremes: it looks like something ripped out of the back of an old telly.
2. Nostalgia. It’s clear that this is an attempt to resurrect the idea of amateur coders, a golden age of back-bedroom self-starters. Sorry, those days are gone. The last thing we need is a thinly disguised BBC Micro.
3. Lack of realism. Software and hardware are much more diverse now and the competition is fierce. Coding can be, and is, bought down a line, in countries where labour is much cheaper. We need a structured approach to the serious acquisition of relevant skills, not tinkering.
4. Hardware fixation. The world is full of cheap, fast, powerful and portable hardware. It’s the ‘software’, stupid. What’s needed is software, not more hardware. In fact, there are some brilliant games tools and app-creation tools out there. Get kids to use the tools, not buy an empty toolbox. It’s like teaching maths with just a calculator.
5. Learning ignored. It’s clear that the team don’t understand the learning process. How do you get started with this thing? It’s also a mistake to start with the heavyweight challenge of low-level coding. No real thought has gone into how coding will be taught using the device, as there are no quality learning materials and teachers are ill-equipped to handle the device in schools.
6. Wrong target audience. Interest has largely been from ageing men who love to tinker. That’s because the unplanned marketing echoed around this world and never really got out to the intended audience - youngsters.
7. Not cool. To succeed with this young audience you have to create a sense of urgency by being cool. You’re up against Apple and mobile manufacturers. Ian Livingstone’s description of the device as the BBC Nano is laughable. Sorry, but it looks crap, not cool.
If only those pesky kids would do what our generation did back in the day and get down and dirty with the hardware, we’ll be the leading software development country, again. Wrong. We got so bogged down with the BBC Micro and BBC Basic that the world raced by us in the fast lane with proper hardware and software. The primary problem is the lack of entrepreneurial spirit and business skills, not coding skills. That, in many ways, is exemplified by the project itself. It’s a geekfest.

Friday, May 04, 2012

Kirkpatrick 4-levels of evaluation: Happy sheets? Surely past its sell-by date?

Kirkpatrick has for decades been the only game in town in the evaluation of training, although hardly known in education. In his early Techniques for evaluating training programmes (1959) and Evaluating training programmes: The four levels (1994), he proposed a standard approach to the evaluation of training that became a de facto standard. It is a simple and sensible schema but has it stood the test of time?
Four levels of evaluation
Level 1 Reaction
At reaction level one asks learners, usually through ‘happy sheets’, to comment on the adequacy of the training, the approach and perceived relevance. The goal at this stage is simply to identify glaring problems. It is not to determine whether the training worked.
Level 2 Learning
The learning level is more formal, requiring a pre- and post-test. This allows you to identify those who had existing knowledge, as well as those at the end who missed key learning points. It is designed to determine whether the learners actually acquired the identified knowledge and skills.
Level 3 Behaviour
At the behavioural level, you measure the transfer of the learning to the job. This may need a mix of questionnaires and interviews with the learners, their peers and their managers. Observation of the trainee on the job is also often necessary. It can include an immediate evaluation after the training and a follow-up after a couple of months.
Level 4 Results
The results level looks at improvement in the organisation. This can take the form of a return on investment (ROI) evaluation. The costs, benefits and payback period are fully evaluated in relation to the training deliverables.
JJ Phillips has argued for the addition of a separate, fifth, “Return on Investment (ROI)” level, which is essentially about comparing the fourth level of the standard model to the overall costs of training. However, ROI is not really a separate level, as it can be included in Level 4. Kaufman has argued that it is merely another internal measure and that, if there were a fifth level, it should be external validation from clients, customers and society.
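The ROI arithmetic behind Level 4 (and Phillips’s fifth level) is itself straightforward. A minimal sketch in Python, with hypothetical figures purely for illustration:

```python
def roi_percent(benefits, costs):
    """Classic training ROI: net benefits as a percentage of programme costs."""
    return (benefits - costs) / costs * 100

def payback_months(costs, monthly_benefit):
    """Months until cumulative benefits cover the up-front cost."""
    return costs / monthly_benefit

# Hypothetical: a £50,000 programme credited with £80,000 of benefits over a year
costs, benefits = 50_000, 80_000
print(f"ROI: {roi_percent(benefits, costs):.0f}%")
print(f"Payback: {payback_months(costs, benefits / 12):.1f} months")
```

The hard part, of course, is not this calculation but isolating which benefits can credibly be attributed to the training at all.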
Level 1 - keep 'em happy
Traci Sitzmann’s meta-studies (68,245 trainees, 354 research reports) ask ‘Do satisfied students learn more than dissatisfied students?’ and ‘Are self-assessments of knowledge accurate?’ Self-assessment is only moderately related to learning. Self-assessment captures motivation and satisfaction, not actual knowledge levels. She recommends that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.
So favourable reactions on happy sheets do not guarantee that the learners have learnt anything; one has to be careful with these results. This data merely measures opinion.
Learners can be happy and stupid. One can express satisfaction with a learning experience yet still have failed to learn. For example, you may have enjoyed the experience just because the trainer told good jokes and kept you amused. Conversely, learning can occur and job performance improve, even though the participants thought the training was a waste of time. Learners often learn under duress, through failure or through experiences which, although difficult at the time, prove to be useful later.
Happy sheet data is often flawed as it is neither sampled nor representative. In fact, it is often a skewed sample from those that have pens, are prompted, liked or disliked the experience. In any case it is too often applied after the damage has been done. The data is gathered but by that time the cost has been incurred. More focus on evaluation prior to delivery, during analysis and design, is more likely to eliminate inefficiencies in learning.
Level 2 - Testing, testing
Level 2 recommends measuring the difference between pre- and post-test results, but pre-tests are often ignored. In addition, end-point testing is often crude, usually testing the learner’s short-term memory. With no adequate reinforcement and push into long-term memory, most of the knowledge will be forgotten, even if the learner did pass the post-test.
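To make the pre-/post-test comparison concrete, here is a minimal sketch in Python. The scores are hypothetical, and the normalised-gain formula (Hake’s gain, which measures what fraction of the available headroom each learner actually gained) is one common way to do this, not part of Kirkpatrick’s own model:

```python
def normalised_gain(pre, post, max_score=100):
    """Hake's normalised gain: (post - pre) as a fraction of the
    headroom (max_score - pre) the learner had available."""
    if pre >= max_score:
        return 0.0  # nothing left to learn on this test
    return (post - pre) / (max_score - pre)

# Hypothetical pre-/post-test scores (out of 100) for three trainees
trainees = {"A": (40, 70), "B": (80, 85), "C": (55, 55)}

for name, (pre, post) in trainees.items():
    g = normalised_gain(pre, post)
    print(f"{name}: pre={pre}, post={post}, gain={g:.2f}")
```

Note how the raw difference flatters trainee A (30 points) over B (5 points), while the normalised gain shows A used half the available headroom and B a quarter; C learnt nothing measurable. None of this, of course, tells you whether the knowledge survives beyond the post-test.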
Tests are often primitive and narrow, testing knowledge and facts, not real understanding and performance. Again, Level 2 is inappropriate for informal learning.
Level 3 – Good behaviour
At this level the transfer of learning to actual performance is measured. Many people can perform tasks without being able to articulate the rules they follow. Conversely, many people can articulate a set of rules well, but perform poorly at putting them into practice. This suggests that ultimately, Level 3 data should take precedence over Level 2 data. However, this is complicated, time consuming and expensive and often requires the buy-in of line managers with no training background, as well as their time and effort. In practice it is highly relevant but usually ignored.
Level 4 - Does the business
The ultimate justification for spending money on training should be its impact on the business. Measuring training in relation to business outcomes is exceedingly difficult. However, the difficulty of the task should, perhaps, not discourage efforts in this direction. In practice Level 4 is often ignored in favour of counting courses, attendance and pass marks.
General criticisms
First, Kirkpatrick is the first to admit that there is no research or scientific background to his theory. This is not quite true, as it is clearly steeped in the behaviourism that was current when it was written. It is summative, ignores context and ignores methods of delivery. Some therefore think Kirkpatrick asks all the wrong questions, the task is to create the motivation and context for good learning and knowledge sharing, not to treat learning as an auditable commodity. It is also totally inappropriate for informal learning.
Senior managers rarely want all four levels of data. They want more convincing business arguments. It's the training community that tells senior management that they need Kirkpatrick, not the other way round. In this sense it is over-engineered: the four linear levels are too much. All the evidence shows that Levels 3 and 4 are rarely attempted, as all of the effort and resource focuses on the easier-to-collect Levels 1 and 2. Some therefore argue that it is not necessary to do all four levels. Given the time and resources needed, and the demand from the organisation for relevant data, it is surely better to go straight to Level 4. In practice, Level 4 is rarely reached, as fear, disinterest, time, cost, disruption and low skills in statistics militate against this type of analysis.
The Kirkpatrick model can therefore be seen as often irrelevant, costly, long-winded, and statistically weak. It rarely involves sampling, and both the collection and analysis of the data is crude and often not significant. As an over-engineered, 50 year old theory, it is badly in need of an overhaul (and not just by adding another Level).
Evaluation should be done externally. The rewards to internal evaluators for producing a favourable evaluation report vastly outweigh the rewards for producing an unfavourable report. There are also lots of shorter, sharper and more relevant approaches: Brinkerhoff’s Success Case Method, Daniel Stufflebeam's CIPP Model, Robert Stake's Responsive Evaluation, Kaufman's Five Levels of Evaluation, CIRO (Context, Input, Reaction, Outcome), PERT (Program Evaluation and Review Technique), Alkin's UCLA Model, Provus's Discrepancy Model and Eisner's Connoisseurship Evaluation Model. However, Kirkpatrick is too deeply embedded in the culture of training, a culture that tends to get stuck with theories that are often 50 years, or more, old.
Evaluation is all about decisions. So it makes sense to customise to decisions and decision makers. And if one asks ‘To what problem is evaluation a solution’ one may find that it may be costs, low productivity, staff retention, customer dissatisfaction and so on. In a sense Kirkpatrick may stop relevant evaluation.
Kirkpatrick’s four levels of evaluation have soldiered on for over 50 years because, like much training theory, the model is the result of strong marketing, now by his son James Kirkpatrick, and has become fossilised in ‘train the trainer’ courses. It has no real researched or empirical background, is over-engineered and linear, and focuses too much on less relevant Level 1 and 2 data, drawing effort away from the more relevant Level 4.
Kirkpatrick, D. (1959). Techniques for evaluating training programmes.
Kirkpatrick, D. (1994). Evaluating training programmes: The four levels.
Kirkpatrick, D. and Kirkpatrick J.D. (2006). Evaluating Training Programs (3rd ed.). San Francisco, CA: Berrett-Koehler Publishers
Phillips, J. (1996). How much is the training worth? Training and Development, 50(4),20-24.
Kaufman, R. (1996). Strategic Thinking: A Guide to Identifying and Solving Problems. Arlington, VA. & Washington, D.C. Jointly published by the American Society for Training & Development and the International Society for Performance Improvement
Kaufman, R. (2000). Mega Planning: Practical Tools for Organizational Success. Thousand Oaks, CA. Sage Publications.
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.
Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (in press). Self-assessment of knowledge: An affective or cognitive learning measure? Academy of Management Learning and Education.

Thursday, May 03, 2012

Gardner Multiple Intelligences or school subjects mirrored?

Howard Gardner’s theory of multiple intelligences is opposed to the idea of intelligence being a single measurable attribute. His is a direct attack on the practice of psychometric tests and behaviourism, relying more on genetic, instinctual and evolutionary arguments to build a picture of the mind. He also disputes Piaget’s notion of fixed developmental stages, claiming that a child can be at various stages of development across different intelligences.
Evidence for intelligences
He viewed intelligence as “the capacity to solve problems or to fashion products that are valued in one or more cultural setting” (Gardner and Hatch, 1989). These criteria were identified by him as 'signs' of an intelligence: 
1. Potential isolation by brain damage.
2. The existence of idiot savants, prodigies and other exceptional individuals.
3. An identifiable core operation or set of operations.
4. A distinctive development history, along with a definable set of 'end-state' performances.
5. An evolutionary history and evolutionary plausibility.
6. Support from experimental psychological tasks.
7. Support from psychometric findings.
8. Susceptibility to encoding in a symbol system.
Multiple intelligences (8)
These criteria were used to identify a list of eight ‘intelligences’. His thoughts on what constitute intelligence have developed over time. The first two are ones that have been typically valued in schools; the next three are usually associated with the arts; and the final three are what Howard Gardner called 'personal intelligences'.
1. Linguistic: To learn, use and be sensitive to language(s).
2. Logical-mathematical: Analysis, maths, science and investigative abilities.
3. Musical: Perform, compose and appreciate music, specifically pitch, tone and rhythm.
4. Bodily-kinaesthetic: Co-ordination and use of whole or parts of body.
5. Spatial: Recognise, use and solve spatial problems both large and confined.
6. Interpersonal: Ability to read others’ intentions, motivations, desires and feelings.
7. Intrapersonal: Self-knowledge and ability to understand and use one’s inner knowledge.
8. Naturalist: Ability to draw upon the immediate environment to make judgements.
It is important to understand that these intelligences operate together and complement each other. He has described people as having blends of intelligences.
(Note that this last intelligence was added later, in 1999.)
Application of the theory
Gardner has also worked towards a full set of recommendations on the use of multiple intelligence theory in schools. The Unschooled Mind, Intelligence Reframed, and The Disciplined Mind look at how the theory may be applied by educators. This has led to a broader, more holistic view of education, one less rigid about abstract and academic learning. It demands knowledge of these intelligences among teachers, an aspirational approach to learning, more collaboration between teachers of different disciplines, better and more meaningful curriculum choices and a wider use of the arts.
John White has criticised the theory as being subjective and not validated by evidence. Rather than being derived from solid empirical evidence, Gardner seems to draw his taxonomy from broad observations. It is also not clear how this maps on to actual cognitive functions, as it depends (variably) on the learner dealing with actual content in various forms. In fact, it also bears an uncanny resemblance to the current curriculum subjects. White suggests that this is why it has been so enthusiastically adopted by teachers. 
Gardner has also been criticised for simply perpetuating the idea of ‘intelligences’, pigeon-holing students rather than exploring their potential. Again, this is a general problem with learning styles and multiple intelligences theory. It may actually thwart attempts to teach and learn skills that the student has not yet mastered, thereby doing more harm than good.
Gardner himself has been surprised and at times disappointed by the way his theory has been applied in schools, in one case as, “a mish-mash of practices…Left Right brain contrasts….learning styles….NLP, all mixed up with dazzling promiscuity”. In the US some schools have redesigned the whole curriculum, classrooms and even entire schools around the theory, which may be several steps too far. The point is to be sensitive to these intelligences, not to let them prescribe all practice. However, Project SUMIT (Schools Using Multiple Intelligences Theory) does claim to have identified real progress across the board in schools that have indeed been sensitive to Gardner’s theories.
In Gardner’s 2003 paper in the American Educational Research Association, Multiple Intelligences after Twenty Years, he states, 
“I have come to realize that once one releases an idea – “meme” – into the world, one cannot completely control its behaviour – any more than one can control those products of our genes we call children.”
Absolutely. One of the problems with Gardner’s ‘Multiple Intelligences’ was its seductiveness. A teacher could simply say, everyone’s smart, we’re all just smart in different ways. There’s a truth in this, in terms of a narrowly academic curriculum, but when adopted as ‘science’ in schools, Multiple Intelligences can be a dumbing-down, destructive force. In general people confuse the critique of single IQ scores as a measure of intelligence with Gardner’s theory, as if he were the final word on the matter. He is not.
Not neuroscience
First, teachers who quote and use the theory are unlikely to have fully understood its status and further development by Gardner himself. Few will have understood that it is not supported in the world of neuroscience, despite the perception by educators that it arose from there. Gardner’s first book, Frames of Mind: The Theory of Multiple Intelligences (1983), laid out the first version of the theory, followed 16 years later by a reformulation in Intelligence Reframed (1999), then again in Multiple Intelligences after Twenty Years (2003). Few have followed its development after 1983 or the critiques and Gardner’s subsequent distancing of the theory from brain science.
Lynn Waterhouse laid out the lack of scientific evidence for the theory in Multiple Intelligences, the Mozart Effect, and Emotional Intelligence: A Critical Review in Educational Psychologist, a paper to which Gardner felt duty bound to respond. In fact, in response to the absence of neurological evidence for his separate 'intelligence' components, Gardner had to redefine his intelligences as “composites of fine-grained neurological subprocesses but not those subprocesses themselves” (Gardner and Moran, 2006). In fact, many areas of learning such as reason, emotion, action, music, language and so on are characterised by their overlapping, dispersed and complex patterns of activity in the brain, as shown in brain scans. Islands of functional specificity are extremely rare. In short, Gardner suffers from conceptual invention and simplicity. Brain science simply does not support the theory.
Gardner himself admits that the science has yet to come, but teachers assume it’s already there and that the theory arose from the science. Big mistake. Pickering and Howard-Jones found that teachers associate multiple intelligences with neuroscience, but as Howard-Jones stated in his BECTA report, “In terms of the science, however, it seems an unhelpful simplification as no clearly defined set of capabilities arises from either the biological or psychological research”.
Gardner has strong appeal to educators looking for practical support for existing subject specialisms but there are doubts about the theory as serious experimental psychology. Many do not see his ‘intelligences’ as a comparable set of abilities as some, such as musical intelligence, do not have the same consequential impact as others. He has also been criticised for not testing his theories experimentally and failing to identify exactly why he chose his particular criteria for intelligence. What is clear, however, is that Gardner has opened up the debate and affected real practice in educational institutions around the whole person with a spread of subjects and approaches to learning. This fits teachers’ intuitive feel for the abilities of those they teach. While the theory may be rather speculative, his identified intelligences represent real dispositions, abilities, talents and potential, which many schools ignore. 
Gardner, Howard (1983; 1993) Frames of Mind: The theory of multiple intelligences, New York: Basic Books.
Gardner, Howard (1989) To Open Minds: Chinese clues to the dilemma of contemporary education, New York: Basic Books. 
Gardner, H. (1991) The Unschooled Mind: How children think and how schools should teach, New York: Basic Books.
Gardner, Howard (1999) Intelligence Reframed. Multiple intelligences for the 21st century, New York: Basic Books.
Gardner, H. (2003) "Multiple Intelligences after Twenty Years." American Educational Research Association.
Gardner, H., and Moran, S. (2006) The Science of Multiple Intelligences Theory: A Response to Lynn Waterhouse, Educational Psychologist, 41.4, 227-32.
Pickering, S.J., and Howard-Jones, P. (2007) Educators' Views on the Role of Neuroscience in Education: Findings from a Study of UK and International Perspectives, Mind, Brain and Education, 1.3, 109-13.
Waterhouse L. (2006) Multiple Intelligences, the Mozart Effect, and Emotional Intelligence: A Critical Review, Educational Psychologist, 41.4, 207-25.
White, J. (1998) Do Howard Gardner's multiple intelligences add up? London: Institute of Education, University of London.

Wednesday, May 02, 2012

Eysenck (1916-1997) Bad ass of assessment?

Binet, the man responsible for inventing the IQ (intelligence quotient) test, warned against it being seen as a sound measure of individual intelligence or as something ‘fixed’. His warnings were not heeded as education itself became fixated with the search for, and definition of, a single measure of intelligence – IQ. Hans Eysenck was the figure around whom much of the 20th-century IQ debate revolved. What is less well known is his work on personality types and his opposition to psychoanalysis and Freud in particular, explained in The Decline and Fall of the Freudian Empire.
A controversial figure, he put forward the proposition that intelligence had a hereditary component and was not wholly socially determined. Although this area is highly controversial and complex, the fact that genetic heritability has some role has become the scientific orthodoxy. What is still controversial is the definition and variability of ‘intelligence’ and the role which intelligence and other tests have in education and training. The environment has been shown to play an increasing role but the nature/nurture debate is a complex area, now a rather esoteric debate around the relevance of different statistical methods.
IQ theory has come under attack on several fronts.  Stephen Jay Gould’s 1981 book The Mismeasure of Man is only one of many that have criticised IQ research as narrow, subject to reification (turns abstract concepts into concrete realities) and linear ranking, when cognition is, in fact, a complex phenomenon. IQ research has also been criticised for repeatedly confusing correlation with cause, not only in heritability, where it is difficult to untangle nature from nurture, but also when comparing scores in tests with future achievement. Class, culture and gender may also play a role and the tests are not adjusted for these variables. Work by Howe and Eriksson and others explains extraordinary achievement as being the result of early specialisation and a focused investment in over 10,000 hours of practice and not measurable IQ.
The focus on IQ, a search for a single, unitary measure of the mind, is now seen by many as narrow and misleading. Most modern theories of mind have moved on to more sophisticated views of the mind as a set of different but interrelated cognitive abilities. More modular theories and theories of multiple intelligence have come to the fore. Sternberg’s three-part theory (analytic, creative, practical) was followed by Gardner’s eight intelligences in Frames of Mind. Goleman’s Emotional Intelligence (EQ), reflected in other more academic and well-researched work, also challenged the unitary theory of intelligence, with its emphasis on the ability to harness emotion in self-awareness, thinking, decision making and in dealing with others. It is not that IQ is the antithesis of EQ; they are merely different. The educational system in many countries is now being criticised for failing to teach this wider set of skills, which many now agree are useful in adult life.
Eysenck worked with Cyril Burt at the University of London, the man responsible for the introduction of the standardised 11+ examination in the UK, enshrined in the 1944 Butler Education Act, an examination that, incredibly, still exists in parts of the UK. Burt was subsequently discredited for publishing largely in a journal that he himself edited, falsifying not only the data upon which he based his work but also the existence of co-workers on the research.
This is just one of many standardised tests that have become common in education but many believe that tests of this type serve little useful purpose and are unnecessary, even socially divisive. On the other hand supporters of test regimes point towards the meritocratic and objective nature of tests. Some, however, argue that standard tests have led to a culture of constant summative testing, which has become a destructive force in education, demotivating and acting as an end-point and filter, rather than a useful mark of success. Narrow academic assessment has become almost an obsession in some countries, fuelled by international pressure from PISA.
Personality traits
Eysenck also contributed (with his wife) to the idea that personality can be defined in terms of psychoticism, extraversion and neuroticism. This provided the basis for the now widely respected OCEAN model proposed by Costa & McCrae: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism.
Eysenck rejected the Costa & McCrae model but in the end it has become the more persuasive theory. This well-researched area, ‘personality types’, has largely been ignored in learning, in favour of the more faddish ‘learning styles’ theory. However, it has been argued that this type of differentiation is far more useful when dealing with different types of learners.
Interestingly, when measuring IQ, the Flynn Effect, taken from military records, shows that scores have been increasing at the rate of about 3 points per decade, and there is further evidence that the rate is increasing. This was used by Steven Johnson in his book Everything Bad is Good for You to hypothesise that exposure to new media is responsible, a position with which Flynn himself now agrees. This throws open a whole debate and line of research around the benefits of new media in education and learning. Highly complex and interactive technology may be making us smarter. If true, this has huge implications for the use of technology in education and society in general.
Unfortunately, Eysenck and many other psychologists throughout the middle of the 20th century may have focused too much on narrow IQ tests. This has led to some dubious approaches to early assessment that have, to a degree, socially engineered the future educational opportunities and lives of young people. IQ theorists like Eysenck tended to focus on logical and mathematical skills, to the detriment of other abilities, leading some to conclude that education has been over-academic. This, they argue, has led to a serious skew in curricula, assessment and the funding of education, to the detriment of vocational and other skills.
Eysenck, H.J. (1967) The Biological Basis of Personality. Springfield, IL: Charles C. Thomas.
Eysenck, H.J. (1971) The IQ Argument: Race, Intelligence, and Education. New York: Library Press.
Eysenck, H.J. (1985) Decline and Fall of the Freudian Empire
Eysenck, H.J. & Eysenck, S.B.G. (1969). Personality Structure and Measurement. London: Routledge.
Gould, S. J. (1981).The mismeasure of man. New York: Norton.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.
Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.
Howe, M. J. A. (1999). Genius explained. Cambridge, U.K: Cambridge University Press.
Johnson, S. (2005). Everything bad is good for you. London: Allen Lane.
McCrae, R. R., & Costa, P. T. (2003). Personality in adulthood: A five-factor theory perspective. New York: Guilford Press.

Tuesday, May 01, 2012

Honey & Mumford All styles no substance?

If VAK became a well-marketed, viral success in education, Honey & Mumford was the viral success in adult education and training. Once again, a derivative model, this time from Kolb rather than NLP, took an experiential model and applied it to general management skills.
Four learning styles
Their learning styles were then labelled:
1. Activist – dive in and learn by doing
2. Reflector – stand back, observe, think and then act
3. Theorist – require theory, models, concepts and analysis
4. Pragmatist – experimenters who like to apply things in the real world
The learner is asked to complete an expensive, copyrighted questionnaire that diagnoses their learning style by asking what the learner does in the real workplace. Their learning style is then used to identify weaknesses that need building. To be fair, unlike the VAK evangelists, they did not fall into the trap of labelling learners, then teaching them in that style alone. The idea was not to see these qualities as fixed but to recognise your learning style while also tackling your weaknesses.
All styles no substance
Honey and Mumford’s model, although marketed heavily, and used widely in adult education and training, seems to have no serious academic validity. As a theory it does attempt to widen the trainers’ view of learning, and trainees’ view of themselves as learners. However, beyond this intuitive appeal to difference, the theory is crude, crudely applied and even when the learning styles questionnaire is applied, rarely carried through to different types of learning experience for the supposed different types of learners.
This issue has been addressed in several commissioned reports. A review of learning styles commissioned by the Association for Psychological Science examined the evidence and found it wanting: "We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all."
Frank Coffield, in research for the Learning and Skills Research Centre, found a ‘bedlam of contradictory claims’ with a ‘proliferation of concepts, instruments and strategies’. In total his team uncovered 71 competing theories. All were found ‘seriously wanting’, with ‘serious deficiencies’. Many were downright dangerous, as they ‘over-simplify, label and stereotype’.
Learning styles theories, in general, have been diagnosed as flaky and faddish. They have an intuitive appeal but, given the proliferation of these theories, with success based more on marketing than evidence, it is a largely discredited field. In practice, it tends to be a dodgy diagnosis without any real carry-through to treatment. Trainers rarely provide learning experiences that respond in any real way to the four-way schema: no sooner is the questionnaire complete than the PowerPoint is out. Given the stereotyping of learners and the dangers exposed by recent research, these theories should no longer be applied in real learning.
Honey, P. & Mumford, A. (1992). The Manual of Learning Styles, 3rd ed. Maidenhead: Peter Honey.
Honey, P. & Mumford, A. (2006). The Learning Styles Questionnaire, 80-item version. Maidenhead: Peter Honey Publications.
Coffield, F., Moseley, D., Hall, E., Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: Learning and Skills Research Centre.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9, 105-119.

Fleming VAKuous learning styles?

In education during the 1980s and 90s we saw the rise of learning theories that were weak on research but strong on marketing. Learning styles, of which there are literally dozens of competing theories that categorise types of learners, began to be promoted, but one above all won the viral battle in schools – VAK.
Fleming's VAK/VARK model
An unfortunate offspring of the pseudoscience that is NLP, Neil Fleming’s 1987 variation on VAK was the VARK learning styles model. It took from NLP the unproven proposition that we approach learning with a dominant sensory mode: visual, auditory or kinaesthetic.
1. Visual learners
2. Auditory learners
3. Kinaesthetic learners

Fleming took the existing VAK model and added Read/write. As usual it has its own learning styles questionnaire (16 questions).
Fleming claimed that learners have a clear preference for one of these styles. Visual learners prefer to learn from images such as photographs, graphs and diagrams. Auditory learners prefer listening to teachers speak, lectures, tapes and so on. Kinaesthetic learners prefer doing things, such as tactile exploration and physical experimentation. Read/write learners prefer working with text, such as lists, notes and handouts.
Despite being a crude categorisation, unresearched and taken from a field widely regarded in academic and professional psychology as bogus, this classification has been widely adopted in schools. In some cases children have been given badges with their stated V, A or K learning style and taught in separate groups. Despite serious criticism from almost every angle – government research reports, neuroscientists, educational think-tanks and actual research – this pop-psychology has become deeply rooted in education. Even government departments, quality organisations and educational authorities willingly support and publicise the theory, and ‘personalised’ learning, for many, means adopting ‘learning styles’.
Fleming’s claims (and Dunn and Dunn’s in the US) seem to be based on supposition, not researched evidence. Learning styles in their many guises have proved wrong on a number of fronts. First, the research backing the VAK scheme did not exist. According to Coffield, in a damning Government-funded report on learning styles, “Despite a large and evolving research programme, forceful claims made for impact are questionable because of limitations in many of the supporting studies and the lack of independent research on the model.” Second, the scheme is far too simple and has been heavily criticised by neuroscientists and professional psychologists as, at best, a gross simplification and, at worst, misleading and wrong. Many claim that learning is a complex and integrated process that is put in jeopardy by the practice of learning styles. Some researchers accuse teachers of pigeon-holing students, leading to stereotyping. Even worse, it may lead to impoverished learning, as the student does not build the right range of learning skills; the weaknesses may be the very things that need attention. The great danger is that we label learners and limit, rather than enhance, their educational aspirations. Guy Claxton makes this very point, regretting the use of VAK in classroom practice on the basis that it restricts learning. Stahl claims there has been an "utter failure to find that assessing children's learning styles and matching to instructional methods has any effect on their learning." Roger Schank believes that teachers are confusing ‘learning styles’ with a much stronger phenomenon, ‘personality’. He quite simply thinks that learning styles do not exist.
Despite reports funded by government, academic institutions and professional psychologists decrying learning styles theory, and VAK in particular, it persists across the learning world, promulgated by poor teacher training and ‘train the trainer’ courses. It would not be far wrong to describe it as a theoretical virus that has infected education and training on a global scale, kept alive by companies peddling CPD to teachers. Its appeal clearly lies in the intuitive idea that learners are different, which is certainly true, but there appears to be little evidence to support the idea that they can be put into these simple boxes. Learning professionals certainly need to understand the considerable differences between learners, but the debate seems to have fossilised around this caricature of a theory.
Fleming, N. D. (2001). Teaching and Learning Styles: VARK Strategies. Honolulu Community College.
Dunn, R. & Dunn, K. (1978). Teaching Students Through Their Individual Learning Styles: A Practical Approach. Reston, VA: Reston Publishing.
Dunn, R., Dunn, K., & Price, G. E. (1984). Learning Style Inventory. Lawrence, KS: Price Systems.
Coffield, F., Moseley, D., Hall, E., Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: Learning and Skills Research Centre.
Stahl, S. A. (2002). Different strokes for different folks? In L. Abbeduto (Ed.), Taking sides: Clashing on controversial issues in educational psychology (pp. 98-107). Guilford, CT, USA: McGraw-Hill.