Tuesday, June 21, 2016

Data analytics in HE – 7 examples of why the real data is ignored

Everywhere I go these days there’s a ‘Data Analytics’ initiative. They’re collecting data like kids at a wedding scatter: data at the organisational level, data at the course level, data at the student level. But it’s trainspotting and all very ‘groupthinky’. They’re gathering the wrong data on the wrong things, as the things that really matter are awkward and uncomfortable. Worse still, there’s perhaps a wilful avoidance of the prickly data, data that really could lead to decisions and change. So let’s look at seven serious areas where data should be gathered but is not.
1. University rankings data
These must go down as the most obvious example of the ‘misuse’ of data in HE.
There are many uncomfortable truths about the data con that is university rankings. It’s a case study in fraudulent data analytics. Universities play the game by being selective about which ranking table they choose, bait and switch by pretending the rankings reflect teaching quality (they don’t), and ignore the fact that these are qualitative, self-fulfilling ratings that often compare apples with oranges. Even worse, they skew strategies, spending and agendas, while feeding crude marketing campaigns. Will HE stop playing fast and loose with this data? Not a chance.
2. Data on pay differentials
Want to be ethical about data collection? That’s fair. So let’s look at salaries across the organisation and work out the differential between those at the top and the bottom. Now look at how that range has widened, not narrowed. Senior academic pay, especially that of Vice-Chancellors, is rising three times faster than that of other academics (and that’s not counting the perks). Average pay for senior staff is £82,321; average pay for non-professorial staff is £43,327. We have this data – it’s damning. Are senior academics robbing other academics blind? Yes. Will any decisions be made? No.
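The differential in those figures is trivial to compute, which is rather the point – this is back-of-an-envelope data, not analytics. A quick sketch using the averages quoted above:

```python
# Average pay figures quoted above (both in GBP)
senior = 82_321            # average senior staff pay
non_professorial = 43_327  # average non-professorial staff pay

ratio = senior / non_professorial
gap = senior - non_professorial
print(f"Senior staff earn {ratio:.2f}x the non-professorial average "
      f"(a gap of £{gap:,})")  # 1.90x, a gap of £38,994
```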
3. Data on pensions
Forget the astonishing data point that cancer researchers, among others, invested £211 million in British American Tobacco last year. The Universities Superannuation Scheme is the UK’s largest private sector scheme, with a whopping £57 billion of liabilities and an £8 billion deficit. At present, pension contributions are nowhere near filling this hole. Why? Doing so would wipe out the surpluses of many universities across the land. It’s a badly run scheme and there is little chance that the hole will be filled. The consequences, however, are severe. It is likely that costs will rise and that student fees will rise. Not good news.
4. Data on researchers as teachers
You would like to know whether researchers make good teachers? But what if the data suggests they may not? Teaching demands social skills, communication skills, the ability to hold an audience, keep to the right level, avoid cognitive overload, good pedagogic skills and the ability to deliver constructive feedback. Some contend that the personality types suited to teaching are very different and, statistically, do not largely match those of researchers. So what about the evidence? The major study is Astin’s, which examined longitudinal data on 24,847 students at 309 different institutions and the correlation between faculty orientation towards research and orientation towards students and teaching. The two were strongly negatively correlated. Student orientation was also negatively related to compensation. He concludes that “there is a significant institutional price to be paid, in terms of student development, for a very strong faculty emphasis on research” and recommends stronger leadership to rebalance the incentives and time spent on teaching v research. To be fair, academics are now largely judged on research, not teaching. Too much effort in teaching can hamper your career; it rarely enhances your prospects. The dramatic swing away from teaching towards research was tracked by Boyer, who identified a year-on-year abandonment of teaching commitment in favour of research. However, it wouldn’t matter how much data we collected on this topic; the Red Sea has a greater chance of parting than research and teaching have of being rebalanced.
5. Data on Journals
More journals, more papers, fewer readers. Good data on actual citation rates and readership are hard to come by. The famous Indiana figure of 90% of papers remaining unread has been refuted, but other sources show that, while it’s not that awful, it’s still pretty awful. There is a pretty good state-of-play paper on this, as it’s a complicated issue – but it is nevertheless worrying. We really should be concerned, as the push towards research (the data suggests much of it is second- and third-rate, since it goes unread) and away from teaching may be doing the real harm in HE. We simply don’t know, nor do we dare ask.
6. Data on lecture attendance
Let’s start with a question. Do you record (accurately and consistently) the number of students who attend lectures? If the answer is NO, don’t even pretend that you’re off the block on data analytics in teaching. A recent study from Harvard, across ten courses, showed that only 60% of students, on average, even turned up – a shock in any area of human endeavour where people have paid tens of thousands up front for high-quality teaching. Similar studies show similar results in other institutions. If you were running a restaurant and 40% of your customers were not turning up (often it’s a lot worse), you’d seriously question your ability, not only as a chef, but as a restaurant owner. Remember that students have paid for the meal, through the nose, and if they aren’t turning up, something’s wrong. So, I have the data that students are not turning up to lectures; what do I do with that data? Nothing – we’ll still call ourselves ‘lecturers’ and lecture on.
Suppose I also show you data that says students – especially those struggling, those with English as a second language, and so on – benefit from recorded lectures. You’d seriously consider recording your lectures. My point is that collecting data is unlikely to change anything, other than dig deeper holes, if you’re not asking the right questions and taking action.
7. Data on assessment
Take assessment. What data do you have on this? The grades for the occasional essay? This is not big data; it’s not even little data, it’s back-of-an-envelope data. A student submits an essay, waits weeks for a grade and some light feedback – too late. Why bother? The problem is the method of assessment. Will this change? No. What you really need at this level is constant data from regular low-stakes testing. Not only is this proven to increase retention and recall, you know where students are in terms of progress. Here’s a tougher question: do you really know how many essays submitted by students are written by others? And by others, I mean parents, or essay mills? Look at the number of these companies and their accounts. That’s interesting data. Why will this not happen? Because there are reputations at risk and pesky journalists to appease.
Conclusion

Henry Ford famously said that if he had gathered data from his customers, they’d have asked for faster horses. Actually, he said no such thing, but what do we care in education? A good quote is good data, and it captures exactly what is happening with data analytics in HE. Data is not the new oil in education. Data is the new slurry: dead pools of lifeless, inert stuff that end up in unread and unloved reports. Going for low-relevance data, in areas that are less than critical, is wilful avoidance of the real issues. It gives the illusion of problem solving and neatly avoids the hard, often detailed decisions that have to be made around cultural and fiscal change.


Friday, June 10, 2016

Has the old graphics-text-graphics-text-MCQ formula had its day? Evidence that effortful, open-response learning is much better

To what degree does contemporary online learning reflect contemporary learning theory? The old paradigm of graphic-text-MCQ is way out of line with recent (and past) learning theory, so that, no matter how much glitz, animation and graphics you produce, the fundamentals of retention and recall are ignored. On a quick walk round the Learning Technologies show this year I saw much the same as I've seen for the last 30 years, some of it worse: the same old over-engineered content that takes months to make at high cost but has low retention value.
So, over the last year or so, I’ve been working on online learning (WildFire) that largely abandons the multiple-choice question, along with the heavy graphics production and glitz, for a more stripped-down approach that focuses on effortful learning. I wanted to produce content quickly, in minutes not months, cheaply (at least 80% cheaper) and to a higher quality than existing online learning (measured by retention and recall).
Science of learning
There are good and bad ways to learn. Unfortunately, much of what feels intuitive is, in fact, wrong. The science of learning has shown that researched, counterintuitive strategies, often ignored in practice, produce optimal learning.
For example, much advice and most practice from educational institutions – re-reading, highlighting and underlining – is wasteful. In fact, these traditional techniques can be dangerous, as they give the illusion of mastery. Indeed, learners who use reading and re-reading show overconfidence in their mastery, compared to learners who take advantage of effortful learning.
Yet significant progress has been made in cognitive science research to identify more potent strategies for learning. The first strategy, mentioned as far back as Aristotle, Francis Bacon then William James, is ‘effortful’ learning. It is what the learner does that matters.
Simply reading, listening or watching, even repeating these experiences, is not enough. The learning is in the doing. The learner must be pushed to make the effort to retrieve their learning to make it stick in long-term memory. This one act is the best defence against the brain’s other propensity, identified by Ebbinghaus in 1885 – forgetting.
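Ebbinghaus’s forgetting curve is often stylised as exponential decay. A minimal sketch in Python – the functional form and the ‘stability’ parameter are illustrative modelling assumptions, not his exact fit, but they show why retrieval matters: each successful retrieval can be modelled as increasing stability, flattening the curve:

```python
import math

def retention(hours, stability=24.0):
    """Stylised forgetting curve: fraction retained after `hours`.

    `stability` stands in for memory strength; in this toy model,
    effortful retrieval would increase it, slowing forgetting."""
    return math.exp(-hours / stability)

print(round(retention(24), 2))                  # 0.37 - steep loss after a day
print(round(retention(24, stability=96.0), 2))  # 0.78 - after retrieval practice
```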
With this in mind, I wanted to design learning experiences to deliver good learning using some fundamental principles in researched learning theory:
1. Active retrieval
2. Open-response v multiple-choice
3. Typing in words

1. Active retrieval

To be specific about effortful learning: by effort I mean ‘active retrieval’, the most powerful learning strategy at your disposal. The brain, the organ that named itself, is unique in that it can test itself to see what it knows or doesn’t know. At the same time, this act of retrieval consolidates memory, and has been found to be even more powerful than the original learning experience.
Study 1 - Gates
The first solid research on retrieval was by Gates (1917), who tested children aged 8-16 on short biographies. Some simply re-read the material several times, others were told to look up and silently recite what they had read. The latter, who actively retrieved knowledge, showed better recall.
Study 2 - Spitzer
Spitzer (1939) had over 3,000 11-12 year olds read 600-word articles, then tested them at intervals over two months. The greater the gap between the original exposure and the test (retrieval), the greater the forgetting. The tests themselves seemed to halt forgetting.
Study 3 - Tulving
Tulving (1967) took this further with lists of 36 words, with repeated testing and retrieval. The retrieval led to as much learning as the original act of studying. This shifted the focus away from testing as just assessment to testing as retrieval, as an act of learning in itself.
Study 4 – Roediger
Roediger et al. (2011) studied text material covering Egypt, Mesopotamia, India and China in the real context of real classes in a real school, a middle school in Columbia, Illinois. Retrieval tests, only a few minutes long, produced a full grade-level increase on the material that had been subject to retrieval.
Study 5 – McDaniel
McDaniel (2011) did a further study on science subjects, with 16 year olds, on genetics, evolution and anatomy. Students who used retrieval quizzes scored 92% (A-) compared to 79% for those who did not. More than this, the effect of retrieval lasted: it was still present when the students were tested eight months later.
Design implications
So I’ve been designing learning as a retrieval experience, largely using open input, where you have to pull things from your memory and make a real effort to type in the missing words, given their context in a sentence. First, as the research shows, this tells you what you know, half-know or don’t know. Second, it consolidates what you know in long-term memory. Third, it encourages you to improve your performance.

2. Open-response v multiple-choice

Most online learning relies heavily on multiple-choice questions, which have become the staple of e-learning content. These have been shown to be effective – almost any type of test item is, to a degree – but less effective than open response, as they test recognition from a list, not whether the answer is actually known.
Study 1 - Duchastel and Nungester
Duchastel and Nungester (1982) found that multiple-choice tests improve your performance on recognition in subsequent multiple-choice tests and open input improves performance on recall from memory. This is called the ‘test practice effect’.
Study 2 – Kang
Kang et al. (2007) showed, with 48 undergraduates reading academic journal-quality material, that open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning.
Study 3 – McDaniel
McDaniel et al. (2007) repeated this experiment in a real course with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes.
Study 4 - Bjork
‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork, to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, to enhance learning. The Bjorks have researched this phenomenon in detail to show that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning.
Design implications
A multiple-choice question is a test of recognition from a list; it does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. Active recall develops and strengthens memory, improving the process of recall in ways that passive exposure – reading, listening and watching – does not. Active recall, pulling something out of memory, is therefore more effective in terms of future performance.
Note that multiple-choice questions remain useful where they fit the job – common misconceptions or lists, for example – which is why they are still used, just not as the dominant form of learning.

3. Typing in words

A fascinating finding by Jacoby was the precise identification of filling in the missing letters of a word as a powerful act of memory consolidation. This is called the ‘generation effect’. In other words, only a relatively small amount of retrieval effort is needed to have a powerful effect on memory.
Study 1 – Jacoby
Jacoby (1978) uncovered the fact that cramming led to short-term gains but long-term forgetting. Learners achieved high scores on the first, immediate test but forgot 50% in subsequent tests, compared to those who retrieved material, and forgot only 13%. He also showed that, in presenting word pairs, where some learners got the entire word pair, others got one word with two or more letters deleted from the interior of that word, a bit like an unfinished crossword, the simple act of filling-in-the-blanks resulted in higher retention and recall.
Study 2 – McDaniel
McDaniel et al. (1986) also pinpointed leaving out letters as a way to stimulate retrieval. Learners were asked to fill in the missing letters in fairy tales and showed significant gains in recall. The cognitive effort required to complete the terms strengthened retention and recall.
Study 3 – Hirshman & Bjork
Hirshman & Bjork (1988) got learners to type in the missing letters in words (salt-p_pp_r), which resulted in higher retention rates for conceptual pairs than the words being read on their own.
Study 4 – Richland
Richland et al. (2005) took this research into a real world environment and showed similarly positive effects. They concluded that it is the effortful engagement in the process of retrieval that leads to better recall.
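The word-fragment manipulation these studies use is easy to reproduce. A sketch – the function name and the rule of keeping the first and last letters are my own choices, in the spirit of Jacoby’s interior-letter deletions:

```python
import random

def fragment(word, n_blanks=2, seed=None):
    """Delete interior letters, Jacoby-style (e.g. 'pepper' -> 'p_pp_r').

    First and last letters are kept so the fragment stays solvable."""
    rng = random.Random(seed)
    interior = list(range(1, len(word) - 1))
    blanks = set(rng.sample(interior, min(n_blanks, len(interior))))
    return "".join("_" if i in blanks else c for i, c in enumerate(word))

print(fragment("pepper", seed=0))  # a two-blank fragment such as 'p_pp_r'
```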
Design implications
Meaning matters, so I rely primarily on reading and open response, where meaningful recall is stimulated. Interestingly, even when the answer is not known, the act of trying to answer is itself a strong reinforcer – stronger, indeed, than the original exposure – and a powerful form of learning.
So, the deliberate choice of open-response questions, where the user types in the words until they are correct, is a design strategy that takes advantage of known techniques to increase recall and retention. Note that no learner is subjected to the undesirable difficulty of getting stuck: letters are revealed one by one, and the answer is given after three attempts. Hints are also available in the system.
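The reveal-letters-one-by-one, answer-after-three-attempts behaviour described above can be sketched as a small pure function. This is my own guess at the mechanics, not WildFire’s actual code:

```python
def prompt_after(answer, failed_attempts, max_attempts=3):
    """Cloze prompt shown after a given number of failed attempts.

    Each failure reveals one more leading letter; once max_attempts is
    reached the full answer is shown, so no learner stays stuck."""
    if failed_attempts >= max_attempts:
        return answer
    return answer[:failed_attempts] + "_" * (len(answer) - failed_attempts)

print(prompt_after("retrieval", 0))  # _________
print(prompt_after("retrieval", 2))  # re_______
print(prompt_after("retrieval", 3))  # retrieval
```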

For more information on WildFire click here.

Bibliography
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.
Bower, G. H. (1972). Mental imagery in associative learning. In L. W. Gregg (Ed.), Cognition in learning and memory. New York: Wiley.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604-616.
Duchastel, P. C., & Nungester, R. J. (1982). Testing effects measured with alternate test forms. Journal of Educational Research, 75, 309-313.
Gardener (1988). Generation and priming effects in word fragment completion. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 495-501.
Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, No. 40, 1-104.
Hirshman, E. L., & Bjork, R. A. (1988). The generation effect: Support for a two-factor theory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 484–494.
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649-667.
Kang, S. H. K., McDermott, K. B., & Roediger, H. L., III. (2007). Test format and corrective feedback modulate the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19, 528-558.
McDaniel, M. A., Einstein, G. O., Dunay, P. K., & Cobb, R. (1986). Encoding difficulty and memory: Toward a unifying theory. Journal of Memory and Language, 25, 645-656.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414.
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the cognitive science society. Mahwah, NJ: Erlbaum.
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382-395.
Spitzer, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.
Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184.

Wednesday, June 08, 2016

10 fundamental ways voice recognition could change learning

‘Voice’ is one of Mary Meeker’s top five internet trends. Voice could be a game changer in interface behaviour, like keyboards, mice, joysticks and touch, but what impact could it have in learning?
Voice recognition, driven by leaps in AI performance, allows personalised voice recognition, even tonal and emotional recognition, and it is hands-free. It’s also low cost, requiring just a microphone and speaker and chimes with the rise of the Internet of Things. The three big barriers to adoption are accuracy, latency and social awkwardness.
Recognition and response must be accurate and fast; failures and slow speeds turn users off. The good news is that we’ve punched through the 90% accuracy barrier and are moving fast. At 95-99% (not easy) the show really is on the road. Already, the proportion of smartphone users using voice passed 60% in 2015, and that number is rising as the technology gets better and habits change.
The uses, ranked, are: 1) General information (30%); 2) Personal assistant (27%); 3) Local information (22%); 4) Fun and entertainment (21%). An astonishing 1 in 5 searches on mobile in the US (Android) are now by voice. But there are problems Meeker doesn’t mention. Sure, we can speak (150 wpm) faster than we can type (40 wpm), but we can also read faster than we can hear. There’s also the huge embarrassment hurdle of speaking to non-humans in public spaces. She herself sees the initial impact in hands-free environments – home (43%), car (36%), on the move (19%), at work (3%). This is where avoiding typing and menus, speed and convenience really matter. But in many other contexts, silence may remain golden.
AI driven voice
With Siri, Google Home and Amazon Echo, we see early signs of its power. Viv is coming and a slew of innovations in AI have improved its efficacy. Jeff Bezos thinks that AI will underpin tech for the foreseeable future. He goes further and thinks we currently understate its potential impact. Of course AI is not one thing, it is many very different things and voice recognition is just one of its many stunning applications. Sci-fi films have been showing us voice activated worlds for decades – it is now a reality. Natural language voice recognition, with the help of machine learning, has accelerated in just the last few years to become a mainstream consumer product.
Echo
Amazon’s Echo is a home-based, voice-activated personal assistant, a competitor to Siri, built on a platform called Alexa. Alexa has two software development kits: a voice service and a skills kit. As a customer, you get a weekly email telling you about new skills (1,000 of them). This is a big push, with over 1,000 Amazon staff and plenty of third-party developers. It will play your music from Spotify using just voice commands, even from the far side of the room while music is playing (clever), handle Google Calendar, read audiobooks, deliver news, sports results and weather, order a pizza, get a cab on Uber, and control lights, switches, thermostats and so on. It is a frictionless interface to the Internet of Things. (An interesting tangent for voice activation is its use by those who are disabled.)
Above all, as a cloud-based AI service on tap, it learns fast and is always adapting to your speech, vocabulary and preferences. It becomes, in effect, a personal assistant that learns, not only about you, but as an agent that also learns from aggregated data. This is where it gets interesting.
Enter Google Home, launched alongside new phones (Pixel) that activate the voice assistant when you switch them on. The aim is to get the Internet of Things going in the home through ‘voice’: seamless music, activation of devices and ordering things.
Voice and learning
Google is great, but we still largely type our requests. This is partly because, in practice, it is still quicker than speech. However, as speech recognition gets better, it will become quicker and easier to simply ask verbally. As Google Home, Amazon Echo, Siri and other services take many of us into the Internet of Things, in our homes, cars and other places, we will want voice to trigger events, get help, find answers and arrange our lives. The car is now a room, somewhere you can learn. The home is now networked, a place where you can learn. Your mobile is voice-ready, a place where you can learn. In some of these environments, having your hands free is essential (driving) or useful (home). How-to tasks, like cooking, repairing things and finding things out, make sense here.
Behind this shift from text to voice is an interesting debate.
1. Speech is quick and easy
One could argue that it could push us towards a more authentic and, I would argue, more balanced form of education and learning. Typing will always be an awkward interface: it is difficult to learn, error-prone and requires physical input devices.
2. Listening is quick and easy
Reading is another skill that takes years to master. The spoken word is not: we are grammatical geniuses by the age of three. Speech is primary and natural; reading and writing are relatively recent adjuncts. So, when it comes to learning, speech recognition (input) and speech synthesis (output) give us frictionless dialogue. It could stimulate a return to more Socratic forms of teaching and learning.
3. Less 'text' based learning
This may result in significant improvements in teaching and learning, both of which have, arguably, been over-colonised by ‘text’. Schooling, in all its manifestations, has become ever more obsessed with text, and there is a good argument for rebalancing the system away from an endless series of written tasks, essays and dissertations towards more efficient, meaningful and relevant teaching and learning.
4. Less text based teaching
The blackboard has a lot to answer for. With its arrival, teachers turned their backs on dialogue and conversation with learners and began to lecture, mediating their teaching with text – and it can be even worse with text-laden PowerPoint. The teacher’s voice started to get lost. Nowhere was this more evident than in HE, where the blackboard reinforced and deeply embedded the ‘lecture’. To this day, especially in maths and the sciences, ‘lecturers’ (a job title that neatly identifies the profession’s problem) turn their backs on learners and start writing. I, and millions of others, have endured the ‘three huge blackboards’ method of teaching. It is the opposite of teaching; it is writing. You may as well have emailed it to me.
5. Less text-based subjects
In schooling we also saw the drift towards text-based subjects. Latin is the most surreal example: a dead language, no longer spoken, barely even written, taught for no other reason than that it got embedded in the curriculum. What a waste. Shakespeare is largely taught off the page, killing it stone dead for many who should have been excited by its searing effect when spoken on the stage. The obsession with ‘maths’, in slavish adherence to PISA (which was never its designers’ intention), is also made easier by its essentially written nature.
6. Less text-based assessment
The essay as assessment has descended into a game: students know they will not get feedback for days, even weeks, and often just a grade with a few skimpy comments. So they share, plagiarise, buy essays and, in exams, memorise them, so that critical thinking is abandoned in favour of regurgitation. This is not to argue for the abandonment of writing, or of essays, just for less dependence on this one-size-fits-all form of assessment.
We have a system that teaches to the text and tests to the text. Almost everything we assess is in the form of the written word. Oral and social skills count for little in education. Practical skills are shoved below stairs, and we send our kids off in lock-step to universities where the process is extended year after year – often an inefficient and expensive paper trail that results in a huge paper IOU for the student and the state.
7. Less focus on paper outputs
In my lifetime I have seen HE morph into a global paper farm, with exponential growth in journals and text output matched only by a decline in readers. Research is falsely equated with paper output, where the paper is an end in itself. Teaching is often side-lined as this paper mill becomes the dominant goal.
In the professions, and especially in institutions, I have witnessed bureaucratic systems whose function is often simply to produce ‘reports’. These are invariably overwritten, skimmed, then often binned. Report writing, plans and rhetoric are so often substitutes for action. Nowhere is this more apparent than in the reports that invariably conclude that “more research is needed in…”. Reports beget reports.
8. Less long-form output
In a way I think the historic, educational obsession with long-form text has been saved by the internet, where writing returned to a broader set of forms. Young people have taken to writing like demons, in txts, messages, posts, Tweets and blogs. There has been a renaissance of writing, reflecting a wide set of forms of communication, supplemented by images and video.
So how will voice manifest itself in learning?
9. Rebalance academic and skills
It may also help redress the balance between the academic and the vocational. When learners leave the confines of school, college or university, they by and large have to exercise skills that are oral – dealing with work colleagues, interacting in meetings, being effective on the phone, dealing with customers and so on. You will spend a lot of time speaking and listening – these become primary activities and skills. Yet they are not skills that are taught in many educational institutions. A return to voice-based learning may help here.
10. More dialogue
Dialogue with smart people on any topic is often a powerful form of learning. They challenge, probe, contradict. This type of collaborative learning may come into its own with speech and dialogue. There is also the sense in which some topics benefit from the lack of images and writing. It allows the imagination to construct personal imagery and links to what is being heard.
Adaptive learning, intelligent tutoring, chatbots… all of these are with us now. This form of technology-enhanced teaching can be further enhanced with voice recognition and feedback. One can see how AI-driven, adaptive tutoring software could turn this, first into a homework support tool, then a tutoring tool, through to the delivery of more sophisticated learning. It has the advantage of being able to both push and pull learning. I like the idea of encouraging habitual learning: the delivery of short questions, quizzes and spaced practice, via voice on the Echo, in a personalised sequence. In the privacy of your own home, this takes away the public embarrassment factor.
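The scheduling logic behind that kind of habitual, spaced practice is simple enough to sketch. Here is a minimal Leitner-style scheduler of the sort that could sit behind a voice assistant asking daily quiz questions – the box intervals, class names and question data are all my own illustrative assumptions, not any vendor’s API:

```python
# Sketch of a Leitner-style spaced-practice scheduler: the kind of logic
# that could sit behind a voice assistant delivering daily quiz questions.
# A card moves up a box when answered correctly (so it is asked less often)
# and drops back to box 1 when answered wrongly. All data is invented.

# Box number -> review interval in days (doubling each box).
INTERVALS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}

class Card:
    def __init__(self, question, answer):
        self.question = question
        self.answer = answer
        self.box = 1       # every new card starts in box 1
        self.due_day = 0   # due immediately

    def record(self, correct, today):
        """Update the card's box and next due day after an answer."""
        self.box = min(self.box + 1, 5) if correct else 1
        self.due_day = today + INTERVALS[self.box]

def due_cards(cards, today):
    """Return the cards the assistant should ask today."""
    return [c for c in cards if c.due_day <= today]
```

A voice front end would read `card.question` aloud, match the spoken reply against `card.answer`, then call `record` – the personalised sequence falls out of the box intervals.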
Voice moves us one step closer to frictionless, anywhere, anytime learning. Places other than institutions and classrooms become learning spaces. The classroom and lecture hall were never the places where the majority of learning took place. Context matters, and as learning becomes a utility, like water, we will be able to call upon it at any time and see learning as habitual and informal, not timetabled and formal.
Conclusion
I am not denigrating the written word. It matters. What matters more is a rebalancing of education towards knowledge and skills that are not wholly text-based – recognising that speech is as important, sometimes more important, and that practical skills also matter. Imagine a world where the only response to a request or problem is… ‘I’ll write that up.’ That’s a problem. Education in its current form is not the solution to that problem but part of the problem itself.