Monday, September 17, 2018

Breakthrough that literally opens up online learning? Using AI for free text input

When teachers ask learners whether they know something, they rarely ask them multiple-choice questions. Yet the MCQ remains the staple of online learning, even at the level of scenario-based learning. Open input remains rare, yet there is ample evidence that it is superior in terms of retention and recall. Imagine allowing learners to type, in their own words, what they think they know about something, while AI does the job of interpreting that input.

Open input
We’ve developed different levels of more natural open input that take online learning forward. The first uses AI to identify the main concepts and asks learners to enter the text or numbers, rather than choose from a list. The cognitive advantage is that the learner focuses on recalling the idea into their own mind, an act that has been shown to increase retention and recall. There is even evidence that this type of retrieval has a stronger learning effect than the original act of being taught the concept. These concepts then act as ‘cues’ on which learners hang their learning, for recall. We know this works well.
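To make the idea concrete, here is a minimal, hypothetical sketch of concept-cue checking, not WildFire’s actual implementation: a learner’s typed answer is compared against a target concept, tolerating case differences, a small (assumed) synonym list and near-miss spellings, since minor typos should not count as failures. A production AI system would interpret answers far more generally than this fixed lookup.

```python
import difflib

# Hypothetical synonym table for illustration only; a production AI
# system would handle synonyms and paraphrase far more generally.
SYNONYMS = {
    "retrieval": {"recall", "remembering"},
}

def matches_concept(answer: str, target: str, threshold: float = 0.8) -> bool:
    """Check a learner's typed answer against the target concept cue.

    Accepts exact matches, listed synonyms, and close spellings,
    so small typos are not treated as wrong answers.
    """
    answer = answer.strip().lower()
    target = target.strip().lower()
    if answer == target or answer in SYNONYMS.get(target, set()):
        return True
    # Fuzzy match tolerates small typos such as transposed letters
    return difflib.SequenceMatcher(None, answer, target).ratio() >= threshold
```

With this, "Retrieval", "recall" and the typo "retreival" would all be accepted for the cue "retrieval", while an unrelated word would not.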

Free text input
But let’s take this a stage further and try more general open input. The learner reads a number of related concepts in text, with graphics, even watching video, and has to type in a longer piece of text, in their own words. This we have also done. This longer form of open input allows the learner to rephrase and generate their thoughts, and the AI software then analyses that text.

Ideally, one takes the learner through three levels:

1. Read text/interpret graphics/watch video
2. AI generated open-input with cues
3. AI generated open-input of fuller freeform text in your own words

This gives us a learning gradient of increasing levels of difficulty and retrieval. You move from exposure and reflection, to guided effortful retrieval, to full, unaided retrieval. Our approach increases the efficacy of learning in terms of speed of learning, better retrieval and better recall, all generated and executed by AI.

The process of interpreting the generated text, in your own words, copes with synonyms, words close in meaning and different sentence constructions, as it uses the very latest form of AI. It also uses the more formal data from the structured learning. We have also got this working with voice-only input, another breakthrough in learning, as it is a more natural form of expression in practice.
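As a rough illustration of what such analysis involves, here is a deliberately crude sketch that checks which reference concepts a learner’s freeform answer touches on, using bag-of-words cosine similarity. This is an assumed stand-in for illustration only: the latest AI described above uses semantic methods that genuinely cope with synonyms and sentence structure, which simple word overlap cannot.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> list:
    """Lowercase word tokens from free text."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts as word-count vectors."""
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def covered_concepts(answer: str, concepts: list, threshold: float = 0.3) -> list:
    """Return the reference concepts the learner's free text covers."""
    return [c for c in concepts if cosine(answer, c) >= threshold]
```

A learner who writes “actively retrieving things strengthens memory” would be credited with the concept “retrieval strengthens memory”, but not with an unmentioned concept about spaced practice.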
The opportunities for chatbots are also immense.

If you work in corporate learning and want to know more, please contact us at WildFire and we can show you this in action.
Evidence for this approach
Much advice and most practice from educational institutions – re-reading, highlighting and underlining – is wasteful. In fact, these traditional techniques can be dangerous, as they give the illusion of mastery. Learners who rely on reading and re-reading show overconfidence in their mastery, compared to learners who take advantage of effortful learning.

Yet significant progress has been made in cognitive science research to identify more potent strategies for learning. The first strategy, mentioned as far back as Aristotle, and later by Francis Bacon and William James, is ‘effortful’ learning. It is what the learner does that matters.

Simply reading, listening or watching, even repeating these experiences, is not enough. The learning is in the doing. The learner must be pushed to make the effort to retrieve their learning to make it stick in long-term memory.

Active retrieval
‘Active retrieval’ is the most powerful learning strategy, even more powerful than the original learning experience. The first solid research on retrieval was by Gates (1917), who tested children aged 8-16 on short biographies. Some simply re-read the material several times; others were told to look up and silently recite what they had read. The latter, who actively retrieved knowledge, showed better recall. Spitzer (1939) had over 3,000 11-12 year olds read 600-word articles, then tested them at intervals over two months. The greater the gap between testing (retrieval) and the original exposure or test, the greater the forgetting. The tests themselves seemed to halt forgetting. Tulving (1967) took this further with lists of 36 words, with repeated testing and retrieval. The retrieval led to as much learning as the original act of studying. This shifted the focus away from testing as mere assessment towards testing as retrieval, an act of learning in itself.

Roediger et al. (2011) did a study on text material covering Egypt, Mesopotamia, India and China, in the context of real classes in a real school, a middle school in Columbia, Illinois. Retrieval tests, only a few minutes long, produced a full grade-level increase on the material that had been subject to retrieval. McDaniel (2011) did a further study on science subjects, with 16 year olds, on genetics, evolution and anatomy. Students who used retrieval quizzes scored 92% (A-) compared to 79% for those who did not. More than this, the effect of retrieval lasted longer when the students were tested eight months later. So we design learning as a retrieval experience, largely using open input, where you have to pull things from your memory and make a real effort to type in the missing words, given their context in a sentence.

Open input
Most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. These have been shown to be effective, as almost any type of test item is effective to a degree, but they are less effective than open response, as they test recognition from a list, not whether something is actually known.

Duchastel and Nungester (1982) found that multiple-choice tests improve performance on recognition in subsequent multiple-choice tests, while open input improves performance on recall from memory. This is called the ‘test practice effect’. Kang et al. (2007) showed, with 48 undergraduates reading journal-quality academic material, that open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course, with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes.

‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, to enhance learning. The Bjorks have researched this phenomenon in detail to show that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning. A multiple-choice question is a test of recognition from a list; it does not elicit full recall from memory. Studies comparing multiple choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. Active recall develops and strengthens memory, improving the process of recall in ways that passive exposure – reading, listening and watching – does not.

Design implications
Meaning matters, so we rely primarily on reading and open response, where meaningful recall is stimulated. Interestingly, even when the answer is not known, the act of trying to answer is itself a powerful form of learning, a stronger reinforcer, indeed, than the original exposure.

So the choice of open-response questions, where the user types in the words, followed by more substantial open input, is a deliberate design strategy that takes advantage of known AI and learning techniques to increase recall and retention. Note that no learner is subjected to the undesirable difficulty of getting stuck, as letters are revealed one by one, and the answer is given after three attempts. Hints are also possible in the system.
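The letters-revealed-one-by-one behaviour can be sketched in a few lines. This is an illustrative sketch of the idea, not the system’s actual code: each failed attempt reveals one more leading letter, and after three attempts the full answer is shown, so the learner never gets stuck.

```python
def hint(answer: str, failed_attempts: int, max_attempts: int = 3) -> str:
    """Progressively reveal the answer so the learner never gets stuck.

    One extra leading letter is shown per failed attempt; after
    max_attempts the whole answer is given.
    """
    if failed_attempts >= max_attempts:
        return answer
    shown = failed_attempts
    return answer[:shown] + "_" * (len(answer) - shown)
```

For the answer “cue”, a learner would see “___” on the first try, “c__” after one failure, “cu_” after two, and the full word after three.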

Bibliography
Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185–205). Cambridge, MA: MIT Press.
Bower, G. H. (1972). Mental imagery in associative learning. In L. W. Gregg (Ed.), Cognition in Learning and Memory. New York: Wiley.
Gardiner (1988). Generation and priming effects in word fragment completion. Journal of Experimental Psychology: Learning, Memory and Cognition, 14, 495-501.
Butler, A. C., & Roediger, H. L. (2008). Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing. Memory & Cognition, 36, 604-616.
Duchastel, P. C., & Nungester, R. J. (1982). Testing effects measured with alternate test forms. Journal of Educational Research, 75, 309-313.
Gates, A. I. (1917). Recitation as a factor in memorizing. Archives of Psychology, No. 40, 1-104. 
Hirshman, E. L., & Bjork, R. A. (1988). The generation effect: Support for a two-factor theory. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 484–494. 
Jacoby, L. L. (1978). On interpreting the effects of repetition: Solving a problem versus remembering a solution. Journal of Verbal Learning and Verbal Behavior, 17, 649-667.
Kang, S. H. K., McDermott, K. B., & Roediger, H. L., III. (2007). Test format and corrective feedback modulate the effect of testing on long-term retention. European Journal of Cognitive Psychology, 19, 528-558.
McDaniel, M. A., Einstein, G. O., Dunay, P. K., & Cobb, R. (1986). Encoding difficulty and memory: Toward a unifying theory. Journal of Memory and Language, 25, 645-656.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103, 399-414
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97. 
Richland, L. E., Bjork, R. A., Finley, J. R., & Linn, M. C. (2005). Linking cognitive science to education: Generation and interleaving effects. In B. G. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the twenty-seventh annual conference of the cognitive science society. Mahwah, NJ: Erlbaum. 
Roediger, H. L., Agarwal, P. K., McDaniel, M. A., & McDermott, K. B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17, 382-395.
Spitzer, H. F. (1939). Studies in retention. Journal of Educational Psychology, 30, 641-656.
Tulving, E. (1967). The effects of presentation and recall of material in free-recall learning. Journal of Verbal Learning and Verbal Behavior, 6, 175-184.
