Saturday, October 02, 2021

Clark - Media, direct instruction, analysis, agents, games and AI in learning

Richard E. Clark is Professor Emeritus of Educational Psychology and Technology at the University of Southern California. He has produced a huge corpus of research and writing on technology and workplace learning, and has worked hard to bridge the gap between known theory, its boundaries, and practice. With a focus on instruction rather than media, he has produced valuable work on performance gaps and on how to do a thorough analysis, using Cognitive Task Analysis, to determine optimal training. His interest in learning technology has led to conclusions on everything from animated agents to games in learning. He is also an advocate for the use of AI in training.

Learning from media

Clark asks us not to confuse methods of instruction with media. He famously claimed, in Clark (1983), after reviewing the research on learning from media, that there is no significant difference, in terms of learning benefits, between different media used to deliver instruction. His argument is that the attributes that actually drive learning, such as instructional methods, content and assessment design, can be presented by any medium, including teachers. What he does say is that different media can affect the number and variety of students who can access learning and, of course, that some media are more scalable and cost-effective than others (Clark, 2012).

Direct Instruction versus Constructivism

Evidence from learning theory, in particular the limitations of working memory, suggests that direct instruction may be more effective than unguided or lightly guided learning experiences. Guidance can be tapered off as learners gain competence and expertise. This is a direct challenge to the constructivist approach recommended by Bruner and Papert, in which learners must discover or construct essential information for themselves. While the constructivist ‘description of learning’ may be accurate, the research (Kirschner, Sweller & Clark, 2006) shows that the instructional methods recommended by constructivists are flawed.

Cognitive Task Analysis

In Turning Research Into Results: A Guide to Selecting the Right Performance Solutions (2012), Clark focuses on the diagnosis of performance gaps in terms of knowledge and skills, motivational and organisational gaps. He then distinguishes job aids from training, and training from education, as solutions.

The solution he recommends is Cognitive Task Analysis (CTA), a front-end process to improve design, and in his view the weakest part of most current analysis models. Research has found that experts are often unaware of what they actually do, as around 70% of it is executed automatically, without much self-awareness or reflection. The job of the learning designer is to uncover that 70%.

He starts with the selection of experts. You need experts who have been doing the job for a while, with proven success, and who are not just trainers. Six steps follow; a short sketch of the consolidation logic appears after the list.

  1. Interview experts, focusing on the sequence of tasks, the action (physical) steps and the decision (psychological) steps. Listening matters here. Get transcripts of these interviews. An important extra is to ask where trainees have difficulties. Where do they get stuck?

  2. Edit transcripts to get descriptions that are meaningful to trainers.

  3. Interview experts (3-4) separately, and ask each what they think was missed.

  4. Go back to the experts from step 1 with what emerged in steps 2 and 3, and ask them to agree on one version.

  5. Collapse separate versions into one final version.

  6. Collect information about equipment, standards and examples from experts.
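
Steps 3 to 5 amount to merging several ordered step lists into one agreed version. Below is a minimal Python sketch of that consolidation idea only; the names (Step, ExpertVersion, consolidate) and the more-than-one-expert agreement rule are illustrative assumptions of mine, not Clark's method as software.

```python
from dataclasses import dataclass, field
from collections import Counter

# Toy model of CTA output. Step, ExpertVersion and consolidate are
# hypothetical names for illustration; CTA itself is an interview
# method, not software.

@dataclass(frozen=True)
class Step:
    description: str
    kind: str          # "action" (physical) or "decision" (psychological)

@dataclass
class ExpertVersion:
    expert: str
    steps: list = field(default_factory=list)

def consolidate(versions):
    """Collapse separate expert versions into one final version (step 5):
    keep steps mentioned by more than one expert, in order of first
    appearance; anything else goes back to the experts (steps 3-4)."""
    counts = Counter(s for v in versions for s in set(v.steps))
    final, disputed, seen = [], [], set()
    for v in versions:
        for s in v.steps:
            if s in seen:
                continue
            seen.add(s)
            (final if counts[s] > 1 else disputed).append(s)
    return final, disputed

versions = [
    ExpertVersion("expert_1", [Step("check the pressure gauge", "action"),
                               Step("decide whether to vent", "decision")]),
    ExpertVersion("expert_2", [Step("check the pressure gauge", "action"),
                               Step("decide whether to vent", "decision"),
                               Step("log the reading", "action")]),
]
final, disputed = consolidate(versions)
print("agreed:", [s.description for s in final])
print("back to experts:", [s.description for s in disputed])
```

The disputed list is exactly what step 4 sends back for the experts to reconcile into a single version.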

Motivation

His emphasis on motivation has always been a strong characteristic of his work. Motivation is not just something people need in order to learn; it can also be negative or harmful. People, for example, are often overconfident and reject advice. He sees four critical motivational features:

  1. A mismatch between learners’ values and what they are doing, so that it is not meaningful to them. There’s huge variation here.

  2. Lack of self-confidence or self-efficacy.

  3. Disruptive emotions - being anxious, angry, depressed or negative all stop people from learning.

  4. Barriers that knock people back, such as negative evaluations and attribution failures.

Animated agents

Clark (2005) doubts that virtual, animated agents add any value in learning, as the research shows mixed results from well-designed studies. While agents may guide and focus attention, they can also distract and place extra cognitive load on learners. Research is required that compares agents and non-agents in identically designed learning experiences.

Games for learning

Clark sees no clear definition of what constitutes a ‘game’ and regards the research as insubstantial. Previous meta-analyses of games have failed to include unpublished studies with no-significant-gain results, which is what he believes is the most likely outcome in a well-designed study. Games that use the discovery method of learning are, he claims, less effective than fully guided instruction for novice to intermediate learners, and he thinks that the learning benefits, when found in games, are the product of instructional methods that can be presented, without the extra design effort and cost, in non-game contexts. They are a distraction from real, evidence-based instructional methods. In fact, games may result in less mental effort being invested in learning because of the belief that they make learning ‘easier’. He concludes that less expensive instructional designs may be preferable and that the increased motivation in games may actually hinder learning. He is also sceptical about the emphasis put on social learning in games and its implementation in chat rooms.

AI and learning

Clark is using AI to automate the CTA process and believes that all of training and development will, at some point, become AI-supported. This will decrease the cost of front-end analysis and design, force it to become more evidence-based and free from bias, and overcome the resistance that human beings bring to the table. This will, in his words, “be a revolution”.
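
Clark has not published the details of this tooling, so any concrete example here is speculative. As a deliberately crude sketch of the idea, using only the Python standard library, a first automated pass might tag transcript sentences as action or decision steps for a human analyst to review; the cue words and function name below are assumptions for illustration.

```python
import re

# Toy stand-in for the automation described above: tag interview-transcript
# sentences as action (physical) or decision (psychological) steps.
# The cue-word lists are illustrative assumptions, not Clark's tooling.
DECISION_CUES = re.compile(r"\b(decide|if|whether|judge|choose|assess)\b", re.I)
ACTION_CUES = re.compile(r"\b(check|open|press|log|remove|turn|measure)\b", re.I)

def tag_transcript(transcript):
    """Split a transcript into sentences and give each a provisional tag."""
    tagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", transcript.strip()):
        if not sentence:
            continue
        if DECISION_CUES.search(sentence):
            tagged.append(("decision", sentence))
        elif ACTION_CUES.search(sentence):
            tagged.append(("action", sentence))
        else:
            tagged.append(("unclassified", sentence))  # needs a human pass
    return tagged

transcript = ("First I check the pressure gauge. "
              "Then I decide whether venting is needed. "
              "Finally I log the reading.")
for kind, sentence in tag_transcript(transcript):
    print(f"{kind:>12}: {sentence}")
```

A real system would presumably use a language model rather than cue words, but the division of labour, a cheap machine first pass verified by experts, is the point of the automation.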

Influence

With Mayer, Clark has a reputation for rigorous adherence to research and evidence, providing guidance for practitioners in the design of learning experiences. Although online learning is often still blind to research-driven practice, Clark remains a rich and deep resource for professionals in the field.

Bibliography

Clark, R. E. (2012). Learning from media: Arguments, analysis and evidence (2nd ed.). Greenwich, CT: Information Age Publishing.

Clark, R. E. (2011). Games for instruction? Presentation at the American Educational Research Association, New Orleans, LA.

Clark, R. E. (2007). Learning from serious games? Arguments, evidence and research suggestions. Educational Technology, May-June, 56-59.

Sweller, J., Kirschner, P. A. & Clark, R. E. (2007). Why minimally guided teaching techniques do not work: A reply to commentaries. Educational Psychologist, 42(2), 115-121.

Clark, R. E. & Choi, S. (2005). Five design principles for experiments on the effects of animated pedagogical agents. Journal of Educational Computing Research, 32(3), 209-225.

Kirschner, P. A., Sweller, J. & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.

Elen, J., Clark, R. E. & Lowyck, J. (2006). Handling complexity in learning environments: Research and theory (pp. 283-297). Oxford: Elsevier.

Clark, R. E. (ed.) (2001). Learning from media: Arguments, analysis, and evidence. Greenwich, CT: Information Age Publishing.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.

