Saturday, August 29, 2020

More important than man on the moon - the melding of mind and machine

Last night we witnessed a live-streamed event that may prove more significant than the moon landing. Elon Musk showed the remarkable progress of Neuralink. AI, robotics, physics, material science, medicine and biology collided in a Big Bang event, where we saw an affordable device that can be inserted into your brain to solve important spinal and brain problems. By problems they meant memory loss, hearing loss, blindness, paralysis, extreme pain, seizures, strokes and brain damage. They also included mental health issues such as depression, anxiety, insomnia and addiction. Ultimately, I have no doubt that this will lead to a huge decrease in human suffering. God doesn't seem to have solved the problem of human suffering; we as a species, through science, are on the brink of doing it by and for ourselves.

Current tech

Current technology (Utah array) has only 100 channels per array and the wires are rigid, inserted crudely with an air hammer. You have to wear a box on your head, with the risk of infection, and it requires a great deal of medical expertise. It does a valuable job but is low bandwidth and destroys about a sugarcube of brain matter. Nevertheless, it has greatly improved the lives of over 150,000 people.

Neuralink tech

Musk showed three little piggies in pens: one without an implant, one that had an implant, since removed without any ill effects, and one with an implant (they showed its signal live). Using a robot as surgeon, the Neuralink device can be inserted in an hour, without a general anaesthetic, and you can be out of hospital the same day. The coin-sized device is set into the skull, sitting beneath the skin. Its fibres are only 5 microns in diameter (a human hair is 100 microns) and it has ten times the channels of the Utah array, with a megabit bandwidth rate, to and from your smartphone. All channels are read and write.

Smartphone talks and listens to brain

When writing to the brain, you don't want to damage anything, and you need precise control over a range of electric fields in both time and space, as well as the delivery of a wide range of currents to different parts of the brain. The device uses Bluetooth to and from your smartphone. Indeed, it is the mass production of smartphone chips and sensors that has made this breakthrough possible.

Team Musk

What really made this possible was Elon Musk, a remarkable man, who brought together this team of AI experts, roboticists, material scientists, mechanical engineers, electrical engineers and neurologists. In the Q&A session afterwards, they were brilliant.

What next?

I discussed Neuralink in my book 'AI for Learning', speculating that at some distant time machine would meld with mind, and that this would open up possibilities for learning. I didn't imagine that it would be kicked off just a few days after the book's release… but here we have it. So what are the possibilities for learning?

Insights

At the very least this will give us insights into the way the brain works. We can ‘read’ the brain more precisely but also experiment to prove/disprove hypotheses on memory and learning. This will take a lot more than just reading ‘spikes’ (electrical impulses from one neuron to many) but it is a huge leap in terms of an affordable window into the brain. If we unlock memory formation, we have the key to efficient learning.

Interfaces

Our current interfaces, keyboards, touchscreens, gestures and voice, could also be bypassed, giving much faster 'thought to and from machine' communication by tapping into the phonological loop. This would be an altogether different form of interface, more akin to VR. Consciousness is a reconstructed representation of reality anyway, and these new interfaces would be much more experiential, as forms of consciousness, not just language.

Read memories

Memories are of many types and are complex, distributed things in the brain. Musk talked eloquently about being able to read memories, which means they could be stored for later retrieval. Imagine having cherished memories stored to be experienced later, like your wedding photos, only as felt conscious events, like episodic memories. There are conceptual problems with this, as memory is a reconstructive event, but at least these reconstructions could be read for later retrieval. At the wilder end of speculation Musk imagined that you could 'read' your entire brain, with all of its memories, store this and implant it in another device.

Imagination

This is not just about memories. It is our faculty of imagination that drives us as a species forward, not only in mathematics, AI and science (Neuralink is an exemplar) but also in art and creativity. Think of the possibilities in music and other art forms, the opportunities around the creative process, where we could have imagination prostheses.

Write memories

Reading memories is one thing. Imagine being able to 'write' memories to the brain. That is, essentially, a form of learning. If we can do this, we can accelerate learning. This would be a massive leap for our species. Learning is a slow and laborious process. It takes 20 years or more before we become functioning members of society, and even then we forget much of what we were taught and learned. Our brains are seriously hindered by the limited bandwidth and processing power of our working memory. Overcoming that block, by writing directly to the brain, would allow much faster learning. Could we eliminate great tranches of boring schooling? Such reading and writing of memories would, of course, be encrypted for privacy. You wouldn't want your brain hacked!

Consciousness

In my book I talk about the philosophical discussion around extended consciousness and cognition. Some think the internet and personal devices like smartphones have already extended cognition. The Neuralink team are keenly aware that they may have opened up a window on the mind that may ultimately solve the hard problem of consciousness, something that has puzzled us for thousands of years. If we can really identify correlates between what we think in consciousness and what is happening in the brain and can even simulate and create consciousness, we are well on the way to solving that problem.

End to suffering

But the real win here is the opportunity to limit suffering, pain, physical disabilities, autism, learning difficulties and many forms of mental illness. It may also be able to read electrical and chemical signals for other diseases, leading to their prevention. This is only the beginning, like the first transistor or telephone call. It is a scalable solution and, as versions roll out with more channels, better interpretation using AI and coverage of more areas of the brain, there are endless possibilities. This event was, for me, more important than man landing on the moon, as it has its focus not on grand gestures and political showmanship, but on reducing human suffering. That is a far more noble goal. It is about time we stopped obsessing over the ethics of AI, with its endless dystopian navel gazing, and recognised that it has revolutionary possibilities in the reduction of suffering.

FDA approved

The good news is that they have FDA Breakthrough Device designation and will be doing human trials soon. 

Sunday, August 23, 2020

Taylor (1856 – 1915): training as a formal function within organisations, essential to business growth

Frederick Winslow Taylor turned down Harvard for an apprenticeship, competed nationally at tennis and made his fortune in steel. After a four-year apprenticeship, he worked his way up to senior management roles and invented patented techniques, so his theories were grounded in real organisational experience, practice and success.

He is best known for his work in applying the scientific method to management. Taylor’s Principles were long respected in organisational planning and training but ‘Taylorism’ became a pejorative term, as we moved out of mass manufacturing and production into services. Yet his fundamental idea, that efficiencies should be sought in organisations, far from being abandoned, has remained the mainstay of management theory and practice for over a century. The Principles of Scientific Management (1911) was voted the most influential management book of the 20th century by the Academy of Management.

Four principles

Taylor's four principles of scientific management are worth repeating:

1.     Replace rule-of-thumb work methods with methods based on the scientific study of tasks

2.     Scientifically select, train, and develop each employee rather than passively leaving them to train themselves

3.     Provide detailed instruction and supervision of each worker in the performance of that worker's discrete task

4.     Divide work nearly equally between managers and workers, so that the managers apply scientific management principles to planning the work and the workers actually perform the tasks

This can be reduced to a scientific, analytic approach to productivity: a focus on tasks and processes, then formal training, with a focus on performance. Management is the science of planning for performance. When stated that way, we can see why Drucker, a huge admirer of Taylor, saw him not only as the father of modern management but also as the person who shaped the great wealth-creating industries that lifted millions out of poverty. This is far from the derogatory descriptions of many who see him as the architect of exploitative capitalism.

What characterises Taylor's Principles is his focus on measurement, standardisation, management and the division of labour. The modern obsession with management, as opposed to general employees, even the obsession with yet another class of management, leaders, all stems from Taylor. This fundamental distinction between management (who think and plan) and workers (who do and make things) was, of course, more pronounced in the great era of manufacturing. But who can deny that, even in the modern era, dominated more by services, it has been carried over into all aspects of organisational structure, planning and training?

Training

His principles put training at the centre of his scientific process, with the selection, development and training of staff to be based on scientific principles. His legacy was therefore to have training as a formal function within organisations, essential to business growth. Formal, direct training was the key to improving productivity.

This focus on training, not in a general sense but in precise competences, has also had a lasting effect. Whatever the business goal or process, he recommended a scientific approach to the training of those performances, not as pure theory but as doing. Practice was essential, as was the transfer of learning to actual competence. In many ways we have backtracked on this, with training separated off into a different realm, not the workplace but the classroom and now online. We may have drifted away from Taylor's base principle that training is about actual, proven competences that transfer into practice in the workplace. In some ways we have forgotten these scientific principles, as training became, in places, more faddish, with less reliance on scientific research on how we learn and evidence-based practice. There is a contemporary movement to debunk the fads and myths that have crept into learning and training, which is Taylorist in approach.

Criticism

Taylor’s world was one where most jobs were manual, so his focus on physical process was understandable. We now have the inverse, where manual work is now less than a tenth of all jobs, so his principles have to be adapted towards knowledge work. This means less focus on manual skills and more on cognitive skills.

As Taylor wanted to find 'scientific' solutions to production and performance problems, he recommended a single solution, with a binary split, where managers manage and plan, and workers do and make. This single scientific solution has been replaced by less hierarchical approaches that distribute responsibility more widely in organisations, so that more personal responsibility is taken by all. Managers are also no longer separated off to do pure planning; they take a more active role in the personal development and supervision of those they manage. Teamwork and collaboration, defined and researched by Belbin, Salas and Stodd, have also led to more democratised structures. Leadership has also been layered on to the management category.

In many organisations extreme and narrow specialisation is seen as inflexible. Indeed, it is seen as demotivating. A more humanistic approach to management, where motivation, support, appraisal and personal development are seen as leading to higher productivity, now prevails. Yet Taylor was not blind to these issues. Two of his four principles were about training people.

There have also been changes in the way business processes are perceived, with more focus on continuous improvement. Quality management, control and now sophisticated data-driven approaches address the sheer complexity of procurement, supply chain management, production and distribution.

Influence

Although modern commentators are often critical of Taylor, they effectively parrot his approach. Management consultants unwittingly apply his original schema, which separated out managers from workers and, now, leaders.

His methods were rejected by some owners and workers, but in organisations that adopted them and linked wages to productivity they resulted in significantly higher wages, so the charge that he was merely a stooge of the owners is not entirely true. However, there is little doubt that he had a rather negative view of the working class. Overall, however, Drucker is right in saying that his management techniques lifted many out of poverty. The downside is that the focus on paying managers and leaders well has also led to massive levels of inequality, as modern economists like Piketty, and the data, show.

What is striking, however, is how little has changed. His basic distinction between management and workers has survived. Specialism still exists and the focus on business processes that lead to increased performance and productivity remains intact.

Bibliography

Taylor, F.W., 1911. The Principles of Scientific Management. New York: Harper & Brothers.

Drucker, Peter (1974). Management: Tasks, Responsibilities, Practices. New York: Harper & Row.

Piketty, T., 2014. Capital in the Twenty-First Century, trans. Arthur Goldhammer. Cambridge, MA: Harvard University Press.

Friday, August 21, 2020

Universities are the perfect hub and spoke network for viral spread

Here's an idea. Let's take hundreds of thousands of young people, get them to travel to another city in the country, preferably far from their home town, put them in closed rooms together for hours on end with older people, let them mix, go to bars and party. Throw in a healthy dose of foreign students from countries all over the globe. Now, after getting them all into one container, you shake this lethal cocktail, give it time to ripen, then send them all back home, just before Christmas, as flu and other viruses peak. That's essentially what Universities are doing around the world. You couldn't design a better, more optimised system for viral spread, as it reaches almost every village, town and city in the country and abroad.

Why would you take such a risk? We know from recent exam results in schools that results dog-legged upwards, even though the school children were not at school. We know that lectures can be online. We also know that tutorials can be held online. In fact, we know that entire degrees can be delivered online, because they are, at scale. I've attended graduation ceremonies for years, helping hand out degrees to such students.

In truth, Covid is exposing the hard reality of Higher Education, that it is mostly about hanging out with other young people. This is what administrators call the 'student experience'. That's fine, but let's be honest about where all that money goes. Beyond this there's a lot of signalling – basically get a degree and put a sticker on your head saying 'hire me'. Unfortunately, that sticker is starting to fall off; as so many people have degrees, their value has been commoditised.

Toggling

But let's get back to Covid. What is happening in the US is illustrative. There is so much cash at stake, from sports, accommodation, food and other non-educational services, and the institutions are so scared shitless about having to lower costs or refund students and their parents for an online-only experience, that they're 'toggling'.

‘Toggling’ is a term invented by Bryan Alexander for switching to and fro between campus and online provision, effectively playing chicken. What many Universities are doing is saying:

It’s OK, come to Campus…

And by the way pay up…

Oh no, the students are partying and infections are rising, we have to close…

Sorry no refunds, it’s their fault…

Future

Scott Galloway threw a grenade into this car crash, by publishing a spreadsheet that categorised institutions into those that will:

Thrive

Includes the elite Universities with strong brands, as they double down and adopt some online provision.

Survive

Universities with strong brand equity, a good credential-to-cost ratio, and/or the endowments to weather the storm.

Struggle

He describes these as having 'comorbidities': high tuition rates, low numbers, poor endowments.

Be challenged (he originally had 'perish'!)

High tuition costs, low endowments, dependence on international students, and weak brands.

His spreadsheet is here.

Conclusion

I'm in favour of K12 schools returning, if carefully monitored, as it is localised. On the whole most kids attend their local school and tracing can be managed. Universities are different. They form a massive, national and evenly spread distribution network that spokes out to international locations. This is exactly what an evolving virus wants, an efficient and optimised delivery mechanism.

Monday, August 17, 2020

Study on retention using Video plus AI-generated retrieval practice


Abstract

The aim of this trial was to test the effectiveness of chunking video and placing effortful retrieval practice after each chunk of video. Chunking is the slicing of video content down into several separate video segments or chunks, so that there is less cognitive load and forgetting. Retrieval practice is making the learner recall what they think they know in order to reinforce it, thereby increasing retention and subsequent recall. Two groups were compared. One was shown only a training video on Equality & Diversity produced for a large company; the other was shown the same video, chunked into smaller segments, with AI-generated practice at the end of each short segment. Both groups were tested immediately after the learning experience. The results showed a 61.5% greater score in the Video + AI-generated practice group over the Video-only group. This study shows that video learning significantly benefits, in terms of reinforcement, retention and recall, from chunking and the addition of AI-generated retrieval practice.

Introduction

Video has become commonplace in learning, through YouTube and Vimeo in both the public domain and on private channels. It has also become common to deliver learning video content from a VLE (Virtual Learning Environment), LMS (Learning Management System) or LXP (Learning eXperience Platform). Other video specific platforms use Netflix-style carousel and other interfaces to deliver learning video content.

Yet little attention has been paid to the research that suggests video should be enhanced with active learning. Research into the use of video for learning recommends several techniques to enhance learning beyond the watching of video on its own (Reeves & Nass, 1996; Zhang, 2006; Mayer, 2008; Brame, 2016; Chaohua, 2019).

Method

Twenty-six participants were selected. The first group of thirteen watched the video only. The second group of thirteen watched the same video chunked down into four meaningful segments, edited to match separate topics, interspersed with AI-generated retrieval practice. The retrieval practice required the learner to recall key ideas and concepts and type them in; these acts of recall and writing, generated by the AI tool, reinforce learning. Any items that were not correct had to be repeatedly input until all were correct. A separate, identical written recall test was completed immediately after the learning experience by both groups.

Note that the retrieval practice tool used was WildFire. It creates online learning from the chunks of video, applying AI to the automatically generated video transcript, using the AI to identify the key learning points and create questions, as well as generate links to external content to enhance the learning experience. If the learner has not been able to retrieve the relevant concepts, it provides remedial practice until that concept is known. On input it accepts spelling variants, as well as British and American English.
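
To illustrate the general shape of such a retrieval-practice loop (a minimal sketch only, not WildFire's actual implementation; the cloze items, fuzzy-match threshold and example concepts below are invented for illustration), the logic might look something like this in Python:

```python
import difflib

def close_enough(answer: str, target: str, threshold: float = 0.8) -> bool:
    """Crude stand-in for accepting spelling variants and UK/US English."""
    ratio = difflib.SequenceMatcher(None, answer.strip().lower(), target.lower()).ratio()
    return ratio >= threshold

def retrieval_practice(cloze_items):
    """Loop over (sentence-with-blank, target-concept) pairs, re-asking
    each item until the learner retrieves it correctly."""
    for sentence, target in cloze_items:
        while True:
            answer = input(sentence + "\nYour answer: ")
            if close_enough(answer, target):
                print("Correct.\n")
                break
            print("Not quite - try again.\n")  # remedial retrieval until the concept is known

# Hypothetical cloze items generated from one video chunk's transcript
items = [
    ("Treating someone less favourably because of a protected characteristic is direct ____.",
     "discrimination"),
    ("Reasonable ____ should be made for employees with a disability.", "adjustments"),
]
retrieval_practice(items)
```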

Results

The Video + AI group scored significantly higher than the video group.

Figures 1 and 2 show that the Video + AI group had a 61.5% increase in mean retention, from a mean value of 9.00 to 14.54.


In Figure 3, histograms of the two groups are compared, showing that the Video + AI group has a higher mean and that users scored higher more frequently.

In Figure 4, a box and whisker plot gives more insight into the respective distributions. The Video only group had a lower median value of 8 and a smaller range than the Video + AI group. The Video + AI group had a 75% increase in median score over the Video group.
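
As a quick sanity check on the headline figures (a worked calculation from the reported means and medians only; the Video + AI median of 14 is inferred from the reported 75% increase rather than stated directly):

```python
# Percentage increase in mean retention score
mean_video, mean_video_ai = 9.00, 14.54
print(round((mean_video_ai - mean_video) / mean_video * 100, 1))  # 61.6, quoted as 61.5% above

# Percentage increase in median score
median_video, median_video_ai = 8, 14  # 14 inferred from the reported 75% increase
print(round((median_video_ai - median_video) / median_video * 100, 1))  # 75.0
```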

Discussion

We know from Guo (2014), using a large data set of learning video data gathered from MOOCs (Massive Open Online Courses), that learners drop out in large numbers at around six minutes. Engagement drops dramatically, to around 50% for videos of 9-12 minutes and 20% beyond this. Evidence from other studies on attention, using eye-tracking, confirms this rapid drop in arousal (Risko, 2012). The suggestion is that learning videos should be 6 minutes or less. Chunking video down into smaller, meaningful segments achieves this aim and relieves the load on working memory.

Many can recall scenes from films and videos but far fewer can remember what was actually said. That is because our episodic memory is strong, and video appeals to that form of visual memory and recall, but video is poor for semantic memory and semantic knowledge, what we need to know in terms of language. One remembers the scene and can literally play it back in one's mind, but it is more difficult to remember facts and speech. This is why video is not so good at imparting detail and knowledge. There is a big difference between recalling episodes and knowing things.

Learning is a lasting change in long-term memory, and video suffers from the lack of opportunity to encode and consolidate memories. Your working memory lasts about 20 seconds and can only hold three or four things in mind at one time. Without the time to encode, these things can be quickly forgotten through cognitive overload or the failure to consolidate into long-term memory (Sweller, 1988). Our minds move through video at the pace of the narrator but, like a shooting star, the memories burn up behind us, as we have not had the opportunity to encode them into long-term memory. Without additional active, effortful learning, we forget. An additional researched problem is that people often 'think' or 'feel' they have learnt from video but, as Bjork (2013) and others have shown, this can be 'illusory' learning. The learner mistakes the feeling that they have learnt things for actual learning. When tested, they are shown to have learned less than they thought they had.

How do we reduce cognitive load in video for learning? Mayer (2003) and others have shown that text plus audio plus video on the screen, commonly seen in lecture capture, actually inhibits learning. One should not put captions, text or scripts on the screen while the narrator or person on the screen is talking. Fiorella (2019) proposes that learning improves when there are "visual rests" and memory is enhanced when "people have a chance to stop and think about the information presented". Chunking video down to smaller, meaningful segments and providing the opportunity for active, effortful learning will both enhance learning by reducing cognitive load and increase reinforcement, retention and recall.

But what exactly should learners do after and between these video chunks? MacHardy (2015) shows that the relationship between the video and the active learning must be meaningful and close. In a large data mining exercise, they showed that if the two are too loosely related, it inhibits student attainment. To increase reinforcement, retention and recall, Szpunar (2013), Roediger (2006) and Vural (2013) suggest that retrieving key concepts is a powerful learning technique. This was the aim of this study: to test the hypothesis that chunked video plus AI-generated retrieval practice increases reinforcement, retention and recall.

Practical applications

There are several possible applications of this form of enhanced video learning:

1.     Existing video learning libraries can be made into far more effective learning experiences 

2.     New videos for learning can be made into far more effective learning experiences

Note that additional design recommendations identified during the study include:

1.     Scripting the videos into a more ‘chaptered’ structure

2.     Clear edit points on visuals and audio at the end of each planned chunk of video 

3.     Close relationship between the video and the retrieval practice

Conclusion

This trial provides evidence that the use of both chunked videos and AI-generated retrieval practice, in combination, significantly increases retention and recall and can be strongly recommended for both existing and new video learning content.

Bibliography

Bjork, R.A., Dunlosky, J. and Kornell, N., 2013. Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, pp.417-444.

Brame, C.J., 2016. Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sciences Education, 15(4), p.es6.

Chaohua, O, Joyner, D, Goel, A., 2019. Developing Videos for Online Learning: A 7-Principle Model. Online Learning

Fiorella, L., van Gog, T., Hoogerheide, V. and Mayer, R.E., 2017. It's all a matter of perspective: Viewing first-person video modeling examples promotes learning of an assembly task. Journal of Educational Psychology, 109(5), p.653.

Fiorella, L., Stull, A.T., Kuhlmann, S. and Mayer, R.E., 2019. Fostering generative learning from video lessons: Benefits of instructor-generated drawings and learner-generated explanations. Journal of Educational Psychology.

Guo, P.J., Kim, J. and Rubin, R., 2014. How video production affects student engagement: an empirical study of MOOC videos. In L@S '14: Proceedings of the First ACM Conference on Learning at Scale. New York: ACM, pp.41-50.

MacHardy Z, Pardos ZA., 2015 Evaluating the relevance of educational videos using BKT and big data. In: Santos OC, Boticario JG, Romero C, Pechenizkiy M, Merceron A, Mitros P, Luna JM, Mihaescu C, Moreno P, Hershkovitz A, Ventura S, Desmarais M, editors. Proceedings of the 8th International Conference on Educational Data Mining, Madrid, Spain.

Mayer, R.E. and Moreno, R., 2003. Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), pp.43-52.

Mayer, R.E., 2008. Applying the science of learning: Evidence-based principles for the design of multimedia instruction. American Psychologist, 63(8), p.760.

Reeves, B. and Nass, C.I., 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

Risko, E.F., Anderson, N., Sarwal, A., Engelhardt, M. and Kingstone, A., 2012. Everyday attention: Variation in mind wandering and memory in a lecture. Applied Cognitive Psychology, 26(2), pp.234-242.

Roediger III, H.L. and Karpicke, J.D., 2006. The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), pp.181-210.

Sweller, J., 1988. Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp.257-285.

Szpunar, K.K., Khan, N.Y. and Schacter, D.L., 2013. Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences, 110(16), pp.6313-6317.

Vural, O.F., 2013. The Impact of a Question-Embedded Video-based Learning Tool on E-learning. Educational Sciences: Theory and Practice, 13(2), pp.1315-1323.

WildFire www.wildfirelearning.co.uk

Zhang, D., Zhou, L., Briggs, R.O. and Nunamaker Jr, J.F., 2006. Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information & Management, 43(1), pp.15-27.

Sunday, August 16, 2020

Pixel is a powerful, portable, personal pocketful of AI....

My son's an AI lad. He has expertise in object recognition (currently best in the world at fruit recognition, which can be used to increase yield). He's also been involved in AI for Learning, as he's coded the new version of WildFire, an AI-driven content creation service for learning. So he's my go-to guy for recommendations, and he swears by his Pixel smartphone from Google. As he says, "it's literally AI in your pocket". For me, the Pixel is a little sandbox for consumer AI, so it gives us insights into the way technology is moving and therefore the way online learning will move.

Not many products get better after you buy them, but that can be said of this smartphone. As a device it really does deserve to be called 'smart', as it uses in-device machine learning. The Pixel 4 uses Neural Core, a TPU chip with tons of on-board AI features for everything from song recognition to computational photography. The Adaptive Battery feature even uses AI to predict when your battery will run out from your usage patterns, and automatically reduces behind-the-scenes activity to lengthen battery life. The Pixel phones take AI to a new level with:

Talking

Language & image recognition

Learner support 

Capture media

Location


Talking

Speech to text has come of age and the Pixel automatically transcribes videos. Live Caption will also handle podcasts and audio messages. You can record and export these transcripts and, as text is searchable, keyword triggers can also be set up. Note taking can be transformed if you use Google's transcription service during Zoom calls.

You can start, save, and search recordings in the Recorder app using Google Assistant. Just say “Hey Google, start recording my voice” to start recording, or “Hey Google, find my voice recording about LXPs” to find that session you had recorded. The saved transcripts can also be easily exported to Google Docs, just choose a recording, tap “Transcript” to show the transcript, then tap the three dots menu on the top-right corner, and tap “Save text to Google Docs.” 

There are all sorts of NLP (Natural Language Processing) tricks you can pull off here, and we already use this in WildFire to transcribe videos, going further by using AI to automatically generate powerful online learning. We have also been using voice as input. Imagine online learning allowing open voice responses from a learner, with their automatic, semantic interpretation, so that feedback can be provided until you get things right; that's exactly what we've done in WildFire.
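
As a toy illustration of the idea (not the actual WildFire pipeline; the answer text, expected concepts and matching rule are all invented, and a real system would use proper semantic models rather than word matching), checking an open spoken response against expected concepts might look like this:

```python
import re

def concept_coverage(transcribed_answer, expected_concepts):
    """Report which expected concepts appear in a transcribed open answer.
    Simple word-prefix matching stands in for real semantic interpretation."""
    words = re.findall(r"[a-z']+", transcribed_answer.lower())
    coverage = {}
    for concept in expected_concepts:
        parts = concept.lower().split()
        coverage[concept] = all(any(w.startswith(p) for w in words) for p in parts)
    return coverage

# Hypothetical text returned by a speech-to-text service for a spoken answer
answer = "I think indirect discrimination is when a policy disadvantages a particular group"
results = concept_coverage(answer, ["indirect discrimination", "policy", "disadvantage"])
for concept, found in results.items():
    print(("covered: " if found else "missing, give feedback: ") + concept)
```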

Language and image recognition

Google Lens must be one of the best, but least used, AI features on smartphones. You simply point and shoot at a plant, tree, flower, animal, work of art, landmark, restaurant, product and much more. The good news is that an 'education' feature for Lens is in the works. You will be able to point your camera at an assignment or homework question and get instant help. The word is that Google will focus initially on maths, and we've seen how Photomath uses AI to solve problems and unpack the steps from question to answer. There is huge scope here for learner engagement, support and eventually online teaching with this line of development.

For languages, Lens already translates in real time, whether it be a foreign menu or words on a page. You needed to be online in the past, but it looks as though this will be possible offline.

In some subjects, imagery may be important: biology, geography, architecture, art. Image recognition leading to relevant educational links is already there; with a purely educational mode it could be made even more useful for learning.

But the real advantage comes with its text recognition, which springs off into interpretation, recommendations, transcription and translation. Most subjects would benefit from this sort of help with text. What most people will use it for in an education or training context is its ability to take text from the real world, from a document, manual, whiteboard, book or business card… turning it into text on the phone that can be used in any way you want. It's the links from the text that matter – links to a free educational service, a person who can help, a possible training course.

Learner support

The primary problem with most assistant interaction is that it is 'single shot'. You ask for something and it responds – once. Google Assistant is, as expected, forging ahead with continued conversation, or multi-turn dialogue. You say "Okay, Google" and Google Assistant will respond, but it will also continue to listen for additional commands, continuing the dialogue until you say "Stop" or "Thank you" to end the conversation. This is a fiendishly difficult software problem to solve and needs AI to do it well.

This could be a big leap for learning, as you can take deeper dives into topics, linked to actions and sharing. What makes it easier is the transcription of your words on the screen to confirm that it has captured what you intended. You find that this all increases that sense of flow, of it being a conversation. This is the direction of travel for conversational interfaces and chatbots. True dialogue promises to provide more than just answers to questions, as it will also provide, at some point, real Socratic dialogue, in other words – teaching.

Capture media

Phones have largely replaced cameras for most consumer use. Taking pictures and videos for Facebook, Twitter, Instagram and TikTok (see why TikTok is relevant to online learning) has become a core use of smartphones. Social media has migrated across media: from text, to text and images, to images only, to video, and now to media with the ability to create, filter and edit. In Pixel phones you see this happen at a very sophisticated level.

Want a sharp portrait, a good picture at night, images of the Milky Way or a decent zoomed image? Google kicks ass on computational photography. Using machine learning-based white balance and multiple exposures to fix problems in an image, it turns you into an impressive photographer, and that's where machine learning, on-device neural engines, and overall improvements in both hardware and software performance raise the game. The Pixel literally uses AI as a creative force in photography.

As video has become an important medium in learning, smartphones like these, combined with the sharing capability of online platforms, allow learning through video to happen with ease.

Location

Basically, your smartphone is getting smarter, as it is now aware of where it is, not simply its GPS position, but where it is in relation to the world around it. Google's Soli, a motion sense chip, uses smart sensors and data analysis to detect how big something is, where it is and how close your phone is to that object. It shoots out electromagnetic waves and these waves bounce back to be interpreted by AI, so that positions and objects can be recognised. It has a 180-degree view, better than the human eye, which has only 120 degrees, with sharp vision concentrated in a much smaller arc, as most of it is peripheral vision.

This is crazy, but as you go to pick up your Pixel, it sees your hand, switches on the face recognition sensors, recognises you and unlocks your phone, handling lots of face orientations, even upside down, for unlocking secure payment… all in one motion. This is only one of a number of applications for motion sense. And if you're worried, as some were, about a phone that can be unlocked when you are sleeping, or even dead, they have introduced a blink recognition system.

Motion sense also delivers gesture recognition. This touchless approach could be huge in the future, especially in a more Covid-aware world. We have contactless payment, and contactless interfaces are now here. A swipe of the hand for moving back and forward through songs, a pinch of the fingers for a button press; we could soon see an agreed language, like sign language, for interactions on lots of different devices: smartphones, laptops, with AR, within VR, controls within cars. We gesture all the time, almost unconsciously pointing to imaginary watches when describing time, and we've moved towards ever more transparent interfaces, with touchscreen, voice and now gesture.

AI for Learning

As I explain in my book 'AI for Learning', it is the invisible hand and eye of AI that has fuelled this change. In learning, these frictionless interfaces are easier to learn and use. They also reduce cognitive load, leaving more bandwidth for learning.

Your phone may also know, not just where you are, but what is around you, allowing the start of more sophisticated context reading for online job aids and learning. Suppose my phone knows what building I'm in, where I am in that building, what object I'm close to, and also what project I'm working on; it can then make an educated guess as to what I'm likely to need in terms of push and pull nudges and support. This could be performance support on steroids, where the whole move towards learning in the workflow is enriched by AI.
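
Purely as a hypothetical sketch of how such context signals might be combined (every name, location and rule below is invented; a real system would learn these mappings rather than hard-code them):

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str         # e.g. inferred from GPS / indoor positioning
    nearby_object: str    # e.g. recognised by on-device vision
    current_project: str  # e.g. pulled from a calendar or task system

def suggest_support(ctx: Context) -> str:
    """Rule-based stand-in for a model that ranks performance-support content by context."""
    if ctx.location == "warehouse" and ctx.nearby_object == "forklift":
        return "Push: 30-second refresher on forklift safety checks"
    if ctx.current_project == "GDPR audit":
        return "Push: checklist for data-processing records"
    return "No nudge: stay out of the user's way"

print(suggest_support(Context("warehouse", "forklift", "stock count")))
```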

Conclusion

The smartphone has been astoundingly successful as a consumer and professional device. From its brick-like dimensions in the 1970s and 80s it quickly developed out of voice-only into text, photographs, then video. Its interfaces moved from buttons to touchscreens, and it is now a powerful computer that can do much of what a desktop computer can do, and more. But the real leap is their AI capabilities, as they have AI-embedded hardware for a lot more offline punch, as well as useful functionality. Your phone learns about you, personalises your experiences, knows where you are and now what's around you. This all helps deliver the support you need to work, learn and improve your own expertise. We would be wise to look at the evolution of these devices as the evolution of how learners have interfaced, and will interface, with online learning. The main lesson is that the AI in every modern smartphone will be in all online learning in the future.

Friday, August 14, 2020

AI and ethics - it's not as good as you think and not as bad as you fear

Joanna Bryson, one of the world's experts in AI and ethics, is right when she points out that the big problem in AI and ethics is 'anthropomorphising'. AI is competence without comprehension. It can beat you at chess, Go and poker but doesn't know it has won. Literally hundreds of AI and ethics groups have sprung up over the last couple of years. Some are serious international bodies like the EU, IEEE and so on, but it is important to examine the issues while remaining level-headed. The danger is that we destroy the social goods that AI offers by demonising it before it has been tried.

Having just launched a new book ‘AI for Learning’ in which I tackle these ethical issues in some detail, I thought I’d provide a taster for the ethical concerns as they may affect the world of learning. 

Existential

Let's get one moral issue out of the way – the existential threat. This often centres around Ray Kurzweil's 'Singularity', the idea that AI will at some point transcend human intelligence and become uncontrollable. Other AI experts like Stuart Russell, Brett Frischmann and Nick Bostrom have speculated at length on ways in which runaway AI could be a threat to our species. Although there are possible scenarios where runaway AI leads to our demise as a species, this is not an issue that should worry us much in using AI for learning. Many, such as Steven Pinker, Daniel Dennett and other serious researchers in AI, are sceptical of these end-of-days theories. In any case, it is highly unlikely that AI for education will do much other than protect us from such scenarios.

Bias

Much more relevant is the topic of 'bias'. The problem with many of the discussions around bias in AI is that the discussions themselves are loaded with biases: confirmation bias, negativity bias, immediacy bias and so on. Remember that AI is 'competence without comprehension', competences that can be changed, whereas all humans have cognitive biases, which are difficult to change. AI is just maths, software and data. This is mathematical bias, for which there are definitions. It is easy to anthropomorphise these problems by seeing one form of bias as the same as the other. That aside, mathematical bias can be built into algorithms and data sets. What the science of statistics, and therefore AI, does is quantify and try to eliminate such biases. This is, essentially, a design problem, and I don't see much of a problem in the learning game, where datasets tend to be quite small, for example in adaptive learning. It becomes a greater problem when using a model such as GPT-3 for learning, where the data set is massive. It can literally produce essay-like content at the click of a button. Nevertheless, I think that the ability of AI to be blind to gender, race, sexuality and social class may, in learning, make it less biased than humans. We need to be careful when it comes to making decisions that humans often make, but at the level of learner engagement and support there is plenty of low-hanging fruit that need be of little ethical concern.
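
To make the point that mathematical bias can at least be defined and measured, here is a minimal sketch of one common check, the demographic parity difference, run on an invented toy dataset (the data and the 'investigate' threshold are illustrative only; real fairness auditing involves far more than one metric):

```python
# Invented toy data: binary selection decisions for two groups
decisions = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "A", "selected": 1},
    {"group": "B", "selected": 1}, {"group": "B", "selected": 0},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

def selection_rate(rows, group):
    """Proportion of positive decisions for one group."""
    group_rows = [r for r in rows if r["group"] == group]
    return sum(r["selected"] for r in group_rows) / len(group_rows)

# Demographic parity difference: gap in selection rates between the groups
gap = selection_rate(decisions, "A") - selection_rate(decisions, "B")
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 here - large enough to investigate
```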

Race

The most valuable companies in the world are AI companies, in that their core strategic technology is AI. As to the common charge that AI is largely written by white coders, I can only respond by saying that the total number of white AI coders is massively outgunned by Chinese, Asian and Indian coders. The CEOs of Microsoft and Alphabet (Google) were both born and educated in India. And the CEOs of the three top Chinese tech companies are Chinese. Having spent some time in Silicon Valley last year, it is one of the most diverse working environments I've seen in terms of race. We can always do better, but this should, in my view, not be seen as a crippling ethical issue.

Gender

Gender is an altogether different issue and a much more intractable problem. There seems to be bias in the educational system among parents, teachers and others that steers girls away from STEM subjects and computer studies. But the idea that all algorithms are gender-biased is naïve. If such bias does arise, one can work to eliminate it. Eliminating human gender bias is much more difficult.

Transparency

It is true that some AI is not wholly transparent, especially deep learning using neural networks. However, we shouldn't throw out the baby with the bathwater… and the bath. We all use Google, and academics use Google Scholar, because they are reliably useful. They are not transparent. The problems arise when AI is used to, say, select or assess students. Here, we must ensure that we use systems that are fair. A lot of work is going into technology that interprets other AI software and reveals its inner workings.

Dehumanisation

A danger expressed by some educators is that AI may automate and therefore dehumanise the process of learning. This is often within discussions of robot teachers. I discuss the fallacy of robot teachers in the book. It is largely a silly idea, as silly as having a robot driver in a self-driving car. It is literally getting the wrong end of the stick, as AI in learning is largely about support for learners. Far from dehumanising learning it may empower learners.

Employment

The impact of AI on employment is a lively political and economic topic. Yet, before Covid, we had record levels of employment in the US, UK and China. There seems to be a fair amount of scaremongering at learning conferences, where you commonly see completely fictional quotes, such as ‘65% of children entering primary school today will be doing jobs that have yet to exist’. Even academic studies tend to be hyperbolic, such as the Frey and Osborne (2013) report from Oxford University that claimed ‘47% of jobs will be automated in the next two decades’. Seven years in and the evidence that this is true is slim. What is clear is that skills in creating and using AI for learning will be necessary. Indeed, Covid has accelerated this process. I categorise and list these new skills in the book.

Conclusion

I touch upon all of these issues in the book and stick to my original premise that AI is 'not as good as you think it is and not as bad as you fear'. Sure, there are ethical issues, but these are similar to general ethical issues in software and any area of human endeavour where technology is used. It is important not to see AI as separate from software and technology in general. That's why I'm on the side of Pinker and Dennett in saying these are manageable problems. We can use technology to police technology. Indeed, AI is used to stop sexist, racist and hateful text and imagery from appearing online. Technology is always a balance between good and bad. We drive cars despite the fact that 1.3 million people die horrible deaths every year from crashes and many more suffer serious injuries. Let's not demonise AI to such a degree that its benefits are not realised; as I discuss in the book, in education and training the benefits are considerable.


AI for Learning

The book 'AI for Learning' is available on Amazon. In addition to ethics it covers many facets of AI for learning: teaching, learning, learning support, content creation, chatbots, learning analytics, sentiment analysis and assessment.


Bibliography

Bryson, J.J., Diamantis, M.E. and Grant, T.D., 2017. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), pp.273-291.

Kurzweil, R., 2005. The singularity is near: When humans transcend biology. Penguin.

Russell, S., 2019. Human compatible: Artificial intelligence and the problem of control. Penguin.

Clark, D., Review of Human Compatible https://donaldclarkplanb.blogspot.com/search?q=Human+Compatible+by+Stuart+Russell+-+go+to+guy+on+AI+-+a+must+read..

Frischmann, B. and Selinger, E., 2018. Re-engineering humanity. Cambridge University Press.

Clark, D., Review of Re-engineering humanity

https://donaldclarkplanb.blogspot.com/search?q=Frischmann

Bostrom, N., 2017. Superintelligence. Dunod.

Pinker, S., 2018. Enlightenment now: The case for reason, science, humanism, and progress. Penguin.

Dennett, D.C., 2017. From bacteria to Bach and back: The evolution of minds. WW Norton & Company.

Clark, D., Review of From bacteria to Bach and back

https://donaldclarkplanb.blogspot.com/search?q=Dennett+-+why+we+need+polymaths+in+the+AI+ethics+debate


Friday, August 07, 2020

Wellness, Happiness and Mindfulness - Holy Trinity of bogus therapy culture


In the 1890s Dr. John Harvey Kellogg invented Corn Flakes, but the reasoning behind the invention is surprising. He was obsessed with sin, and in particular masturbation, seeing bland foods as a suppressor of such appetites. There is more than a touch of the Kellogg motivation in modern wellness, happiness and mindfulness training. We are seen as in need of redemption, with deficits that need to be corrected by HR. We are instructed on how to be well, happy and mindful… as that will lead to greater productivity. How on earth did this happen, that HR became the supposed masters of our innermost feelings?

A battery of techniques has emerged in organisations from the therapy culture that grew out of psychoanalysis and other fashionable social trends of the 1960s, such as meditation. Several narratives underpin these fads: the therapy narrative, where all are in need of cognitive cures; the deficit narrative, where all suffer from some sort of emotional deficit; and binary narratives, where the language of deficits is reinforced: well - unwell, happy - unhappy, mindful - mindless. Yet the evidence is strangely absent. What went wrong?

All is not well with wellness

This is a huge business, around $8 billion in the US alone. Yet it is largely based on articles of faith, not research. The first large randomised controlled trial of an employee wellbeing programme suggested they are a waste of money. Jones et al. (2018), in their study What Do Workplace Wellness Programmes Do, took 12,000 employees, randomly assigned them into groups, but found no "significant causal effects of treatment on total medical expenditures, health behaviors, employee productivity, or self-reported health status in the first year". This study is important, as it avoids the self-selecting nature of the audiences so prevalent in other studies on wellbeing. The lack of controls renders most studies in this field largely useless as the basis for recommendations.

Did it reduce sickness? No, it didn't. Did it result in people staying in their jobs, getting promotions or pay rises? No, it didn't. Did it reduce medication or hospital visits? No, it didn't. This was true for almost every one of the 37 outcomes studied. The bottom line is that there is no bottom line, no return on investment. The interesting conclusion by the authors of the study is that wellness programmes, far from helping the intended audience (the obese, smokers etc.), simply select for those who are already healthy, yet the burden of cost is borne by all.

Workplace 'wellness' programmes abound, largely surveys and weak documents no sooner read than forgotten. Since when did HR think they have the right to take over the role of therapists and responsibility for the emotional welfare of employees? HR, rather than sticking to the worthy role of employee development, pay and rations, has always wanted to be taken more seriously. But what gave them the right to take control of our emotional lives? Why do they think they are qualified to become therapeutic and moral experts? In practice, this often means reading one or two self-help books, or a short course run by people who have themselves cobbled together some evidence-free PowerPoint and a downloaded survey template. In truth it ends up being superficial, if not hollow.

And it is not only in the workplace that therapy culture has taken root. In schools, wellbeing is seen as a necessary condition for learning and attainment. Yet a longitudinal study that looked at the relationship between attainment and subjective wellbeing, measured three times over six months on 807, 790 and 792 students respectively, showed that wellbeing did not predict academic achievement.

In some US Universities, students are asked to sign wellness contracts. The University of Massachusetts, along with many others, has a Campus Wellness Contract. Undergraduates are asked to sign a contract that commits them to a healthy lifestyle (roughly conforming to white, Christian values). Perhaps the last thing many need at that age of joy, curiosity, exploration and risk is some contract that turns you into a dull conformist. Is that the real goal of education, to be 'well', as defined by some dull, abstemious benchmark?

The Wellness Syndrome by Carl Cederstrom and Andre Spicer is another welcome antidote to this wave of woolliness. The authors rightly expose it as a faddish syndrome, really a moral obligation and imperative to regulate your feelings and behaviour. The well - unwell, happy - unhappy dualism slips into the good - bad moral imperative. What they posit as the real mechanism for this movement is an appeal to narcissism. It is a programme that actually appeals to the 'me' in all of us. Their main point is that it is counterproductive. The more you seek wellness, the less well or happy you become.

If you have any doubts about the commercial pressure, remember the Australian 'wellness' blogger, Belle Gibson, who lied about having terminal cancer, just to sell her blog and book. Belle is a foolish young girl who deserves pity rather than scorn, but many proponents of mindfulness, wellness and happiness are playing a similar game. It is a game that appears time and time again in HR. A book appears, training courses appear, 'practitioners' pop up, then an army of HR people get out there promising utopian increases in efficiency and organisational productivity on the back of their own self-propelled beliefs. The whole thing becomes a marketing exercise that uses its own hot air to fuel itself.

Happiness

The wellness, happiness and now mindfulness debate goes back to the Greeks and reached its peak with Bentham, Mill and the subsequent philosophical and political debate around 'Utilitarianism' in the late 19th century. 'The Greatest Happiness Principle' led to a definition of happiness in terms of pleasure and the absence of pain. However, Bentham's 'hedonic calculus' proved too primitive and awkward to use in any practical sense. Mill opted for quality, not quantity, with a focus on higher pleasures, but there were still problems of definition and measurability. The arguments that 'happiness' is vague, difficult to measure and cannot be used as a guide for moral or social well-being remain a problem for positive psychology.

Unfortunately, just as we thought it had receded into history, specious psychoanalysis brought all of this back under another guise: therapy culture. It all started with Freud, but it was Rogers, and more recently Seligman, who dragged it into the world of education and training. The idea that 'happiness' is the sole purpose of life, or even an end in itself, seems to have taken root in our therapeutic culture. Life is not a simple calculus of happiness - unhappiness. Even a cursory look at the complexity of human feelings, emotions and behaviour makes that idea seem childish. Even Seligman, the pied piper of happiness, came to reject this simple term and moved towards 'flourishing'.

Constantly worrying about how well you are is no way to live your life. In the two clever studies reported in Can seeking happiness make people unhappy? Paradoxical effects of valuing happiness, two groups watched a happiness-inducing video. Those who had undergone exposure to 'happiness' treatment before watching the video felt worse than those who had not. The authors argue that valuing happiness is self-defeating, as the more it is valued, the more disappointed you become. It would seem that happiness expectations can lead to disappointment, and therefore feeling less happy, when faced with real-world situations.

Unfortunately HR has caught a bad dose of 'happy clapping' and middle managers have latched onto the idea that we should try to engineer this happiness. You see it in the work-life balance debate (read work = unhappy, life = happy). You also see it within organisations, as HR tries to take control of the emotional welfare of employees. Self-appointed armies of mentors, coaches, counsellors and therapists are all over organisations searching for pathological deficits. Everyday emotions and ordinary contention are diagnosed as illnesses, and people are offered cures, or rather bromides. This is not a plea for grumpiness; it is a plea for realism and sanity, before the therapeutic culture starts seeing the whole of society as an asylum full of pathological patients who need to pay for their sins. People deserve dignity at work, fair pay and conditions, a safe workplace and a good work environment. They are adults, not children. My happiness is MY business.

The great Barbara Ehrenreich, in Smile or Die, is one of many who have criticised the rise of positive psychology and positive thinking. She thinks the 'wellness' and 'happiness' movement replaces reality with positive illusions. You can think positively, but "at the cost of less realism". Seligman's book Authentic Happiness was seen by Ehrenreich as a "jumble of anecdotes" and she found his formula for happiness banal: H = S + C + V (Happiness = set range + circumstances + factors under voluntary control). In the Journal of Happiness Studies she reads study after study linking happiness to every conceivable outcome, but it is a lop-sided view of the world, with no room for the realism of negative results.

‘Mindfulness’: yet another mindless fad in education

More recently, a particular species of wellness swept through education and corporate training: mindfulness. In truth, it is not new at all. It goes back to Buddhism, then Freud, then Rogers and the relentless effort to get therapeutic theory into education. But there are plenty of reasons for rejecting this particular manifestation of the wellbeing madness.

Mindfulness is yet another example of adults taking their new-age fixations and forcing them on the young. It is not as if kids take naturally to such unnatural behaviours; they are naturally exuberant. Education should be about opening up young minds, not forcing them to do things that faddish adults think are right for them. Education is about both mind and body, but that means being alive and kicking, socialising with others through play, games and sport. Kids are lively, and locking them up for most of the day in classrooms, often accompanied by enforced silence, is bad enough, without forcing them to sit in even more complete, communal silence. They are gloriously alive at that age and should play and learn, be lively and curious, not mimic artificial, adult fads.

Enforced silence and focus can sometimes be in order, especially when learning to think, reflect and produce meaningful analysis, synthesis and written work, but to fetishize non-productive silence as part of self-development is a stretch. For adults, it represents an easy but illusory solution to what is actually quite difficult: facing up to the fact that many things in life are complex and hard. When the solution is simply to ignore this through periods of forced inaction, we are perhaps exacerbating problems, not solving them.

Mindfulness plays a neat trick. It is a wolf in sheep’s clothing, as it is actually mindless meditation under the guise of mindful attention. What we need is more mindful, external attention on learning, teachers and other people in learning. This means getting involved, not idle internalizing. It means being alert and attentive, as we know with certainty that outward-looking, psychological attention is a necessary condition for learning. The sort of internal attention that learning does need is to do with encoding, elaboration, scene setting, deep processing and practice, especially spaced practice, that leads to cognitive improvement.

The therapy business, and it is very much a business, finds it difficult to define ‘mindfulness’. Some relate it directly to Buddhist meditation, others to reflection on your physiological processes, others to internal cognitive reflection. In fact, it is somewhat contradictory: a stilling of the mind, yet a strong sense of presence or attention to self, achieved through a selfless, meditation-based practice. There is no consistency, as mindfulness is many things to many people. This is always a worry and often a sign that all is not well with a practice. It has all the hallmarks of a fad: not evidence-based (in terms of learning), promoted by celebrities and suddenly erupting as the ‘next big thing’. Of course, mindfulness will have been long forgotten in a few years’ time, as another temporary bromide hits the market.

Behind every fad there is often a book. In this case, it is Mindful Work: How Meditation Is Changing Business from the Inside Out by David Gelles (2015). His evidence is largely anecdotal, mainly the testimonies of stressed-out executives who dabble a little in meditation, like it and do a top-down job of applying their hobby to their employees. Even when workplace studies are considered, they are of such poor design that they can be discounted. The key examples are, of course, companies that have the luxury of trying this stuff out: already massively successful, cash-rich companies in tech, health insurance and finance. Google, Aetna and Goldman Sachs - yes, Goldman Sachs! Imagine using the company that was instrumental in the financial crisis, the disastrous destruction of the Greek economy and Malaysian corruption, to sell the idea of being ‘mindful’. A company that has inflicted financial misery on millions used as an argument for increasing ‘compassion’? This is an Orwellian world, where crooks define good behaviour. Hedge-fund managers are even quoted. Meditate in order to rape the markets, but feel good about yourself at the same time.

Ultimately Gelles does not address the key issue: many of these companies are in the game of making huge profits and avoiding tax. It is capitalism, not compassion, that drives them. Mindfulness schemes allow them to mask their lack of compassion and pretend to be compassionate. These therapeutic approaches in the workplace are fundamentally about PR and money, not mental health. "Militaries round the globe are using it for their snipers," says Gelles. Well, that is good to know. Feel calm while you blow someone’s brains out.

Here is a thought experiment. Suppose you run a factory or a billable-hours law firm and you are faced with a recommendation for a ‘mindfulness’ programme of 20 minutes a day. In a 40-hour week you would have to guarantee a productivity increase of well over 4% just to break even (see the rough arithmetic below). Note that in the Gelles book there is only one solitary example of this being used in a blue-collar environment, for good reason. Are we being asked to believe that factories, shops, rubbish collection, bar staff and dozens of other jobs will see these increases in productivity through meditation? Of course not. It is a luxury only the swindlers can justify.
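A rough back-of-envelope check, using my own assumed figures (a 5-day, 40-hour week, with all 20 minutes coming out of productive time, and ignoring the cost of trainers, rooms and course fees):

time lost per week = 5 × 20 = 100 minutes
working week = 40 × 60 = 2,400 minutes
share of the week lost = 100 / 2,400 ≈ 4.2%
break-even productivity gain = 100 / (2,400 - 100) ≈ 4.3%

In other words, the remaining 2,300 minutes must deliver the output of the full 2,400 before the programme even pays for the time it consumes.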

Learning styles, left/right-brain theory, whole-word literacy, Brain Gym, playing Mozart while kids learn – we have seen this stuff served up in real schools, driven by nothing more than the need for ‘fillers’ on ill-organised INSET days. Education does itself no favours by snatching at these crazes. It opens teachers and trainers up to the sort of unnecessary mockery that their enemies adore. Similarly in organisational training, where adults increasingly have to participate in what many regard as infantile crazes.

When it comes to the evidence, let’s be careful and ask the usual questions. What is the source? What was the method? There are far too many self-proclaimed, survey-monkey theorists ready to promote something they already make a living from. As John Higgins (to be fair, a supporter of wellness programmes) says, the evidence for the impact of these programmes is never clear, as “those who took advantage of the programs were likely individuals who [were] already highly driven, motivated, and oriented toward self-improvement”. This has far more to do with HR’s ongoing obsession with binary, therapeutic and even Silicon Valley narratives than with science.

Therapeutic narratives

The dominant narrative that underlies all three is the therapeutic narrative, which goes back to Freud but includes many others, especially Carl Rogers. This narrative lies deeper than the one above, as it draws on a Freudian view of the world that sees almost everyone as in need of therapy. It has its origins in Europe but reached its apotheosis in the US, and California in particular. Carl Rogers is known as the founder of ‘client-centred’ therapy and for his promotion of counselling. He also had a keen interest in education, and his therapy-oriented methods became widely adopted in education and training through coaching and mentoring. His influence can be felt everywhere in the learning world, especially through counselling and therapeutic techniques in education and the workplace.

This narrative refuses to die and has morphed from fairly benign mentoring to more intrusive counselling and now on to wellness, happiness and mindfulness. Descriptive definitions suddenly become prescriptive techniques to be applied to all. Just as the underlying Freudian theory fades, the therapeutic narrative, described well by Frank Furedi in Therapy Culture (2004), gets resurrected. Employees are not patients, the workplace is not an experimental therapy sandbox and HR are not psychotherapists.

Deficit narratives

The language plays into another, more general narrative that lies beneath therapy culture – the deficit narrative. The weird assumption that all learners and employees are mentally deficient and in need of therapeutic help from educators and HR has taken hold, resulting in mindfulness, wellness and happiness jargon being bandied about like anti-depression tablets. Well-meaning but misguided, these schemes assume emotional deficits in us all and demand that they be reduced through half-baked, new-age fads.

The conceit of therapy culture is that the answer to school attainment or productivity is always more wellness, happiness or mindfulness. The glass is always half empty. We always seem to have deep 'deficits', and this deficit mindset calls for reducing the deficit. What is worse is education and training’s tendency to turn the deficit definition of emotions into something far worse – the pathological definition of education and training, where our emotional well-being and health becomes a key target for schooling and training. When education and training are seen as a cure and cognitive deficiency as a disease, we need to worry.

False binary narratives

We can applaud attempts to make life less stressful and the use of therapeutic techniques for mental illness, but there is a dangerous line that is crossed with wellness, happiness and mindfulness. That line is the push of therapy culture into the workplace. While these three mini-movements are different, they are all part of the same broad pathological narrative, where employees are seen as having something wrong with them, a form of original sin. The language used betrays the problem.

Wellness v unwell

With wellness or wellbeing, the hidden assumption is that we are unwell and need to be made ‘well’ by whatever craze hits the HR conference circuit. Those who do not dance to the new company tune are branded as unwell. It is an odd form of binary benchmarking.

Happiness v unhappiness

With the cult of happiness we have the simplistic ‘unhappy’ versus ‘happy’ assumption. If you are not being made happy, you are dysfunctional and unhappy. In practice, the emotional landscape of all humans is far more complex than this binary suggests. People have complex emotional lives that are tied up with their lives at home and outside the workplace. People are neuro-diverse and rarely fit into this sort of classification.

Mindful v mindless

Note the odd juxtaposition of ‘mindful’ with ‘mindless’. Am I really less fulfilled in my life than those who practice mindfulness? Mindfulness becomes righteousness when it dismisses the rest of us as falling short of its self-proclaimed cognitive and moral standard because we do not practice an obscure meditative technique. That is where the line is crossed: the assumption that one is not mindful unless practicing some meditative technique.

These are precisely the false binary choices that these movements exploit to peddle one-sided solutions. They pose mutually exclusive terms to artificially bolster the case for the product (usually consultancy or a training course). By all means make the workplace a better place, but these simple, binary oppositions in no way reflect the rich and complex mental states of people at work. These programmes assume simple dualisms. Treat people well, respect them, make sure they are fairly rewarded, listen to what they have to say, develop their skills, but don't cross that line and become their pseudo-therapist.

Conclusion

Beware of words ending in –ness: wellness, happiness, mindfulness. They are catch-all terms that seem to mean everything but, when implemented by HR in organisations, end up meaning nothing. Life and work are not illnesses. There is no problem in anyone choosing to partake in yoga, reflexology, mindfulness, wellness, laughter therapy, happiness – whatever – but that is a lifestyle choice, not a workplace imperative. This lifestyle training is something HR are neither qualified nor suited to manage. Too often, perfunctory conference talks or potboiler paperbacks on the subject get turned into designing or buying ‘courses’, with the dubious and non-evidence-based claim that they will transform the business. What is far more likely to solve psychological problems in the workplace are direct actions that reduce pressures, from more equitable pay, professional management and good working conditions to flexible working. It is not the organisation’s job to solve mental health problems. Indeed, this sort of meddling may make things worse.

Bibliography

Jones, D., Molitor, D. and Reif, J., 2019. What do workplace wellness programs do? Evidence from the Illinois workplace wellness study. The Quarterly Journal of Economics, 134(4), pp. 1747-1791.

Mauss, I.B., Tamir, M., Anderson, C.L. and Savino, N.S., 2011. Can seeking happiness make people unhappy? Paradoxical effects of valuing happiness. Emotion, 11(4), p.807.

Yang, Q., Tian, L., Huebner, E.S. and Zhu, X., 2019. Relations among academic achievement, self-esteem, and subjective well-being in school among elementary school students: A longitudinal mediation model. School Psychology, 34(3), p.328.

Seligman, M.E., 2012. Flourish: A visionary new understanding of happiness and well-being. Simon and Schuster.

Ehrenreich, B., 2010. Smile or die: How positive thinking fooled America and the world. Granta Books.

Furedi, F., 2004. Therapy culture: Cultivating vulnerability in an uncertain age. Psychology Press.

Gelles, D., 2015. Mindful work: How meditation is changing business from the inside out. Houghton Mifflin Harcourt.