
Saturday, June 11, 2011

7 objections to social media in learning (and answers)


Social media – I’m a fan. I blog, facebook and tweet daily, and love all of the additional resources and tools. But when an important social and technological phenomenon turns into a bubble of evangelism, we’ve got to handle it with care. I’ll present on the use of Social Media in organisations in Zurich this week, to Directors of many of Europe’s top companies, and explain the upside but it’s just as important to be open about the downside. I agree with the Nick Shackleton-Jones Tweet, “When the tide comes in you’d better don your trunks and not bury yourself in the sand” but it’s also rational, for some, to walk up to dry land to avoid getting wet. Even the Vatican had a Devil’s Advocate department when discussing canonisation, so before giving Social Media the status of sainthood, let’s consider some of the downsides.
Objection 1: Dumbness of crowds
We have ‘constructivists’ who wouldn’t be able to string two sentences together if asked what that actually means in terms of real psychology. Then there are the woolly ‘social learning’ advocates who see all learning as social (ridiculous) and can’t see that some of it is a waste of time, like going over the top of your head to scratch your ear. Much of my productive learning is completely solitary, and I’ve spent far too much time in my life in wasteful, long-winded social contexts, like classrooms, training rooms, lecture theatres, meeting and conference rooms, learning little or nothing.
It’s a matter of balance, not blind belief in half-baked social theory. We need a mix of approaches that includes social learning but not to the exclusion of focused, solitary learning. Reading, writing, reflecting and deep processing need isolation from others, not chattering classes.
Objection 2: Weapons of mass distraction
Employees and learners can get stuck in a tar-pit of unproductivity, as social media is sticky, seductive and addictive. Most parents have worried about the amount of time their kids spend on, say, Facebook or Twitter when they claim to be studying or doing assignments. At work, it’s easy to avoid doing things you don’t want to do by escaping into social chat.
First, if you’re really that worried, monitor usage, which many organisations already do. That’s fine, as it’s a way of managing excessive use, but it’s far better to police by policy. Simply add a few words to your existing HR policy around the excessive use of social media for non-organisational purposes. In any case, in the end, in the workplace, employees have to be trusted.
Objection 3: Confidentiality, libel & harassment
Many organisations have examples of naïve, even malicious use of social media. There are genuine fears around the leaking of confidential information and reputational damage. In addition, individuals have been libelled and harassed, leading to complicated and expensive HR management issues and court cases.
To be honest, I think the fears are exaggerated here, but they do have to be managed. Again, police through policy, pointing out the dangers of inadvertently leaking information and expected behaviour towards others. To be frank, these four words should suffice ‘Don’t be a dick!’
Objection 4: Non-alignment
In one survey, fewer than 18% of decision makers at 100 of the UK’s top 500 companies (by turnover) thought that L&D was aligned with the goals of the business. It is not clear that social media solves this problem, as it can encourage divergence from the task: one link leads to another, and one is led not by goals but by interest. This can be worse than simply ‘not seeing the wood for the trees’, as social media can be so random, fragmented, long-winded and unstructured that it is difficult to square effort with relevance.
Anders Mørch of the University of Oslo sees this as one of many ‘double-edged’ sword phenomena in social learning. Say what you will about informal learning, there’s still a massive role for ‘aligned’ formal learning. Many things can’t be left to the vagaries of a social approach, as they have to be tackled within a fixed timescale.
Objection 5: Crap content
The mixed quality of user-generated content is also a concern. Even in media sharing, the poor quality of many lectures on YouTube EDU and other media-sharing sites shows that sharing in itself is not a virtue if the content being shared lacks quality or relevance. Putting one’s faith in user-generated content alone can be a disaster.
Wikis solve this problem by having a process of communal and tracked amendments, but you need volumes of contributors to raise the quality of the content. Rankings and strong social recommendations by trusted colleagues are another useful control, feeding high-quality links and content from outside the organisation.
Objection 6: Redundancy
Many of the productivity tools are here today, gone tomorrow. Some simply collapse, as they have no sustainable way to monetise the product. Some get dropped (even Google products), others get bought by the bigger boys and suddenly disappear or become part of a larger software suite. It can be hard to keep up.
There seems little danger of the major entities, such as Google, Facebook and Twitter disappearing, so these are safe bets. However, it would be wise to regard others as useful even though they are temporary, especially tools such as Doodle etc. Data storage is another issue, however, Google and Apple are as stable as anyone in this regard.
Objection 7: Security
Many organisations, most obviously the military, government and banks, are nervous about DoS attacks and data theft, and are rightly wary of unlimited access to social media and tools. Global corporations are under siege from hacker groups and online organised crime. Even Julian Assange won’t use Facebook, as he’s sure the data has already been sucked out by non-desirables. This is not irrational fear; it’s the real deal.
However, HR and training bods should not be making this decision on their own. They need to ask the IT experts about the dangers, which is only fair, as IT wouldn’t expect to restrict your behaviour in teaching or training. Once a real examination of the issues has been done, access can be allowed. Point to other organisations that have opened up and had no problems.
Conclusion
OK, that’s the Devil’s Advocate stuff over. The reality is the astounding rise of the internet as a social intermediary, with social media now the number one use of the web and Facebook alone at 600 million users. Potential employees, employees, learners and customers are using this stuff in anger. The modern executive, manager, teacher or trainer can’t really claim to be a professional without at least a working knowledge of social media. You’ve got to play with this stuff to understand its virtues and vices.
You also need to understand, plan and assume its use, for there’s no way that it will not be used. Every one of your employees has a mobile which is a pipe to the outside world beyond your control.
However, it’s easy for academics and advisors who have never really had to ‘run’ an organisation, or take responsibility for real jobs and lives, to get over-excited about their passions. They themselves can be subject to social conformity, groupthink, non-alignment and hype. It’s important that this type of over-optimism is not at the expense of realism.
To be fair, people like Jane Hart, Jay Cross, Charles Jennings and Harold Jarche understand all of this; the danger is the bandwagon effect and evangelistic groupthink, which can lead to the abandonment of good practice elsewhere. Social media is not the answer to every problem, but it is undoubtedly a useful and powerful advance in learning.

Friday, September 07, 2018

Are 'chatbots' a gamechanger in learning? 10 reasons and some warnings!

This was the debate motion at the LPI conference in London. I was FOR the motion (Henry Stewart was AGAINST) and let me explain why....
1. AI is a gamechanger
AI will change the very nature of work. It may even change what it is to be human. This is a technological revolution as big as the internet, and it will change what we learn, why we learn and how we learn. The top seven companies by market cap all have AI as a core strategy: Apple, Alphabet, Microsoft, Amazon, Tencent, Facebook and Alibaba. AI is a strategic concern for every sector and every business, even learning. Nevertheless, we must be careful not to hype the functionality of chatbots. They are not capable of fully understanding every question you throw at them, nor do they have the general capabilities of a human teacher. They are, essentially, good within narrow parameters. We must manage expectations here.
2. Evidence from consumers
Several radical shifts in consumer online behaviour move us towards chatbots. First, the entry of voice-activated bots into the home, connected to the IoT (Internet of Things): Amazon Alexa and Google Home. My Alexa is linked to my internet music service and the lights in my home, and I use it as a timer for Skype calls and events during the day. It is integrated into my workflow.
3. Rise of voice
The rise of ‘voice’ as a natural form of communicating with technology – Siri, Cortana and other similar services. Over 10% of all search is now by voice. We have been using computer generated voice from text files in WildFire for some time. It adds some humanity to what can often be seen as the sterility of online learning.
4. Chat has superseded social media
The switch from social media to messaging/chat apps took place in late 2014, and the gap is growing: chat is the home screen for most young people. Look over someone’s shoulder and you’re far more likely to see a ‘chat’ screen than a website. The lesson here is that chatbots allow us to play to the natural online behaviours of learners. Then again, chat with a chatbot is far more limited than chat with another human.
5. Social
We are social animals, and it was no accident that chatbots first emerged in social tools such as Facebook and Slack. They are a natural extension of existing social learning, allowing us to place them in the workflow. Chatbots, like Otto, are designed to sit within these workflow tools, moving learning from the LMS to a more demand-driven model.
6. Pedagogy in chatbots
Most teaching is through dialogue. The Socratic method may have been undermined by blackboards and their successors, through to PowerPoint, but voice and dialogue are making a comeback. Speaking and listening through dialogue is our most natural interface. We’ve evolved these capabilities over two million years; speech comes naturally, and we’re grammatical geniuses by age three, without ever being taught to speak and listen. Within dialogue lie many pedagogically strong learning techniques: retrieval, elaboration, questions, answers, follow-ups, examples and so on. It just feels more natural. Once again, however, we must be careful not to think of chatbots as people. They are not conscious and not capable of full-flow, open dialogue.
7. Evidence in learning
An exit poll taken by Donald Taylor at the Learning Technologies conference this year showed personalised learning at No 1 and AI at No 3. The interest is clearly strong, and there are lots of real projects being delivered to real clients by WildFire, Learning Pool and so on. However, be careful about vendors telling you their chatbot is true AI. Many are not. It is fiendishly difficult to do this well, so most are very structured, branching bots with limited functionality.
8. Chatbots across the learning journey
There are now real chatbot applications at points across the entire learning journey. I showed actual chatbot applications in learning in the following areas:
   Recruitment bots   
   Onboarding bots
   Learner engagement bots
   Learner support bots
   Invisible LMS bots
   Mentor bots
   Reflective bots
   Practice bots
   Assessment bots
   Wellbeing bots
The problem we have is that most bots are actually just FAQ bots. They are pixies for search. In the learning game they have much more potential. My own view is that we'll see a range of bot types emerge that will match the needs of learners and organisations. We must think more expansively around bots if they are to play a significant role in learning.
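An FAQ bot of this limited kind is easy to picture in code. The sketch below is a hypothetical, minimal example, not from any real product: it fuzzy-matches an incoming question against stored question/answer pairs and falls back when nothing scores well. The questions, answers and threshold are all illustrative.

```python
from difflib import SequenceMatcher

# Hypothetical mini-FAQ: a bot of this kind is little more than
# search over a set of question/answer pairs.
FAQ = {
    "how do i reset my password": "Use the 'Forgotten password' link on the login page.",
    "where is the onboarding checklist": "It's pinned in the #new-starters channel.",
    "who approves my expenses": "Your line manager approves expenses under 500 pounds.",
}

def faq_bot(question: str, threshold: float = 0.5) -> str:
    """Return the answer whose stored question best matches the input."""
    q = question.lower().strip("?! .")
    best_q, best_score = None, 0.0
    for stored in FAQ:
        score = SequenceMatcher(None, q, stored).ratio()
        if score > best_score:
            best_q, best_score = stored, score
    if best_q is None or best_score < threshold:
        return "Sorry, I don't know - try rephrasing?"
    return FAQ[best_q]
```

Everything beyond this, such as remembering context across turns or generating novel answers, is exactly what separates the richer bot types listed above from plain search.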
9. They’re learners
An important feature of modern chatbots, compared to ELIZA from the 1960s, is the fact that they now learn. This matters, as the more you train and use them, the better they get. We used to have just human teachers and learners; we now have technology that is both a teacher and a learner. This means one can take advantage of bot services from some of the large tech companies. But be careful: it’s not cheap, and they tend to swap out functionality with little sensitivity to your delivery.
10. It’s started
Technology is always ahead of the sociology, which is always ahead of learning and development. Yet in these many projects, even with relatively primitive technology, we can see the emerging trend: technology-delivered chatbot learning. In time this will become the norm. I’ve been involved in several projects now across a range of chatbot types.
Objections
Nigel Paine chaired the debate with his usual panache, teased questions out of the audience, and a real debate ensued. The questions were rather good.
Q Has AI passed the Turing test?
First, there are many versions of the Turing test but the evidence from the many chatbots on social media all the way to Google Duplex, shows that it has been passed but only in limited areas. Not for long, sustained and very detailed dialogue, but certainly within limited domains. Google Duplex showed that we’re getting there on sustained dialogue and the next generation of Amazon’s Alexa and Google Home will have memory, context and personalisation in their chatbot software. It will come in time.
Q AI can never match the human brain
This is true but not always the point. We didn’t learn to fly by copying the wings of a bird; we invented new technology, the airplane. We didn’t go faster by looking at the legs of a cheetah; we invented the wheel. The human brain is actually a rather fragile entity. It takes 20 years and more of training to make it even remotely useful in the workplace; it is inattentive, easily overloaded, has fallible memory, forgets most of what it tries to learn, has tons of biases (we’re all racist and sexist), sleeps eight hours a day, can’t download, can’t network, and dies. But it is true that it is rather good at general things. This is why chatbots are best targeted at specific uses and domains, such as the species of chatbot I listed earlier.
Q Chatbots v people
Michelle Parry-Slater made a good point about chatbots not replacing people but working alongside people. This is important. Chatbots may replace some functions and roles but few suppose that all people will be eliminated by chatbots. We have to see them as being part of the learning landscape.
Q Chatbots need to capture pedagogy
Good question from Martin Couzins. Chatbots have to embody good pedagogy, and they already do. Whether it’s models of engagement, support, learning objectives, invisible LMS, practice, assessment or wellbeing, the whole point is to use both the interface and the back-end functionality (an important area for pedagogic capture) to deliver powerful learning based on evidence-based theory: retrieval, effortful learning, spaced practice and so on. This will improve rather than diminish or ignore pedagogy. In all of the examples I showed, pedagogy was first and foremost.
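To make ‘pedagogic capture’ concrete, here is a minimal sketch of the kind of spaced-practice scheduling a chatbot back end might run, using a simple Leitner scheme. The box intervals and function names are assumptions for illustration, not taken from any of the products mentioned.

```python
# Illustrative Leitner-style spaced-practice scheduler.
# Items move up a box (longer gap) when recalled correctly,
# and drop back to box 1 (relearn soon) when missed.
INTERVALS_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}  # box -> days until next review

def next_review(box: int, answered_correctly: bool) -> tuple[int, int]:
    """Return (new_box, days_until_next_review) after one retrieval attempt."""
    if answered_correctly:
        box = min(box + 1, 5)   # promote: widen the retrieval gap
    else:
        box = 1                 # demote: schedule an early re-attempt
    return box, INTERVALS_DAYS[box]
```

The point is that a few lines of back-end logic like this quietly enforce retrieval and spacing, whatever the chat interface on top looks like.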
Q Will L&D skills have to change?
Indeed. I have been training interactive designers in chatbot and AI skills, as these are already in demand. The days of simply producing media assets and multiple-choice questions are coming to a close – thankfully.
Conclusion
Oh and we won the debate by some margin, with a significant number changing their minds from sceptics to believers along the way! That doesn't really matter, as it was a self-selecting audience - they came, I'd imagine, as they were curious and had some affinity with the idea that chatbots have a role. My view is these debates are good at conferences - by starting with a polarised position, the audience can move and shift around in the middle. The audience in this session were excellent, with great questions, as you've seen above. Note to conference organisers - we need more of this - it energises debate and audience participation.