Thursday, March 28, 2019

Chatbots are being abused – but they’re fighting back!

Folk ask chatbots the weirdest of things. That’s fine if your chatbot is, say, a Dominatrix (yes, they do exist). But in customer care or learning chatbots it seems surprising – it’s not. Users know that chatbots are really pieces of software, so they test them with rude and awkward questions. Swearing, sexual suggestions, requests to do odd things, and just being plain rude are all common.
The Cleo chatbot has been asked out on a date over 2000 times and asked to send naked photographs on over 1000 occasions. To the latter it sends back a picture of a circuit board – a nice touch, and humour is often the best response. The financial chatbot Plum responds to swearing by saying "I might be a robot but I have digital feelings. Please don't swear." These are sensible responses: as Nass and Reeves found in their studies of how humans relate to technology, we expect our tech to be polite.
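As a rough sketch of how this kind of deflection might sit in front of a bot’s normal intent handling, here’s a toy Python example. The word lists, replies and the fallback line are all invented for illustration – a real bot would lean on its NLU platform’s built-in fallback and profanity handling rather than a hand-rolled keyword check.

```python
from typing import Optional

# Toy deflection layer in front of a chatbot's normal intent handling.
# Word lists and replies are invented for illustration.

ABUSE_WORDS = {"damn", "hell"}            # stand-in for a real profanity list
FLIRT_WORDS = {"date", "naked", "marry"}

def canned_reply(message: str) -> Optional[str]:
    """Return a polite deflection for abusive or flirty input, else None."""
    words = set(message.lower().split())
    if words & ABUSE_WORDS:
        return "I might be a robot but I have digital feelings. Please don't swear."
    if words & FLIRT_WORDS:
        return "Here's the only revealing picture I have: my circuit board."
    return None

def respond(message: str) -> str:
    reply = canned_reply(message)
    if reply is not None:
        return reply
    return "Sorry, I can only help with your finances."  # stand-in domain reply

print(respond("will you send me a naked photo"))  # circuit-board deflection
print(respond("what's my balance"))               # falls through to the bot
```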
There are even worse disasters in ‘botland’. InspiroBot creates inspiring quotes on nice photographs but often comes up with ridiculous rot. Tay, released by Microsoft, quickly became a sex-crazed Nazi, and BabyQ recommended that young Chinese people should go to the US to realise their dreams. They were, of course, shut down within hours. This is one of the problems with open, machine-learning bots: they take on a life of their own. But awkward questions can be useful…
Play
People want to play with chatbots – that’s fine. You often find that these questions are asked when someone first uses a chatbot or buys an Alexa. It’s a sort of on-boarding process, where the new user gets used to the idea of typing replies or speaking to a machine.
Test limits
The odd questions tend to come at the start, as people stress-test the bot, then drop off dramatically. This is telling and actually quite useful, as users get to see how the bot works. They’re sometimes window shopping, or simply seeing where the limits lie. You can see how far the semantic interpretation of the natural language interface stretches by asking variants of the same question. Note that you can quickly tell whether it uses something like Google’s Dialogflow, as opposed to a fixed, non-natural-language system.
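To make that contrast concrete, here’s a toy sketch of a fixed system next to a loosely ‘semantic’ one. This is emphatically not how Dialogflow works internally (it uses trained language models); crude word overlap is just enough to show why paraphrased questions expose the difference. The intents and training phrases are invented.

```python
from typing import Optional

# Toy contrast: fixed keyword matching vs. loose similarity matching.
# Illustration only - real NLU uses trained language models, not word
# overlap. Intents and phrases are invented.

TRAINING_PHRASES = {
    "check_balance": ["what is my balance", "how much money do i have"],
    "transfer":      ["send money", "transfer cash to a friend"],
}

def fixed_match(message: str) -> Optional[str]:
    """Fixed system: only the exact scripted phrases are understood."""
    for intent, phrases in TRAINING_PHRASES.items():
        if message.lower() in phrases:
            return intent
    return None

def fuzzy_match(message: str) -> Optional[str]:
    """Crude stand-in for semantic matching: score intents by word overlap."""
    words = set(message.lower().split())
    best, best_score = None, 0
    for intent, phrases in TRAINING_PHRASES.items():
        score = max(len(words & set(p.split())) for p in phrases)
        if score > best_score:
            best, best_score = intent, score
    return best

question = "how much cash do i have right now"
print(fixed_match(question))  # None - one rephrasing defeats the fixed system
print(fuzzy_match(question))  # check_balance - the variant still lands
```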
Expectations 
It also helps calibrate and manage expectations. Using a bot is a bit like speaking to a very young child. You ask it a few questions, have a bit of back and forth, and get its level. Actually, with some, it’s like speaking to a dog, where all you can do is variants on ‘fetch’. Once users realise that the bot is not a general-purpose companion that will answer anything, nor a teacher with super-teaching qualities, but has a purpose – usually a specific domain, like finance, health or a particular subject – and that questions beyond this are pointless, you get that ‘fair enough’ response and they settle down to the actual business of the bot.
Engagement
These little touches of humour and politeness serve a further purpose, in that they actually engage the user. If you get a witty or clever reply, you have a little more respect for the bot, or at least the designer of the bot. With a little clever scripting, this can make or break user acceptance. Some people will, inevitably, ask your bot to tell a joke – be ready for that one. A knock-knock joke works well, as it involves a short dialogue; so does a lightbulb joke.
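Because a knock-knock joke spans several turns, the bot has to remember where it is in the exchange. A minimal sketch of that state tracking, with an invented joke, might look like this:

```python
# Minimal state machine for a knock-knock joke. The joke is invented.

class KnockKnock:
    def __init__(self) -> None:
        self.step = 0

    def next_line(self, user_says: str) -> str:
        if self.step == 0:          # user asked for a joke
            self.step = 1
            return "Knock knock!"
        if self.step == 1:          # expecting "who's there?"
            self.step = 2
            return "Interrupting chatbot."
        self.step = 0               # punchline, then reset
        return "Sorry, I couldn't wait for you to finish typing."

joke = KnockKnock()
print(joke.next_line("tell me a joke"))             # Knock knock!
print(joke.next_line("who's there?"))               # Interrupting chatbot.
print(joke.next_line("interrupting chatbot who?"))  # punchline
```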
Tone
These responses can also be used to set the tone of the bot. Good bots know their audience and set the right tone. It’s pointless being too hip and smart-assed with an older audience who may find it just annoying. Come to think of it, this is also true of younger audiences, who are similarly intolerant of clichés. You can use these responses to be edgy, light-hearted, serious, academic… whatever.
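One lightweight way to manage tone is to keep the dialogue logic separate from the wording and select response variants by a tone setting. A sketch, with invented copy:

```python
# Same intents, different voice: response copy keyed by a tone setting.
# All wording is invented for illustration.

RESPONSES = {
    "greeting": {
        "light":    "Hey! Ready when you are.",
        "serious":  "Hello. How can I help you today?",
        "academic": "Welcome. Please state your query.",
    },
    "dont_know": {
        "light":    "You got me - that one's beyond my pay grade.",
        "serious":  "I'm afraid I can't help with that.",
        "academic": "That question falls outside the scope of this service.",
    },
}

TONE = "light"  # set once, to match the audience

def say(intent: str) -> str:
    return RESPONSES[intent][TONE]

print(say("greeting"))
print(say("dont_know"))
```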
Conclusion
You’ll find yourself dead-ending a lot with bots. They’re nowhere near as smart as you first think. That’s OK. They serve a function and are getting better. But it’s good to offer a little freedom: allow people to play, explore, find limits, set expectations and increase engagement.
