Monday, May 22, 2017

Philosophy of technology - Plato, Aristotle, Nietzsche, Heidegger - technology is not a black box

Greek dystopia
The Greeks understood, profoundly, the philosophy of technology. In Aeschylus’s Prometheus Bound, Prometheus steals fire from the gods and gifts mankind the arts of metallurgy, writing and mathematics, so Zeus punishes him with eternal torture. This warning is the first dystopian view of technology in Western culture. Mary Shelley subtitled Frankenstein ‘The Modern Prometheus’, and Hollywood has delivered on that dystopian vision for nearly a century. Art has largely been wary and critical of technology.
God as maker
But there is another, more considered view of technology in ancient Greece. Plato articulated a philosophy of technology in his Timaeus, seeing the world as the work of an ‘Artisan’; in other words, the universe is a created entity, a technology. Aristotle makes the brilliant observation in his Physics that technology not only mimics nature but continues “what nature cannot bring to a finish”. They set in train the idea that the universe was made and that there was a maker - the universe as a technological creation.
The following two thousand years of Western culture bought into this myth of the universe as a piece of created technology. Paley, who formulated the modern argument for the existence of God from design, used a technological image, the watch, to argue for a designed universe and therefore a designer - whom we call God. In Natural Theology; or, Evidences of the Existence and Attributes of the Deity, he argues from analogy, comparing the workings of a watch with the observed movements of the planets in the solar system, to conclude that the universe shows signs of design and must therefore have a designer. Dawkins titled his book The Blind Watchmaker as its counterpoint. God as watchmaker, as technologist, has been the dominant, popular, philosophical belief for two millennia.
Technology, in this sense, helped generate this metaphysical deity. It is this binary separation of the subject from the object that allows us to create new realms, heaven and earth, which acquire a moral patina and become good and evil, heaven and hell. The machinations of the pastoral heaven and the fiery foundry that is hell revive the dystopian vision of the Greeks.
Technology is the manifestation of human conceptualization and action, creating objects that enhance human powers, first physical, then psychological. With the first hand-held axes, we turned natural materials to our own ends. With such tools we could hunt, expand and thrive, then control the energy from felled trees to create metals and forge even more powerful tools. Tools beget tools.
Monotheism rose on the back of cultures in the Fertile Crescent of the Middle East, who literally lived on the fruits of their tool-aided labour. The spade, the plough and the scythe gave them time to reflect. Interestingly, our first records, on that beautifully permanent piece of technology, the clay tablet, are largely accounts of agricultural produce. The rise of writing and efficient alphabets made writing the technology of control. We are at heart accountants, holding everything to account, even our sins. The great religious books of account were the first global bestsellers.
Technology slew God
Technology may have suggested, then created, God, but in the end it slew him. With Copernicus, who drew upon technology-generated data, we found ourselves at some distance from the centre of the universe, not even at the centre of our own little whirl of planets. Darwin then destroyed the last conceit, that we were unique and created in the image of a God. We were the product of the blind watchmaker, a mechanical, double-helix process, not a maker - reduced to mere accidents of genetic generation, the sons not of Gods but of genetic mistakes.
Anchors lost, we were adrift, but we humans are a cunning species. We not only make things up, we make things and make things happen.
We are makers
Once God was dead, in the Nietzschean sense of a conceptual death, we were left with just technology. Radovan Richta’s theory of Technological Evolution posited three stages – tools, machines and automation. We got our solace not from being created forms but by creating forms ourselves. We became little Gods and began to create our own universe. We abandoned the fields for factories and designed machines that could do the work of many men. What we learned was scale. We scaled agricultural production through technology in the agricultural revolution, scaled factory production in the industrial revolution, scaled mass production in the consumer revolution. Then more machines to take us to far-off places – the seaside, another country, the moon. We now scale the very thing that created this technology, ourselves. We alchemists have learned to scale our own brains.
Machines destroy the Little Gods
Eventually we realized that even we, as creators, could make machines that know and think on our behalf. God had died, but now the Little Gods are dying. Gods have a habit of destroying their creators, and we will return to that agricultural age, an age of an abundance of time and the death of distance. We, once more, will have to reflect on the folly of work and learn to accept that it was never our fate, only an aberration. Technology, through film, radio, TV and the web, now shapes our very conception of place and space. As spiders we got entangled in our own web, and now it begins to spin us.
Technology not a black box
Technology is not a ‘black box’, something separate from us. It has shaped our evolution, shaped our progress, shaped our thinking - and it will shape our future. It may even be an existential threat. There is a complex dialectic between our species and technology that is far more multifaceted than the simplistic ‘it’s about people, not technology’ trope one constantly hears on the subject. That dialectic has suddenly got a lot more complex with AI. As Martin Heidegger said in his famous Spiegel interview, “Only a God can save us”. What I think he meant by this is that technology has become something greater than us, something we now find difficult even to see, as its hand has become ever more invisible. It is vital that we reflect on technology, not as a ‘thing-in-itself’, separate from us, but as part of us. Now that we know there may be no maker God, no omnipotent technologist, we have to face up to our own future as makers. For that we need to turn to philosophy – Plato, Aristotle, Nietzsche and Heidegger are a good start….
The postscript is that AI may, in the end, be the way forward even in philosophy. In the same way that the brain has limits on its ability to play chess or Go, it may also have limits on the application of reason and logic. Philosophical problems themselves may need the power of AI to find solutions to these intractable problems. AI may be the God that saves us....

Thursday, May 04, 2017

10 uses for Amazon Echo in corporates

OK, she’s been in my kitchen for months and I’m in the habit of asking her to give me Radio 4 while I’m making my morning coffee. Useful for music as well, especially when a tune comes into your head. But usually it’s some question I have in my head or a topic I want some detail on. My wife’s getting used to hearing me talk to someone else while in another room. But what about the more formal use of Alexa in a business? Could its frictionless, hands-free, natural language interface be of use in the office environment?
1. Timer
How often have you been in a meeting that’s overrun? You can set multiple timers on Alexa and she will light up and sound a (soft) alarm towards the end of each agenda item, say one minute before the next one starts. She could also be useful as a timer for speakers and presenters. Ten minutes each? Set her up and she provides both visual and aural timed cues. I guess it would pay for itself at the end of the first meeting!
2. Calendar functionality
As Alexa can be integrated with your Google calendar, you simply say, “Alexa, tell Quick Events to add ‘go to see…’ on Tuesday 4th March at 11 a.m.”. It prompts you until it has the complete scheduled entry.
3. To do lists
Alexa will add things to a To Do list. This could be an action list from a meeting or a personal list.
4. Calculator
Need numbers added, subtracted, multiplied or divided? You can read them out quickly and Alexa replies just as quickly.
5. Queries and questions
Quick questions or more detailed stuff from Wikipedia? Alexa will oblige. You can also get stock quotes and even do banking through Capital One. Expect others to follow.
6. Domain specific knowledge
Alexa can be trained to respond to voice queries on product knowledge or company-specific knowledge. Deliver a large range of text files and Alexa can find the relevant one on request (a minimal skill sketch follows this list).
7. Training
You can provide text (text to speech) or your own audio briefings. Indeed, you can have as many of these as you want. Or go one step further with a quiz app that delivers audio training.
8. Music
Want to set yourself up for the day or have some ambient music on while you work? Better still, with music that responds to your mood and requests, Alexa is your DJ on demand.
9. Order sandwiches, pizza or an Uber
As Alexa is connected to several suppliers, you can get these delivered to your business door. It saves all that running out for lunchtime sandwiches or pizza.
10. Control office environment
You can control your office environment through the Smart Home Skill API. This will work with existing smart home devices but there’s a developer’s kit so that you can develop your own. It can control lights, thermostats, security systems and so on.
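To make number 6 concrete, here’s a minimal sketch of the kind of AWS Lambda handler that could sit behind a custom Alexa skill answering company-specific questions. The intent name (ProductQuery), the slot name (topic) and the canned answers are hypothetical placeholders, not any shipped skill; only the JSON envelope follows Alexa’s standard custom skill response format.

```python
# Minimal sketch of a Lambda handler for a hypothetical Alexa custom skill
# that answers company-specific questions. Intent and slot names are
# illustrative assumptions, not part of any real skill.

FAQ = {
    "holiday policy": "Staff get 25 days of annual leave plus public holidays.",
    "expenses": "Submit expenses through the finance portal by the 25th.",
}

def build_response(text):
    # Standard Alexa custom skill JSON response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Ask me a company question.")
    if request["type"] == "IntentRequest":
        # Assumes an intent called ProductQuery with a slot called topic.
        topic = (request["intent"].get("slots", {})
                 .get("topic", {}).get("value", "")).lower()
        return build_response(FAQ.get(topic, "Sorry, nothing on that yet."))
    return build_response("Goodbye.")
```

Wire the skill’s endpoint to a function like this and a query along the lines of “Alexa, ask [your skill] about expenses” would return the matching entry.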
Conclusion

As natural language AI applications progress, we will see these business uses become more responsive and sophisticated. This is likely to eat into that huge portion of management time that the Harvard Business Review identified as admin. Beyond this are applications that deliver services, knowledge and training specific to your organisation and to you as an individual. We’re working on this as a training application as we speak.

Wednesday, May 03, 2017

AI moving towards the invisible interface

AI is the new UI
What do the most popular online applications all have in common? They all use AI-driven interfaces. AI is the new UI. Google, Facebook, Twitter, Snapchat, email, Amazon, Google Maps, Google Translate, satnav, Alexa, Siri, Cortana and Netflix all use sophisticated AI to personalise in terms of filtering, relevance, convenience, and sensitivity to time and place. They work because they tailor themselves to your needs. Few notice the invisible hand that makes them work, that makes them more appealing. In fact, they work because they are invisible. It is not the user interface that matters, it is the user experience.
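As a toy illustration of the invisible relevance filtering at work here (the items, the profile and the TF-IDF scoring below are invented for this example, not any of these products’ actual methods), consider ranking content against a user’s interest profile:

```python
# Toy sketch of relevance filtering: rank items against a user's interest
# profile with TF-IDF and cosine similarity. Real services use far richer
# signals; the data here is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "new satnav feature routes around traffic automatically",
    "celebrity gossip roundup of the week",
    "machine translation quality takes another leap",
]
user_profile = "interested in AI, translation and maps"

matrix = TfidfVectorizer().fit_transform(items + [user_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# The highest-scoring items surface first; the user never sees this step.
for score, item in sorted(zip(scores, items), reverse=True):
    print(f"{score:.2f}  {item}")
```

The point is not the particular algorithm but the invisibility: the ranking happens before the user sees anything at all.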

Yet, in online learning, AI-driven UIs are rarely used. That’s a puzzle, as it is the one area of human endeavour that has the most to gain. As Black and Wiliam showed, feedback that is relevant, clear and precise goes a long way in learning. Not so much a silver bullet as a series of well-targeted rifle shots that keep the learner moving forward. When learning is sensitive to the learner’s needs in terms of pace, relevance and convenience, things progress.

Learning demands attention, and because our working memory is the narrow funnel through which we acquire knowledge and skills, the more frictionless the interface, the greater the speed and efficacy of learning. Why load the learner with the extra tasks of learning an interface, navigating, and filtering extraneous noise? We’ve seen steady progress beyond the QWERTY keyboard, designed to slow typing down to avoid mechanical jams, towards mice and touch screens. But it is with the leap into AI that interfaces are becoming truly invisible.

Textless
Voice was the first breakthrough, and voice recognition is only now reaching the level of reliability that allows it to be used in consumer computers, smartphones and devices in the home, like Amazon Echo and Google Home. We don’t have to learn how to speak and listen; those are skills we picked up effortlessly as young children, they came naturally. As bots develop the ability to engage in dialogue, they will become ever more useful in teaching and learning.
AI also powers keystroke, fingerprint and face recognition. These can be used for personal identification, even assessment. Face recognition for ID is advancing, as is the recognition of eye movement and physical gesture, and even the diagnosis of thought. Such techniques are already commonly used in online services such as Google, Facebook, Snapchat and so on. But there are bigger prizes in the invisible interface game, so let’s take a leap of the imagination and see where this may lead over the next few decades.

Frictionless interfaces
Mark Zuckerberg announced this year that he wants to get into mind interfaces, where you control computers and write straight from thought. This is an attempt to move beyond the smartphone. The advantages are obvious: you think fast, type slow. There’s already someone with a pea-sized implant who can type eight words a minute. Optical imaging (lasers) that reads the brain is one possibility. There is an obvious privacy problem here, but Facebook claim to be focussing only on words the brain has chosen for speech, i.e. things you were going to say anyway. This capability could also be used to control augmented and virtual reality, as well as communication with the internet of things. Underlying all of this is AI.

In Sex, Lies and Brain Scans, by Sahakian and Gottwald, the advances in this area sound astonishing. John-Dylan Haynes (Max Planck Institute) can already predict intentions in the mind from scans, seeing whether the brain is about to add or subtract two numbers, or press a right or left button. Words can also be read, with Tom Mitchell (Carnegie Mellon) able to spot from fMRI scans, 7 times out of 10, which noun from a list of 60 a subject was viewing. His team moved on to train the model to predict words from a set of 1001 nouns, again 7 times out of 10. Jack Gallant (University of California) reconstructed watched movies purely from scans. Even emotions such as fear, happiness, sadness, lust and pride can be read, by Karim Kassam (Carnegie Mellon). Beyond this, Tomoyasu Horikawa has had modest success in identifying topics in dreams. Sentiment analysis from text and speech is also making progress, with AI systems providing the analysis.
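To make the decoding idea concrete, here is a toy sketch in the spirit of that work: train a classifier to predict which noun a subject was viewing from an activation pattern. The “voxel” data below is randomly generated stand-in data and the pipeline is a generic classifier, not Mitchell’s actual model.

```python
# Toy sketch of fMRI word decoding: train a classifier to map voxel
# activation patterns to the noun being viewed. The random "scans" below
# are stand-ins; real studies use carefully preprocessed brain data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
nouns = ["hammer", "house", "dog", "apple"]
n_trials, n_voxels = 80, 200

labels = rng.integers(len(nouns), size=n_trials)
# Give each noun a faint characteristic activation pattern, plus noise.
prototypes = rng.normal(size=(len(nouns), n_voxels))
scans = prototypes[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, scans, labels, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / len(nouns):.2f})")
```

Cross-validated accuracy well above chance is, statistically, what “reading words from scans” amounts to.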

The good news is that there seems to be commonality across humans: semantic maps, the relationships between words and concepts, seem to be consistent across individuals. Of course, there are problems to be overcome, as the brain produces a lot of ‘noise’ that rises and falls but doesn’t tell us much else. The speed of neurotransmission is blindingly fast, making it difficult to track, and, of course, most of these experiments use huge, immobile and expensive scanners.

The implications for learning are obvious. If we know what you think, we know whether you are learning, and can optimise that learning, provide relevant feedback and assess reliably. To read the mind is to read the learning process, its misunderstandings and failures, as well as its understanding and successful acquisition of knowledge and skills. A window into the mind gives teachers and students unique advantages in learning.

Seamless interfaces
Elon Musk’s Neuralink goes one step further, looking to extend our already extended mind through neural laces, or implants. Although our brains can cope with sizeable INPUT flows through our senses, we are severely limited on OUTPUT, with speech or two meat fingers pecking away at keyboards and touchscreens. The goal is to interface physically with the brain, to explore communication but also storage, and therefore extended memory. Imagine expanding your memory so that it becomes more reliable – able to hold so much more, with higher mathematical skills, many languages, many more skills.
We already have cochlear implants that bring hearing to the deaf, and implants that allow those who suffer from paralysis to use their limbs. We have seen how VR training can rewire the brain and help restore the nervous system in paraplegics. It should come as no surprise that this will develop further as AI solves the problem of interfacing, in the sense of both reading from and writing to the brain.
The potential for learning is literally ‘mind blowing’. Massive leaps in efficacy may be possible, as well as in the retention and retrieval of knowledge and skills. We are augmenting the brain by making it part of a larger network, seamlessly.

Conclusion
There is a sense in which the middleman is slowly being squeezed out here, disintermediated. Will there be a need for classrooms, teaching, blackboards, whiteboards, lectures or any of the apparatus of teaching when the brain is an open notebook, ready to interface directly with knowledge and skills – at first with deviceless natural interfaces using voice, gesture and looks, then frictionless brain communications, and finally seamless brain links? Clumsy interfaces inhibit learning; clean, smooth, deviceless, frictionless and seamless interfaces enhance and accelerate it. This all plays to compensating for the weaknesses of the evolved biological brain – its biases, inattentiveness, forgetting, need for sleep, depressive tendencies, lack of download or networking, slow decline, dementia and death. A new frontier has opened up and we are crossing into literally ‘unknown’ territory. We may even find that we come to know the previously unknowable and think at levels beyond the current limitations of our flawed brains.