Monday, February 08, 2016
Economics of job losses/gains in learning game
‘Technological unemployment’ was the term coined by Keynes to describe the economic prospect of technology-driven job losses, and the issue has been debated for centuries. Far from being a cut-and-dried count of job losses and gains, it is a complex economic issue. One fly in the ointment is ‘structural unemployment’, a feature of most countries, though it is not clear that this is always caused by technology. Another is the set of benefits that accrue to people through technology but are not reflected in the job figures: cheaper prices, new products for everyone and increased productivity. Then there are the benefits of higher wages (for those in employment), new jobs in technology, new jobs created by technology and increased investment.
While the optimistic replacement of jobs may have been true in the industrial era, many argue that it is no longer true in the information age, where fewer people design technology that replaces large numbers of blue collar, and now white collar, workers. There is a new concern, that AI can take graduate and professional jobs, just as robots took factory jobs and farm mechanization took agricultural jobs.
My own view is that we are now facing the inevitable rise of long-term structural unemployment, caused by technological innovation. Few professions will be immune to these pressures, even in the learning game. In the last couple of years economic opinion has swung in this direction; it was a major theme at Davos in 2015. However, we must be careful not to be swept away by dystopian ideas about technology, as some predict no real impact on the overall employment landscape. McKinsey published a report in 2015 arguing that technology tends to automate parts of many jobs rather than eliminate them outright. Others, like Deloitte, are less optimistic. The likely outcome in many areas of education, apart from base-of-the-pyramid jobs such as librarians and paper publishing, is the gradual replacement of jobs by technology aids and alternatives. However, the one caveat (and it is a big caveat) is that predictions about what technology is capable of have been poor, and there may be some radical and unforeseen shifts that come along with AI and robotics.
Is education the cure?
An interesting brake on the problem could be the very thing we are discussing – increased delivery of education to create jobs. However, we must be careful in ascribing employment, and especially technological innovation and employment, to educational causes. Not all economists agree that continual spending on, and expansion of, Higher Education leads to increases in employment. This has been obvious in southern Europe and is now registering in graduate unemployment and underemployment figures (graduates doing non-graduate jobs). Even Paul Krugman thinks that education is not the cure. He reminds us that there has been a steady ‘hollowing out’ of middle-class jobs. This is not a simple heads v hands issue, where smart people retain jobs and manual workers lose theirs. It may be the other way round. Alan Blinder and Alan Krueger have suggested that we are now in an era where high-paid jobs are easier to offshore than low-paid manual jobs. Few now think that marching everyone in lock-step through an expensive higher education is the answer to technological unemployment. It may even be a placebo that creates debt burdens which limit growth and exacerbate the problem.
Jobs in learning?
Substantial job losses will result from the effect of technology and AI. In the recent literature, however, the learning world comes out relatively unscathed. But I’m not so sure about their definition of ‘learning’. There’s a tendency for analysts and educators to see the ‘learning world’ largely through an institutional lens – schools, colleges, universities. Yet this is merely a fraction of the learning world. Most learning takes place outside these institutions – in the workplace, at home and informally. If we define the ‘learning’ world broadly, and include all forms of cognitive improvement, we see libraries, bookshops, online bookshops, television, newspapers, the web, social media and many other sources of learning as causally relevant. When seen through this wide-angle lens, you start to realize how profoundly technology has replaced the human component in learning.
Technology and ‘learning’ jobs
Technology has always democratized, decentralized and disintermediated knowledge. It happened with the invention of writing, then the alphabet, paper and printing. These were all profound technological boosts to the transmission of knowledge.
Electronically, we then had the telegraph, telephone, radio, film and TV. These ate into the time that was previously spent on print.
In the computer age, this started at the bottom of the learning pyramid, with open access to ‘knowledge’ through the internet. Google gave us access, Wikipedia and many others gave us content, and jobs were lost in the publishing business (the huge encyclopedia sector died on the spot) and libraries. The fall in footfall at libraries and bookshops, I’d contend, has little to do with a fall in reading and everything to do with the rise in online access to knowledge and cheap online books. There has been a renaissance in reading and writing by young people, who read and write, it would appear, every few minutes on their phones or computers. At this level, if you want to find something out you go to the web. The one-third fall in librarian jobs in the US since 1990, after decades of rises, correlates well with the rise of the computer and online access. This was a well-paid, graduate-level profession that is being decimated by online technology. Similarly with large-scale reference works such as encyclopedias: Wikipedia ran a worldwide publishing operation with 20-30 staff. Reference tomes, and their distribution, were rapidly replaced by these online services. The effect of these services on knowledge-delivery jobs has been substantial. Large tech companies such as Google tend to have tens of thousands of employees, unlike previous global behemoths that had hundreds of thousands.
‘How to’ jobs also
Access to YouTube is also likely to have had an effect on practical jobs. Many, many millions of ‘how to…’ tasks have been completed by individuals who simply looked it up on YouTube. Whether it’s DIY, car repair, learning chords on a guitar… direct instruction is available and off you go. The availability of mobile devices allows you to take the instruction to the task. Saves money, saves time – jobs lost.
It is here that job losses are felt directly. The rise of online organizational learning has been eating away at the training market for over 30 years. This has, without doubt, led to a dramatic reduction in the number of trainers in organisations. Gone are the stately-home training centres and month-long courses. Online learning, simulations, blended learning, flipped classrooms, 70:20:10 initiatives, formal and informal, have all taken a massive shark-size bite out of the classroom training market and undoubtedly led directly to the loss of jobs. The global corporate e-learning market is $51 billion in 2016 (Docebo), with healthy compound annual growth.
Again, the rise of online courses has eaten into the institutional market. It is likely to impact Higher Education, where the projected increase in human teaching staff will, to a degree, be replaced by online delivery. The largest university in the UK is online – the Open University. Wherever online solutions perform even part of the teaching process, jobs are at risk. If you largely lecture, then that part of your job is definitely at risk, as it is easily recordable, replicable and distributable at scale. Global networks of universities have already signed up to accreditation through MOOCs; more will follow. With the rise of MOOCs, where the ratio of students to teachers is huge, the reduction in teachers per student is obvious. Other trends include AI and adaptive learning, which promise to eat into the teaching components, first as hybrid technology-enhanced teaching models, then autonomous models. This is unlikely to have a huge impact in schools, although even here, despite rising employment figures in the US, teacher jobs have been falling. Just as Uber owns no cars and Airbnb owns no hotels, we are likely to see disintermediation on a global scale in education, where learning is delivered by organisations that may have very few teachers.
We may even ask what subjects are likely to rise and fall. While the teaching of English has seen rising demand, the teaching of other foreign languages in schools and at graduate level is falling. We may even be seeing a fall in liberal arts and the humanities as investment and expectations shift towards STEM subjects. Interestingly, the teaching of STEM subjects may involve more online learning and assessment, as they are less suited to ‘lecture plus essays’ delivery.
What to do?
As learning professionals we need to ask:
1) What kind of learning tasks do computers perform better than you? Low level teaching tasks such as lecturing, exposition and ‘knowledge’ transfer are most at risk, as are the handling of physical assets such as books in libraries and publishing. The textbook industry is in some trouble.
2) What kind of learning tasks do you perform better than computers? Teaching young children, practical skills, high-level teaching tasks, online tutoring; these are all, as yet, skills that will not easily be replaced by technology.
3) In an increasingly computerised world, what well-paid learning work is left for people to do? Online teaching, tutoring, facilitation and learning design will all be in demand, as the market for online learning grows.
4) How can people learn the skills to do this work? Get online, use social media for CPD, explore online tutoring, do a webinar, participate in MOOCs – try a MOOC on blended learning, the use of technology in learning, or learning design. To get some flavor of the skills needed to do online design, see here.
The classic text has been Race Against the Machine (2011) by Brynjolfsson and McAfee but more recent works such as The Rise of the Robots (2015) by Martin Ford and The Future of the Professions (2015) by Susskind and Susskind are well worth a read.
Sunday, February 07, 2016
Unified theory of ‘learning’ emerges – and it’s mind blowing
Twenty-five hundred years of learning theory take in the major historical movements and influences: the Greeks, religious leaders and zealots, the Enlightenment, psychoanalysts, pragmatists, behaviorists, cognitivists, instructionalists, online theorists and others. But there may be a new kid on the block that transcends all of this: a computational ‘learning’ theory, fed by a confluence of biology, computer science and maths. The overall point here is that computation is no more about computers than astronomy is about telescopes. Computation takes place in many different types of system. It may unlock the very secrets of evolution and human behavior. It is a BIG idea, a VERY BIG idea. But let’s go back a little to the roots of this strain of learning theory.
James Mark Baldwin: learning theorist you’ve never heard of
You’ve probably never heard of James Mark Baldwin, yet he turned out to be one of the greats in learning theory. A 19th-century psychologist, he introduced what is called the ‘Baldwin Effect’ into evolutionary theory. The idea is that learned behavior, and not just environment and genes, influences the direction and rate of the evolution of psychological and physical traits. The mind is a learning machine, and it is the various aspects of this ability to ‘learn’ that may have driven evolution and our success as a species. The Baldwin Effect places ‘learning’ on a larger theoretical canvas, lying at the heart of evolutionary theory. It is no longer just a cognitive ability, albeit a complex one with many different systems of memory, but a feature that defines the very success of our species. This is a profound and radical idea. Note that this is not Lamarckism, as it does not claim that acquired characteristics are passed on genetically, only that the offspring of those with an adaptive trait (physical or psychological) may be genetically better at learning. This creates the opportunity, as it creates the conditions for successful population survival, for standard selection to take place.
Growth of Baldwin effect
The Baldwin Effect has some impressive supporters, including Julian Huxley and, more recently, Hinton, Nowlan, Dennett and Deacon. Evolutionary psychology has had a profound influence on the resurrection of the idea. Daniel Dennett is one theorist who posits the Baldwin idea that learned behavior, especially sustainable innovative behaviours, if captured in sufficient genetic frequency, can act as what he calls a ‘crane’ in evolution. Hinton and Nowlan revived the idea in 1987 in How Learning Can Guide Evolution, as did Robert Richards, who published Darwin and the Emergence of Evolutionary Theories of Mind and Behavior in the same year. They generated enough interest for John Maynard Smith to support them in an article published in Nature that year.
But it is Daniel Dennett who has done most to popularize the idea, in Consciousness Explained (1991) and Darwin’s Dangerous Idea (1995). Weber and Depew have since published an excellent explanatory and supportive book, Evolution and Learning: The Baldwin Effect Reconsidered (2003). Deacon proposed that the Baldwin effect accounts for the rapid evolution of the mind and language. As Wittgenstein showed, a private language makes no sense, as meaning is use. As soon as a small number start, and continue, to develop language skills, it confers a significant and self-reinforcing adaptive advantage on the users. This ability to learn new skills may be the key to our species having moved beyond fixed genetic determinism. More than language alone, adaptation to new environments – responding to climatic and food pressures and other changes that require quicker adaptation through selected learning – may have played a role in the rapid success of Homo sapiens. Dennett proposes the actual creation of selective pressure on others by sustained learned behaviour. This is where it gets interesting for technology.
Significant advantages could have come through the relatively rapid learned ability to create technology, namely the production of tools. Technology scales the ability of its producers, owners and users to avoid predation, become better predators through hunting and fishing, protect themselves against climate (needles, cloth, clothes), use fire and preserve food. It is my contention that technology is the real runaway success, especially technology that allowed us to create social groups in which learning, in the sense of teaching and learning, could thrive. My favoured flavour of the Baldwin Effect is therefore the runaway success of learning to make things, namely the production of tools and technology. I see this as the cardinal, causal factor that the Baldwin Effect bestowed on our species. Note that Homo sapiens was not the only species to thrive on the success of tools and technology. Other hominin species did so too, only not with the same levels of success. The history of tool-making shadows the history of our species and gives us a window into the development of consciousness. The trail of stone tools is often as significant as the fossil bone evidence. Our advantage over our nearest rivals, the Neanderthals, seems to have been based on superior minds, tools and technology.
The Baldwin Effect gives ‘learning’ cardinal status, but learning technology – the product of learning how to teach and learn – then takes over. I’d go further and claim that learning, especially the development of cognitive systems such as episodic memory, gave the production and use of technology a privileged status. It is my contention that technology itself, through various network effects and now networked learning, has taken this to new levels. It may even transcend our very notion of what we currently see as intrinsically human.
Computational ‘learning’ theory
In an interesting twist, Hinton and Nowlan claim to have demonstrated, through computer simulations, that learning can shape evolution. The Baldwin Effect may, through its own efficacy, have created the technological conditions for its own proof. The brain, through consciousness, may have created a fast-developing structure that in turn accelerates learning and thus evolution. Beyond these simulations, something even more astonishing has emerged. In our age of algorithms, computer scientists and the AI community have come up with deeper theories of learning that look at the algorithmic root of learning. Leslie Valiant, winner of the Turing Award (the Nobel Prize of computing), identified in 1984 the conditions under which a machine ‘learns’ new information. He called this the PAC (probably approximately correct) model. More recently, in his 2013 book, he uses this same theory to encompass evolution itself.
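The Hinton and Nowlan simulation is easy to reconstruct in miniature. The sketch below is a deliberately scaled-down variant of their setup (a smaller genome, population and lifetime than their original 20-locus, 1000-trial experiment; all parameter values here are illustrative, not theirs): genomes carry hard-wired alleles (0 or 1) plus plastic ‘?’ alleles that an individual can set by guessing during its lifetime, and finding the single adaptive configuration early earns a fitness bonus. Over generations, selection tends to hard-wire what was first merely learnable – the Baldwin Effect in silico.

```python
import random

L_LOCI = 12            # loci per genotype (Hinton & Nowlan used 20)
TARGET = [1] * L_LOCI  # the one adaptive configuration
TRIALS = 200           # learning attempts per lifetime (theirs: 1000)
POP, GENS = 200, 30

def fitness(genome):
    """A wrong hard-wired allele is unrecoverable; '?' alleles can be
    filled in by random guessing during the lifetime. Finding the
    target early (more of the lifetime left) earns a bigger bonus."""
    if any(a != '?' and a != t for a, t in zip(genome, TARGET)):
        return 1.0
    unknown = genome.count('?')
    for trial in range(TRIALS):
        # one guess per trial: all '?' loci correct with prob 0.5**unknown
        if all(random.random() < 0.5 for _ in range(unknown)):
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def evolve():
    # initial allele mix: 25% 0, 25% 1, 50% '?', as in the original paper
    pop = [[random.choice([0, 1, '?', '?']) for _ in range(L_LOCI)]
           for _ in range(POP)]
    for _ in range(GENS):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits)

        def pick():  # roulette-wheel (fitness-proportional) selection
            r = random.uniform(0, total)
            for ind, f in zip(pop, fits):
                r -= f
                if r <= 0:
                    return ind
            return pop[-1]

        # one-point crossover, no mutation, as in Hinton & Nowlan
        children = []
        for _ in range(POP):
            a, b = pick(), pick()
            cut = random.randrange(1, L_LOCI)
            children.append(a[:cut] + b[cut:])
        pop = children
    return pop

final = evolve()
hardwired = sum(ind.count(1) for ind in final) / (POP * L_LOCI)
print(f"fraction of correctly hard-wired alleles: {hardwired:.2f}")
```

In a typical run the fraction of hard-wired correct alleles climbs well above its initial 25%, even though nothing an individual learns is ever written back into its genome – selection alone does the work.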
From algorithm to ecorithm
In the process Valiant coined a new word, ‘ecorithm’: an algorithm that runs not just on a computer but on any system, even a human or biological organism, learning within a physical environment. We have now taken the Baldwin Effect down into the realm of mathematics. It is here that biology and maths collide, bridged by computational theory. We have seen remarkable advances in AI and machine learning, with computers displaying the ability to learn very complex tasks. This has huge implications for ‘learning theory’ in itself. However, machine learning is only one small part of this effort; it is ‘learning’ itself which emerges as the primary phenomenon.
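Valiant’s PAC criterion can be made concrete with the simplest learnable class: thresholds on [0,1]. The sketch below is my own illustration, not Valiant’s code (function names and parameter values are assumptions): it draws m ≥ (1/ε)·ln(1/δ) samples per run and checks empirically that the tightest consistent hypothesis fails to be ε-accurate on at most roughly a δ fraction of runs – ‘probably’ (1−δ) ‘approximately’ (within ε) ‘correct’.

```python
import math
import random

def tightest_threshold(samples):
    """Learner for the class {h_t : x -> x <= t} on [0,1]: return the
    tightest hypothesis consistent with the data, i.e. the largest
    positively-labelled point seen (0.0 if none)."""
    return max((x for x, label in samples if label), default=0.0)

def pac_experiment(theta=0.7, epsilon=0.05, delta=0.05, runs=200):
    # Sample complexity for this class: m >= (1/eps) * ln(1/delta)
    m = math.ceil((1 / epsilon) * math.log(1 / delta))
    failures = 0
    for _ in range(runs):
        xs = [random.random() for _ in range(m)]
        h = tightest_threshold([(x, x <= theta) for x in xs])
        # h <= theta always, so the error is the probability mass (h, theta]
        if theta - h > epsilon:
            failures += 1
    return m, failures / runs

m, fail_rate = pac_experiment()
print(f"{m} samples per run; empirical failure rate {fail_rate:.3f} "
      f"(PAC guarantee: failure probability <= delta = 0.05)")
```

The point of the exercise is that the guarantee is distribution-free in the sense PAC requires: the bound on m depends only on ε and δ, not on where the true threshold happens to sit.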
We have assumed, thus far, that learning is a biological phenomenon. But when AI and machine learning emerged, the definition of learning had to become more elastic. Learning itself has become an object of study. Evolution itself may be a learning system, one that results in more effective biological entities.
Valiant’s insight was that we are a product of two things, evolution and learning, and that there must be a theory that encompasses both. We evolved, and learn, through interaction with the real world. Is there an underlying theory that explains both? His point is that there may be no fundamental difference between natural and artificial intelligence, or machine and non-machine intelligence.
The implications are huge. As we expand the domains in which we see ‘learning’ as critical – computers, evolution, any system that interacts with the world – we see the possibility that some core algorithms may lie at the heart of all of these systems. This is exciting. We may be on the verge of uncovering the laws of learning. Psychology, far from being limited to observations of behavior (behaviourism), experimental testing (cognitive psychology) or molecular investigation (neuroscience), suddenly becomes a mathematical problem. Computational psychology slips in between cognitive psychology and neuroscience to provide possible insights about learning, based on algorithmic scripts. Watch this space.
Baldwin, J. M. (1973). Social and ethical interpretations in mental development. New York: Arno Press.
Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life. New York: Simon & Schuster.
Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown and Co.
Weber, B. H., & Depew, D. J. (2003). Evolution and learning: The Baldwin effect reconsidered. Cambridge, Mass: MIT Press.
Valiant, L. (2013). Probably approximately correct. New York: Basic Books.
Saturday, February 06, 2016
Microsoft’s massive Turing test – are AI teachers on the horizon?
Someone appears on social media and within three days 1.5 million people start chatting to that person, fooling many into thinking they’re human. If you’ve seen the movie HER – this is eerily close to that plot but it actually happened.
The plot thickens, as it appears that Microsoft, in China, has been running a huge Turing experiment. Microsoft’s Bing researchers in China launched Xiaoice (Little Ice) in 2014 on WeChat and Weibo. She can draw upon a deep knowledge (or access to facts) about celebrities, sports, finance, movies… whatever. More than this, she can recite poetry, song lyrics and stories, and is open, friendly, a good listener, even a little mischievous, funny and chatty. Sentiment analysis allows her to gauge the emotion and mood of the conversation and adapt accordingly.
The results were creepy. Within a few days 1.5 million people had had conversations with Xiaoice, many going for up to 10 minutes before they realized she was not human. As the software improved – AI techniques and NLP, fed by Bing’s billions of data points and posts – so did the level of conversational engagement. The conversations started to get longer, with an average of 23 exchanges; after tens of millions of chats, some go on for hundreds of exchanges. With 0.5 billion conversations and 850,000 followers, who talk to her, on average, 60 times a month, Xiaoice has proved to be a very popular companion.
Nass and Reeves, in The Media Equation, a brilliant set of 35 studies, showed that we are gullible, in the sense that we easily attribute human qualities to technology. We easily attribute human intention to tech, so expect politeness, no awkward pauses and other human qualities in our interfaces with tech – and that is what tech is only now starting to deliver. Heider’s Attribution Theory also suggests that, in terms of motivation, we attribute external and internal causes to behavior. This we do not only with humans but, increasingly, with machines.
Xiaoice differs from Watson and other forms of AI, in that she (see how easy it is to slip into gender attribution to a bot) is not trying to solve a problem, like win Jeopardy, or beat the World Champion at chess or GO. Her aim is authentic conversation, or at least conversation that seems authentic to humans. That, in a nutshell, is the Turing test. It may already have been passed, on a massive scale.
Even more astonishing is that as she converses, and the data set grows, she gets better and better. This internal learning feature, typical of such AI techniques, means that she learns, not like a human, but to behave like a human. Obviously there is no consciousness here; that is not to say there is no intelligence. That is a philosophical question, where the issue of consciousness and intelligence may well lead to the idea that all such networks have some form of intelligence, just not the kind we know of as human.
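Xiaoice’s engine is proprietary and vastly more sophisticated, but the ‘gets better as the data set grows’ behaviour can be illustrated with a toy retrieval bot (everything below – class, prompts, replies – is a hypothetical sketch, not Microsoft’s architecture): it answers with the stored response whose prompt best matches the incoming message, and every exchange it observes widens its repertoire.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector for a message."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class RetrievalBot:
    """Replies with the stored response whose prompt is most similar to
    the incoming message; every exchange it observes is added to the
    index, so its conversational range grows with usage."""
    def __init__(self, seed_pairs):
        self.pairs = list(seed_pairs)          # (prompt, response) pairs

    def reply(self, message):
        vec = bow(message)
        prompt, response = max(self.pairs,
                               key=lambda p: cosine(vec, bow(p[0])))
        return response

    def observe(self, prompt, response):       # 'learning' = growing the index
        self.pairs.append((prompt, response))

bot = RetrievalBot([("hello there", "Hi! How are you today?"),
                    ("what is your favourite film", "I rather liked HER.")])
print(bot.reply("hello"))                      # -> Hi! How are you today?
bot.observe("do you like poetry", "I can recite a little Li Bai.")
print(bot.reply("recite some poetry"))         # -> I can recite a little Li Bai.
```

Scale the index from three pairs to Bing’s billions of posts, swap the bag-of-words match for learned models of meaning and sentiment, and you have the shape, if not the substance, of the trick.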
Potentially, there’s a sinister side to this piece of AI driven tech. Big Brother really can be a bot, but a bot designed by governments to do specific jobs, like keep the population under control. Is it any accident that this experiment was run in China, the master of population control? I suspect that they would never have got away with the experiment in any liberal democracy.
Will it be possible to emulate what teachers do with technology? In many ways it already can and does. Technology will find things out faster and with more accuracy than a human (search). It can hold much more in memory than any human. But this is not simply about emulating teachers’ subject knowledge; it is also about the wider skills of teaching. Remember also that teachers are humans, with brains, and brains not only get lots of things wrong, they are full of cognitive biases, often display racial and sexual bias, get tired, need to switch off for around eight hours a day, and eventually start to forget and lose their powers. AI does none of this. Where am I going with this? I’ve been arguing for the last few years that AI is the most important underlying trend in learning technology, as it offers the greatest possibilities for solving the deeper problems in education and learning, such as replication of good teaching, effective feedback, automated assessment, motivation, access and scale.
We know from recent work, such as the Todai project and at Stanford, that AI is starting to get very good at educational tasks such as passing exams, essay marking and predicting learner attainment. It is also delivering more effective learning experiences. This is why every major tech company on the planet is pouring money into AI. We did not go from running speed to 100 miles per hour by copying the leg of a cheetah – we invented the wheel. So it is with AI. We are not copying teachers’ brains, we are building things that may turn out to be better. Note that part of this process, with current systems such as essay marking or beating champions at Go, involves training the system using real experts; the system then starts to teach itself and gets better and better. It’s like CPD on steroids. That’s frightening. This form of AI introduces the possibility that the ‘teacher’ component – a teacher that not only has an enormous knowledge base but also the human-like skills of being a motivator and respected tutor – may be on the horizon. It’s a distant horizon; nevertheless, it has appeared.
Learning Technologies 16: the flat, the sharp & 7 things I would like to have seen
Keynotes – one flat, one sharp
Marshall Goldsmith’s been around for a very long time. I’ve seen him speak before, but I’m a little weary of coachy, motivational platitudes. Don’t want to be too grouchy, and I know I have to be more positive, be happy at all times, find meaning, be engaged, blah blah blah. I found his SIX active questions just plain odd:
Did I do my best to:
1. Be happy?
Nope – happiness is not the state I always aspire to. I know lots of relentlessly happy people. I don’t want to be a clown.
2. Find meaning?
You’re a business coach not a philosopher – your meaning is not my meaning. There may even be no meaning in the universe.
3. Be fully engaged?
No. I often want to disengage. It’s a European thing Marshall – we don’t all want to be earnestly positive and engaged. Often we want to chill.
4. Build positive relationships?
Some of the most interesting people I have known, and know, have been misanthropic, sceptical, grumpy, anti-social and even unpleasant. I gravitate towards people who don’t always pitch their tent in the happy campers area.
5. Set clear goals?
Goals are OK but they can box you in. Many things I’ve achieved in life have been through letting go, taking risks and not sticking to goals. In fact I find people obsessed with ‘goals’ a bit blinkered; education goals, career goals, business goals, social climbing goals etc.
6. Make progress towards goal achievement?
I’d agree with this, if I knew what achievement meant. For lots of people I’ve met, it means getting a gong, like an MBE or OBE. That, for me, is playing some sort of establishment game – it diminishes people in my eyes. For others it means being a doctor, lawyer or accountant – but I would have hated being locked into career progression within a ‘profession’.
Maybe this 'keynote' thing needs a rethink. Marshall was witty but far too prescriptive for me. I’ve never had a coach and quite glad really, as I’ve made my way in the world by largely ignoring the advice of professional coaches and advisors. Get a life, not a coach – that’s my motto.
Ben Hammersley was different, and most delegates thought he was sharp and excellent. I’m not a fan of ‘futurists’ who simply cull examples from the web and string them together like a necklace, but his observations on AI and the rise of powerful consumer tech were compelling. This also happened to be the theme of my own talk on AI in learning, so I’m duty bound to sing his praises! He also has a nice line in direct opposition to Goldsmith – "It's impossible to be productive without being lazy at the same time" – and a fine moustache. I like that.
In conclusion on keynotes, one seemed like someone from bygone days, the other seemed more focused on the future.
I’m not going to comment on the sessions, as I gave one, enjoyed the experience, and everyone I saw present tried hard. Many were good, some OK. I couldn’t attend many, but the twitter feed seemed to reflect things well. One observation I would make is that the format is too samey. There’s no real ‘debate’ where issues are thrashed out; I’d have a full-on head-to-head debate at the end of each of the two days.
Having established itself as the most important L&D technology exhibition in the UK (WOLCE, I think, is a dead duck), what were my impressions this year? When I asked people what they thought of the exhibition, I got shrugs of shoulders and ‘nothing much that’s new’ responses. I felt the same. Before the exhibition, the twitter feed was riddled with Tweets from vendors who clearly have numpties doing their marketing. ‘Come visit Stand number XY’ (with big pic of stand), but absolutely no compelling reason in the Tweet to make the trip. That’s the problem right there. Why are you telling me to come see you? What do you have that’s compelling? Where’s the evidence that it works? Why?
There are dozens of LMSs. I’m not in the camp that says the LMS is dead, but I am in the camp that says the LMS is not new, and more than a little dull. For example, I didn’t see much evidence of xAPI being implemented (although lots of talk). Although there was an excellent conference session on xAPI, it remains a dark art, and that’s a problem. Let’s not make the same mistakes we made with SCORM. Where was the debate about the safe harbour ruling on data, which took place on the very days the conference was being held? Julian Stodd had an interesting perspective on this in his blog on LT, where he focused on the concept of big-company ‘inertia’. I agree with his overall observations.
Lots of people can make e-learning, but simply decorating your cake with the hundreds and thousands that is ‘gamification’ is not enough. ‘Gamification’ in technology-based training has been around for 30 years. Few have really grasped the importance of deep game design; many plump for patronising Pavlovian techniques. What I wanted to hear about were alternative models around curation and automatic, AI-generated content. What I heard was ‘it works on every device’. Oh yeah – try it on my Apple watch then. The best piece of e-learning I’ve seen this year was by a company that wasn’t even there. The innovations around AI and adaptive learning were noticeably absent (apart from Filtered). A few had VR on the stand, but it tended to be stand eye-candy. Don’t flash the tech if you don’t know what you want to do with it. I want to see ideas and applications. So, I hear you say, it’s easy to be critical, how about some suggestions? Here goes.
7 things I would like to have seen/heard about
1. Training levy
This will hit all of the major employers who attended, but nothing was mentioned anywhere. People are sleepwalking into the future. This is a serious issue and needs to be unpacked and discussed.
2. Workplace MOOCs
4000 MOOCs and 45 million enrolments, with a massive move by the major vendors towards vocational and workplace learners, yet barely a peep in the conference. That was very odd.
3. Open source
We need a clearer presentation of open-source LMSs, authoring tools and content. They exist but are often drowned out by the big vendors.
4. Micro-learning
It’s not new, but there’s a lively debate to be had around chunking and this form of learning. What is it really? How does it relate to the delivery of deep content?
5. Start-ups
Maybe something on small companies, where lots of five-minute pitches could be presented to potential vendors or investors. This is where the real action is.
6. Evidence not bunk
A good session on evidence-based psychology of learning, dismissing the bunk. This would have chimed in well with Clive Shepherd’s appeal for more ambition and integrity within the L&D community.
7. Learner voices
I met a girl at the exhibition, and she was in the crowd in the pub in the evening. She is always being paraded as a ‘Millennial’ in L&D, but she had insights which we oldies have missed. I feel the need for some learner voices around devices, content etc.
Conclusion
My summary has been that of a critical friend; I’m not in the Goldsmith camp but in the ‘to strive, to seek… and not to yield’ camp – it’s good, but let’s make it better. The things I love about Learning Technologies are meeting stacks of people, ending up in the pub and coming away with a bunch of new friends and a clutch of conversations that spark off new ideas. As always, it’s the people that matter. Oh, and the excellent Donald Taylor – who is always ready to listen.