Sunday, June 25, 2023

Can machines have empathy and other emotions?

Can machines have empathy and other emotions? Yann LeCun thinks they can and I agree, though it is a qualified agreement. This matters if AI is to become a Universal Teacher with the qualities of an expert teacher.

 

One must start with what emotions are. There has been a good deal of research on this, by Krathwohl, Damasio and Immordino-Yang, Lakoff and Panksepp. Good work has also been done on uncovering the role of emotion in learning by Nick Shackleton-Jones. I also covered them all in this podcast.


We must also make the distinction between:


Emotional recognition

Display of emotion

Feeling emotions

 

Emotional recognition

The face is a primary indicator of emotion: we look for changes in facial muscles, such as raised eyebrows, narrowed or widened eyes, smiles, frowns or a clenched jaw, and facial scanning can certainly identify emotions by this route. Eye contact is another, a steady gaze showing interest or even anger, while avoiding eye contact can indicate disinterest, shyness, unease or guilt. Microexpressions are also recognisable as expressing emotions. Note that all of this is often a weakness in humans, with a significant difference between men and women, and in those with autism. Machine emotional recognition is well on its way to matching most humans and will most likely surpass us.

 

Vocal tone and volume are also significant: intonation and pitch, raised volume when aroused or angry, a quiet or softer tone when sad or reflective, an upbeat tone when happy. Body language is another, clearly readable by scanning for folded arms and movements showing unease, disinterest or anger.

 

Even at the level of text, one can use sentiment analysis to spot a range of emotions, as emotions are encoded in language. LLMs show this quite dramatically. It can be used to semantically interpret text that reveals a whole range of emotions. It can be used over time, for example, to spot failing students who show negativity in a course. It can be used at an individual level or provide insights into social media monitoring, public opinion, customer feedback, brand perception and other areas where understanding sentiment is valuable. It may still struggle with sarcasm, irony and complex language use, although LLMs are improving here too.
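At its simplest, the idea can be sketched as lexicon matching. This is a toy illustration only: the word lists and the course-forum example below are invented for the sketch, and real systems use trained classifiers or LLMs rather than word counts.

```python
# Minimal lexicon-based sentiment scorer. A toy sketch of the idea only;
# the word lists are illustrative assumptions, not a real lexicon.
POSITIVE = {"good", "great", "enjoyed", "clear", "helpful", "love"}
NEGATIVE = {"bad", "confusing", "hate", "boring", "unclear", "frustrated"}

def sentiment(text: str) -> float:
    """Return a score in [-1, 1]: negative, neutral (0) or positive."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos == neg else (pos - neg) / (pos + neg)

# Track a student's forum posts over a course to flag growing negativity.
posts = ["Really enjoyed week one, great material",
         "Week three is confusing and I feel frustrated"]
scores = [sentiment(p) for p in posts]
```

A falling trend in such scores over a course is the kind of signal that could flag a struggling student early, though a word-count approach will obviously miss sarcasm and irony entirely.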

 

AI can already understand music in some sense, even its emotional intent and effect; Spotify already classifies tracks on these criteria using AI. This is not to say it feels emotion.

 

Even at the level of recognition, it could very well be that machines help humans control and modulate bad emotions. I’m sure that feedback loops can calm people down and encourage emotional intelligence. The fact that machines can read stimuli and respond quicker than us may mean they are better at empathy than we could ever be. Recognising emotion will allow AI to respond appropriately to our needs and should not be dismissed. It can be used as a means to many ends, from education to mental healthcare. Chatbots are already being used to deliver CBT therapy.

 

Display of emotion

Emotions can be displayed without being felt. Actors can do this, written words in a novel can do this, and both can elicit strong human emotions. Coaches do this frequently. Machines can also do this. That has been clear from the earliest chatbots, such as ELIZA. Nass and Reeves showed, across 35 studies in The Media Equation, that this reading of human qualities and emotions into machines is common.


As Panksepp repeatedly says, we have a tendency to think of emotions as human and therefore ‘good’. Their evolutionary development means they are there for different reasons than we think, which is why they often overwhelm us or have dangerous as well as beneficial consequences. Most crime is driven by emotional impulses such as unpredictable anger, especially violent and sexual crime. This would lead us to conclude that the display of positive emotions should be encouraged and bad ones designed out of the system. There are already efforts to build fairness, kindness, altruism and mercy into systems. It is not just a matter of having a full set of emotions, more a matter of which emotions we want these systems to display or have.

 

Feeling emotions

This would require AI to be fully embedded in a physical nervous system that can feel, in the sense that we feel emotions in the brain. It also seems to require consciousness of the feelings themselves. We could dismiss this as impossible, but there are halfway houses here and there is another possibility. Geoffrey Hinton has posited the Mortal Computer, and hybrid computer-brain interfaces could very well blur this distinction, integrating thought with human emotions in ways not yet experienced, even subconsciously. But we may not need to go this far.

 

Are emotions necessary in teaching?

I have always been struck by Donald Norman’s argument that the call for empathy in design is wrong-headed and that “the concept is impossible, and even if possible, wrong”. There is no way you can put yourself into the heads of the hundreds, thousands, even hundreds of thousands of learners. As Norman says, “It sounds wonderful but the search for empathy is simply misled.” Not only is it not possible to understand individuals in this way, it is just not that useful. It is not empathy but data you need. Who are these people, what do they need to actually do and how can we help them? As people they will be hugely variable, but what they need to know and do, in order to achieve a goal, is relatively stable. This has little to do with empathy and a lot to do with understanding and reason.

 

Sure, the emotional side of learning is important and people like Norman have written and researched the subject extensively. Positive emotions help people learn (Um et al., 2012). Even negative emotions (D’Mello et al., 2014) can help people learn, stimulating attention and motivation, including mild stress (Vogel and Schwabe, 2016). We also know that emotions induce attention (Vuilleumier, 2005) and motivation that can be described as curiosity, where the novel or surprising stimulates active interest (Oudeyer et al., 2016). In short, emotional events are remembered longer, more clearly and more accurately than neutral events.

 

All too often we latch on to a noun in the learning world without thinking much about what it actually means or what experts in the field say about it, and bandy it about as though it were a certain truth. But trying to induce emotion in the teaching and design process may not be that relevant, or only relevant to the degree that mimicking emotion may be enough. AI can be designed to move the learner towards positive emotions and away from the emotions, identified by Panksepp and others, that harm learning, such as fear, anxiety and anger. We are in such a rush to include ‘emotion’ in design that we confuse emotion in the learning process with emotion in the teacher and designer. It also seems like lazy signalling, a substitute for the hard analysis up front, defaulting to the loose language of concern and sympathy.

 

Conclusion

In discussing emotions we tend to think of them as a uniquely human phenomenon. They are not. Animals clearly have emotions; this is not a case of human exceptionalism. In other words, beings with less complexity than us can feel. At what point, therefore, can a bottom-up process create machines that can feel? We seem to be getting there, having come quite far in reaching ‘recognition’ and ‘display’.

 

If developments in AI have taught us one thing, it is to never say never. Exponential advances are now being made and this will continue, with some of the largest companies with huge investments, along with a significant shift in research and government intentions. We already have the recognition and display of emotions. The feeling of emotions may be far off, unnecessary for many tasks, even teaching and learning.

 


In medicine, empathy is already being helped by GPT-4; patients can benefit from being helped by both a knowledgeable and an empathetic machine. We see this already in healthcare in the Ayers (2023) research, where 79% of the time patients rated the chatbot significantly higher than physicians for both quality and empathy. That is before the obvious benefits of being available 24/7, quicker results, increased availability of healthcare in rural areas, access for the poor and a decreased workload for healthcare systems. It empowers the patient. For more on this area of AI helping patients with empathy, listen to Peter Lee’s excellent podcast here. He shows that even pseudo-empathy can run deep and be used in many interactions with teachers, doctors, in retail and so on.

This is why I think the Universal Teacher and Universal Doctor are now on the horizon.

 

Bibliography

Ayers et al. 2023. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Platform

Norman, D.A., 2004. Emotional design: Why we love (or hate) everyday things. Basic Civitas Books.

Norman, D., 2019. Why I Don't Believe in Empathic Design.

Um, E., Plass, J.L., Hayward, E.O. and Homer, B.D., 2012. Emotional design in multimedia learning. Journal of Educational Psychology, 104(2), p.485.

D’Mello, S., Lehman, B., Pekrun, R. and Graesser, A., 2014. Confusion can be beneficial for learning. Learning and Instruction, 29, pp.153-170.

Vogel, S. and Schwabe, L., 2016. Learning and memory under stress: implications for the classroom. npj Science of Learning, 1(1), pp.1-10.

Vuilleumier, P., 2005. How brains beware: neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9(12), pp.585-594.

Oudeyer, P.Y., Gottlieb, J. and Lopes, M., 2016. Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies. Progress in Brain Research, 229, pp.257-284.

https://greatmindsonlearning.libsyn.com/affective-learning-with-donald-clark 

Friday, June 23, 2023

Is the new Digital Divide in AI between the 'EU' and 'Rest of the World'?

OpenAI has opened its first foreign office in London, citing the pro-innovation economy and talent and, it is clear although not stated, caution about EU regulation. Bard was available in 180 countries and territories, including the UK, but NOT the EU, until a deal was done. Facebook has been holding back releases of models and Twitter has left the EU’s voluntary code of practice. Is this a new Digital Divide? One wonders what effect this will have on investment in AI across the EU. The lack of debate around the consequences is puzzling.

When Italy declared UDI and banned ChatGPT, they quickly relented (actually a move by a right-wing appointee to show their strength). But this is different: large AI providers, such as Google and Facebook, are taking the initiative and simply not releasing AI services in EU countries. This new Digital Divide may soon be between the EU and the rest of the world, and could have serious consequences.

 

On the other hand, the EU is a huge and wealthy market, so the large tech companies will not take these decisions lightly. The problem is that the EU's legislation is often bureaucratic and cumbersome, involving lots of paperwork and hits on productivity, and is a stick rather than a carrot. The famous ‘Manage all cookies’ pop-up consent is GDPR nonsense: no one reads the consent forms, so it is largely a waste of time. It was the result of bad legislation, producing a massive hit on productivity with no tangible benefits. One side over-legislates, the other is perhaps too lax and defensive; the net result is a Digital Divide.


With the release of Baidu's Ernie 3.5, which is neck and neck with GPT-4 on performance, this has turned into a two-horse race between the US and China. There is a third horse, but that is a moral high horse, which has barely left the stalls: the EU.

 

Economic environment

Let’s start with the big picture. 


“In 2008 the EU’s economy was nearly 10% larger than America’s at $16.2tn versus $14.7tn. By 2022, the US economy had grown to $25tn, whereas the EU and the UK together had only reached $19.8tn… Now the US is nearly one-third bigger. It is more than 50 per cent larger than the EU without the UK…” (FT, 20 June 2023), and that gap is growing.


The US has trounced Europe in terms of productivity, economic growth, investment models, research, investment, the creation of tech companies, defence and energy policy. It has also trounced the EU in terms of AI research and implementation. If the EU cannot develop a strong tech-based economy it will have to rely on low growth legacy markets, such as tourism and luxury goods, meaning it will fall further behind.

 

Productivity deficit

AI matters because ‘productivity’ needs a well-educated and skilled workforce, good infrastructure, and a favourable business and investment environment. Importantly, those with the more sophisticated tools tend to be the more productive. The evidence for productivity gains from AI, within just a few months, is clear. If the EU either bans such tools or creates an environment where angels fear to tread, then productivity in coding, management and general output, as these tools affect almost every sector, will start to lag. We had a dry run when Italy banned ChatGPT and there were reports of falls in productivity.

 

Training and education deficit

The University research and teaching system that feeds AI tech, in terms of core research and skilled labour, is dominated by the US, UK and China. EU Universities barely figure in the major rankings. An additional problem is the now deeply rooted anti-corporate sentiment in Higher Education in the EU. The sneering attitude towards the private sector, even towards OpenAI as a not-for-profit, is now the norm, often accompanied by a failure to understand its actual structure. A symptom of this is that the debate in Higher Education has focused largely not on learning but on plagiarism. Far too little debate has taken place on the benefits in education and health.

 

Effort in AI is skewed towards often vague ethical initiatives, making the overall atmosphere one of negativity and slowing down progress. The danger is that the benefits will be realised elsewhere while the EU remains rooted in old, analogue institutions, where everything in tech is seen as a moral problem. There is nothing wrong with the moral debate, but it is so often driven by fearmongering and activism rather than objective moral debate, which is to look at the moral issues and consequences, good and bad, not just the bad.

 

An additional problem is the unlikely adoption of AI in education in the EU. The real initiatives such as Khan Academy and Duolingo have been funded and implemented in the US, aided by philanthropic investment. There is little of that energy and type of investment in Europe. As AI becomes integrated into education and training in the US, its absence here will mean less productivity. 

 

Investment freeze

Investors looked askance at Italy’s surprise ban and widened their astonished gaze across the whole of the EU. If one country can do this, so can others. Investors have a currency: it is called ‘risk’. They assess and quantify risks and base decisions on that risk analysis. Anyone who has been through the process knows that they do their homework and due diligence. One of those risks, the Italian ChatGPT ban, is already baked in; huge punitive fines are another; the generally negative rhetoric and cultural context is yet another. Why would large-scale investors pump cash into a territory where bans, fines and an absence of services have become the norm? Investors like a favourable business environment, not one based on negativity and punishment.

 

Investment model

The model that emerged post-war in the US has proved superior to that in the EU: private sector, investors, government and Universities working together on large projects with a real focus on impact. In AI we now see the fruits of that system in the US, where ground-breaking research on foundation models takes place in large tech companies and not-for-profits, such as OpenAI. Europe scoffs and gets bogged down in long-winded, bureaucratic and low-impact Horizon projects, while the US and China get on with getting things done. In truth, we now look to the US for investment, and that is the market most want to expand in, as it is the largest growth market on the planet. They have become so dominant that they merely buy European AI companies.

 

Ethical quicksand

Generative AI has launched a thousand quangos, groups and bad PhDs on ‘AI and ethics’ across Europe. You can’t move for reports, frameworks and ideas for regulations, which rain down on us from publicly funded organisations, with far too little attention on potential solutions and benefits.

 

Debate on the benefits has been swept aside by pontificating and grand-standing. It is easy to stand on the sidelines as part of the jeering, pessimist mob, less easy to do something positive to actually solve these issues. Rather than solve the problems of safety, security, alignment and guard-railing with real solutions, the EU has chosen to see the glass, not as half full, but as brimming with hemlock. It sees laws and fines as the solution, not design and engineering.

 

The EU also has no laws banning VPNs, and their use is becoming more common. This is a huge loophole when using AI services. It is already happening with Bard: the internet is like water, it tends to seep round and into places, based on demand.

 

Punishment strategy

The EU has been issuing fines for some time now, although not all is as it appears. You may note that many of these large fines are issued from Ireland, where there has been a long and bitter fight between Dublin and Brussels. Ireland gains a good portion of its GDP from a small number of US tech companies and, because they are based there, the GDPR fines come from there. It has fought these fines tooth and nail but, in the end, had to bend the knee to Brussels.


 

There is something reasonable in these latest fines, as the data transferred may be used by US surveillance agencies (they have a bad track record here). In practice, Meta have until later in the year to comply. This is a bit of a cat-and-mouse game, with politics at the heart of it all. On the other hand, the EU puts up with Ireland and Luxembourg stealing other countries’ tax revenues through massive tax evasion. It is all a bit of a tangled mess.

 

The bottom line, however, is that this tactic is resulting in a deeper rift. Twitter quit the EU’s voluntary code of practice in mid-May as it could be fined up to 6% of its global revenue (£145m) or be banned across the EU if it does not comply with the Digital Services Act. Facebook are making noises about abandoning the EU.

 

Solutions

If as much effort went into solutions as into regulations, fines and rhetoric, we would progress at the right pace, solving problems as we go, rather than trying to punish people into submission. Hacker-led safety testing, well-funded research, and effort on international ISO standards rather than regional efforts focused only on large models and implementations, would all help. Above all, third-party professional hacker teams can be deployed to identify security and data weaknesses before release. Incident reporting can also be useful. This collaborative, non-confrontational approach is far preferable to the negativity and sledgehammer of legislation and punishing fines.

 

Conclusion

These battles have been raging for some time, mostly behind the scenes, but Google and Facebook have also had run-ins with Canada and Australia. Some of this has been resolved, some not. There is something predictable about it all: the old world versus the new.


It is an inconvenient truth, but the EU is too late to the party: the US and China have forged ahead in IT and AI with their own tech giants. The EU has failed to create tech giants and has deliberately chosen the path of being some sort of global regulator, yet it has a weak economy, weak research, weak investment and a weak entrepreneurial culture. In the same way that the Ukraine war showed the EU’s lack of investment in defence and lack of any overall defence policy, where, not for the first time, it had to rely on the US to come to its aid and provide arms, cash and expertise, so it is with AI.

 

The investment is low and there is no policy other than taking a morally superior stance. So much energy has gone into ethical hand-wringing that Europe is reduced to being a bystander. It thinks it has sway, but it is a shrinking fraction of the world’s population, and white Eurocentrism now seems more than a little dated. It rides its lumbering moral high horse, looking down on the rest of the world, while others like the US feed it a little hay to keep it happy and speed past. It would surely be better to develop AI solutions with identified benefits in productivity, learning and healthcare than simply to regulate.

Monday, June 19, 2023

Personalised tutors - a dumb rich kid is more likely to graduate from college than a smart poor one

A dumb rich kid is more likely to graduate from college than a smart poor one, and traditional teaching has not solved the problem of gaps in attainment. Scotland, my own country, is a great example, where the whole curriculum was up-ended and nothing has been gained. In truth, we now have a solution that has been around for some time. I have been involved in such systems for decades.


AI chatbot tutors, such as Khanmigo, are now being tested in schools, and not for the first time. The Gates Foundation has been at this for over 8 years and I was involved in trials from 2015 onwards, and even earlier with SCHOLAR, where we showed a grade increase among users. We know this works. From Bloom’s famous paper onwards, the simple fact is that detailed feedback, getting learners through the problems they encounter as they learn, works.


“It will enable every student in the United States, and eventually on the planet, to effectively have a world-class personal tutor,” says Salman Khan. Gates, who has provided $10 million to Khan Academy, agreed at a recent conference: “The AIs will get to that ability, to be as good a tutor as any human ever could.”

 

We see in ChatGPT, Bard and other systems increased capability in accuracy, provenance and feedback, along with guardrailing. To criticise such systems for early errors now seems churlish. They’re getting better very fast.

 

Variety of tutor types

We are already seeing a variety of teacher-type systems emerge, as I outlined in my book AI for Learning.


Adaptive, personalised learning means adapting the online experience to the individual’s needs as they learn, in the way a personal tutor would intervene. The aim is to provide what many teachers provide: a learning experience tailored to your needs as an individual learner.

The Curious Case of Benjamin Bloom

Benjamin Bloom, best known for his taxonomy of learning (now shown to be weak and simplistic), wrote a far less read paper, The 2 Sigma Problem, which compared the straight lecture, the formative-feedback lecture and one-to-one tuition. It is a landmark in adaptive learning. Taking the ‘straight lecture’ as the mean, he found an 84% increase in mastery above the mean for a ‘formative feedback’ approach to teaching and an astonishing 98% increase in mastery for ‘one-to-one tuition’. Google’s Peter Norvig famously said that if you only have to read one paper to support online learning, this is it. In other words, the increase in efficacy for tailored one-to-one tuition, because of the increase in on-task learning, is huge. This paper deserves to be read by anyone looking to improve the efficacy of learning, as it shows hugely significant improvements from simply altering the way teachers interact with learners. Online learning has to date mostly delivered fairly linear and non-adaptive experiences, whether through self-paced structured learning, scenario-based learning, simulations or informal learning. But we are now in the position of having technology, especially AI, that can deliver what Bloom called ‘one-to-one learning’.
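Bloom's percentages correspond to percentiles under a normal distribution: a one-sigma improvement places the average student at roughly the 84th percentile of the original lecture group, and a two-sigma improvement at roughly the 98th. A quick sanity check of those figures:

```python
from statistics import NormalDist

def percentile_after_shift(sigmas: float) -> float:
    """Percentile of the original group's distribution reached by the
    average student after an improvement of `sigmas` standard deviations."""
    return NormalDist().cdf(sigmas) * 100

print(round(percentile_after_shift(1), 1))  # formative feedback, ~1 sigma -> 84.1
print(round(percentile_after_shift(2), 1))  # one-to-one tuition, ~2 sigma -> 97.7
```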

Adaption can be many things, but at the heart of the process is a decision to present something to the learner based on what the system knows about the learner, their learning or the context.

 

Pre-course adaptive

Macro-decisions

You can adapt a learning journey at the macro level, recommending skills, courses, even careers based on your individual needs.

 

Pre-test

‘Pre-test’ the learner to create a prior profile before starting the course, then present relevant content. The adaptive software makes a decision based on data specific to that individual. You may start with personal data, such as educational background, competence in previous courses and so on. This is a highly deterministic approach with limited personalisation and learning benefits, but it may prevent many from taking unnecessary courses.
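As a sketch, a pre-test profile can be reduced to per-topic scores that gate which modules are presented. The data shapes, names and the 0.8 mastery threshold here are illustrative assumptions, not any particular product’s API:

```python
def plan_course(pretest_scores, modules, mastery_threshold=0.8):
    """Keep only the modules whose topic the learner has not yet mastered.

    pretest_scores: dict mapping topic name -> fraction correct (0.0-1.0)
    modules: ordered list of dicts, each with a 'topic' key
    """
    return [m for m in modules
            if pretest_scores.get(m["topic"], 0.0) < mastery_threshold]

modules = [{"topic": "cells"}, {"topic": "genetics"}, {"topic": "ecology"}]
scores = {"cells": 0.9, "genetics": 0.4}  # 'ecology' untested, so it stays in
print([m["topic"] for m in plan_course(scores, modules)])  # ['genetics', 'ecology']
```

Note the deterministic flavour the text describes: the decision is made once, up front, from the pre-test alone.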

 

Test-out

Allow learners to ‘test-out’ at points in the course to save them time on progression. This short-circuits unnecessary work but has limited benefits in terms of varied learning for individuals.

 

Preference (be careful)

One can ask or test the learner for their learning style or media preference. Unfortunately, research has shown that learning styles are a false construct: they do not exist, and catering to them makes no difference to learning outcomes. Personality type is another, although one must be careful with poorly validated instruments such as Myers-Briggs, which are ill-advised; the OCEAN model is much better validated. One can also use learner opinions, although this is fraught with danger: learners are often quite mistaken, not only about what they have learnt but also about optimal strategies for learning. So, it is possible to use all sorts of personal data to determine how and what someone should be taught, but one has to be very, very careful.

 

Within-course adaptive

Micro-adaptive courses adjust frequently during a course, choosing different routes based on the learner’s preferences, what the learner has done, or specially designed algorithms. A lot of early adaptive software within courses used re-sequencing; this is much more sophisticated with Generative AI. The idea is that most learning goes wrong when things are presented that are either too easy, too hard or not relevant for the learner at that moment. One can use the idea of desirable difficulty here to determine a learning experience that is challenging enough to keep the learner driving forward.
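One common way to operationalise desirable difficulty is to estimate the learner’s ability, predict their chance of success on each candidate item, and pick the item closest to a target success rate that stretches without discouraging. A sketch using a one-parameter logistic (Rasch-style) model; the 0.7 target and the data shapes are assumptions for illustration:

```python
from math import exp

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch-style predicted probability of answering correctly."""
    return 1.0 / (1.0 + exp(difficulty - ability))

def next_item(items, ability, target=0.7):
    """Pick the item whose predicted success rate is closest to the target:
    hard enough to be a desirable difficulty, easy enough to keep momentum."""
    return min(items, key=lambda it: abs(p_correct(ability, it["difficulty"]) - target))

items = [{"difficulty": d} for d in (-2, -1, 0, 1, 2)]
print(next_item(items, ability=0.0)["difficulty"])  # -1 (≈73% predicted success)
```

Real systems calibrate difficulty from response data; the principle, though, is just this closest-to-target selection.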

 

Algorithm-based

It is worth introducing AI at this point, as it is having a profound effect on all areas of human endeavour. It is inevitable, in my view, that this will also happen in the learning game. Adaptive learning is how the large tech companies deliver to your timeline on Facebook/Twitter, sell to you on Amazon, get you to watch stuff on Netflix. They use an array of techniques based on data they gather, statistics, data mining and AI techniques to improve the delivery of their service to you as an individual. Evidence that AI and adaptive techniques will work in learning, especially in adaption, is there on every device on almost every service we use online. Education is just a bit of a slow learner.

 

Decisions may be based simply on what the system thinks your level of capability is at that moment, based on formative assessment and other factors. Regular testing of learners not only improves retention, it also gathers useful data about what the system knows about the learner. Failure is not a problem here. Indeed, evidence suggests that making mistakes may be critical to good learning strategies.

 

Decisions within a course use an algorithm with complex data needs. This provides a much more powerful method for dynamic decision making. At this more fine-grained level, every screen can be regarded as a fresh adaption at that specific point in the course.

 

AI techniques can, of course, be used in systems that learn and improve as they go. Such systems are often trained using data at the start and then use data as they go to improve the system. The more learners use the system, the better it becomes.

 

Confidence adaption

Another measure, common in adaptive systems, is the measurement of confidence. You may be asked a question then also asked how confident you are of your answer.
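A common way to use that second answer is to cross correctness with confidence, which separates lucky guesses from confident misconceptions, the latter being the most urgent to remediate. A minimal, illustrative sketch (the labels and the 0.5 threshold are assumptions, not any vendor’s scheme):

```python
def diagnose(correct: bool, confidence: float) -> str:
    """Classify one answer by crossing correctness with self-rated
    confidence (0.0-1.0). Each quadrant suggests a different intervention."""
    if correct and confidence >= 0.5:
        return "secure knowledge"
    if correct:
        return "lucky or hesitant - reinforce"
    if confidence >= 0.5:
        return "confident misconception - remediate first"
    return "known gap - teach"

print(diagnose(False, 0.9))  # confident misconception - remediate first
```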

 

Learning theory 

Good learning theory can also be baked into the algorithms, such as retrieval, interleaving and spaced practice. Care can be taken over cognitive load, and even personalised performance support can be provided, adapting to an individual’s availability and schedule. Duolingo is sensitive to these needs and provides spaced practice, aware of the fact that you may not have done anything recently and forgotten stuff. Embodying good learning theory and practice may be what is needed to introduce methods into teaching that are often counterintuitive and resisted by human teachers. This is at the heart of systems being developed using Generative AI: the baking-in of good learning theory, such as good design, deliberate practice, spaced practice, interleaving, and seeing learning as a process not an event.
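Spaced practice, for example, can be baked in with something as simple as a Leitner-style scheduler: each success promotes an item to a longer review interval, each failure resets it. The intervals below are illustrative, not Duolingo’s actual values:

```python
from datetime import date, timedelta

INTERVALS = [1, 3, 7, 16, 35]  # days until next review, per box

def review(box: int, correct: bool, today: date):
    """Leitner-style update: promote on success (capped at the last box),
    reset to box 0 on failure. Returns the new box and next due date."""
    box = min(box + 1, len(INTERVALS) - 1) if correct else 0
    return box, today + timedelta(days=INTERVALS[box])

box, due = review(0, True, date(2023, 6, 1))
print(box, due)  # 1 2023-06-04
```

The point is that the counterintuitive pedagogy (reviewing just before you forget) lives in the algorithm, so the learner never has to plan it.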

 

Across courses adaptive

Aggregated data

Aggregated data from a learner’s performance on one or more previous courses can be used, as can aggregated data from all students who have taken the course. One has to be careful here, as one cohort may have started at a different level of competence than another. There may also be differences in other skills, such as reading comprehension, background knowledge, English as a second language and so on.

 

Adaptive across curricula

Adaptive software can be applied within a course, across a set of courses but also across an entire curriculum. The idea is that personalisation becomes more targeted, the more you use the system and that competences identified earlier may help determine later sequencing.

 

Post-course adaptive

Adaptive assessment systems

There’s also adaptive assessment, where test items are presented based on your performance on previous questions. Such tests often start with an item of average difficulty, then select harder or easier items as the learner progresses. This can be built into Generative AI assessment.
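The simplest version of this is a staircase: start in the middle of the difficulty range, step up after a correct answer, down after a wrong one, clamped to the item bank’s range. A sketch, not a calibrated item-response-theory implementation:

```python
def next_level(level: int, correct: bool, n_levels: int = 5) -> int:
    """Staircase rule for adaptive testing: one step harder after a correct
    answer, one step easier after a wrong one, clamped to [0, n_levels-1]."""
    step = 1 if correct else -1
    return max(0, min(n_levels - 1, level + step))

level = 2  # begin at the item of average difficulty
for correct in [True, True, False, True]:
    level = next_level(level, correct)
print(level)  # 4
```

Production systems replace the fixed step with a maximum-information item choice, but the converging-on-ability behaviour is the same.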

 

Memory retention systems

Some adaptive systems focus on memory retrieval, retention and recall. They present content, often in a spaced-practice pattern and repeat, remediate and retest to increase retention. These can be powerful systems for the consolidation of learning and can be produced using Generative AI.

 

Performance support adaption

Moving beyond courses to performance support, delivering learning when you need it, is another form of adaptive delivery that can be sensitive to your individual needs as well as context. These have been delivered within the workflow, often embedded in social communications systems, sometimes as chatbots. Such systems are being developed as we speak.

 

Conclusion

There are many forms of adaptive learning, in terms of the points of intervention, basis of adaption, technology and purpose. If you want to experience one that is accessible and free, try Duolingo.


ASU trials

Earlier trials with more rules-based but sophisticated systems proved the case years ago. AI in general, and adaptive learning systems in particular, will have an enormous long-term effect on teaching, learner attainment and student drop-out. This was confirmed by the results from courses run at Arizona State University from 2015.

One course, Biology 100, delivered as blended learning, was examined in detail. The students did the adaptive work then brought that knowledge to class, where group work and teaching took place – a flipped classroom model. This data was presented at the Educause Learning Initiative in San Antonio in February and is impressive.

Aims
The aim of this technology enhanced teaching system was to:
increase attainment
reduce dropout rates
maintain student motivation
increase teacher effectiveness


It is not easy to juggle all four at the same time, but ASU wanted these undergraduate courses to be a success on all fronts, as they are seen as the foundation for sustainable progress by students as they move through a full degree course.

1. Higher attainment

A dumb rich kid is more likely to graduate from college than a smart poor one, so these increases in attainment are hugely significant, especially for students from low-income backgrounds in high-enrolment courses. Many interventions in education show razor-thin improvements. These are significant, not just in overall attainment rates but, just as importantly, in the way this squeezes dropout rates. It’s a double dividend.


2. Lower dropout 

A key indicator is the immediate impact on drop-out, which can be catastrophic for students and, as funding follows students, for the institution. Between 41% and 45% of those who enrol in US colleges drop out. Given the $1.3 trillion student debt problem, and the fact that these students drop out but still carry the burden of that debt, this is a catastrophic level of failure. In the UK it is 16%. Increase overall attainment and you squeeze dropout and failure. Too many teachers and institutions are coasting with predictable dropout and failure rates. This can change. The fall in drop-out rate for the most experienced instructor was also greater than for other instructors. In fact, the fall was dramatic.


3. Experienced instructor effect

An interesting effect emerged from the data. Both attainment and lower dropout were better with the most experienced instructor. Most instructors take two years until their class grades rise to a stable level. In this trial the most experienced instructor achieved greater attainment rises (13%), as well as the greatest fall in dropout rates (18%).

4. Usability

Adaptive learning systems do not follow the usual linear path. This often makes the adaptive interface look different and navigation difficult. The danger is that students don't know what to do next or feel lost. In this case ASU saw good student acceptance across the board. 



5. Creating content
One of the difficulties in adaptive, AI-driven systems is the creation of usable content. By content, I mean text, structures, assessment items and so on. We created a suite of tools that allow instructors to create a network of content, working back from objectives. Automatic help with layout and conversion of content is also used. Once done, this creates a complex network of learning content that students vector through, each student taking a different path depending on their ongoing performance. The system is like a satnav, always trying to get students to their destination, even when they go off course.

6. Teacher dashboards

Beyond these results lies something even more promising. The system generates detailed and useful data on every student, as well as analyses of that data. Different dashboards give unprecedented, real-time insights into student performance. This allows the instructor to help those in need. The promise here is of continuous improvement, badly needed in education. We could be looking at an approach that improves not only the performance of teachers but also of the system itself, the consequence being ongoing improvement in attainment, dropout and motivation in students.

7. Automatic course improvement
Adaptive systems take an AI approach, where the system uses its own data to automatically readjust the course to make it better. Poor content, badly designed questions and so on, are identified by the system itself and automatically adjusted. So, as the courses get better, as they will, the student results are likely to get better.
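One crude but standard check a system can run automatically is item discrimination: if students who do well overall are no more likely to get a particular question right than students who do badly, the item is probably badly designed. A sketch of that check (the data shape is an assumption, and real systems use more robust statistics such as point-biserial correlation):

```python
def item_discrimination(responses):
    """Difference in success rate on one question between the top and bottom
    halves of the class, ranked by total course score.
    responses: list of (total_score, answered_this_item_correctly) pairs.
    Values near zero or negative flag the item for review."""
    ranked = sorted(responses, key=lambda r: r[0])
    half = len(ranked) // 2
    bottom, top = ranked[:half], ranked[-half:]
    rate = lambda group: sum(correct for _, correct in group) / len(group)
    return rate(top) - rate(bottom)

# Strong students got it right, weak students didn't: a healthy item.
print(item_discrimination([(10, True), (9, True), (2, False), (1, False)]))  # 1.0
```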

8. Useful across the curriculum
By way of contrast, ASU is also running a US History course, very different from Biology. Similar results are being reported. The platform is content agnostic and has been designed to run any course. Evidence has already emerged that this approach works in both STEM and humanities courses.

9. Personalisation works
Underlying this approach is the idea that all learners are different and that one-size-fits-all, largely linear courses, delivered largely by lectures, do not meet this need. It is precisely this dimension, the real-time adjustment of the learning to the needs of the individual, that produces the results, along with the increase in the teacher’s ability to know and adjust their teaching to the class and to individual student needs through real-time data.

10. Students want more

Over 80% of students on this first experience of an adaptive course, said they wanted to use this approach in other modules and courses. This is heartening, as without their acceptance, it is difficult to see this approach working well.



Conclusion

    We have been here before. These systems work. They are already powerful teachers and will become Universal Teachers in any subject. The sooner we invest and get on with the task, the better for learners.

    Saturday, June 10, 2023

    Papert, AI and concrete learning

    Wittgenstein and Vygotsky are worth examining in relation to Generative AI, as their views on language are, I think, very relevant to the use of language to teach and learn. Another theorist who comes to mind is the South African, Seymour Papert. 

    Papert, unlike most learning theorists, was a mathematician and expert in AI and co-wrote, with Marvin Minsky, the influential book 'Perceptrons' (1969) - I did a podcast on this early work on neural networks. He co-founded the Artificial Intelligence Lab at MIT with Marvin Minsky and was a founding faculty member of the MIT Media Lab. He also co-founded the Epistemology and Learning Group at the MIT Media Lab, which explored the intersection of AI and education, with a focus on the development and application of educational technologies, including AI-based tools, to enhance learning. Their work contributed to the development of intelligent tutoring systems, simulations, and learning environments that leveraged AI techniques to support learners. He died in 2016 and is someone worth listening to.
     

    Mindstorms

    In his book ‘Mindstorms: Children, Computers, and Powerful Ideas’ (1980) he explored the potential of computers and AI for educational purposes. Children could use computers as ‘mindstorms’ to engage in creative problem-solving, discover mathematical concepts, and develop a deeper understanding of various subjects. He saw AI as providing personalised and student-centered learning, allowing them to pursue their individual interests and learn at their own pace. He believed that AI technologies could adapt to students' needs, provide tailored feedback, and offer opportunities for meaningful exploration and discovery. Papert also emphasized the importance of social and emotional aspects of learning. He believed that AI technologies could also facilitate collaborative learning, foster social interactions, and support the development of emotional intelligence. Generative AI is now delivering on that promise.

     

    Constructionism

    For Papert, computers and the web are not merely tools but ways of thinking, in the same way that writing is a way of thinking and expression. In 'The Children’s Machine' (1993) he promoted ‘concrete’ learning as he saw the teaching of purely abstract knowledge as hopelessly imbalanced (I agree). It is not that the computer teaches the child but that the child uses the computer to learn. Instruction, he thought, should be replaced by construction. With Generative AI, and ChatGPT, our relationship with knowledge changes from a search and retrieve model or presentation through lectures, PowerPoint and page turning e-learning, towards dialogue. He would, I’m sure, have approved.

     

    He had worked with Piaget and certainly saw learning as a constructivist process but had stronger views on learning by doing. He promoted ‘Constructionism’ because he believed that learners actively construct their own knowledge by engaging in hands-on, meaningful activities and by creating and manipulating objects in the external world and AI could provide learners with tools and environments for exploration, experimentation and creation. This is why he went on to create LOGO, a computer language for computer control by learners.

     

    Knowledge Machine

    As part of his constructionist vision, he speculated that a ‘Knowledge Machine’ could be built that takes anyone, especially children, into a learning environment where they can interact, problem solve and develop. His knowledge machine predicted the virtual environment that appeared as the web and the move from 2D towards 3D virtual learning worlds, such as computer games, Minecraft, AR and VR. In this he was prophetic, as the web produced devices and resources that were almost unimaginable when Papert first proposed this idea. Generative AI is also in the process of making avatars, with speech recognition and eye-tracking, within 3D worlds, with AI-guided learning pathways to make learning in context much easier.


    Low floor, high ceiling

    Papert made an interesting contribution to technology used for learning in proposing his 'Low floor, high ceiling' idea, where the tool is super-easy to use (ChatGPT) yet has lots of headroom in terms of functionality and efficacy. This is why he would have loved Generative AI. It explains the popularity of search - simply type into a box, it also explains why hundreds of millions are using generative AI - fiendishly easy to use with mind blowing functionality, productivity and potential.


    Conclusion

    Seymour Papert thought deeply about AI and learning. He saw AI as a powerful tool that could transform education and learning and was an advocate for integrating AI and technology into the process of learning. In particular, he saw AI technologies as providing personalized learning experiences that empower students, and facilitate deep engagement with the subject matter. He would have been overjoyed at the current interest in Generative AI as both proof that he was right and, as an active socialist in South Africa, would have relished the opportunity to use the technology for good.

    Friday, June 09, 2023

    Vygotsky, language, intelligence and AI

    Vygotsky is an oft-quoted but rarely read learning theorist. Let me start by saying I am not a social constructivist but in using ChatGPT3.5 and 4, I have become more Vygotskian, as I have come to see ChatGPT as similar to the concept of the Vygotskian teacher. He gives us insights into why language is key to intelligence and why Generative AI may be the most powerful form of learning technology we have ever invented.

    Learn from language

    LLMs are fundamentally Vygotskian. They have been trained on data (language) as created and used by us, and therefore they learn from us. There is another step, where humans train the model further by making judgements to make the output more palatable through what is called Reinforcement Learning from Human Feedback (RLHF). 

     

    Knowledgeable other

    Just as Vygotsky thought of language as a mediating source for learning, so LLMs use this form of mediation by language. It has been further trained by real humans to align it with our expectations. This is how babies and children learn. They listen, are spoken to and guided by adults. When we use a LLM we are like young children asking questions and being given responses by what Vygotsky calls a ‘knowledgeable other’. That knowledgeable other is AI.

     

    Ultimately the strength of Vygotsky’s learning theory stands or falls on the idea that learning is fundamentally a socially mediated and constructed activity. Psychology becomes sociology as all psychological phenomena are seen as social constructs. Vygotsky's theory does not propose distinct developmental stages, like Piaget, but instead emphasizes the role of social interaction and cultural context in cognitive development. He believed that social interaction plays a critical role in children's cognitive development and argued that children learn through interactions with more knowledgeable individuals, who provide guidance and support.

     

    Generative AI

    This is exactly what ChatGPT4 does, in general, but also in more formal teaching experiences, as in Khan Academy’s implementation, Duolingo and other second-level implementations of Generative AI. It provides the ‘knowledgeable other’. In fact, this ‘knowledgeable other’ is better than any one teacher, as it covers all subjects, at different levels, personalised, and is available 24/7, 365 days a year, endlessly patient, polite, encouraging and friendly.

     

    Mediation

    The cardinal idea in Vygotsky’s psychology of learning is that knowledge is constructed through mediation, yet it is not entirely clear what mediation entails and what he means by the ‘tools’ he refers to as mediators. In many contexts, it simply seems like a synonym for discussion between teacher and learner. However, he does focus on being aware of the learner’s needs, so that they can ‘construct’ their own learning experience. This shifts the focus of teaching towards guidance and facilitation, as learners are not so much ‘educated’ by teachers as helped to construct their own meaning and learning.

     

    This is exactly what ChatGPT4 does as a ‘tool’. It mediates through dialogue and allows the learner to construct their own sense and meaning by driving the learning process through dialogue. It uses language, the key form of learning and social development for Vygotsky, to patiently go at the learner’s own pace and level, and even identify mistakes. It can keep us in a useful Zone of Proximal Development, as the process of dialogue captures what has been said to guide what should be said next. Language is a form of action, where thought essentially involves manipulation of internalised language, and so can be seen as a form of inner action.


    Tools

    He often uses the word ‘tool’ which refers to any external artifact, symbol, or sign that individuals use to help them think, problem-solve and learn. Tools can be physical objects, as well as cultural and psychological tools. Tools help individuals interact with the world and transform their mental processes. They bridge the gap between a person's current cognitive abilities and their potential for higher-level thinking.

     

    Cultural tools are the external artifacts and signs that are created and shared within a specific cultural context. Examples of cultural tools include writing systems, books, calculators, maps, computers, and language itself. He most likely would have included Generative AI as a useful tool for learning. Vygotsky also identified psychological tools, which are internalized cultural tools that become part of an individual's cognitive processes. Psychological tools include strategies, problem-solving techniques, mnemonic devices, and other mental processes that individuals acquire through social interaction and cultural learning.

     

    Conclusion

    Language shapes thought and therefore is intelligence. Both Wittgenstein and Vygotsky had this insight: that language is not an emergent quality of intelligence but is intelligence itself. This explains why LLMs are so powerful. Intelligence is embodied in language and we learn from language. If they are right, generative AI, using written or spoken language, will prove to be the most powerful form of learning technology we have ever seen, as it is congruent with how we learn.

    Wednesday, June 07, 2023

    Apple will shift us from 2D to 3D and set the pace in learning with the Vision Pro



    Artificial Intelligence is like a black hole, sucking in all attention in learning technology, but I published a book, 'Learning in the Metaverse' (2023), on the shift that is taking place in parallel, from 2D to 3D. The publisher insisted on using the word 'Metaverse' but it is really about mixed reality.


    The world’s major religions have all posited another virtual 3D afterlife, we build monumental 3D spaces as theatres, cinemas, sports stadia and so on for social gatherings, we have had full-blown 3D video games since the early 90s. Roblox, Fortnite and Minecraft have hundreds of millions of users. We are 3D people who live and work in a 3D world, yet most learning is 2D text on paper, PowerPoint or 2D images and text on screens.

    Vision Pro

    I have always maintained that the shift into virtual worlds will happen and have written about this extensively. What it needs is consumer tech to make it happen. Apple have just announced their Vision Pro, a high-end VR headset. Apple have set the standard and trajectory going forward. It is a springboard product. What they're after is the redefinition of the human-machine interface. It has an eye-watering price at $3500 and, at two hours, limited battery time, but oh what a product. To be fair, it is called ‘Pro’ as they’re releasing it to the research and professional market.


    Apple is selling a dream machine here, a window into new immersive realities. This opens the mind up to heavens on earth but also combinations of the real and unreal. You are not looking at a screen, you are in a world. It also redefines the boundary between the real and virtual worlds. This is the mind shift that Apple is selling. This is not using a device, it is being inside a device. It actually blurs the real and virtual, mixed reality is a matter of degree with one small wheel on the headset. This is the shift, redefining that boundary as a matter of degree.


    Sure, it's a little heavy, but it packs a lot into a standalone unit. It has two speakers, one on either side, and lots of cameras, sensors and fans, all run from M2 and R1 chips, all inside the headset. There's a cable to an external battery, which you put in your pocket, with 2-4 hours of battery life.

     

    Superb interface

    It frees up apps from the restrictions and boundaries of a perceived screen. They can be placed anywhere in the new 3D space, especially for collaboration. Watching TV and movies will be like watching a 100 foot wide screen - superbly immersive, with 180 degree experiences and spatial audio. If you have a Mac you can AirPlay from the Mac to inside the headset in varying resolutions, ultra high resolution being one. It is superb quality.

    It is a complete computer on your head with a ton of sensors and cameras inside and outside. The interface is wholly eyes, hands and voice. There are no controllers: you just look (eye tracking is sensational, though it takes a little getting used to), speak and pinch your fingers. You can have your hand anywhere to pinch. It is fast and accurate, highlights what you are looking at, and you can click wherever you want. A virtual keyboard can pop up, and you can also talk to type. As it has its own OS, called visionOS, it’s the real deal. Like touching on an iPad, you can look and use your hands to select, scroll, throw, resize and drag stuff around, with low latency. It will also sync with your Mac to use your desktop and other applications. It will, of course, mirror your display as if it were a Mac with a giant screen. You can play games, watch movies or use it as a desktop.


    Customizing an environment with a large number of simultaneous windows is the big win. This takes computing into a manipulable 3D space.


    Passthrough

    You can open multiple apps, move them around and let go to lock in 3D space. Remember you are seeing passthrough in the sense of a camera showing you the outside world. It's not actually AR. The passthrough is pretty much real time. You can play ping-pong, in this reconstructed real world. Super-close up is difficult but you can use your phone while inside the headset. You can scroll the passthrough to get ever more immersion until you are fully immersed, on the moon, wherever, there are worlds provided.


    How does text input work?

    First, you can poke a virtual keyboard with your real fingers. Second, you can look at a key and pinch. You can also look at the microphone and speak. 


    This is a typical Apple move. Refine the user experience and make it as simple and intuitive as possible. Imagine this combination of eye tracking, gestures and voice recognition on all future devices. Optical ID is included for privacy.


    Voice recognition is important as AI has now provided that interface into Chatbot functionality, where the Chatbot will truly understand your meaning. I suspect Apple already have their own LLM driven version of ChatGPT that will eventually be integrated. The learning possibilities are mind blowing.

     

    This interface opens the device up to learning as you’re not taking up tons of cognitive bandwidth, only looking, pinching and talking. I can already see training in real contexts taking place with AI generated avatars and 3D worlds, sophisticated learning pathways, real assessment of performance and great data tracking, even of eye movements and behaviours. This may, at last, be the way we really can train and assess skills. Its possibilities in training and performance support are clear.


    Apps

    Some stock apps that come with Vision Pro - Apple Music, set in a music room. Apple TV and Disney Channel have their own environments. The photo app adds parallax. The astronomical sky can be seen. Jigspace allows you to import 3D models and play around with them - a dead cert for training. Keynote allows you to practice a talk in an immersive environment. Then a ton of existing compatible apps that you can use straight off. No Netflix, Spotify or YouTube yet.


    It will automatically connect to your Mac or iPhone, automatically black the screen out and be usable within the headset. Remember multiple apps can be used, so you can use these while your Mac screen is there. This is seamless and useful.

     

    Personas

    The eyes on the outside of the headset are your virtual eyes. This powers personas, impressive and strange at the same time. If you have passthrough you can see the eyes, they're blued-out when you're immersed. It detects external people and they shine through your immersed images when they approach. That's clever.


    To get your eyes, you have to get them captured, along with your hands, then take the headset off and look at it, then turn your head right and left, up and down. Then facial expressions: smile, raised eyebrows, closed eyes and so on. This is pretty good - it really looks like you. Once the capture is complete it has your persona, or avatar, captured. You can edit on glasses, skin tone and so on. Meta have done this too, but it's pretty amazing.


    You can use your persona in FaceTime and, if you all have Vision Pros on, it will look like you are looking at the right angle towards that person's face.


    I already have Synthesia and Heygen avatars. as well as Digital-Don my GPT. Then there's my identity on Facebook, Twitter, Blogger and LinkedIn. On top of that my email addresses. Our personas are multiplying as we re-present ourselves in the virtual world.


    Spatial audio

    Voices come from where people are in the room. Put them at a distance and they seem far away. It's not yet real but it's getting there. One could easily have collaborative training sessions, teachers or seminars, meetings. One could eventually have patients, customers or employees as automated avatars.

     

    Foveated rendering

    Quite cleverly, it renders the part you’re looking at in more detail, not the whole screen – so it is super-sharp. For learning the passthrough opens up all sorts of possibilities in mixed reality, as you can have all sorts of mixed reality learning experiences. The AR learning opportunities are endless as layers of reality can be presented. With a turn of the cog on the headset, you can control the degree of immersion. This is neat. 

     

    There is one very strange feature. The front of the headset has a screen that displays your eyes, not your real eyes but a representation of your eyes. It is activated if someone comes close. Clever, if not a bit weird, but the idea is to make you more human from the outside. Very Apple.


    A shout out also to Meta's Quest 3, as it also has passthrough, lets you get apps up and watch high-res movies. It is also more comfortable to wear as it is lighter - and a LOT cheaper. Don't write off other manufacturers here.

    Learning

    I've seen some fantastic applications using the VIVE and Oculus recently - for example, projects on CPR, as well as crime scene investigation for the police. The trainers tell us that the reaction from trainees is overwhelmingly positive. More to the point, the simple fact that you can 'look where you want' is the special feature that makes it real and therefore relevant. Far too much training for real-world jobs is done in classrooms. We can now bring that world into the classroom. Better still, the headset is standalone, so you can also take it out into the world. There are already videos of people using it while walking around, doing exercise and so on.


    The book covers a ton in learning:

    LEARNING

    Spatial thinking, Extended mind, Motivation, Self-Determination Theory


    AUTONOMOUS LEARNING

    Presence, Agency, Embodied learning, Vision, Sound, Touch, Autonomy and generative learning, Implementation

    COMPETENCES

    Tyranny of the real, Tyranny of place, Tyranny of time, Tyranny of 2D, Learning transfer, Assessing competence


    SOCIAL LEARNING

    Social metaverse, Collaboration, Social learning, Social skills


    LEARNING ANALYTICS
    Healthcare data, Data in learning, Eye tracking, Hearing, Haptics. 


    Conclusion

    Apple will watch and see what developers come up with. I suspect entertainment - ringside seats at sports events, cinema and games - will loom large. This, by all accounts, will deliver stunning experiences. But it is the desktop market that is the new battleground, and they’ve made a big move, way ahead of the awful HoloLens from Microsoft.

    An all-new App Store provides users with access to more than 1 million compatible apps across iOS and iPadOS, as well as new experiences that take advantage of the unique capabilities of Vision Pro. 

    The price, and the separate battery hanging from a cord with only 2-4 hours of battery life, are a bit of a disappointment - one movie and you’re out. But oh what a product. I have to tip my hat to Apple here; they're starting to take risks again.