An event took place this month that received little publicity but was described by Bill Gates as a “huge milestone in advancing artificial intelligence”. It could have profound implications for advances in AI. An OpenAI team of five neural networks, combined with a number of other techniques, beat human teams at the game Dota 2.
To understand the significance of this, we must understand the complexity of the task. A game environment like Dota 2 is astoundingly complex, and it is played by teams of five, all of whom determine long-term success by exploring that environment, making decisions in real time, identifying threats and employing clever team strategies. It’s a seriously chaotic, complicated and fast environment, made messier still by the fact that you’re playing against some very smart teams of humans.
To win, OpenAI created five separate neural networks, each representing a player, plus an executive layer that weights each player’s actions to determine team priorities. Using reinforcement learning techniques and playing against itself millions of times (the equivalent of 180 years of playtime per day!), the system learned fast. It is not yet at the level of the top professional Dota 2 teams, but it will get there.
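To make the idea concrete, here is a toy sketch of that setup – five independent per-player policies trained by self-play against a shared team reward. This is not OpenAI’s actual code: the three-action “game”, the reward function and the learning rule are all invented for illustration.

```python
import random

ACTIONS = ["attack", "defend", "farm"]

class PlayerPolicy:
    """One policy per player: action preferences nudged up or down by team reward."""
    def __init__(self):
        self.prefs = {a: 1.0 for a in ACTIONS}

    def choose(self, rng):
        # Sample an action in proportion to its current preference weight.
        total = sum(self.prefs.values())
        r = rng.uniform(0, total)
        for action, weight in self.prefs.items():
            r -= weight
            if r <= 0:
                return action
        return ACTIONS[-1]

    def reinforce(self, action, reward, lr=0.1):
        # Strengthen (or weaken) the preference for the action just taken.
        self.prefs[action] = max(0.01, self.prefs[action] + lr * reward)

def team_reward(actions):
    # Toy objective: a balanced team covering every role scores best.
    return 1.0 if set(actions) == set(ACTIONS) else -0.2

def self_play(episodes=1000, seed=0):
    """Five players repeatedly play the toy game and learn from a shared reward."""
    rng = random.Random(seed)
    team = [PlayerPolicy() for _ in range(5)]
    for _ in range(episodes):
        actions = [p.choose(rng) for p in team]
        r = team_reward(actions)
        for player, action in zip(team, actions):
            player.reinforce(action, r)
    return team
```

The point of the sketch is the structure, not the scale: each “network” learns only from its own actions plus a team-level reward, which is what forces cooperative rather than selfish behavior to emerge.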
What’s frightening is the speed of the training and the actual learned competence across a ‘team’, in a team context. The system had to optimize team decisions and think about long-term goals, not short-term wins. This goes way beyond games like chess and Go, where the AI controls a single player.
The implications are huge as AI moves from being good at single, narrow tasks to more general tasks, using a modular approach.
Team AI in military
Let’s take the most obvious parallel. In a battle of robot soldiers versus humans, the robots and drones will simulate the mission, going through millions of possible scenarios, then, having learned all they need to know, attack in real time, problem solving as a team as they go. Autonomous warfare will be won by the smartest software, organic or not. This is deeply worrying – so worrying that senior AI luminaries, such as Elon Musk and researchers within Google, signed a pledge this week saying they will not work on autonomous weapons. So let’s turn to the civilian world.
Team AI in robotics
I’ve written about shared learning in AI across teams of robots, which gives significant advantages in terms of speed and shared experience. Swarm and cloud robotics are very real. But let’s turn to another very real application – business simulations.
Team AI in business
Imagine a system that simulates the senior management of a company – its five main Directors. It learns by gathering data about the environment – competitors, financial variables, consumer trends, cost modeling, instantaneous cost-benefit analysis, cashflow projections, resource allocation, recruitment policies, training. It may not be perfect, but it is likely to learn faster than any group of five humans and make decisions faster, beating possible competitors. Would a team AI be useful as an aid to decision making? Do you really need those expensive managers to learn this stuff and make these decisions?
Team AI in learning for individuals
Let’s bring it down to specific strategies for individuals. Suppose I want to maximize my ‘learning’ time. The model in the past has always been one teacher to one or many learners. Imagine pooling the expertise of many teachers to decide what you see and do next in your learning experience or journey. Imagine those teachers having different dimensions – cross-curricular and so on. This breaks the traditional teaching mould. The very idea of a single teacher may just be an artifact of the old model of timetabling and institutions. Multi-teaching, by best-of-breed systems, may be a better model.
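One very simple way to picture multi-teaching is as a weighted vote: several specialist systems each score the candidate next steps for a learner, and the pooled score decides. The teacher names, scores and weights below are all hypothetical, purely to show the shape of the idea.

```python
def pool_teachers(candidate_activities, teacher_scores, teacher_weights):
    """Return the candidate activity with the highest weighted sum of teacher scores."""
    def combined(activity):
        return sum(teacher_weights[teacher] * scores[activity]
                   for teacher, scores in teacher_scores.items())
    return max(candidate_activities, key=combined)

# Hypothetical learner state: three possible next steps, three specialist "teachers".
activities = ["video", "quiz", "spaced_review"]
scores = {
    "adaptive_engine": {"video": 0.2, "quiz": 0.7, "spaced_review": 0.5},
    "spaced_practice": {"video": 0.1, "quiz": 0.3, "spaced_review": 0.9},
    "engagement_bot":  {"video": 0.8, "quiz": 0.4, "spaced_review": 0.2},
}
weights = {"adaptive_engine": 0.5, "spaced_practice": 0.3, "engagement_bot": 0.2}

next_step = pool_teachers(activities, scores, weights)  # → "spaced_review"
```

In a real system the scores would come from live models rather than fixed numbers, and the weights themselves could be learned per learner – but the executive-layer shape is the same as in the Dota 2 example.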
Team AI and blended learning
One could see AI determine the optimal blend for a course, or for you as an individual. If we take Blended Learning (not Blended Teaching) as our starting point, then, based on the learning task, the learner and the resources available from the AI teacher team, we could see guidance emerge on optimizing the learning journey for that individual: a pedagogic expert that determines what particular type of teaching/learning experience you need at that particular moment.
Conclusion
Rather than seeing AI as a single entity, and falsely imagining that it should simulate the behavior of a single ‘teacher’, we should see it as a set of different things that work as a team to help you learn. At the moment we use all sorts of AI to help you learn – search (Google), recommendation engines, engagement bots (Differ), support bots (Jill Watson), content creation (WildFire), adaptive learning (CogBooks), spaced practice (WildFire), even wellbeing tools. Imagine bringing all of these together for you as an individual learner – coordinated and optimized: not a teacher but a team of teachers.