His work is interdisciplinary, crossing boundaries between cognitive science, computer science, and artificial intelligence, contributing both to theoretical insights about human cognition and practical applications in AI.
Computational mind
Tenenbaum's theories build upon the idea of the human mind as a computational system. One of his main contributions is in the area of probabilistic models of cognition, where he suggests that the human brain operates as an inference engine, constantly predicting, analysing and updating its estimate of the state of the world from incomplete information. Importantly, and practically, this is a process that can be modelled and replicated in AI.
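To make the idea concrete, here is a minimal sketch of the kind of recursive Bayesian belief updating such models describe. The two-state world, the observations and the likelihood values are all invented for illustration; they are not taken from Tenenbaum's work.

```python
import numpy as np

# Toy illustration of recursive Bayesian updating: an agent maintains a
# belief over two hypothetical world states ("raining", "dry") and revises
# it as noisy observations arrive. All numbers are invented for illustration.

states = ["raining", "dry"]
belief = np.array([0.5, 0.5])            # prior: no idea which state holds

# Likelihood of each observation given each state: P(obs | state)
likelihood = {
    "wet_pavement": np.array([0.9, 0.2]),
    "dry_pavement": np.array([0.1, 0.8]),
}

def update(belief, observation):
    """One step of Bayes' rule: posterior is proportional to likelihood * prior."""
    posterior = likelihood[observation] * belief
    return posterior / posterior.sum()   # normalise so the probabilities sum to 1

for obs in ["wet_pavement", "wet_pavement", "dry_pavement"]:
    belief = update(belief, obs)
    print(obs, dict(zip(states, belief.round(3))))
```

Each observation shifts the belief a little; the agent never needs complete information, only a prior and a model of how observations relate to world states.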
Bayesian models
Like Karl Friston, he uses Bayesian models and aims to build machines that can learn about the world in more human-like ways, including understanding physics, psychology and learning new concepts from limited data. AI systems that use Bayesian models belong to a category of machine learning that incorporates Bayesian statistics to infer probability distributions and make predictions. These models are known for their ability to learn from limited data by incorporating prior knowledge into the learning process. This is in contrast with many current machine learning systems that often rely on large datasets to learn effectively. Bayesian approaches can be very powerful in situations where data is scarce or when incorporating expert knowledge is crucial.
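As a rough illustration of why the prior matters when data is scarce, the following sketch compares a plain frequency estimate with a Bayesian (Beta-Bernoulli) estimate after only three observations. The prior and the data are invented for the example, not drawn from any particular system.

```python
from scipy import stats

# Toy comparison of a Bayesian estimate with a plain frequency estimate when
# only three observations are available. The Beta(2, 2) prior encodes mild
# prior knowledge that the true rate is probably not extreme.

observations = [1, 1, 1]        # three successes, no failures
successes = sum(observations)
failures = len(observations) - successes

# Frequency (maximum-likelihood) estimate: ignores prior knowledge entirely.
ml_estimate = successes / len(observations)            # = 1.0

# Bayesian estimate: posterior is Beta(2 + successes, 2 + failures).
posterior = stats.beta(2 + successes, 2 + failures)
print("ML estimate:      ", ml_estimate)
print("Posterior mean:   ", round(posterior.mean(), 3))   # about 0.71
print("95% credible int.:", [round(x, 3) for x in posterior.interval(0.95)])
```

With only three data points the frequency estimate jumps to certainty, while the Bayesian estimate stays appropriately cautious because the prior still carries weight.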
He has also proposed a ‘Bayesian Program Learning’ framework, where instead of just learning patterns in data, machines would be able to learn the algorithms themselves that generate the data.
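A toy sketch of the underlying idea, and not Tenenbaum's actual framework: enumerate a handful of candidate "programs" that could have generated the observed data, then score each by a simplicity prior combined with how well it reproduces the data. The candidate set and scoring scheme below are invented purely for illustration.

```python
# Sketch of program induction in the Bayesian spirit: score candidate
# generative programs by prior (simpler is better) plus likelihood
# (how well they reproduce the observed input-output pairs).

data = [(1, 2), (2, 4), (3, 6), (4, 8)]          # observed (input, output) pairs

candidates = {
    "double":        lambda x: 2 * x,
    "square":        lambda x: x * x,
    "add_one":       lambda x: x + 1,
    "double_plus_0": lambda x: 2 * x + 0,
}

def log_prior(name):
    # Simplicity prior: penalise longer program descriptions.
    return -len(name)

def log_likelihood(program):
    # Each mismatch between the program's output and the data is penalised.
    return sum(0.0 if program(x) == y else -10.0 for x, y in data)

scores = {name: log_prior(name) + log_likelihood(f) for name, f in candidates.items()}
best = max(scores, key=scores.get)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
print("best hypothesis:", best)                  # "double": fits the data and is short
```

The winning hypothesis is a small generative procedure rather than a set of fitted weights, which is the sense in which the machine is learning the algorithm behind the data rather than just its surface patterns.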
One or few shot systems
Humans are exceptionally good at learning quickly and efficiently. For instance, a child can often recognise a new animal from just one picture and then identify it in various contexts. This contrasts with traditional machine learning systems, which typically require thousands of examples of such animals to achieve a similar level of recognition. This is because we humans bring a wealth of prior knowledge and context to new learning situations, which allows us to make inferences from sparse data. Human learning is also highly adaptable. We can apply what we have learned in one domain to a completely different domain. Understanding how to incorporate prior knowledge into AI systems can provide insights into cognitive processes like reasoning, generalisation, and conceptual learning.
AI systems that can learn from a few examples demonstrate a similar kind of adaptability, hinting at the underlying flexibility of human cognition. Such systems often use techniques like transfer learning, where a model trained on one task is adapted to a related task with minimal additional data, or meta-learning, where the model learns about the learning process itself during training so that it can adapt to new tasks with limited data.
In AI, these one-shot or few-shot learning systems are designed to learn information or recognise patterns from a very limited amount of data – typically only one or a handful of examples – and aim to mimic the more human-like ability to learn quickly and efficiently from minimal information.
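A minimal sketch of one common style of few-shot classification is given below: embed each example with a pretrained feature extractor, average the few labelled examples per class into a "prototype", and classify new inputs by nearest prototype. The embed() function here is only a stand-in, and the example data is made up.

```python
import numpy as np

# Prototype-style few-shot classification. In practice embed() would be a
# network pretrained on a large, unrelated dataset (the transfer-learning part);
# here it is an identity placeholder so the sketch runs on its own.

def embed(x):
    return np.asarray(x, dtype=float)     # placeholder "embedding"

def build_prototypes(support_set):
    """support_set maps each class label to a small list of examples."""
    return {label: np.mean([embed(x) for x in examples], axis=0)
            for label, examples in support_set.items()}

def classify(x, prototypes):
    distances = {label: np.linalg.norm(embed(x) - proto)
                 for label, proto in prototypes.items()}
    return min(distances, key=distances.get)

# One or two examples per class are enough to define the prototypes.
support = {"cat": [[0.9, 0.1]], "dog": [[0.1, 0.9], [0.2, 0.8]]}
prototypes = build_prototypes(support)
print(classify([0.8, 0.2], prototypes))   # -> "cat"
```

The heavy lifting is done by the pretrained embedding; the few-shot step itself is almost trivial, which is precisely why so little new data is needed.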
By creating and studying AI systems capable of one-shot and few-shot learning, researchers can test hypotheses about human cognition and develop computational models that may reflect aspects of human thought processes. It is a two-way street; not only does our understanding of the brain inform the design of AI systems, but the behaviour of AI systems can also provide clues about the principles of human intelligence.
Common sense core
Tenenbaum explores the idea of building machines that learn and think like humans. A key concept is the creation of a ‘common sense core’ for AI, which would allow machines to use intuitive physics and psychology to understand the world in a way that is similar to how a young child learns.
Even in early childhood, we seem to learn very quickly about the physical and social world. This includes basic knowledge about objects, agents and the way they interact within the world's physical laws. We also come to know that other people have minds of their own, with beliefs, desires and goals that drive their actions.
Tenenbaum argues that this core knowledge is integral to human cognitive development and is something that artificial intelligence lacks. The goal of the common sense core idea in AI research is to build machines with a similar foundational understanding, so that they can navigate and learn from the world in a way that is analogous to how children learn and operate in the world.
Importantly, this core knowledge would not have to be explicitly programmed for every possible scenario. Instead, the machine would use it as a basis for learning about, and making inferences across, a wide array of situations and tasks, just as a child leverages intuitive physics to understand that a ball thrown into the air will come down, without needing to learn the exact equations of motion each time.
In computational terms, this might involve creating initial models in AI systems that reflect these basic understandings, which can then be refined through experience and learning. By incorporating this core knowledge, AI could perform better in a variety of tasks that require an understanding of the everyday, physical world, and could interact more naturally with humans by having a more aligned perspective on how the world operates. This is the line taken by many other AI researchers, most notably Yann LeCun.
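As a rough illustration of an initial model refined by experience, the sketch below starts from a structured assumption (unsupported objects fall with constant acceleration) and only refines the value of that acceleration from a few observations. The prior value and the data are invented; none of this is taken from Tenenbaum's models.

```python
import numpy as np

# "Core knowledge" here is the structural assumption h = 0.5 * g * t**2;
# experience only refines the free parameter g, rather than having to
# discover the concept of falling from scratch.

prior_g, prior_weight = 10.0, 2.0          # rough initial guess, held loosely

# Each observation: (drop height in metres, measured fall time in seconds).
observations = [(1.0, 0.45), (2.0, 0.64), (0.5, 0.32)]

# Under h = 0.5 * g * t**2, each observation implies g = 2h / t**2.
implied_g = [2 * h / t**2 for h, t in observations]

# Weighted blend of the prior guess with the observed estimates.
posterior_g = (prior_weight * prior_g + sum(implied_g)) / (prior_weight + len(implied_g))
print("refined estimate of g:", round(posterior_g, 2))

# The same structured model then generalises to unseen cases for free.
h_new = 3.0
print("predicted fall time from 3 m:", round(np.sqrt(2 * h_new / posterior_g), 2), "s")
```

The point of the sketch is the division of labour: the structure is given in advance, and only a small amount of experience is needed to calibrate it and to generalise to new situations.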
Discussion
This line of inquiry in connectionist thought deals with what some would see as innate qualities of the mind, dispositions we are born with, as opposed to pure sensory empiricism, where everything is seen as the result of incoming data. The existence and reading of other minds, along with an understanding of the world and the behaviour of things in that world, take us beyond the training of large models into the replication of intelligent models that come with these capabilities. Development of AI with a common sense core could be transformative, enabling more robust and adaptable AI systems capable of reasoning and learning from minimal data, much like humans.
Critique
Critics argue that these models, while elegant, may oversimplify the complexity and variability of human cognition. The challenge is to ensure that these models capture the nuances of real-world human thinking and learning. Some of his models, particularly those addressing high-level cognitive functions, might face questions about their scope of applicability. Critics could argue that these models need to be tested across various cognitive tasks to truly assess their versatility. To be fair, this work is in its infancy and he recognises the need for wider testing. Tenenbaum's work is highly interdisciplinary, spanning areas like psychology, computer science, and neuroscience. While this is a strength, it also presents challenges in integrating methods and theories from these diverse fields without oversimplifying or losing critical aspects unique to each discipline.
Legacy
For those in the AI field or interested in cognitive science and machine learning, Tenenbaum's work opens up ideas and insights into the intricacies of human cognition and its application to developing intelligent machines. His publications not only offer rich theoretical frameworks but also practical implementations and experiments that push the boundaries of our understanding of artificial intelligence.