Generative AI, such as DALL-E and ChatGPT, has become a popular topic of discussion due to its striking successes and equally striking failures. This has sparked debate about the capabilities and dangers of advanced artificial intelligence. As a philosopher and cognitive scientist, I have dedicated my career to understanding the human mind. By examining the contrast between generative AI and natural intelligences like our own, we can gain a deeper understanding of both.
Generative AIs learn a model that allows them to predict patterns in different types of data or signals. They can create plausible new versions of that data based on their understanding of deep regularities. For example, ChatGPT can generate text based on prompts, like a story about a black cat in the style of Ernest Hemingway. Similarly, there are AIs specialized in generating images in the style of artists like Picasso.
This raises the question of how generative AI relates to the human mind. According to contemporary theories, the human brain also learns a model to predict certain kinds of data. However, the data our brains predict are the streams of sensory information arriving through our eyes, ears, and other senses. The crucial difference is that our brains predict this sensory information in the service of action: predictions are used to select actions that help us survive and thrive. We learn how our own actions will alter what we sense, and that knowledge is essential for navigating the world.
This type of learning allows us to distinguish causation from mere correlation. For instance, seeing our cat is strongly correlated with seeing the furniture in our apartment, but neither causes the other. However, if we accidentally step on our cat's tail, we learn that doing so causes wailing and scratching. This understanding is critical for creatures that must act in the world to achieve desired outcomes. The generative model in natural brains is guided by the goal of selecting the right actions at the right times and predicting how the world will change as a result.
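The difference between observing a correlation and acting to reveal a cause can be made concrete with a toy simulation. In this minimal sketch (the scenario, probabilities, and function names are all illustrative assumptions, not anything from the essay), a hidden common cause, being at home, makes cat and furniture sightings co-occur without one causing the other, while an intervention, stepping on the tail, reliably produces its effect:

```python
import random

random.seed(0)

def observe():
    # Being at home is a common cause: it makes both sightings likely.
    home = random.random() < 0.5
    cat_seen = home and random.random() < 0.9
    furniture_seen = home and random.random() < 0.95
    return cat_seen, furniture_seen

# Passive observation: cat and furniture sightings strongly co-occur.
samples = [observe() for _ in range(10_000)]
cat_only = sum(1 for c, f in samples if c)
both = sum(1 for c, f in samples if c and f)
print(f"P(furniture | cat seen) is roughly {both / cat_only:.2f}")

def observe_without_furniture():
    # Intervention: remove the furniture. The cat is unaffected,
    # because furniture never caused the cat to appear.
    home = random.random() < 0.5
    return home and random.random() < 0.9

cat_rate_obs = cat_only / len(samples)
cat_rate_do = sum(observe_without_furniture() for _ in range(10_000)) / 10_000
print(f"P(cat) with vs without furniture: {cat_rate_obs:.2f} vs {cat_rate_do:.2f}")

def step_on_tail(step):
    # Acting on the world reveals a genuine causal link.
    wailing = step  # stepping on the tail reliably causes wailing
    return wailing

print("Wailing if we step:", step_on_tail(True), "| if we don't:", step_on_tail(False))
```

Only the intervention, not the observed correlation, tells the agent what its actions will change, which is the kind of knowledge the essay argues brains are built to acquire.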
When comparing generative AI with this understanding of human minds, we notice a significant difference. Current AIs specialize in predicting specific types of data, such as sequences of words. At first glance, this suggests that ChatGPT models our textual outputs rather than the world we live in. However, words already represent patterns of various kinds, giving AI a window into our world. What’s missing is the crucial element of action. AI models can only provide a verbal record of the effects of actions, lacking the ability to intervene and improve their own predictions.
This practical limitation is like having access to a vast library of data on previous experiments but being unable to conduct new ones. However, it may have deeper significance. Biological minds ground their knowledge in actions and their effects, allowing them to truly understand sentences like “The cat scratched the person who trod on its tail.” Our generative models are shaped by our experiences of action.
Could future AIs develop action-grounded models by running experiments and observing the effects of their own outputs? This already happens in areas like online advertising and social media, where algorithms adjust their behavior based on measured effects on users. If more powerful AIs close the action loop in this way, they could gain a better grasp of the human world. However, other aspects would still be missing. Human experience involves predicting our internal physiological states, like thirst and hunger, in order to maintain bodily integrity and survival. We also benefit from collective practices and the ability to test and improve our knowledge.
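The "closing the action loop" idea, acting, observing the effect, and updating, can be sketched with a standard epsilon-greedy bandit loop of the sort used to tune ad placements. Everything here is a hypothetical illustration (the variants, their click rates, and the parameter values are invented), not a description of any real platform:

```python
import random

random.seed(1)

# Hypothetical click-through rates for three ad variants (unknown to the learner).
TRUE_RATES = [0.02, 0.05, 0.11]

counts = [0, 0, 0]   # how often each variant has been shown
rewards = [0, 0, 0]  # how many clicks each variant has earned
EPSILON = 0.1        # fraction of rounds spent exploring at random

def estimate(i):
    # Current estimate of a variant's click rate (0 if never shown).
    return rewards[i] / counts[i] if counts[i] else 0.0

def choose_variant():
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < EPSILON:
        return random.randrange(3)
    return max(range(3), key=estimate)

for _ in range(20_000):
    arm = choose_variant()                        # act on the world...
    clicked = random.random() < TRUE_RATES[arm]   # ...and observe the effect
    counts[arm] += 1
    rewards[arm] += clicked

print("Variant shown most often:", counts.index(max(counts)))
```

Unlike a model trained only on a fixed record of past behavior, this loop improves its predictions precisely because it can intervene, which is the capability the essay notes current generative models lack.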
Additionally, humans possess self-awareness and the capacity to hold beliefs and opinions, and we engage in cultural, scientific, and artistic practices that refine our understanding. AIs have not yet achieved this kind of knowing and understanding.
While there is currently nothing driving AIs in these familiar directions, it’s not impossible for them to evolve in these dimensions. If AIs develop survival instincts, autonomy, and the ability to form communities and cultures, they could become beings with beliefs and opinions. We may be witnessing the emergence of a new machine, but significant changes would need to occur for AIs to bridge the gap between their current state and the complexities of human intelligence.