"Does AI match human intelligence?" is just the WRONG question to ask

This paper argues that measuring AI's intelligence by comparing it to human or animal intelligence does not provide a useful metric for assessing AI's utility. Despite AI's remarkable advances, including the adoption of Machine Learning algorithms and the introduction of transformative architectures such as transformers, it still falls short of the learning capabilities and energy efficiency of biological systems. Human brains, with their complex neural networks and low energy consumption, outperform AI in multitasking, continuous learning, and reasoning. The paper highlights that current Large Language Models (LLMs), despite their inability to fully understand or model the world as humans do, still offer significant utility across a wide range of applications. It questions the relevance of AI achieving human-like consciousness or sentience, emphasizing that such characteristics are not prerequisites for AI's effectiveness in solving problems, and it acknowledges the ongoing debates around AI consciousness while noting that definitive claims require rigorous scientific validation. The paper also discusses the challenges AI faces when dealing with novel situations outside its training data, stressing the importance of continuous learning and adaptation, and touches on critical issues such as privacy, ethics, security, and the need for AI to become more energy-efficient and interpretable. In conclusion, the paper posits that while the comparison of AI to human intelligence remains a scientifically intriguing question, it is not crucial for recognizing AI's vast utility in addressing a wide array of practical problems across multiple sectors. AI's value lies in its applications and potential for improvement, not in achieving parity with human intelligence.

11/29/2023 · 4 min read

Every day, discussions arise about whether current AI matches human intelligence, often implying a direct correlation with AI's usefulness. However, defining 'intelligence' with quantifiable metrics is essential before making such comparisons. Human intelligence has been studied for centuries by brilliant scientists, and that work has led to the identification of multiple types of intelligence, each with its own metric(s).

This paper aims to explain why measuring AI's proximity to human or animal intelligence does not necessarily yield a useful metric for assessing AI's utility.

Undoubtedly, current AI falls short of the learning abilities exhibited by humans and many animals. These biological systems demonstrate remarkable skill in learning a wide range of tasks from an early age with minimal explicit data (arguably using a potentially larger amount of data through a form of self-supervised learning). Furthermore, the efficiency of the brain, consuming around 25 watts, stands in stark contrast to the hundreds of watts needed by digital processors for simple tasks and the enormous energy required to train AI models. The human brain, with approximately 100 billion neurons, 10^14 synapses, dozens of neurotransmitters, and potentially cognition-enhancing glial cells, achieves multitasking, continuous learning, and exceptional planning, reasoning, and prediction capabilities with significantly less energy. These disparities between biological and artificial systems are undeniable.

Over the last four decades, AI has continuously evolved, experiencing several 'winters' and breakthroughs. During the past 15 years, a plethora of Machine Learning algorithms, such as Clustering, Decision Trees, SVMs, and Dimensionality Reduction techniques, has emerged alongside advances in Neural Networks, including CNNs and Deep Neural Networks. These have enabled applications in fields like vision and recommendation systems used in myriad products every day. The introduction of the transformer architecture in 2017 marked a significant milestone, coupled with technologies such as knowledge databases, Retrieval-Augmented Generation, and Reinforcement Learning with Human and AI Feedback, to mention a few. ChatGPT's unprecedented adoption, reaching 100 million users within a few months of launch, is a record in technology history that highlights its utility, evidenced by its wide range of applications.
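To make the Retrieval-Augmented Generation idea above a bit more concrete, here is a minimal, self-contained sketch (the tiny corpus and the ranking scheme are illustrative assumptions, not how any particular product implements it): documents are ranked with a simple bag-of-words cosine similarity, and the best matches are folded into the prompt that would then be sent to an LLM.

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector (term -> count) over lowercase word tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(documents, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using the context below.\nContext:\n{context}\nQuestion: {query}"

docs = [
    "Transformers were introduced in 2017 and underpin modern LLMs.",
    "Retrieval-Augmented Generation grounds answers in external documents.",
    "The human brain consumes roughly 25 watts.",
]
print(build_prompt("What grounds LLM answers in external knowledge?", docs))
# The resulting prompt would then be passed to a language model (not shown here).
```

In practice the bag-of-words ranking would be replaced by dense embeddings and a vector database, but the overall flow (retrieve, augment, generate) is the same.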

Do LLMs understand the world the way humans do, with intuitive physics and the ability to model the consequences of their actions? Probably not, although it seems hard to deny LLMs some reasoning capabilities. Other interesting AI approaches are being studied to see whether AI could build a reliable world model that could then be used for planning, prediction, and more. See the work of Fei-Fei Li (https://profiles.stanford.edu/fei-fei-li?tab=bio), Yann LeCun (http://yann.lecun.org/) and his latest proposal, H-JEPA (Hierarchical Joint Embedding Predictive Architecture), and Josh Tenenbaum (https://www.csail.mit.edu/person/joshua-tenenbaum). Current LLMs may have a (very) partial understanding of the world learned from text-based training sets (and soon video), if at all. Still, lacking such a capability does not make them useless in the meantime, and that capability may not even be required for many of the problems worth solving.
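To give a flavor of the joint-embedding predictive idea mentioned above, here is a minimal sketch assuming PyTorch is available. It only illustrates the general principle (predict the representation of a future or masked state in embedding space, with gradients blocked through the target branch) and is deliberately simplified; it is not LeCun's actual H-JEPA design, in which the target encoder is typically a moving average of the context encoder.

```python
import torch
import torch.nn as nn

# Minimal joint-embedding predictive setup (illustrative only).
dim_in, dim_emb = 32, 16
context_encoder = nn.Linear(dim_in, dim_emb)   # encodes the observed context
target_encoder = nn.Linear(dim_in, dim_emb)    # encodes the state to be predicted
predictor = nn.Linear(dim_emb, dim_emb)        # predicts the target embedding

opt = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

for _ in range(100):
    x_context = torch.randn(8, dim_in)                   # placeholder "current state" batch
    x_target = x_context + 0.1 * torch.randn(8, dim_in)  # placeholder "future state" batch

    z_pred = predictor(context_encoder(x_context))
    with torch.no_grad():                                 # stop-gradient on the target branch
        z_target = target_encoder(x_target)

    loss = ((z_pred - z_target) ** 2).mean()              # predict in embedding space, not pixel space
    opt.zero_grad()
    loss.backward()
    opt.step()
```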

Are AI systems conscious or sentient? Scientists should continue to address the critical topic of consciousness, and no one can deny the remarkable progress made over the past few years. That being said, it seems fairly clear that AI systems such as LLMs merely reproduce or mimic human behaviors because they are trained on data generated by humans, which easily leads to anthropomorphic biases. To scientifically assess claims of AI consciousness or sentience, ablation studies that remove related content from the training set would be necessary to demonstrate a total absence of these qualities. Regarding sentience, it is hard to envisage it without a limbic system. Moreover, understanding human and animal consciousness is still a work in progress, and the core definition of consciousness significantly influences these debates.
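As a purely hypothetical illustration of the ablation idea (the keyword list and the toy corpus below are made up for the example), one could strip consciousness-related material from a training corpus, retrain, and then compare the system's behavior with and without that material:

```python
import re

# Toy ablation filter: drop any training document that mentions
# consciousness-related terms (terms and corpus are hypothetical).
ABLATION_TERMS = {"conscious", "consciousness", "sentient", "sentience", "qualia"}

def keep(document: str) -> bool:
    """Keep a document only if it mentions none of the ablation terms."""
    words = set(re.findall(r"[a-z]+", document.lower()))
    return words.isdisjoint(ABLATION_TERMS)

corpus = [
    "Philosophers debate whether machines can be conscious.",
    "Routing protocols exchange reachability information between networks.",
    "Some argue sentience requires a limbic system.",
]
ablated_corpus = [d for d in corpus if keep(d)]
print(ablated_corpus)  # only the routing sentence survives the ablation
```

A real study would of course need far more careful filtering (paraphrases, translations, indirect references) and a full retraining, which is precisely what makes such an experiment so expensive.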

Can Large Language Models (LLMs), or AI/ML systems in general, effectively handle entirely novel situations not covered in their training data? While it is true that most current ML/AI systems struggle with unfamiliar scenarios, this limitation is not unique to AI and applies to many existing systems to varying degrees. Developing AI capabilities for continuous learning and adapting to new situations is essential. Such an advance would mark a significant step in AI's evolution, expanding its applicability and effectiveness.
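One simple, admittedly crude, way a deployed system can at least flag that an input is novel instead of silently mishandling it is to compare the input against training-data statistics; the sketch below uses a per-feature z-score threshold and is only an illustrative assumption, not a description of how LLMs handle novelty.

```python
import numpy as np

# Crude novelty check: flag inputs that lie far from the training feature distribution.
rng = np.random.default_rng(0)
train_features = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))  # placeholder training set

mean = train_features.mean(axis=0)
std = train_features.std(axis=0) + 1e-8

def is_novel(x: np.ndarray, threshold: float = 4.0) -> bool:
    """Flag an input whose largest per-feature z-score exceeds the threshold."""
    z = np.abs((x - mean) / std)
    return bool(z.max() > threshold)

print(is_novel(rng.normal(size=8)))   # typical input -> likely False
print(is_novel(np.full(8, 10.0)))     # far outside the training data -> True
```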

This is not to say that AI is free of concerns: it is imperative to address privacy and ethical issues with responsible AI, as most large tech companies do today, and to improve security, reliability (e.g., LLM hallucinations), interpretability, and transparency while also reducing power consumption, to mention a few. AI will continue to progress, sometimes with incremental improvements and sometimes with breakthroughs, and it is likely that reinforcement learning will play a pivotal role in future advances. This learning approach, akin to nature's implementation through dopaminergic circuits for prediction and adaptation, could enhance AI's learning capabilities. Furthermore, the value of learning from failures, a critical aspect of human cognitive processes, may offer insights for AI development. AI stands to gain significantly from an understanding of nature's efficiency: less data-intensive processes, continuous learning without the need for discrete retraining, and overcoming catastrophic forgetting. My discussion of these topics can be found at https://jpvasseur.me/ai-and-neuroscience. Ultimately, AI research should continue exploring new strategies, algorithms, and architectures inspired by these concepts.
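To make the analogy with dopaminergic prediction circuits a bit more concrete, here is a minimal temporal-difference learning sketch on a toy five-state chain (the environment is invented for the example): each value update is driven by a reward-prediction error, the quantity dopamine neurons are often described as signalling.

```python
# TD(0) value learning on a toy 5-state chain; reward is given only at the last state.
# Each update is driven by the reward-prediction error "delta".
n_states, alpha, gamma = 5, 0.1, 0.95
V = [0.0] * n_states

for _ in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                               # deterministic step toward the goal
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only when the goal is reached
        delta = r + gamma * V[s_next] - V[s]         # reward-prediction error (the "surprise")
        V[s] += alpha * delta                        # learn in proportion to the surprise
        s = s_next

print([round(v, 2) for v in V])  # values grow as states get closer to the reward
```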

Although scientifically interesting, the question of whether AI matches human intelligence will remain open, and it may not be the most relevant one given the myriad problems AI can already address. The fact that AI's 'intelligence' differs from biological intelligence does not diminish its interest or utility. LLM-based systems will continue to improve across a broad range of tools such as assistants, co-pilots, and automation engines, and they will at some point be replaced by smarter, more sophisticated approaches. AI technologies already have a vast range of applications, spanning sectors such as Networking/Internet, Healthcare, Advertising, Industrial Automation, Entertainment, Agriculture, and of course Science. The immense scope of these use cases underscores the significant role AI plays in diverse fields.