Modeling Abstract Concepts
How can we apply learnings from neuroscience to address the roadblocks to discovery?
The National Academies of Sciences just published a report “Exploring the Bidirectional Relationship Between Artificial Intelligence and Neuroscience” https://doi.org/10.17226/27764. It explored the multidimensional, multiscale, and dynamic complexity of the brain, as well as the significant knowledge gaps that challenge the development of computational intelligence. A key conclusion is “Studying the simplest possible CONCEPTUAL models will help neuroscientists fill gaps in knowledge and generate new theories.”
In a Financial Times interview titled “The Productivity gains from AI are not guaranteed,” Google’s head of research, James Manyika, identified the main achievement of LLMs: transformers, the technology underpinning large language models, have allowed Google Translate to more than double the number of languages it supports, to 243. (To grasp the limitations, try an experiment. Find a website with articles in English and in a non-European language in which you are fluent. Use Google to translate the English into your other language and compare the result with the website’s own version.) Manyika acknowledged that when it comes to research, LLMs can only summarize and draft.
Generating new theories requires abstraction, conceptualization, and contextualization at much higher levels of precision than routine content production. The transformer architecture diagram shows that the model is not designed to abstract or contextualize conceptually, so it cannot learn in any significant sense.
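To make that point concrete, below is a minimal sketch of a single transformer encoder block (layer normalization and multi-head attention are omitted for brevity; all names, shapes, and weights are illustrative rather than taken from any particular model). Every operation in the block maps token vectors to token vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, b1, W2, b2):
    """One simplified encoder block: self-attention followed by a
    position-wise MLP. x has shape (seq_len, d_model)."""
    # Scaled dot-product self-attention: each token is re-expressed as a
    # weighted mixture of the other tokens' value vectors.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = softmax(scores) @ v @ Wo
    x = x + attn                      # residual connection

    # Position-wise feed-forward network, applied to each token independently.
    ff = np.maximum(0, x @ W1 + b1) @ W2 + b2
    return x + ff                     # residual connection

# Tiny usage example with random (illustrative) weights.
rng = np.random.default_rng(0)
d_model, d_ff, seq_len = 8, 16, 5
shapes = [(d_model, d_model)] * 4 + [(d_model, d_ff), (d_ff,),
                                     (d_ff, d_model), (d_model,)]
params = [rng.normal(scale=0.1, size=s) for s in shapes]
out = transformer_block(rng.normal(size=(seq_len, d_model)), *params)
print(out.shape)  # (5, 8): same shape in, same shape out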
Technically implementing the principles that can be discerned from representational shift and other neuronal behaviors can enable genuine discovery and learning.
About the author: Joe Glick, Co-Founder, Chief Innovation Officer, RYLTI