
Part 6: Large Language Models Have Roadblocks to Discovery

[Figure: context ontology]

Language is a Tool of Intelligence, NOT its Source

The purpose of discovery technology is to enable the user to learn something new, not for the machine to learn what the user already knows. Of course, the new knowledge has to be correct, and that is a challenge for LLMs with no solution in sight.

An article in Nature (9/25/24) reported, “Researchers tested various new models on thousands of prompts that included questions on arithmetic, anagrams, geography and science, as well as prompts that tested the bots’ ability to transform information, such as putting a list in alphabetical order. The result was that the accuracy of the answers increased as the refined models became larger and DECREASED as the questions got harder.” https://www.nature.com/articles/d41586-024-03137-3

LLMs are getting more accurate at simple tasks, but they are not becoming intelligent, because language is not the source of intelligence.

Another report in Nature explains, “The human brain is constantly picking up patterns in everyday experiences — and can do so without conscious thought.”
https://www.nature.com/articles/d41586-024-03116-8

This occurs in babies before they learn to speak. Take the concept of “mother” as an example. Infants grasp the mother pattern shortly after birth and only much later associate it with a word in whatever language(s) they are exposed to (left image; the photo shows ASL). This process occurs in the hippocampus (central image), which serves as the engine of learning, identifying patterns from diverse inputs with no connection to language.

Experts analyze new information by creating abstract mental models (Meyer 1992), and their learning often expands our language with new terms and expressions. Discovering new contextual patterns and relationships is the output of exercising our intelligence: we build an abstract ontology in the mind, which we frequently transform into language.

The section Leveraging Neuroscience discussed twinning the real-world reasoning process using biomimetic abstract modeling. The diagram of the solution container above includes a Context Ontology, and the image below (right) presents the high-level architecture of that ontology.
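As a purely illustrative aid (not RYLTI’s actual design, which is not detailed here), a context ontology can be sketched as a graph of abstract concepts connected by typed, context-dependent relations, with natural-language labels attached only as a final, optional layer. All names in the sketch below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    """An abstract node identified by a learned pattern, independent of any word."""
    pattern_id: str

@dataclass
class ContextOntology:
    """Minimal sketch: abstract concepts plus typed relations that hold only within a context."""
    relations: dict = field(default_factory=dict)   # (source, relation, context) -> target
    labels: dict = field(default_factory=dict)      # concept -> {language: word}

    def relate(self, source, relation, target, context):
        # Relations are recorded as contextual: the same pair may relate differently elsewhere.
        self.relations[(source, relation, context)] = target

    def label(self, concept, language, word):
        # Language is attached after the abstract structure exists, not before.
        self.labels.setdefault(concept, {})[language] = word

# Usage: the "mother" pattern exists before any word is attached to it.
mother = Concept("pattern:primary-caregiver")
infant = Concept("pattern:self")
onto = ContextOntology()
onto.relate(infant, "cared-for-by", mother, context="early-development")
onto.label(mother, "English", "mother")
onto.label(mother, "ASL", "MOTHER (sign)")
print(onto.relations)
print(onto.labels)
```

The point of the sketch is the ordering: the structure of concepts and contextual relations comes first, and words in any particular language are a late, swappable layer on top of it.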

We are still working to understand the nature of human language. Chomsky laid the foundation with his work on transformational grammar, but in a concluding talk at a CogSci conference he stated that he would not look at another paper on the nature of human language unless it accounted for the sign languages used by deaf cultures, such as ASL. These languages use spatial grammar, not just semantic signs, and of course the semantics of any language are contextual.

Thinking about the nature of human language makes it clear that LLMs are models of text and statistics, not models of language. They can learn only what is already evident, and we cannot use them to discover reliable new knowledge.

Leveraging the modeling methodology presented here requires significant adjustments in thinking and solution design, but it opens up substantial discovery opportunities.

About the author:
Joe Glick, Co-Founder, Chief Innovation Officer, RYLTI
