
Managing AI Risk with VUCA and Knowledge Engineering Part 1 – Why?

Courtesy of Joe Glick, Chief Data Scientist, RYLTI

The VUCA framework – Volatility, Uncertainty, Complexity, and Ambiguity – precisely names the drivers of risk. It was coined at the U.S. Army War College in the late 1980s and in recent decades adopted by global management consultants. The more significant and challenging the risks, the more relevant VUCA becomes, and the risks associated with AI are very significant and challenging. Analytics built on the VUCA framework do not eliminate risk; they sharpen problem identification, improve the viability of mitigation strategies, and support an evidence-based approach to what we cannot know or control. How does it apply to AI?

VOLATILITY – usually associated with external forces we cannot control, such as geopolitics, markets, and competitors’ strategies. With AI, we have to add the internal volatility of black-box algorithms operating at massive scale. A senior scientist I knew from Los Alamos National Laboratory talked about “neurotic networks”.

UNCERTAINTY – assumptions, premises, and theories for which we have insufficient evidence or that we cannot test experimentally. With AI, we have to add the inherent uncertainty of statistical conclusions, which are core to AI/ML/NLP algorithms. As Einstein put it, “So far as the laws of mathematics refer to reality, they are not certain. And so far as they are certain, they do not refer to reality.”

COMPLEXITY – the real world is highly complex and interconnected. In 2005, as part of its “Get Real” project, DARPA calculated the number of potential interactions among one million agents as 10^300,000 – a number that could give you a headache if you try to grasp it (a back-of-the-envelope check follows the list below). Complexity is invisible to an AI algorithm because:

  1. It can only see what it has been engineered to find.
  2. It works within the narrow limits of the training data.
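As a rough sanity check on the order of magnitude – an assumption about how such a figure can arise, not DARPA’s published derivation – the number of possible subsets of one million agents, i.e. potential interaction groups, is

\[
2^{10^{6}} = 10^{\,10^{6} \cdot \log_{10} 2} \approx 10^{301{,}030},
\]

which lands at the same order as the cited 10^300,000.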

Neuroscientist Henry Markram, director of the Blue Brain Project, describes the mathematical challenge: “Now it would take a very long time to even try to calculate how many combinations there are (in the brain) when one can choose 10,000 possible genes from a set of 20,000, but what is sure is that it is more than the number of sub-atomic particles in this, and probably every other universe we can imagine.”
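Markram’s claim can be made concrete with Stirling’s approximation for the central binomial coefficient – a standard estimate, not his own calculation:

\[
\binom{20000}{10000} \approx \frac{4^{10000}}{\sqrt{\pi \cdot 10000}} \approx 10^{6018},
\]

which dwarfs commonly cited estimates of roughly 10^80 to 10^90 for the number of particles in the observable universe.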

AMBIGUITY – this risk is particularly significant in Large Language Models. Linguistic meaning is context-dependent and constantly evolving, especially online, which is the primary source of their training data. The issue is well understood; it is fueling global concern, which in turn is producing emerging regulation that is itself inevitably ambiguous.

So, the VUCA framework is ideally suited to guide the architecture of solutions aimed at mitigating and managing AI risk. But how can it be done?
