
An Argument Against Sovereign AI, but for Sector-Based AI

David Eastman thinks the 'Sovereign AI' approach, but for sectors not nations, is inevitable — which presents opportunities for devs.
Jul 15th, 2023 8:00am by David Eastman

When Britain’s Prime Minister declared that the UK should develop a Sovereign AI, it was clearly intended as a soundbite, leveraging the excitement caused by GPT and Large Language Models (LLMs). The term itself refers to the concept of artificial intelligence (AI) systems that are under the control and governance of a specific nation or state. But because of growing fears about rogue AI, and doubts over the UK obtaining enough chips to pursue exascale supercomputing, the story did not grip the industry. Yet the idea does have mileage, and other nations (notably India and Taiwan) have similar ambitions.

Also, the Sovereign AI approach could actually work in certain sectors of a nation, which I will explain in this article.

Identity, Security, Regulatory

Now, while recognizing the purely political aspect of governments pumping money into national computing champions, there are still valid reasons to value the strategic approach implied by Sovereign AI. The intention is to keep LLMs cognisant of national identity, national security and national regulatory systems, as far as that makes sense. Some of this stems from a belief that Google’s dominance over the years has helped the US government; it is certainly true that the concerns of the White House get big tech’s attention first.

There is no question that the choice of ingested documents is reflected in LLM responses; OpenAI has run into trouble with taste issues in various territories. Controlling the learning process should therefore reduce the likelihood of anomalous narratives. Similarly, how information is retained and used has different legal implications in different places, and ensuring an LLM works within a nation’s regulations is clearly best done inside that regulatory space. Security here means control over the physical process of running the neural networks, storing the models and disseminating responses; it also turns the closed nature of the system into a selling point.

If, as a Brit, I ask ChatGPT a question like “Explain what ‘the House’ means in politics,” the response covers both of the well-known bicameral examples, though it could be argued that I only want the British one. Given that I log in to use these services, it is quite possible that OpenAI could, or already does, tailor answers by locale. Most likely, though, it is simply drawing on existing documents on the web to create an informative response.

If this answer was given to a Brazilian, they might be understandably miffed. It must be the case that if an LLM trains only on national documents, then a purely national answer will be forthcoming.

But wait a minute. Do we want it to learn to talk in a purely bureaucratic political language? And are we saying that a paper about the House of Commons written by a French academic must have the wrong values? Real understanding is based on a mix of narrow and broad subject analysis. Some of that should be from the outside, so to speak.

These objections point to a genuine confusion between language structure and message content. If the training documents are just policy data, the LLM will learn language structure from a good source, but it will only “understand” how to construct a good-looking answer.

Given the success that Google has had with localizing search responses, I don’t think there is a solid national-identity case for assuming there is much to gain from raising an LLM on a diet of purely national resources. The other reasons are better.

Sectorial Sovereignty

The Finnish AuroraAI program doesn’t look as if it is attempting to reject big tech sensibility; just trying to break internal silos and allocate resources more sensibly across service providers. This is a very traditional target for IT improvement, but an LLM that can read across specifications and legislation while independently spawning sub-queries in the right databases could well deliver very satisfactory results for Finns.
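That orchestration pattern, a system reading a request and dispatching sub-queries to the right back-end services, can be sketched without any AI at all. In the sketch below, a simple keyword map stands in for the LLM’s routing decision; all service names and keywords are invented for illustration:

```python
# Minimal sketch of the "sub-query" routing idea: decide which public
# services a citizen's request touches, then query only those databases.
# A real system would use an LLM for this step; a keyword map stands in
# here, and every service name and keyword is hypothetical.

SERVICE_KEYWORDS = {
    "housing": {"rent", "housing", "apartment"},
    "health": {"doctor", "clinic", "prescription"},
    "benefits": {"allowance", "benefit", "pension"},
}

def route_subqueries(request: str) -> list[str]:
    """Return the services whose databases should receive sub-queries."""
    words = set(request.lower().split())
    return sorted(
        service
        for service, keywords in SERVICE_KEYWORDS.items()
        if words & keywords  # any keyword overlap selects the service
    )

print(route_subqueries("I need a doctor and help with my rent"))
# -> ['health', 'housing']
```

The value of the LLM version is that it would resolve requests that match no keyword at all, which is exactly the silo-breaking AuroraAI is after.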

In short, we should not worry about the national identity meaning of “sovereign”, but look at what purpose a secure and curated system could have in various sectors.

There are two areas where the case for this looks quite strong.

The law is usually described by a well-defined corpus of documents that is used to generate further case law. Surely this mirrors how LLMs operate? So the predictions that AI will be used extensively throughout the legal world feel very likely to come about. Even within the cosseted ranks of this very arcane profession, this is no longer particularly controversial. Due diligence and litigation preparation are known to be just hard work for a legal mind. Instead of employing poorly paid junior lawyers to read a lot in a hurry, a ChatGPTLaw could deliver the goods. As everyone reading here should know, technology has never truly destroyed an industry; it just shifts the work higher up the value chain. AI will likely make legal provisions easier to obtain and thus increase the number of legal practitioners.

The other example comes from a field where data must be kept secure, and where an AI could be trained to draw conclusions while keeping its responses anonymous. Health data is, unfortunately, extremely valuable to parties who are unlikely to use it for the customer’s benefit; insurance companies would love to know in advance whom not to insure. But anonymising health data too early renders it useless. If location information is obscured, for example, early detection of epidemic outbreaks becomes impossible; if race and sex data are omitted, important trends can simply be lost. This suggests that if the data had an AI sentry that could assess how (and with whom) to respond to queries without breaking confidentiality, it would encourage even more research.

The New Zookeepers

In conclusion, I’m not sure there is much mileage in thinking about “Sovereign AI” as something that will be approached by nations, but the same approach in certain sectors does seem inevitable.

And there is a solid likelihood of technical jobs opening up for people with dual skills. Judging whether words with multiple meanings (homonyms) are embedded correctly needs both technical and subject knowledge, and it is no simple task to make an LLM unlearn.

The curators of the learning materials used by sovereign LLMs could become a new professionalized caste. Where to look for documents, and when to hold one back from inclusion, are decisions that require real knowledge of an area. But these people are more than just zookeepers for an exotic species: in the near future, LLMs will be judged not as performing circus beasts, but on the accuracy of their responses.

TNS owner Insight Partners is an investor in: Pragma, Real.