
LLM Giants Need Openness, Transparency and Safety Engineering

As UK Deputy PM backs open source AI, experts warn of potential risks, challenges.
Dec 26th, 2023 4:00am
Image of Amanda Brock of OpenUK and Peter Cihon of GitHub at the AI Fringe conference, from YouTube.

Open source virtues around transparency and collaboration, as well as classical safety engineering practices, will be essential as the world learns to live with generative AI and frontier models, experts told the AI Fringe event in London last month.

The call for openness came as the UK’s deputy prime minister, Oliver Dowden, came out in support of open source AI at the AI Safety Summit the UK government held at Bletchley Park. The gathering of world leaders, policymakers and AI firms adopted the Bletchley Declaration, identifying key risks and pledging to collaborate on AI safety research.

But the prospect of “open source” AI models emerged as a fault line at the summit, with governments and companies expressing concern that open models could allow undesirables to get hold of advanced technology and cause widespread harm.

Open Frameworks Accelerate Development

Dowden told Politico that open source benefited startups and would likely be essential to ensuring the developing world also gained from advances in AI. “So I think there was a very high bar to restrict open source in any way.”

In a fireside chat at the AI Fringe, OpenUK CEO Amanda Brock and GitHub senior policy manager Peter Cihon pointed out that open frameworks such as TensorFlow and PyTorch had already turbocharged AI development.

But, they continued, open approaches were an essential part of the ongoing conversation about AI harms and regulation.

“Increasingly, we’ve seen especially in the last year and a half, you’re going to have open source coming for models as well,” said Cihon. “Kind of an intermediate layer of the stack.”

Collaboration and Transparency

Collaborative development done in the open could allow people to build on and extend models in new ways, he said. And open approaches could help remove some of the costs and other barriers to using AI.

But open models and ecosystems are particularly important when it comes to “transparency,” he said. The frontier models at the center of the Bletchley Park discussion were dominated by a handful of companies, and, in effect, just two countries, Cihon said. And “the elephant in the room is that large language models are largely black boxes, not well understood.”

It was the open source community that “is really pioneering best practices in terms of model documentation, model cards, data documentation, datasets.”

He pointed to The Pile by EleutherAI as “a classic example” of how to have these conversations and illustrate the processes of developing a model. It also supports “open science,” he said, as “you’ve seen the thorough documentation of how something was created, and you can iterate and move in new directions.”
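As a rough illustration of the documentation practices Cihon describes, the following is a minimal Python sketch, assuming the huggingface_hub library; it fetches a published model card and reads its structured metadata. The model id "gpt2" is only an example of a publicly documented model, not one discussed at the event.

# Minimal sketch: inspecting a published model card.
# Assumes the huggingface_hub package; "gpt2" is just an
# illustrative, publicly documented model id.
from huggingface_hub import ModelCard

card = ModelCard.load("gpt2")

# The card body is Markdown covering training data, intended use,
# limitations and evaluation results.
print(card.text[:500])

# Structured metadata (license, tags, linked datasets) lives in card.data.
print(card.data.to_dict())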

Managing Risk

Brock supported the Bletchley Declaration, saying it was a starting point for regulation that was agile and could flex over time as technology and innovation evolved. She said regulators had to learn from the past and avoid being too granular and inflexible.

“I think about it in terms of liability,” she added. “And how do we manage that risk … you can’t manage risk if you don’t understand it.”

Cihon said it was important to think beyond the hype and doom and “to really keep a laser focus on the ways that AI is harming people today and the need for regulation to address that.”

When it came to people “opining” about the risks of open source AI, he said, it was necessary to question what the real marginal risk was in making a model available. “In order to do harm in the world, with the support of an AI system, you need to take action in the world. And those actions are visible by law enforcement and prosecutable.”

When it comes to real-world harm, the tech world has much to learn from classical safety engineering, a session on “Building an AI Safety Culture” heard.

John McDermid, professor of software engineering at the University of York, said “it’s common in safety engineering to identify potential hazards, ways in which people might be harmed, and then define means to eliminate those hazards, or to control them in order to reduce risk.”

Some direct harms, such as discrimination in loan applications, might be easier to identify. However, developers need to consider all the ways systems can fail. If a system were looking at images for cancerous cells “and they miss those, then somebody will not get the relevant treatment,” he said.
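To make the hazard-analysis process McDermid describes more concrete, here is a minimal Python sketch of a hazard log; the fields, the scoring and the single example entry (a missed-detection hazard in an imaging system) are illustrative assumptions rather than anything presented in the session.

# Minimal sketch of a hazard log in the spirit of classical safety
# engineering: identify a hazard, assess it and record controls.
# Fields, scoring and the example entry are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Hazard:
    description: str               # how people might be harmed
    severity: int                  # 1 (minor) to 5 (catastrophic)
    likelihood: int                # 1 (rare) to 5 (frequent)
    controls: list[str] = field(default_factory=list)  # mitigations

    def risk_score(self) -> int:
        # Common simplification: risk = severity x likelihood.
        return self.severity * self.likelihood

hazard_log = [
    Hazard(
        description="Screening model misses cancerous cells, delaying treatment",
        severity=5,
        likelihood=2,
        controls=[
            "human review of negative results",
            "periodic re-validation on new imaging data",
        ],
    ),
]

# Review the highest-risk hazards first.
for h in sorted(hazard_log, key=Hazard.risk_score, reverse=True):
    print(h.risk_score(), h.description)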

Applying this to existential risks and foundational or frontier models, he said, might be harder, “because of the lack of visibility or the transparency of the systems.” But, he continued, “If I’m right about those problems, then we should avoid deploying this class of systems in critical applications where they could do a very great deal of harm.”

Responsible Computing

Rashik Parmar, group chief executive of BCS, The Chartered Institute for IT, said the professional body was working on a “responsible computing” framework that draws on such approaches and looks to establish best practices.

“This kind of Silicon Valley culture of break it and then figure out how to fix it later may have worked in the past, we don’t feel that’s appropriate for the future,” he said.

But best practices alone aren’t enough, he admitted. Citing the Cambridge Analytica scandal in the 2010s, he said that the engineers there had raised concerns, only to be told, “If you don’t want to do it, then you know where the door is.”

Engineers and professionals will typically understand the consequences of what they’re doing and want to do the right thing, Parmar said. “But unless we can have other mechanisms that can do the regulation, the governance, it’s not going to be enough. So how do we hold the executives to account?”
