
Open Models: Not Stymied Yet

Proponents of closed foundation models worry that open source large language model technology could fall into the wrong hands, a notion that Jim Zemlin of the Linux Foundation rejected as "vague" at the organization's conference.
Dec 14th, 2023 9:08am

SAN JOSE, CALIF. — The argument for closed foundation models rests on vague concerns that, someday, open source large language model (LLM) technology could prove too dangerous in the wrong hands, said Jim Zemlin, executive director of The Linux Foundation, at the organization’s inaugural conference here.

“Today, unfortunately, we’re starting to see a little bit of a trend moving away from openness in generative AI and towards more closed foundation models that are only accessible via APIs,” Zemlin said in his keynote address. “I find the arguments pretty vague and pretty unconvincing.”

The backdrop? News from the European Union of the first-ever AI regulations, in which open source LLMs got a pass. It’s a relief: the original version of the law would have tightly regulated foundation models, including open source ones. After pushback, the EU dropped the tight constraints and adopted a more hands-off approach.

The full text is still under wraps. Other regulatory bodies will study the law closely for guidance, giving some hope that individual countries may take notice and recognize why open source LLMs serve as a safer approach than closing off open innovation.

Mistral AI, the company behind the open source Mistral LLMs, got the news and ran with it. Its team of former OpenAI and Google technologists released its latest open source LLM into the wild soon after the news from the EU broke. Topping it off, the company also raised a whopping $415 million.

It’s ironic. Much of generative AI large language model technology evolves from open source technologies. Open source software enables the building of foundation models and machine learning models. LLM technology results from communities such as PyTorch, the open source machine learning framework, along with open datasets and academic research.

Zemlin said more dangers come from closing off open innovation. Restrictions on openness tend to benefit a small set of incumbents.

And the bad actors will ignore the bans, anyway. Open source makes for better privacy and a safer world. How does this relate to LLMs? An open source LLM or open source model is transparent so people can find vulnerabilities, review the parameters, and more.
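That auditability is concrete: with an open model, anyone can load the weights and enumerate every parameter tensor. As a minimal sketch of the idea, the snippet below inspects a toy PyTorch module rather than a full LLM (the model and layer sizes here are illustrative, not from any real open model), but the same `named_parameters` pattern applies to open LLM checkpoints.

```python
# Hypothetical illustration: open weights let anyone enumerate and
# audit a model's parameters. Demonstrated on a toy PyTorch module.
import torch.nn as nn

# Stand-in for an open model whose weights you can actually load.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

# List every parameter tensor by name and shape.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# Total trainable parameter count.
total = sum(p.numel() for p in model.parameters())
print("total parameters:", total)
```

A proprietary, API-only model exposes none of this: you see inputs and outputs, but never the tensors themselves.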

You won’t get that transparency from a proprietary LLM.

“You know, one of the things that is a little bit odd about large language models is we don’t quite know how they actually work,” Zemlin said. “And by not knowing how they actually work, it’s very difficult to get the second attribute, which is trust, right? How do we know when hallucinations happen? How do we know where the data is coming from?”

Enterprises Worry about GenAI Security

Enterprises plan to invest in generative AI. What holds them back is security, Zemlin said. He cited new research from the Linux Foundation.

“Almost all of the people that we talked to said that it’s important that the technology that’s used to build generative AI tooling and foundation models be open and be housed in neutral organizations that they can count on for long periods of time for such a fundamental technology as this,” Zemlin said.

Transparency gives people in enterprises the confidence that they need for business decisions and outcomes.

But are open source LLMs really that much of a threat? To humanity, no. To big tech? Well, that depends on how you define open source.
