
Open Source Can Deflate the ‘Threat’ of AI 

Open source can help steer artificial intelligence toward positive outcomes.
Sep 22nd, 2023 6:44am by

BILBAO, SPAIN — AI should not be restricted, controlled, and locked down. Instead, developers working with the generative language models underpinning this revolution should rely on open source, ultimately allowing for positive outcomes that we can only dream about today.

Of course, this position has many naysayers, ranging from politicians with differing agendas to frightened members of the public and other parties, some with good intentions and some with bad.

As Jim Zemlin, the Linux Foundation‘s executive director, noted in his Open Source Summit Europe keynote, Elon Musk was one of over a thousand signers of an open letter earlier this year expressing fear of the revolution getting out of control; the letter proposed a six-month moratorium on AI development beyond what OpenAI had released with ChatGPT.

This is not to downplay how AI models are often biased and fail to take diversity into account, representing very real risks and potentially tragic outcomes today and tomorrow. Still, ill-founded reactions to fears of what could go wrong are numerous.

Zemlin offered a number of substantive reasons and historical examples, including past attempts to restrict cryptography, for why trying to lock down LLMs could prove a costly mistake.

“Recently, we’ve heard from different people around the world, largely folks that already have a lot of capital, a lot of GPUs, and good foundation models that we need to take a six-month pause until we’ve figured it out. We’re even hearing calls from folks who are saying, hey, this large language models technology and advanced AI technology is so powerful that in 20 years in the hands of individual actors, people could do terrible things, such as create violent weapons, massive cyberattacks and so forth,” Zemlin said.

“And what I’m telling you today is that kind of fear and that kind of concern, that the availability of open source large language models would create some terrible outcome, simply isn’t true. Open source always creates sunshine, and that serves as a counterbalance around the code, because it’s not just bad things people do with large language models; it is good things too, like discovering advanced drugs, helping manufacturing become more efficient, and using large language models to create more environmentally friendly building construction. For every action, there can be a reaction, and we’re already seeing open source immediately start to tackle some of these things people are concerned about when it comes to AI.”
