Open Source Can Deflate the ‘Threat’ of AI
BILBAO, SPAIN — AI should not be restricted, controlled, and locked down. Instead, developers working with the generative language models underpinning this revolution should rely on open source, which can ultimately allow for positive outcomes we can only dream about today.
Of course, there are many naysayers to this view, and the examples are numerous, ranging from politicians with different agendas to frightened members of the public and other parties, some with good intentions and some with bad ones.
Open source will help developers achieve great things. Things will change radically, yes. But we need to rely on open source for this fascinating road ahead. That was my takeaway from Foundation Executive Director @jzemlin’s keynote. #OSSummit @linuxfoundation @thenewstack pic.twitter.com/ii6fTRP6jV
— BC Gain (@bcamerongain) September 20, 2023
As Jim Zemlin, the Linux Foundation's executive director, referenced in his Open Source Summit Europe keynote, Elon Musk was one of over a thousand signers of an open letter a few weeks ago expressing fear of the revolution getting out of control. In it, Musk and others proposed a six-month moratorium on developing AI more powerful than what OpenAI had released with ChatGPT.
This is not to downplay how AI models are already often biased and fail to take diversity into account, representing very real risks and potentially tragic outcomes for today and tomorrow. Still, ill-founded reactions to fears of what could go wrong are numerous.
In response to the naysayers, Zemlin offered a number of substantive reasons, along with historical examples involving cryptography, for why attempting to lock down LLMs could prove a costly mistake.
“Recently, we’ve heard from different people around the world, largely folks that already have a lot of capital, a lot of GPUs, and good foundation models that we need to take a six-month pause until we’ve figured it out. We’re even hearing calls from folks who are saying, hey, this large language models technology and advanced AI technology is so powerful that in 20 years in the hands of individual actors, people could do terrible things, such as create violent weapons, massive cyberattacks and so forth,” Zemlin said.
“And what I’m telling you today is that kind of fear and that kind of concern that the availability of open source large language models would create some terrible outcome simply isn’t true. That open source always creates sunshine, and that fear as a counterbalance around the code, because it’s not just bad things people do with large language models, it is good things too, like discovering advanced drugs, helping manufacturing to become more efficient, using large language models to create more environmentally friendly building construction. Like for every action, there can be a reaction, and we’re already seeing open source immediately start to tackle some of these things people are concerned about when it comes to AI.”