Cybersecurity Pioneer Calls for Regulations to Restrain AI
The pace of AI development means the world must develop global regulations to prevent the “escape” of artificial general intelligence (AGI) and should define what counts as “treason” towards humanity, WithSecure’s chief research officer said recently.
At the same time, the CRO, Mikko Hypponen, said calls for a pause in the development of AI are ill-founded because of the risk that totalitarian regimes develop artificial general intelligence ahead of liberal democracies.
Focus on the Real
In the meantime, he added, developers worried about the threat to their jobs from ChatGPT and its peers should focus on those areas where the online world has to interface with the real world, as anything exclusively online will be automated.
Speaking at the company’s recent Sphere23 unconference in Helsinki, Hypponen said the downsides of the current wave of AI were “almost too easy to imagine” and had obvious implications for cybersecurity.
“We know that you can use deep fakes to do scams or business email compromise attacks or what have you.” Current tools gave criminals and other bad actors the ability to generate unlimited personas, which could be used for multiple types of scams.
More broadly, the march of AI also means that whatever can be done purely online can be done through automation and large language models like ChatGPT, he said, which has obvious implications for developers.
However, he said, humans are harder to replace where there is an interface between the real world and online technology. Rather than studying to build software frameworks for the cloud, he said, “You should be studying to build software frameworks for, let’s say, medical interfaces for human health, because we still need the physical world, for humans to work with humans to fix their diseases.”
Looking slightly further ahead, he said that people who worried about the likes of ChatGPT becoming too good, or achieving AGI, “haven’t paid attention”, as that is precisely OpenAI’s declared goal.
This would result in an intelligence explosion when these systems, which are essentially code, become good enough to improve themselves. “And when it’s made a better version of itself, that version can make a better version of itself, which can make a better version of itself, and it will skyrocket when we reach the critical point.”
Getting to AGI “safely and securely” could bring immense benefits, Hypponen said. But if it all goes wrong, “It’s gonna be really bad.”
Hypponen was relatively sanguine about OpenAI’s approach to the dangers. He noted OpenAI’s structure and its focus on security and safety. “They have 100 people in-house doing red teaming, and teams outside doing red teaming against these systems.”
The Escape of AI
But it was incumbent on the world to start putting regulations in place, particularly against the “escape” of AI.
“We should be passing international law and regulation right now, which would sentence people who help AI escape as traitors not just for their country, but for mankind.”
For example, he said, “We must make sure that AI systems or bots or robots don’t get any privacy rights or ownership rights. They must not become wealthy, because if they become wealthy, they can bribe.”
To counter the problem of deep fakes, he said, media houses should be signing source material with a cryptographic key on a file server. And it must always be clear to humans when they are dealing with a machine, rather than a human.
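Hypponen doesn’t specify a signing scheme, but the idea of cryptographically signing source material can be sketched briefly. The snippet below is a simplified illustration using Python’s standard library: a real deployment would use an asymmetric digital signature (such as Ed25519), so that anyone holding the outlet’s public key could verify a file; here an HMAC with a hypothetical newsroom key stands in to show the sign-and-verify flow.

```python
import hashlib
import hmac

# Hypothetical secret key held by the media house. In a real public-key
# scheme, signing would use a private key and verification a public one.
NEWSROOM_KEY = b"example-newsroom-secret-key"

def sign_file(data: bytes, key: bytes = NEWSROOM_KEY) -> str:
    """Return a hex tag binding the file's contents to the newsroom's key."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_file(data: bytes, tag: str, key: bytes = NEWSROOM_KEY) -> bool:
    """Check that the tag matches the file, using a constant-time compare."""
    return hmac.compare_digest(sign_file(data, key), tag)

original = b"raw interview footage bytes..."
tag = sign_file(original)
assert verify_file(original, tag)             # untampered file verifies
assert not verify_file(original + b"x", tag)  # any edit invalidates the tag
```

The point of the sketch is the property Hypponen is after: a deep fake or an edited file no longer matches the published tag, so consumers can distinguish signed source material from fabricated media.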
Most importantly, he said, when missions are passed to advanced frameworks “We must make the systems understand that the most important thing for us is that we can still switch you off. So it’s more important that I can switch you off than for you to complete your mission.”
However, he was skeptical of calls for a pause on AI development, as this would give bad actors a chance to catch up, and he would rather see a responsible organization from a democratic country get there first.
“Because the other option is that Vladimir Putin will be the one getting AGI or China or North Korea or ISIS, and whoever has this technology will win everything… So it has to be done right.”
Foreign policy analyst Jessica Berlin also used the conference to highlight the current cyberspace threat from anti-democratic countries. “We find ourselves right now in a true war between authoritarian systems and democratic systems.” The authoritarian states had vast sums of money and influence at their disposal, she said.
Too often, she said, the democratic states hadn’t even registered the attack. The private sector needed to be a part of the response, she said. “We need private sector companies or ideally a coalition of private sector companies, who are willing to have a global task force to defend democracy in general, elections and the public information space in particular.”
Threat to Mankind?
Hypponen’s comments came amid mounting concern over the potential threat to mankind from AI, with leaders of OpenAI, amongst others, warning the technology could cause human extinction, while other key figures have expressed regret over their roles in developing it.
At the same time, others have suggested the concern is overblown. The Center for Data Innovation, a policy think tank that is part of the Information Technology and Innovation Foundation, which is backed by Amazon, Google, and Microsoft amongst others, in early May characterized “the current panic over generative AI” as climbing towards “the height of hysteria”.
This followed earlier tech panic cycles, it said, such as those sparked by printing technology, the phonograph, and the birth of motion pictures. A rush to regulate could lead to poorly crafted rules and missed opportunities for society, it argued.