
Regulating AI Presents Confounding Issues

As generative AI hype sucks the oxygen out of the room, vital questions remain about managing it for the good of society without stifling innovation.
Jun 29th, 2023 3:00am

For once, the tech industry is calling for regulation. Generative AI has everyone talking, but it's not clear exactly what that regulation should look like, who should create it, or how it would be enforced.

“Perhaps at some point AI law will become a well-developed area, but right now it is the Wild West. It changes almost every week, literally both on the legal side and the technical side,” Van Lindberg, a lawyer and founder of Texas-based OSPOCO, an open source program office as a service company, said during a talk at Open Source Summit North America in Vancouver.

All the talk that AI could pose an existential threat to humanity and calls to pause development of AI because of its risks present their own kind of threat, according to Tom Romanoff, director of the Technology Policy Project at the non-profit Bipartisan Policy Center.

“Because the oxygen is kind of all being taken up by the artificial intelligence conversation, other things are kind of getting put to the side or ignored, but they really are foundational to addressing some of the risks and concerns around AI,” he said.

He's concerned that framing negative use cases as some sort of future sci-fi apocalypse will lead to regulation that stifles innovation, when the technology also has positive uses, such as diagnosing disease faster.

“If you’re a regulator, you’re sitting there hearing all this media chatter around how bad it is, you start looking at ways to put up guardrails,” he said.

“We’re starting to see people introduce regulation and also get the message out there that perhaps it’s not as dire as they thought it was. And let’s start thinking about how we can maintain a competitive edge against China and Russia, who are also using this stuff, and [let’s] try to get something good out of it.”

Focus on Risk

The European Union's draft law, known as the AI Act, is expected to be a model for AI governance around the world. It takes a page from GDPR, the bloc's regulation on data protection and privacy, by focusing on risk. In the United States, the National Institute of Standards and Technology (NIST) has issued its own AI risk management framework. It complements the White House's Blueprint for an AI Bill of Rights, which focuses on safety, transparency, eliminating bias and other factors.

Focusing solely on risk poses its own problems, according to Kamales Lardi, a Swiss-based consultant, author and speaker. She notes that GDPR deals with a more static environment while emerging technologies like generative AI are morphing rapidly.

In assessing risk, she said, to a certain extent regulators would be relying on a certification process that involves self-assessment, with the companies themselves saying what the technology is and what it does.

And with the AI Act, regulators are trying to create a standard regulatory environment across different impacts and use cases for AI, rather than accounting for the possibility that a new, previously unconsidered use case could crop up within weeks or a month, she said.

“Rapid change, that’s going to happen and is already happening. That has not been taken into account. And the depth of understanding for how these technologies are going to impact us and how the regulation should be defined hasn’t been taken into account,” she said.

Added Romanoff: “If you’re only looking at the risk side, then you are not focusing on scalability of a particular technology or program, and you’re not focusing on where there are differences across different sectors. And so when the European Union is kind of doing a kitchen sink approach of regulating all of AI, that’s going to impact sectors that in some situations, the use of AI is pretty benign, and others it’s got a high-risk side of things. But if you classify it all as high risk, then you don’t see the benefits later down the line. So I’ve always said that there needs to be a kind of consideration of the impact of the use of that AI.”

While there does need to be some standardization and risk assessment, that raises a host of other issues, such as the types of data to be used, the data sources and the level of transparency.

Daniel Barber, CEO of data privacy management vendor DataGrail, has raised the issue of consumer consent. What if people don't want their data used in large language models? So far, companies have been using data without asking for consent. If regulators require consent, do those companies have to scrap their models? Remove that data? Or decide that paying a paltry fine is less odious than reworking the models?

That's one of the foundational issues Romanoff was referring to. The United States doesn't have a data privacy law or framework in place, though some states are trying to fill that gap, he said.

Entrenching Big Tech?

One of the issues is that lawmakers, who would be creating the regulation, don’t fully understand AI.

Rep. Jay Obernolte (R-Calif.), who has a master's degree in AI, has been quoted as saying, "Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is. You'd be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes."

The Washington Post has since reported that members of Congress and their staffs are seeking a crash course on technologies like ChatGPT, Microsoft Bing and Google Bard in meetings with tech companies, academics and other experts to fill in the blanks in their understanding. With Silicon Valley titans helping to mold their understanding of these technologies, however, comes the likelihood of undue influence on any regulation to come.

“There’s a lot of talk around the idea of regulating AI as a way to further entrench established industry,” Romanoff said. “And open source code allows for startups and small, medium-sized businesses to catch up essentially, to the technical divide that they otherwise could not have competed with against the kind of big tech or the big companies that are out there. And so by creating this general approach, it’s commercial versus open source at the same risk, I can see what they were trying to do, but it further entrenches some of the folks that are already established players in the space and creates a barrier to entry for a lot of others.”

Who Should Regulate?

There's also been talk that U.S. lawmakers, not being experts, shouldn't be the ones doing the regulating at all. But if not government, then who? Should the industry police itself? We know that never goes well.

OpenAI CEO Sam Altman has been among those advocating for a new FDA-like federal agency to grant licenses to create AI models above a certain threshold of capabilities, similar to that in the pharmaceutical industry, with the authority to also revoke those licenses if the models don’t meet safety guidelines set by the government.

Lardi has suggested a consortium approach, similar to those used with blockchain, in which government works with experts in the field to create regulations transparently.

“One of the key concerns I have is the concentration of power, the regulation forcing providers of AI as well as companies that potentially could use AI, to regulate themselves to set up certain bases for compliance. Not every organization can afford that, [with] the level of requirements so stringent that you need to have an internal compliance team, you need to have a data strategy, you need to have certain reporting and record keeping, you need to have, if you’re not in the EU market, you need to have a local partner that’s established enough to be able to support you with these things. Startups and smaller companies will not be able to do that,” she said.

She believes regulation of certain lower-level uses of AI can be automated, but higher-risk uses will require human involvement. And there will always be bad actors. What will be the recourse against those who use AI for harm? The whole issue of how any regulations will be enforced is still up in the air.

If a regulatory agency is created, it needs to be given the authority to do the job, not have pieces of it parceled out to different agencies, Romanoff said, pointing to “like 19 different agencies that have some piece of regulating some aspect of the internet.”

I asked ChatGPT about that. It pointed out there are at least five, but added that various agencies have overlapping jurisdiction.

All Hat, No Cattle

While the White House and Senate Majority Leader Charles E. Schumer (D-N.Y.) are sending out rallying cries to get something on the books, Romanoff noted, “There’s a long, long history of the tech nerds screaming about the impact of AI and where it’s going. And this is the first time that there’s kind of a ‘I told you so’ moment, so we have a lot of catching up to do.”

When asked how he would regulate AI, if it were all up to him, Romanoff cited three things:

  • Not treating all industries the same. The Bipartisan Policy Center is advocating a use-case-specific “hard-law” and “soft-law” (frameworks) approach to balance risk and reward in AI regulation. He said he would create regulation specific to healthcare, finance and other industries, tailored to how each uses the technology.
  • Focusing more on near-term uses and misuses, such as people having their voices or images replicated for ransom. We also need to consider how AI will augment some jobs and displace others, and work to mitigate the ill effects of that.
  • Putting in place the missing regulations on data privacy and content moderation that are foundational to building a comprehensive legal framework for AI.

Creating regulation won’t be a one-and-done affair either, he pointed out.

“If there are regulations put in place, I think it’s very important that there is a system to regularly update them or reevaluate it, because we don’t know the impact of AI yet,” he said. “And it’s because we don’t know what the impact is, we need to have transparency and accountability so that we can update those regulations so we’re not stifling innovation in the long term, and we can make changes as needed.”
