Congress and AI

Congress has never been the quickest off the mark when it comes to making laws dealing with technology. Now, even as AI takes over creative writing and art, Congress continues to sit idle.
As legislators struggle to understand generative AI programs such as Microsoft Bing, ChatGPT, and Google Bard, some of the more technology-oriented lawmakers are apprehensive about a repeat of Congress’s unpreparedness in responding to the previous major tech wave: social media. Those worries, however, don’t appear to be leading to action.
True, there’s a backlash now against letting tech companies keep Washington at arm’s length with promises of “self-regulation” on critical issues such as privacy protection, child safety, disinformation, cryptocurrency, and data portability. But harsh words mean little.
For example, President Joe Biden believes an effective regulatory framework should be put in place at the start of each new technology wave, so that tech companies build consumer protections into their products from the beginning rather than adding them as an afterthought. But bipartisan calls to regulate technology appear to be going nowhere.
Algorithmic Accountability
Take, for instance, S.3572, the Algorithmic Accountability Act of 2022. The bill, according to its chief author, Senator Ron Wyden (D-Ore.), was meant to bring new transparency and oversight to the software, algorithms, and other automated systems used to make critical decisions about nearly every aspect of Americans’ lives.
Wyden said, “If someone decides not to rent you a house because of the color of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad. Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school.”
Sounds good, doesn’t it? It would have required tech companies to submit “impact assessments of automated decision systems and augmented critical decision processes” to the Federal Trade Commission. However, it never made it out of committee in 2022, and it hasn’t even been reintroduced in the 2023 Congress.
Written by ChatGPT
More recently, in January, Congressman Ted Lieu (D-Calif.) submitted a brief resolution, written by ChatGPT, directing the House of Representatives to conduct a broad study of generative AI technology “to guarantee that the development and deployment of AI are safe, ethical, and respect the rights and privacy of all Americans.” But this non-binding resolution doesn’t regulate anything. It’s simply a worry written into the Congressional Record.
The real problem, explained Representative Jay Obernolte (R-Calif.) in The New York Times, is that most lawmakers are completely clueless about AI. Obernolte, who has a master’s degree in AI, went on, “Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is. You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.”
Ha. I’m a tech journalist and wouldn’t be surprised at all. I’m still explaining to people that “password” isn’t a good password. How AI, at its current state of development, can be dangerous is beyond most people’s understanding.
For example, Alexander Hanff, a technologist and privacy expert, tried ChatGPT on that ever-popular question: “Who am I?” Or, in this case, “Who’s Alexander Hanff?” As with many such questions, it got some things right and some things wrong. ChatGPT, as I think most of us know, has a bad habit of making things up.
This time, however, ChatGPT took a very dark turn. It insisted that Hanff was dead. Not only that, it even made up a fake URL to The Guardian as “proof” that he was dead.
Now imagine, if you will, that a bank relies on ChatGPT or Bing to check on someone’s mortgage application. With everyone insisting you can rely on ChatGPT, I can see this happening very easily. But if ChatGPT “thinks” you’re dead, that’s the end of your mortgage loan, and you’ll probably find more trouble heading your way when you’re reported for credit fraud.
With the state of AI today, we are a long way from being able to trust it. True, ChatGPT is technically just a large language model: it generates plausible-sounding text rather than verified facts, so we really shouldn’t expect accuracy when we ask it factual questions. But that’s exactly what we are doing. Someone needs to monitor and regulate AI so its dangers don’t overwhelm its advantages. It’s not going to be the government.
At this point, I don’t know who it will be. Given big technology’s track record of self-regulation, I don’t see the Googles and Microsofts of the world doing it either. I just know that, as we move forward, we should all be extremely wary of trusting AI, whether it’s to do a school paper or determine if Joe or Ginny should be put on the real drug or a placebo in a drug trial.