Blumenthal (And AI Software) Delivers Opening Remarks at Senate Hearing on Oversight of Artificial Intelligence

“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” said Blumenthal

[WASHINGTON, DC] – U.S. Senator Richard Blumenthal (D-CT), Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, delivered opening remarks at today’s hearing titled, “Oversight of AI: Rules for Artificial Intelligence.” The hearing, featuring OpenAI CEO Sam Altman as a witness, was the beginning of an effort to “write the rules of AI,” said Blumenthal.

In his opening remarks, Blumenthal played an AI-generated audio recording that mimicked his voice and read a ChatGPT-generated script about the hearing. 

“If you were listening from home, you might have thought that voice was mine and the words from me, but…the audio was an AI voice cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing, and you heard just now the result,” said Blumenthal.

Blumenthal said the rapid advancement of AI shows that “we are on the verge really of a new era,” while warning that such technologies are “no longer fantasies of science fiction.” While AI holds great promise for the future when it comes to curing diseases and developing new understandings of science, Blumenthal warned of the potential harms, including weaponized disinformation, housing discrimination, the harassment of women, impersonation fraud, voice cloning, and deep fakes.

“These are the potential risks despite the other rewards and for me, perhaps the biggest nightmare is the looming new industrial revolution. The displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in skill training and relocation that may be required,” said Blumenthal.

Blumenthal called on Congress to address these new challenges, saying, “We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment…Now we have the obligation to do it on AI before the threats and the risks become real.”

Specifically, Blumenthal said efforts should begin by focusing on transparency, limitations on use, and accountability. Blumenthal said, “AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access,” and that “we ought to impose restrictions, or even ban their use, especially when it comes to commercial invasions of privacy for profit and decisions that affect people’s livelihoods.”

Blumenthal concluded by stressing the importance of adapting Section 230 so that when companies or clients of AI cause harm, “they should be held liable…be responsible for the ramification of their business decisions.”

“The AI industry doesn’t have to wait for Congress,” Blumenthal continued. “I’m hoping that we’ll elevate rather than have a race to the bottom. And I think these hearings will be an important part of this conversation.”

Video of Blumenthal’s opening remarks can be found here. A transcript is available below.

U.S. Senator Richard Blumenthal (D-CT): Welcome to the hearing of the Privacy, Technology, and the Law Subcommittee. I thank my partner in this effort Senator Hawley, Ranking Member, and I briefly want to thank Senator Durbin, Chairman of the Judiciary Committee, and he will be speaking shortly.

This hearing is on the Oversight of Artificial Intelligence. It’s the first in a series of hearings intended to write the rules of AI. Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past.

And now for some introductory remarks. ‘Too often, we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.’

If you were listening from home, you might have thought that voice was mine and the words from me, but in fact that voice was not mine, the words were not mine, and the audio was an AI voice cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing, and you heard just now the result.

I asked ChatGPT why did you pick those themes and that content and it answered, and I’m quoting, ‘Blumenthal has a strong record in advocating for consumer protection and civil rights. He has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision-making. Therefore, the statement emphasizes these aspects.’ Mr. Altman, I appreciate ChatGPT’s endorsement.

In all seriousness, this apparent reasoning is pretty impressive. I am sure that we’ll look back a decade from now and view ChatGPT and GPT-4 like we do the first cellphone, those big clunky things that we used to carry around. But, we recognize that we are on the verge really of a new era. The audio and my playing it may strike you as curious or humorous, but what reverberated in my mind was what if I asked it and what if it had provided an endorsement of Ukraine surrendering or Vladimir Putin’s leadership. That would have been really frightening and the prospect is more than a little scary, to use the word Mr. Altman that you have used yourself. And I think you have been very constructive in calling attention to the pitfalls as well as the promise. And that’s the reason why we wanted you to be here today. And we thank you and our other witnesses for joining us.

For several months now, the public has been fascinated with ChatGPT, DALL-E, and other AI tools. These examples like the homework done by ChatGPT or the articles and op-ed that it can write feel like novelties, but the underlying advancements of this era are more than just research experiments. They are no longer fantasies of science fiction. They’re real, present. The promises of curing cancer, developing new understandings of physics and biology, or modeling the climate and weather, all very encouraging and hopeful.

But we also know the potential harms. And we’ve seen them already – weaponized disinformation, housing discrimination, the harassment of women and impersonation fraud, voice cloning, deep fakes. These are the potential risks despite the other rewards and for me, perhaps the biggest nightmare is the looming new industrial revolution. The displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new industrial revolution in skill training and relocation that may be required.

And already industry leaders are calling attention to those challenges. To quote ChatGPT, “this is not necessarily the future that we want.” We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploiting children creating dangers for them, and Senator Blackburn and I and others like Senator Durbin on the Judiciary Committee are trying to deal with it, the Kids Online Safety Act. But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.

Sensible safeguards are not in opposition to innovation. Accountability is not a burden. Far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science but also in promoting our democratic values. Otherwise in the absence of that trust I think we may well lose both.

These are sophisticated technologies, but there are basic expectations common in our law. We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness.

Limitations on use. There are places where the risk of AI is so extreme that we ought to impose restrictions, or even ban their use, especially when it comes to commercial invasions of privacy for profit and decisions that affect people’s livelihoods.

And of course, accountability, reliability. When AI companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes. For example, Section 230. Forcing companies to think ahead and be responsible for the ramification of their business decisions can be the most powerful tool of all.

Garbage in, garbage out. The principle still applies. We ought to beware of the garbage, whether it’s going into these platforms or coming out of them. And the ideas that we develop in this hearing I think will provide a solid path forward. I look forward to discussing them with you today.

And I will just finish on this note. The AI industry doesn’t have to wait for Congress. I hope there are ideas and feedback from this discussion and from the industry and voluntary action such as we’ve seen lacking in many social media platforms, and the consequences have been huge.

So I’m hoping that we’ll elevate rather than have a race to the bottom. And I think these hearings will be an important part of this conversation. This one is only the first. The Ranking Member and I have agreed there should be more and we’re going to invite other industry leaders, some have committed to come, experts, academics, and the public we hope will participate. And with that I will turn to the Ranking Member Senator Hawley.

-30-