“I challenge Big Tech to come forward and be constructive.”
[WASHINGTON, DC] – U.S. Senator Richard Blumenthal (D-CT), Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, delivered opening remarks at today’s hearing on “Oversight of AI: Insiders’ Perspectives.” During the hearing, Blumenthal discussed his bipartisan legislative framework with Ranking Member U.S. Senator Josh Hawley (R-MO), which would “impose a measure of accountability” on Big Tech to better protect against AI’s potential harms.
Blumenthal highlighted OpenAI CEO Sam Altman’s previous testimony to the Subcommittee, including his warnings about AI’s potential pitfalls: “We have heard from industry leaders responsible for innovation and progress in AI and they share their excitement for the future but they also warned about serious risk. Sam Altman, for example, who sat where you are now, shared his worst fear that AI could ‘cause significant harm to the world.’”
“Despite those self-professed fears of Sam Altman and others, Big Tech companies and leading AI companies are rushing to put sophisticated products into the market,” Blumenthal continued. “The pressure is enormous. Billions and billions of dollars—careers of smart, motivated people are on the line. And what seemed to be a kind of slow walk on AI has turned into literally a gold rush.”
Blumenthal highlighted the urgent need for action to rein in AI: “Companies, even as we speak, are cutting corners and pulling back on efforts to ensure AI systems do not cause the kinds of harm even Sam Altman thought were possible. We are already seeing the consequences. Generative AI tools are being used by Russia, China, and Iran to interfere in our democracy. Those tools are being used to mislead voters about elections and spread falsehoods about candidates.”
“I challenge Big Tech to come forward and be constructive here,” concluded Blumenthal. “They have indicated they want to be. But some kind of regulation to control and safeguard the people of the world has to be adopted.”
Video of Blumenthal’s opening remarks can be found here. The full transcript of Blumenthal’s statement is available below.
U.S. Senator Richard Blumenthal (D-CT): I welcome the Ranking Member, as well as my colleagues Senator Durbin, who is Chair of the Judiciary Committee and Senator Blackburn, my partner on the Kids Online Safety Act—members of this body who have a tremendous interest in the topic that brings us here today, and we are very, very grateful to this group of witnesses who are among the main experts in the country, and not only that, but experts of conscience and conviction about the promise and the dangers of artificial intelligence. We welcome you, and we thank you for being here.
We have had hearings before in this Subcommittee. It seems like years ago, and in fact, a short time on artificial intelligence may seem like years in terms of the progress that can be made. We have heard from industry leaders responsible for innovation and progress in AI, and they share their excitement for the future, but they also warned about serious risk. Sam Altman, for example, who sat where you are now, shared his worst fear that AI could “cause significant harm to the world.” But as he sat with me in my office and described a less advanced version of his technology, he assured me that there were going to be safeguards, red teams, all kinds of guardrails that would prevent those dangers.
We are here today to hear from you because every one of the witnesses that we have today are experts who were involved in developing AI on behalf of Meta, Google, OpenAI, and you saw firsthand how those companies dealt with safety issues and where those companies did well and where they fell short, and you can speak to the need for enforceable rules to hold these powerful companies accountable. And in fact, Senator Hawley and I have a draft framework that would impose those kinds of safeguards and guardrails and impose a measure of accountability, and we are open to hear from you about ways it can be strengthened, if necessary, or improved.
But my fear is that we are already beginning to see the horse out of the barn. Mr. Harrison, your testimony, I think, assured us the horse was not out of the barn, but my fear is that we will make the same mistake we did with social media, which is too little too late. That is why the work that Senator Blackburn and I are doing on kids online safety is so important to accomplish with urgency.
Despite those self-professed fears of Sam Altman and others, Big Tech companies and leading AI companies are rushing to put sophisticated AI products into the market. The pressure is enormous. Billions and billions of dollars, careers of smart, motivated people are on the line. And what seemed to be a kind of slow walk on AI has turned into literally a gold rush. We are in the wild west, and there is a gold rush. The incentives for a race to the bottom are overwhelming, and companies, even as we speak, are cutting corners and pulling back on efforts to make sure that AI systems do not cause the kinds of harm that even Sam Altman thought were possible.
We are already seeing the consequences. Generative AI tools are being used by Russia, China, and Iran to interfere in our democracy. Those tools are being used to mislead voters about elections and spread falsehoods about candidates. So-called “face swapping” and “nudify” apps are being used to create sexually explicit images of everyone from Taylor Swift to middle schoolers in our educational institutions around the country. One survey found that AI tools are already being used by preteens to create fake sexually explicit images of their classmates. That’s preteens. You know, I don’t have to expand on this point because everyone in this room, probably by this point, everybody in America who is watching or seeing anything on the news, has become familiar with these abuses, and voice-cloning software is being used in imposter schemes targeting senior citizens, impersonating family members, defrauding those seniors of their savings.
This fraud and abuse is already undermining our democracy, exploiting consumers, and disrupting classrooms, but it is preventable. It is also a preview into the world that we will see expanding and deepening without real enforceable rules. And now, artificial general intelligence or AGI, which I know our witnesses are going to address today, provides even more frightening prospects for harm. The idea that AGI might in 10 or 20 years be smarter, or at least as smart as a human being, is no longer that far out in the future. It is very far from science fiction. It is here and now. One to three years has been the latest prediction, in fact, before this Committee. And we know that artificial intelligence that is as smart as human beings is also capable of deceiving us, manipulating us, and concealing facts from us, and having a mind of its own when it comes to warfare, whether it is cyber war or nuclear war or simply war on the ground in the battlefield.
So we have no time to lose to make sure that those horses are still in the barn. And I am going to abbreviate the remarks that I was going to make because we have been joined by a number of our colleagues, and I want to get to the testimony and give Senator Hawley a chance to comment. But let me just say for the benefit of others in this room, I know our witnesses are familiar with our legislation, that the principles of this framework include licensing, establishing a licensing regime and transparency requirements for companies that are engaged in high-risk AI development. It is about oversight, creating an independent oversight body that has expertise with AI and works with other agencies to administer and enforce the law, watermarking, rules around watermarking and disclosure when AI is being used, enforcement, ensuring that AI companies can be held liable when their products breach privacy, violate civil rights or cause other harm.
And I will just emphasize the last of these points, enforcement, for me as a former law enforcer—I served as Attorney General for my state and federal prosecutor, U.S. Attorney—for most of my career, it is absolutely key, and I think to Senator Hawley as well, as a former Attorney General, and to many others like Senator Klobuchar, who also was an enforcer. I am very hopeful that we can move forward with Senator Klobuchar’s bill on election security—she has done a lot of work on it, and it is an excellent piece of legislation, and I salute her for her leadership—as well as Senator Durbin’s and Senator Coons’ bill on deepfakes. We have a number of proposals like them that are ready to become law, and I hope that a hearing like this one will generate the sense of urgency that I feel for my colleagues as well.
You can see from the membership today that it is bipartisan. Not just Senator Hawley and myself, but literally bipartisan, I think, across the board—just as the vote on the Kids Online Safety Act was 91-3 in the United States Senate to approve it. I think we can generate the same kind of overwhelming bipartisan support for these proposals.
And finally, what we should learn from social media, that experience is don’t trust Big Tech. I think most of you very explicitly agree, we cannot rely on them to do the job. For years, they said about social media, “trust us.” We have learned we can’t and still protect our children and others. And as one of you said, they come before us, they say all the time, “We are in favor of regulation. Just not that regulation.” Or they have other tricks that they are able to move forward with, the armies of lobbyists and lawyers that they can muster.
So, we ask for their cooperation. I challenge Big Tech to come forward and be constructive. They have indicated they want to be. But some kind of regulation to control and safeguard the people of the world—it is not just America—has to be adopted, and I hope that this hearing will be another step in that process, and I turn to the Ranking Member.
-30-