Father of ChatGPT Attends Hearing, Embraces AI Regulation

Publisher: EAIOT | Date: 2023-05-17 | Category: AI

On May 16, the U.S. Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Rules for Artificial Intelligence," hoping to take the first step toward AI regulation.


From Bill Gates and Mark Zuckerberg to TikTok's Shou Zi Chew, no tech giant has had an easy time "participating" in such a hearing.

This time, it was the turn of Sam Altman, CEO of OpenAI, who made his first appearance at a U.S. congressional hearing.

There were three witnesses: in addition to Altman, Christina Montgomery, IBM's vice president and chief privacy and trust officer, and Gary Marcus, professor emeritus at New York University.

At the beginning of the hearing, Senator Richard Blumenthal played a special recording in which "he" discussed the risks of emerging technologies such as ChatGPT. He then explained that the recording's script had been written by ChatGPT and the voice synthesized by an AI tool.

The demonstration pointed to a possibility: having "him" discuss ChatGPT seems harmless enough, but if the topic were something like the war in Ukraine, such a fake recording could seriously damage a political figure.

Since OpenAI launched ChatGPT at the end of last November, AIGC technology, including large language models and AI image generation, has set off a craze. Microsoft, Google, and other tech giants have launched an AI arms race, new players keep emerging, money keeps pouring in, and changes are under way in fields as varied as search, image generation, photography, and gaming.

But problems lurk in the background. The "hallucination" problem that ChatGPT has yet to shake off has led to misinformation; fake news, voice and face deepfakes, and phishing emails produced by bad actors have all become a piercing noise in the AI boom.

Altman's calls for regulation have been going on for some time, and this hearing was a "friendly discussion" that produced few answers, a prelude to AI regulation. As CEO of OpenAI, the company that started it all, Altman was not the only witness at the hearing, but he was still destined to face enormous pressure.


During the three-hour hearing, Altman showed his full sincerity. In a change from his usual casual style, he attended in a dark blue suit and expressed a strong welcome for regulation.

"I think if this technology runs off the rails, that will create big problems. We want to be forthright about that," he said. "We want to work together to prevent that from happening."

Specifically, Altman offered three suggestions:

1. Create a new government agency responsible for licensing large AI models, with the power to revoke licenses from companies that fail to meet government standards.

2. Create a set of safety standards for AI models, including assessments of their dangerous capabilities.

3. Require independent experts to audit the models' performance on various metrics.

This is Altman's most detailed statement to date on what rules should govern tools like ChatGPT. Notably absent from his proposals was any requirement for transparency about AI models' training data, something called for by Marcus, another witness at the hearing.

On other issues, Altman argued both sides, praising the technology while also pointing out its threats.

On whether ChatGPT will replace people's jobs, for example, Altman said the technology will automate away some jobs but also create new ones. And with the U.S. election approaching, he expressed concern about the possibility of AI being used to manipulate voters' views.

Altman has repeatedly expressed concern about the technology before, even while constantly emphasizing the benefits such tools bring to the world.

In an interview in March, he said he was concerned about the misuse of ChatGPT: "Since these models are getting better at writing computer code, they could be used for offensive cyberattacks."

Even when besieged, Altman nodded before shaking his head.

In late March, Musk and nearly a thousand other tech figures signed an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4, to avoid the potential risks they pose to society and humanity. It also suggested that governments should step in if the pause could not be enacted quickly.

Altman took a stand a few weeks after the letter raised concerns, saying he "really agreed" with some of its main points, but he denied that OpenAI was training GPT-5 and used that to rebut the letter's technical premise.

During the hearing, Altman was a model "good student," agreeing, elaborating, self-critiquing, and making suggestions, all between smiles.

After the hearing, Richard Blumenthal, who presided over it, said Altman's testimony was "night and day" compared with that of other CEOs.

"Not just in rhetoric, but in actual actions and his willingness to engage and commit to specific actions," he said. "Some major technology companies have signed consent decrees and then violated them, a far cry from the kind of cooperation Sam Altman promised. Considering his past record, I think that's pretty sincere."


It is hard to say whether Altman's "warm embrace" of regulation is genuine, a cooperative gesture, a strategy of retreating in order to advance, or all of the above.

The sharpest criticism of OpenAI came from New York University professor Marcus, another witness. He opened the hearing by citing OpenAI's original mission statement: to advance AI for the benefit of all humanity, unencumbered by financial pressures. Now, Marcus said, OpenAI has its investor Microsoft to "thank" for that.

"Corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability are creating a perfect storm," Marcus said. It is worth noting that Marcus also signed the thousand-person open letter calling on OpenAI to hit the pause button.

Nonetheless, the hearing was not tense overall, and lawmakers were friendly to Altman. That is inseparable from Altman's own efforts.

Altman has long worked at building relationships and keeping lines of communication open. In a March interview, he revealed that he was already in touch with some government officials.

Earlier this month, Altman, along with the CEOs of Google and Microsoft, met with U.S. Vice President Kamala Harris. And on the eve of this hearing, Altman met privately with about 60 members of Congress from both parties.

He had dinner with the lawmakers, introducing and demonstrating ChatGPT, and several lawmakers interviewed by CNBC spoke highly of Altman's performance.

From the need to regulate AI to the establishment of an independent agency, Altman and the legislators seated across the hearing room had already reached consensus, so the agreement on the hearing floor came as no surprise.

An interesting detail: on most issues, IBM's Christina Montgomery and Altman advocated much the same positions, such as that AI needs to be regulated without stifling innovation, and both called for "precision regulation" of specific AI use cases rather than of the technology itself.

But on the question of creating a new specialized government agency, Montgomery did not think one was necessary, a position a senator immediately rebuked: "I don't see how you can say we don't need an agency to deal with the most transformative technologies."

During the hearing, a senator even asked Altman whether he would be qualified to lead a federal AI regulator. Altman declined, saying he "loves his current job," but promised to send lawmakers a list of suitable candidates.

The Washington Post remarked on this episode that "many senators seemed eager to trust Altman's ability to regulate himself."


The friendly atmosphere also stemmed from the hearing's very nature. It was not a pile-on against one company but an exchange of ideas, with U.S. regulators determined to act before emerging technologies "go terribly wrong."

When Zuckerberg first sat before a congressional hearing in 2018, his social media platform Facebook had launched 14 years earlier. Facebook was embroiled in the Cambridge Analytica scandal, which the company's chief technology officer said affected 87 million users.

That was two days and ten hours of grilling, with members of Congress in open opposition to Zuckerberg.

The same antagonism ran through the 2019 congressional hearings after Zuckerberg's push to launch the Libra cryptocurrency, the 2020 antitrust hearings (which gathered the four tech giants Google, Facebook, Amazon, and Apple), and the politically charged hearings of TikTok's Shou Zi Chew.

Going back even further, the 1998 antitrust hearing attended by Microsoft founder and then-CEO Bill Gates was full of confrontation and tug-of-war, and the ensuing case nearly led to Microsoft being broken up.

Now U.S. lawmakers see another technology wave after the PC and the mobile internet, and this time they want to get ahead of AI so as not to repeat past mistakes.

Acting too late and too little in the social media era has become a lingering regret for U.S. regulators. Throughout the hearing, legislators repeatedly voiced that regret.

"Section 230" also came up several times during the hearing. The provision comes from the U.S. Communications Decency Act of 1996 and has shielded technology companies legally for many years. Its two main effects: internet companies are not liable for content posted by users, and social media platforms are not penalized for removing content they find objectionable or inappropriate.

"The result is that the internet is flooded with toxic content that exploits children and puts them in danger. Congress failed to seize the moment on social media. Now we have an opportunity to do so with AI, before the threats and risks become reality," Senator Blumenthal said.

With key companies calling for regulation and lawmakers in regulatory consensus, everything looks promising. In reality, though, it will be extremely challenging for the U.S. Congress to implement the recommendations made at this hearing.

In the past, regulation of tech companies has repeatedly been derailed by partisan differences: whether Section 230 should be repealed or updated, and how, remains unresolved to this day.

For Altman, the "premiere" of this most brutal reality show has come to a smooth end; where AI regulation goes from here is still a long road.

Tags: AI, ChatGPT