U.S. proposes to strengthen legislation and enforcement to regulate AI development

Publisher: EAIOT Time: 2023-05-23 Category: AI

The explosion of generative artificial intelligence (AI) tools such as ChatGPT, along with resulting disruptions such as the proliferation of false information, bias, and changing workforce structures, has raised concerns within the U.S. federal government, and lawmakers and regulators want to take concrete action to address those concerns, The Hill reported in a recent story.


Raising two key issues

The Hill website reports that the growth of AI could raise two key issues.

On the one hand, the recent rise of generative AI tools raises issues related to the spread of false information. On the other hand, AI that powers automated systems could lead to discrimination.

The British magazine New Scientist also points out that ChatGPT and other chatbots often make factual errors, citing entirely fictitious events or articles, fabricating sexual harassment scandals, and falsely accusing real people. The use of ChatGPT has also led to data privacy scandals involving the disclosure of confidential company data. In addition, AI-generated images, audio and even video can enable large-scale "deepfake" disinformation, as evidenced by fake AI-generated photos of former U.S. President Donald Trump being arrested and of Pope Francis in a stylish white down jacket.

Calls for stronger enforcement

The Federal Trade Commission, the Civil Rights Division of the Department of Justice, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission (EEOC) recently issued a joint statement saying that, in light of the increasing use of AI in a range of services from housing to health care, they are committed to strengthening enforcement of existing laws, including new regulations aimed at addressing the potential for AI discrimination and setting rules for its development.

EEOC Chair Charlotte Burrows said the use of advanced technologies, including AI, must comply with federal law.

Of course, the agencies' joint statement focuses on automated systems that use AI, not generative AI like ChatGPT.

Even so, as chatbots like ChatGPT soar in popularity, as Google and other companies continue to launch competing products, and as a group of entrepreneurs, including Tesla and Twitter CEO Elon Musk, race to enter the industry, the potential for upheaval underscores the urgent need for policymakers to take relevant measures.

To address the risks posed by AI, the U.S. Congress will weigh enacting new regulations, and agencies will step up efforts to enforce existing laws. Even where existing laws apply, they may be difficult to enforce because of the way AI systems work. For example, algorithmic hiring tools may lead to discrimination, but it is difficult for employees to know whether the system is discriminating against them. Agencies therefore need to address not only how to apply the law, but also the challenges of the enforcement process itself, and to understand how these AI systems will impact the real world.

Assistant Attorney General for Civil Rights Kristen Clarke also emphasized that AI poses a significant threat with respect to the various forms of discrimination arising in today's society, and that these issues require more in-depth study and review by policymakers and others.


Security Management Imperative

U.S. senators unveiled a proposal earlier this month that would create a framework for AI regulation aimed at improving transparency and accountability. Lawmakers are also poised to take action to pressure industry over the risks associated with the rise of AI technology.

The Senate Intelligence Committee recently sent letters to the chief executives of tech companies including OpenAI (developer of ChatGPT), Meta Platforms, Google, Anthropic and Microsoft, asking how they are addressing security risks as they develop large-scale AI models.

With AI increasingly used across most fields, and with large language models steadily being integrated into a range of existing systems from health care to the financial sector, the letter reads, there is an urgent need to emphasize security concerns in the use of AI.

In response to questions from some senators about how AI is being used, a spokeswoman for the Consumer Technology Association said the association has been "contributing to AI policy, standards and framework development" with its members.

Craig Albright, vice president of U.S. government relations for BSA | The Software Alliance, said Congress could require companies to develop risk management plans, conduct risk assessments of high-risk uses of AI and define what constitutes a high-risk case, while companies would need to conduct impact assessments and design evaluations to ensure they are doing the right thing.
