ChatGPT Opens Pandora's Box; Can AIGC "Close the Loophole"?
On the first day of May, the global AI spotlight turned to Geoffrey Hinton, the so-called "Godfather of AI." That same day, the New York Times was the first to report Hinton's departure from Google. Hinton said he left Google so that he could speak freely about the dangers posed by artificial intelligence.
Apple co-founder Steve Wozniak has also expressed concern about AI spreading disinformation: "If someone wants to lie to you, with AI technology, deception becomes much easier."
Such farces have already played out. Earlier, a George Washington University law professor was inexplicably branded a "sexual harasser" because he appeared on a ChatGPT-generated list of "legal scholars with a history of sexual harassment." In February this year, news that "the Hangzhou government will cancel the motor vehicle license-plate restriction policy on March 1" also circulated online; Hangzhou police later confirmed that it was fake news written with ChatGPT.
Regulation plays catch-up
Right now, regulatory concern about generative AI is spreading around the world. A few days ago, the U.S. White House announced its first AI regulatory plan, under which the National Science Foundation will allocate $140 million to launch seven new national AI research institutes.
Immediately afterwards, President Joe Biden attended an AI meeting at the White House with the CEOs of leading AI companies, including Google, Microsoft, and OpenAI.
After the meeting, Vice President Harris said in a statement that AI technology has the potential to improve lives but could also raise security, privacy, and civil rights concerns. She told the tech executives that it was their responsibility to ensure the safety of their AI products and that the government was open to legislation on AI.
The U.K.'s Competition and Markets Authority has also confirmed that it is conducting a review of the AI market, covering foundation models, including large language models and generative AI. And last month, members of the European Parliament agreed on a proposal for an Artificial Intelligence Act that would impose stricter regulatory requirements on AI models.
It is believed that regulation of the AIGC industry still faces outstanding problems: an inadequate system of relevant laws, regulations, and standards; the lack of a top-down regulatory framework with multi-sector collaboration; and backward regulatory technology. Relevant government departments need to establish and improve laws, regulations, and standards for the development and application of AIGC as soon as possible, build or strengthen ethical review and oversight systems, tighten supervision of technology companies, application scenarios, and users, and create a collaborative governance platform with multi-party participation, so as to promote the healthy and sustainable development of the AIGC industry.
"There are also some difficult issues in regulating generative AI, such as how to define the platform's responsibility, how to determine whether content comes from a real human or an AI, and the difficulty of collecting evidence." The relationship between regulation and innovative technologies and business models will always be one of pursuit: the technology comes first, risks then emerge, and regulation follows. Regulation that takes the initiative can, in turn, be more conducive to the industry's development.
Generative AI relies on large-model training and plays an important role in certain application areas, in some cases even approaching the level of a professional, enough to pass fabrication off as truth. From the perspective of communication, social media, and entertainment, especially on content platforms, this can easily create problems.
In response to the difficulty of distinguishing fiction from reality in generative AI content, some technical methods have been proposed, such as synthetic-text identification and digital watermarking, but difficulties remain.
Machine-written and human-written text have few distinguishing features: machine-synthesized text can follow the rules of human writing well from both structural and semantic perspectives, while a real person's writing can be variable and even poorly structured, so it is difficult to tell from the text alone whether it was generated by ChatGPT. Digital watermarking, moreover, requires the watermark to be embedded at the moment the content is generated, which raises technical implementation issues.
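To make the watermarking idea concrete, here is a minimal toy sketch, not OpenAI's or any vendor's actual scheme. It follows the general "green list" approach discussed in the research literature: at generation time the model restricts itself to a pseudo-random half of the vocabulary seeded by the previous token, and a detector later counts how often consecutive tokens land in that half. Unwatermarked text should score near 0.5; watermarked text scores much higher. All names here (`green_set`, `green_fraction`) are invented for this illustration.

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically pick the 'green' half of the vocabulary, seeded by the previous token."""
    greens = set()
    for word in vocab:
        digest = hashlib.sha256(f"{prev_token}|{word}".encode()).digest()
        if digest[0] % 2 == 0:
            greens.add(word)
    return greens

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: fraction of tokens that fall in the green set chosen by their predecessor."""
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_set(prev, vocab)
    )
    return hits / max(1, len(tokens) - 1)

random.seed(0)
vocab = [f"w{i}" for i in range(50)]

# Simulate a watermarked generator: always sample from the green set.
marked = ["w0"]
for _ in range(100):
    marked.append(random.choice(sorted(green_set(marked[-1], vocab))))

# Simulate ordinary (unwatermarked) text: sample uniformly from the vocabulary.
plain = [random.choice(vocab) for _ in range(101)]

print(green_fraction(marked, vocab))  # 1.0: every token was drawn from the green set
print(green_fraction(plain, vocab))   # near 0.5
```

This also illustrates the implementation issue the text mentions: the bias must be injected at generation time, so only the party running the model can watermark, and light paraphrasing of the output can erode the signal.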