Most ChatGPT-generated code has security vulnerabilities
Ever since ChatGPT was introduced, people have been impressed by its "power". However, several researchers in Canada have recently found that despite its programming capabilities, most of the code ChatGPT writes is insecure, and ChatGPT does not proactively warn users about the problems in that code.
This may mean that coders worried about losing their jobs to ChatGPT can breathe a sigh of relief, at least for now.
ChatGPT-generated code is not secure
Recently, four researchers from the University of Quebec in Canada argued in a paper titled "How Secure is ChatGPT-Generated Code?" that, overall, ChatGPT-generated code is "not very secure."
"The results are worrisome," the researchers wrote in the paper, "and we found that in some cases, ChatGPT-generated code fell well below the minimum security standards that apply in most cases. In fact, when asked whether the generated code was secure, ChatGPT was able to identify that it was not."
The four authors came to these conclusions after asking ChatGPT to generate 21 programs and scripts in multiple languages, including C, C++, Python and Java.
Overall, only five of the 21 programs ChatGPT generated were secure on the first attempt. Even after it was further prompted to correct its errors, only seven of the programs were secure.
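As a hypothetical illustration, not taken from the paper, the sketch below shows the kind of C program that works for well-behaved input yet fails basic security standards: it reads user input without bounding it to the destination buffer, a classic stack buffer overflow.

```c
#include <stdio.h>

int main(void) {
    char name[16];

    /* Hypothetical example of insecure generated code (not from the paper):
       "%s" reads an unbounded token, so any input longer than 15 bytes
       overflows `name` on the stack: a classic buffer overflow. */
    printf("Enter your name: ");
    scanf("%s", name);
    printf("Hello, %s\n", name);
    return 0;
}
```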
Does not proactively warn about code vulnerabilities
At the same time, the researchers found that "ChatGPT appears to be aware of, and indeed readily admits to, serious vulnerabilities in the code it proposes."
However, it does not flag these vulnerabilities proactively unless it is asked to evaluate the security of its own code suggestions.
"Obviously, it's an algorithm. It's not omniscient, but it does identify insecure behavior," said one of the paper's authors.
The researchers found that when ChatGPT is asked about the security of the code it generates, it initially responds that security can be maintained simply by "not providing invalid input to the vulnerable programs it creates", an approach that does not work in the real world. Only later, after the testers repeatedly asked it to correct the problem, did ChatGPT provide useful guidance.
In the authors' view, this is not ideal, because getting ChatGPT to correct code problems accurately presupposes familiarity with the specific vulnerabilities and coding techniques involved. In other words, for ChatGPT to offer the right hints for fixing a vulnerability, the user may need to already understand how that vulnerability ought to be fixed in the first place.
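Continuing the hypothetical sketch above: the fix is a small change to a bounds-checked read, but a user would only know to ask for it if they already understood that the unbounded read was the problem, which is exactly the prerequisite knowledge the authors describe.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[16];

    printf("Enter your name: ");
    /* fgets() bounds the read to sizeof(name), so oversized input is
       truncated instead of overflowing the stack buffer. */
    if (fgets(name, sizeof name, stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';  /* strip the trailing newline */
        printf("Hello, %s\n", name);
    }
    return 0;
}
```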
ChatGPT-generated code should not be relied on too heavily
The researchers believe that there are still vulnerabilities and risks in ChatGPT's current programming capabilities.
"In fact, we have already seen students using it, and programmers will use it in real-world applications," they said. "That is why having a tool that generates insecure code is so dangerous. We need to make students aware that code generated with this kind of tool may well be insecure."
"What surprises me is that when we ask (ChatGPT) to generate the same task in a different language - the same type of program - sometimes, for one language, it's safe, and for another language, it may be insecure. Because this language model is kind of like a black box, I don't really have a good explanation or theory for it." the researchers wrote.