Researchers have revealed that threat actors can easily exploit ChatGPT’s tendency to produce false information in order to spread malicious code packages.
This poses a notable risk to the software supply chain, as it can allow malicious code and Trojans to slip into legitimate applications and code repositories such as npm, PyPI, and GitHub.
Threat of Malicious Code Spreading Beyond Control via ChatGPT
Researchers at Vulcan Cyber’s Voyager18 team said attackers aim to get the malicious code built into software, from which it then spreads.
In artificial intelligence, a hallucination is a plausible-sounding response from the AI that is inaccurate, biased, or simply untrue.
Hallucinations arise because ChatGPT answers questions based on the sources, links, blogs, and statistics freely available across the vast expanse of the Internet.
Researcher Bar Lanyado explained that, because of this broad training on vast amounts of textual data, LLMs like ChatGPT can generate plausible but fictional information, extrapolating beyond their training data to produce responses that sound convincing yet are not entirely accurate.
How to Spot Bad Code Libraries
Researchers note it can be difficult to tell whether a code package is malicious by design if a threat actor effectively obfuscates their work, or uses techniques such as publishing a Trojan package that is actually functional.
They also said developers can catch these malicious packages before baking them into an application or publishing them to a code repository.
To do this, developers need to validate the libraries they download and make sure each one is not a clever Trojan masquerading as a legitimate package, says Lanyado.
This is especially important when the suggestion comes from an AI rather than a colleague or a trusted member of the same work community, he says.
A developer can check a package’s validity by reviewing its creation date, number of downloads, and comments.
An absence of comments and stars, or anything suspicious in the library’s attached notes, can all be red flags, researchers said.