Can you trust ChatGPT’s recommendations?
An AI-generated image of a malicious package and AI. Source: Bing
Vulcan Cyber recently published an informative blog post about the risk of trusting ChatGPT’s package recommendations, outlining a new method for propagating malicious packages that they dubbed “AI package hallucination.” The method stems from the fact that ChatGPT and other generative AI systems occasionally fabricate sources, links, blog posts, and data in response to user requests. If ChatGPT fabricates code libraries (packages), attackers can exploit these hallucinations to spread malicious packages without resorting to familiar techniques such as typosquatting or masquerading, which are suspicious and already detectable. But if an attacker publishes a real package under a “fake” name that ChatGPT repeatedly recommends, the attack becomes much harder to detect.
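The attack hinges on a recommended package name that does not yet exist in the registry, which is a gap an attacker can fill with a malicious upload. As a minimal illustrative sketch (not from the Vulcan post), a defender could flag any recommended name that is absent from a snapshot of known packages; the snapshot and the recommended names below are hypothetical stand-ins for a real registry lookup:

```python
def is_hallucinated(package: str, registry: set[str]) -> bool:
    """Return True if the recommended package is absent from the
    registry snapshot -- the gap an attacker could later fill
    with a malicious upload."""
    return package.lower() not in registry

# Hypothetical snapshot of known-good packages; a real check would
# query the registry itself (e.g. the PyPI JSON API) instead.
snapshot = {"requests", "numpy", "flask"}

# Hypothetical LLM output: one real package, one fabricated name.
recommended = ["requests", "arangodb-turbo-client"]

for name in recommended:
    if is_hallucinated(name, snapshot):
        print(f"warning: {name!r} not found in registry; verify before installing")
```

In practice a snapshot can go stale, so verifying against the live registry, and checking a package’s age and download history before installing, is the safer habit.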
Source: Vulcan.io