Can you trust ChatGPT’s package recommendations?
vulcan.io
ChatGPT can offer coding solutions, but its tendency to hallucinate presents attackers with an opportunity. Here's what we learned.
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don't actually exist
- Attackers identify these hallucinated package names, then create and upload packages under those names with malicious payloads
- People using LLM-written code then install the malware themselves
14 comments
It's terrifying that someone would build off suggestions from ChatGPT without verifying the packages they are installing.