Can you trust ChatGPT’s package recommendations? (vulcan.io)
ChatGPT can offer coding solutions, but its tendency to hallucinate presents attackers with an opportunity. Here's what we learned.
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don't actually exist
- Attackers work out the names of these hallucinated imports, then create and upload packages under those names with malicious payloads
- People using LLM-written code then install the malware on themselves
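One cheap guardrail against the attack chain above: before pip-installing an unfamiliar package an LLM suggested, check whether the import even resolves in your environment and treat anything unresolved as needing manual vetting. A minimal sketch (the suggested module names here are made up for illustration):

```python
import importlib.util

def import_exists(module_name: str) -> bool:
    """Return True if the module can be resolved in the current environment."""
    return importlib.util.find_spec(module_name) is not None

# Hypothetical list of imports an LLM suggested for some task
suggested = ["json", "csv", "requessts_helper"]

# Anything that doesn't resolve should be vetted by hand on PyPI
# (check the author, release history, and download counts) before installing.
unvetted = [name for name in suggested if not import_exists(name)]
print(unvetted)
```

This won't catch a malicious package that is already installed, but it stops the reflexive `pip install whatever-the-error-says` step that the attack relies on.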
14 comments
Asking LLMs for code is fine, but it really needs proofreading to be worth anything. You could even ask it to proofread its own work.
Also, never 3.5