Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
www.tomshardware.com Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
ArtPrompt bypassed safety measures in ChatGPT, Gemini, Clause, and Llama2.
1 crossposts
0 comments