shish_mish@lemmy.world to Technology@lemmy.world · English · 7 months ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (www.tomshardware.com)
10 comments · cross-posted to: technology@lemm.ee
spujb@lemmy.cafe · English · 7 months ago
yes i am aware? are they being used by openai?