Security researchers discovered a new way to trick OpenAI’s language model, GPT-4o, into generating executable exploit code by leveraging a simple, yet cunning method—hex code.
Hex-Encoded Instructions Used to Jailbreak GPT-4o
By using hex-encoded instructions, researchers bypassed the model’s sophisticated security protocols, which prevent it from creating harmful or restricted content. Marco Figueroa, a leading researcher on Mozilla’s generative AI bug bounty platform, 0Din, aims to expose