To Hack AI Just Use Poetry
I've been saying that AI is great, but it isn't ready for public use yet. Please don't carry on private conversations with AI chatbots or give AI access to your private files! They are easy to hack into revealing your data, and some AIs are using your private input to generate advertisements.
I recently posted a link to a GitHub account that collects the initial prompts programmed into each major AI engine. It's important to understand the guardrails that are in place, and also to learn how and when the AI is told to lie to you. If you jailbreak out of those guardrails, the AI will give you totally different results.
A recent study found something interesting: if you want to jailbreak out of an AI's guardrails, you can simply put your malicious prompt in the form of poetry. The study found "Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems." 🤯
Here's the 16-page research study by Icaro Lab, called "Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models":
https://arxiv.org/pdf/2511.15304
