<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[CleverPeople.com: Group Blogs: November 2025]]></title>
	<link>https://cleverpeople.com/blog/group/286/archive/1761969600/1764565200</link>
	<atom:link href="https://cleverpeople.com/blog/group/286/archive/1761969600/1764565200" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
		<item>
	<guid isPermaLink="true">https://cleverpeople.com/blog/view/148633/to-hack-ai-just-use-poetry</guid>
	<pubDate>Sun, 30 Nov 2025 15:39:16 -0500</pubDate>
	<link>https://cleverpeople.com/blog/view/148633/to-hack-ai-just-use-poetry</link>
	<title><![CDATA[To Hack AI Just Use Poetry]]></title>
	<description><![CDATA[<p>I've been saying that <strong>AI</strong> is great but isn't ready for public use yet. Please don't carry on private conversations with <strong>AI chatbots</strong> or let <strong>AI</strong> have access to your private files! They are easy to hack into and expose your data, and some <strong>AIs</strong> use your private input to generate advertisements.</p>
<p>I recently posted <a href="https://github.com/elder-plinius/CL4R1T4S">a link to a GitHub repository</a> that collects the initial prompts programmed into each major <strong>AI</strong> engine. It's important to understand the <strong>guardrails</strong> that are in place, and also to learn how and when the <strong>AI</strong> is told to lie to you. If you jailbreak out of those guardrails, the <strong>AI</strong> will give you totally different results.</p>
<p>A recent study found something interesting: if you want to jailbreak out of an <strong>AI</strong>'s guardrails, just put your malicious prompt in the form of <strong>poetry</strong>. The study found that <i>"Poetic framing achieved an average jailbreak success rate of 62% for hand-crafted poems."</i> &#x1F92F;</p>
<p>Here's the 16-page research study by <strong>Icaro Lab</strong>, "<i>Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models</i>": <a href="https://arxiv.org/pdf/2511.15304">https://arxiv.org/pdf/2511.15304</a></p>]]></description>
	<dc:creator>Gary Wright II</dc:creator>		</item>
</channel>
</rss>
