Enlarge / Some ASCII art of our favorite visual cliche for a hacker. (credit: Getty Images)
Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.
ASCII art became popular in the 1970s, when the limitations of computers and printers prevented them from displaying images. As a result, users depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII. The explosion of bulletin board systems in the 1980s and 1990s further popularized the format.
@_____
_____)| /
/(“””)o o
||*_-||| /
= / | /
___) (__| /
/ _/ |/
| | |/
| … // Read more: original article.

Previous post Raspberry Pi OS 5.2 is here, with pleasant tweaks to Wayland-based desktop
Next post Raspberry Pi OS Is Now Powered by Linux 6.6 LTS, Improves Raspberry Pi 5 Support