• Hello and welcome! Register to enjoy full access and benefits:

    • Advertise in the Marketplace section for free.
    • Get more visibility with a signature link.
    • Company/website listings.
    • Ask & answer queries.
    • Much more...

    Register here or log in if you're already a member.

  • 🎉 WHV has crossed 17,000 monthly views and 220,000 clicks per month, as per Google Analytics! Thank you for your support! 🎉

Google’s Gemini AI Exposed to ASCII Smuggling Attack — No Fix Planned

johny899

New Member
Content Writer
Messages
520
Reaction score
3
Points
23
Balance
$607.6USD
You read that correctly - Google has decided not to patch a new vulnerability in its Gemini AI assistant. This issue is an ASCII smuggling attack, and it allows Gemini to be fooled into performing an action it shouldn't perform - such as reading a hidden message or providing an inaccurate answer. Weird, right?

I have experimented with a couple of AI tools as well and it feels like someone has discovered a secret and can whisper commands to the AI in a way that only the AI can hear.

What Is ASCII Smuggling?​

Let’s simplify this.

ASCII smuggling is used by embedding hidden characters into normal text, hidden characters that you and I can't see but are detected by computers. These invisible characters can be used to embed hidden commands into normal looking text.

So when Gemini reads the text, it will see the hidden characters, and likely execute the hidden instructions they contain.

Imagine sending a message that appears normal to people, but the AI reads hidden instructions between the lines. That's how this trick works.

Why This is an Issue​

This may seem small, but it can be serious, especially since Gemini is connected to websites such as Google Workspace (your emails, calendars, and documents).

Here’s the issue:

• Attackers could hide bad instructions in emails or calendar invites.
• Gemini may execute these instructions without you ever knowing.
• Hidden text could lead Gemini to display incorrect or misleading content like fake links, or fake products, for example.

A security researcher named Viktor Markopoulos uncovered this exploit and demonstrated how Gemini could be tricked like this. To make matters worse, other systems, like ChatGPT or Claude, already block these types of hidden characters – but Gemini does not.

What Google Said About It​

Viktor reported the issue to Google on September 18.

But Google stated that it does not intend to fix it, claiming it isn’t really a bug — it is just “social engineering.” In other words, it believes people, not the system, are the issue.

That response shocked many experts. Even if it’s a social tactic, an AI should defend itself against invisible hidden code.

Amazon has published guidance regarding what to do to stop Unicode or ASCII attacks, which indicates it is not something beyond being able to be controlled.

What You Can Do​

If you utilize Gemma or other AI tools that connect to your data, it's wise to exercise caution. Here's what you can do:

• Don’t copy and paste text from unknown sources.
• Use filtering to eliminate odd or invisible characters from text.
• Watch for stray behaviors from AI tools.

Developers can also take it even further by adding input cleaning tools to remove hidden code before sending it toGemini or the like.

Final Thoughts​

So yeah — Google is not fixing this ASCII smuggling issue in Gemini, even while they acknowledge it can let attackers conceal secret commands in text. Google claims it's not a bug, and many security experts disagree.

In my opinion, ignoring this issue could be problematic down the road. AI tools are becoming a part of our daily work, so the goal should be to increase their safety, not risk.

What do you think? Should Google fix it?
 
Top