
If you’re a flyer be forwarned.
August 23, 2025
Annapolis Server Management: Why Local Businesses Trust Alpha for Reliable IT Support
August 25, 2025Google has sounded a “red alert” for its 1.8 billion account holders over a new artificial intelligence scam reportedly being exploited by cyber crooks. Tech whiz Scott Polderman broke down the data theft scam, which involves another Google product, Gemini, an AI assistant known as a chatbot.
“So hackers have figured out a way to use Gemini – Google’s own AI – against itself,” he clarified. “Essentially, hackers are sending an email with a hidden message to Gemini to reveal your passwords without you even realizing.”
Scott said that this scam is unique from previous ones as it is “AI against AI” and could set a precedent for future attacks in the same vein. It comes as Trump claims he he wants to rename ‘artificial intelligence’ for a bizarre reason.
He also advised that Google has previously stated it will “never ask” for your login information or “never alert” you of fraud through Gemini.
Another tech guru, Marco Figueroa, added that send emails including prompts that Gemini can pick up on, with the font size set to zero and the text color to white so users don’t spot it.
One TikTok user chimed in with additional advice to protect against the scam. “To disable Google Gemini’s features within your Gmail account, you need to adjust your Google Workspace settings,” they wrote.
“This involves turning off ‘SMART FEATURES’ and potentially disabling the Gemini app and its integration within other Google products.”
Another user commented: “I never use Gemini, still I might change my password just in case.”
A third person said: “I’m sick of all of this already. I’m going back to pen and paper!”
Echoing similar sentiments, a fourth user said: “I quit using Gmail a long time ago! Thank you for the alert! I’ll go check my old accounts.”
Google issued a warning on its security blog last month: “With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves. One such emerging attack vector is indirect prompt injections.
“Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.
“As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.”
However, the tech behemoth sought to reassure users, stating: “Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle. From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker.
“This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources.”




