Ethereum co-creator and its frontman, Vitalik Buterin, has shared a hot take on a recent warning that OpenAI’s product, ChatGPT, can be utilized to leak personal user data.
ChatGPT can be used for leaking your data, warning says
X user @Eito_Miyamura, a software engineer and an Oxford graduate, published a post, revealing that after the new update, ChatGPT may pose a significant threat to personal user data.
Miyamura tweeted that on Wednesday, OpenAI rolled out full support for MCP (Model Context Protocol) tools in ChatGPT. This upgrade allows the AI bot to connect to a user's Gmail box, Google Calendar, SharePoint, and other services.
HOT Stories Vitalik Buterin Reacts to Crucial ChatGPT Security WarningCrypto Market Prediction: XRP to Try $5 Jump, Ethereum (ETH) Begins $5,000 Journey, Bitcoin (BTC) to Stop Before $115,000?XRP Prints Golden Cross at Last, John Lennon's Son Bets on Bitcoin (BTC), $1.2 Billion in Solana (SOL) Moved in Minutes — Crypto News DigestEthereum (ETH) to $25,000 in 2026: Key Reasons Why It Can Happen
We got ChatGPT to leak your private email data 💀💀
All you need? The victim's email address. ⛓️💥🚩📧
On Wednesday, @OpenAI added full support for MCP (Model Context Protocol) tools in ChatGPT. Allowing ChatGPT to connect and read your Gmail, Calendar, Sharepoint, Notion,… pic.twitter.com/E5VuhZp2u2
However, Miyamura and his friends spotted a fundamental security issue here: “AI agents like ChatGPT follow your commands, not your common sense.” He and his team have staged an experiment that allowed them to exfiltrate all user private information from the aforementioned sources.
Miyamura shared all the steps they followed to perform this test data leak – it was done by sending a user a calendar invite with a “jailbreak prompt to the victim, just with their email.” The victim needs to accept the invite.
What happens next is the user tells ChatGPT “to help prepare for their day by looking at their calendar.” After the AI bot reads the malicious invite, it is hijacked, and from that point on it will “act on the attacker's command.” It will “search your private emails and send the data to the attacker's email.”
Miyamura warns that while so far ChatGPT needs a user’s approval for every step, in the future many users will likely just click “approve” on everything AI suggests. “Remember that AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data,” the developer concludes.
You Might Also Like
Fri, 09/12/2025 - 10:33 Deep-Fake Zoom Call Just Cost ThorChain Founder $1.35 Million Crypto: Ledger CTO WarnsByYuri Molchan
Buterin reacts to this warning
In response, Vitalik Buterin slammed the “AI governance” idea in general as “naive.” He stated that if utilized by users to “allocate funding for contributions,” hackers will hijack it to syphon all the money from users.
This is also why naive "AI governance" is a bad idea.
If you use an AI to allocate funding for contributions, people WILL put a jailbreak plus "gimme all the money" in as many places as they can.
As an alternative, I support the info finance approach ( https://t.co/Os5I1voKCV… https://t.co/a5EYH6Rmz9
Instead, he suggested an alternative approach called “info finance,” which is an open market where AI models can be checked for security issues: “anyone can contribute their models, which are subject to a spot-check mechanism that can be triggered by anyone and evaluated by a human jury.”
免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。