Written by: Curry, Deep Tide TechFlow
Last week, something quite surreal happened.
The CEOs of two major U.S. delivery giants, one personally worth $2.7 billion, the other running the world's largest ride-hailing platform, both stayed up late on a Saturday night posting online to clear their companies' names.
The trigger was an anonymous post on Reddit.
The poster claimed to be a backend engineer at a large food delivery platform who, after getting drunk, had gone to a library to leak the information over public WiFi.
The content was roughly as follows:
The company analyzes drivers' financial situations and assigns each a "desperation score": the more financially strapped the driver, the fewer good orders they receive. So-called priority food delivery is a lie, and regular orders are deliberately delayed. The various "driver benefits" never reach drivers at all; the money goes to lobbying Congress against unionization…
The post ended in a very convincing manner: I was drunk, I was angry, so I had to leak this information.
It perfectly crafted the image of a "whistleblower exposing how big companies exploit drivers through algorithms."

Within three days, the post had racked up 87,000 upvotes and surged to Reddit's front page. Screenshots posted to X gathered another 36 million views.
It's important to note that there are only a few major players in the U.S. food delivery market; the post didn't name names, but everyone was guessing who it was.
DoorDash CEO Tony Xu couldn't sit still; he tweeted that it wasn't DoorDash, and that anyone who did such a thing would be fired. Uber's COO jumped in too: "Don't believe everything you see online."

DoorDash even published a five-point statement on their official website, refuting each claim made in the leak. These two companies, with a combined market value of over $80 billion, were forced to engage in overnight public relations to clarify the situation due to an anonymous post.
Then, it turned out that this post was actually generated by AI.
The one who exposed it was Casey Newton, a journalist from the overseas tech media Platformer.
He contacted the leaker, who promptly sent over an 18-page "internal technical document" with an academic-sounding title: "AllocNet-T: High-Dimensional Temporal Supply State Modeling." Every page bore a "Confidential" watermark, and the document was attributed to Uber's "Market Dynamics Group, Behavioral Economics Department."
The content explained the model that assigns "desperation scores" to drivers, detailing how it is calculated. It included architecture diagrams, mathematical formulas, and data flow charts…

(Screenshot of the fake paper; at first glance it looks very real)
Newton admitted the document initially fooled him. After all, who would go to the trouble of forging an 18-page technical document just to bait a journalist?
But times have changed.
AI can generate an 18-page document like this in a few minutes.
At the same time, the leaker sent the journalist a blurred photo of their Uber employee badge as proof of employment.
Out of curiosity, Newton ran the badge photo through Google Gemini, and Gemini indicated that the image was AI-generated.
Gemini could tell because Google embeds an invisible watermark, called SynthID, in content produced by its AI models; it is imperceptible to the naked eye but detectable by machines.
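SynthID's actual mechanism is proprietary, but published LLM watermarking schemes rest on a similar statistical idea: bias generation toward a secretly derivable "green list" of words, then detect by counting how often that list was hit. The Python sketch below is a toy illustration of that general idea only; every name, threshold, and function here is invented for the example and is not Google's implementation.

```python
import hashlib
import random


def green_list(prev_word, vocab, fraction=0.5):
    # Deterministically partition the vocabulary using a hash of the
    # previous word, so a detector can re-derive the same partition
    # without storing anything about the original text.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])


def generate_watermarked(start, vocab, length=20, seed=0):
    # Toy "generator": always pick the next word from the green list
    # keyed by the previous word, embedding the statistical bias.
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        words.append(rng.choice(sorted(green_list(words[-1], vocab))))
    return " ".join(words)


def detect(text, vocab, threshold=0.7):
    # Score = fraction of words landing in their predecessor's green
    # list. Unwatermarked text should hover near `fraction` (~0.5);
    # watermarked text scores far higher.
    words = text.split()
    pairs = list(zip(words, words[1:]))
    hits = sum(cur in green_list(prev, vocab) for prev, cur in pairs)
    score = hits / max(len(pairs), 1)
    return score >= threshold, score


vocab = ["alpha", "beta", "gamma", "delta", "echo", "fox", "golf", "hotel"]
text = generate_watermarked("alpha", vocab, length=30)
print(detect(text, vocab))  # watermarked text scores 1.0 by construction
```

In a real system the bias is applied softly to the model's token probabilities rather than as a hard choice, so the text stays fluent while the statistical signal survives; the detection side is the same counting idea at scale.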
Even more absurdly, the employee ID bore the logo of "Uber Eats."
An Uber spokesperson confirmed: We do not have employee IDs for the Uber Eats brand; all badges only say Uber.

Clearly, this fake whistleblower didn't even know which company they were trying to smear. When the journalist asked for a LinkedIn profile or other social media accounts for further verification, the leaker simply deleted their account and vanished.
In fact, what we want to discuss is not whether AI can create fakes; that's not new.
What we want to talk about is: why are millions of people willing to believe an anonymous leak post?
In 2020, DoorDash was sued for using tips to offset drivers' base pay and had to pay $16.75 million. Uber had a tool called Greyball specifically designed to evade regulation. These are real events.
Dig a little and you'll find a near-subconscious assumption that the platforms are up to no good, and that judgment is hardly wrong.
So when someone says "food delivery platforms exploit drivers through algorithms," people's first reaction is not "Is this true?" but rather "Of course."
Fake news can gain traction because it resembles something that people already believe in.
What AI does is reduce the cost of this "resemblance" to nearly zero.
There's another detail in this story.
The exposure of the scam relied on Google's watermark detection. Google makes AI, and Google also makes AI detection tools.
But SynthID can only check content generated by Google's own AI. This time they got caught because the forger happened to use Gemini. If they had used another model, they might not have been so lucky.
So the unraveling of this case was less a technical victory than a matter of the forger making a basic mistake.
A Reuters survey previously found that 59% of respondents worry they can't tell truth from falsehood online.
The CEOs' clarifying tweets were seen by hundreds of thousands, but how many readers firmly believed they were just PR, just another lie? The fake leak post has been deleted, yet people are still cursing the delivery platforms in its comment sections.
The lie has traveled halfway around the world, while the truth is still tying its shoelaces.
Think about it again: if this post was about Meituan or Ele.me instead of Uber, what would happen?
What "desperation scores," what "exploiting riders with algorithms," what "not giving a penny of benefits to riders." When you see these accusations, isn't your first reaction emotional agreement?
You remember that article "Delivery Riders, Trapped in the System," right?
So the issue isn't whether AI can create fakes. The issue is, when a lie resembles something that people already believe, does the truth even matter?
That person who deleted their account and ran away, what were they after? I don't know.
I only know they found an emotional outlet and poured a bucket of AI-generated fuel into it.
The fire ignited. As for whether it burns real wood or fake wood, who cares?
In fairy tales, Pinocchio's nose grows longer when he lies.
AI has no nose.
