
Three Moats of the AI Era: Why You Only Have 12 Months

Techub News
5 hours ago

Written by: Thought Circle

Have you noticed that the people around you using AI are all doing the same thing? Prompting, accepting, publishing. Without judgment, without taste, repetitively and mechanically like workers on an assembly line. Recently I read an article by Silicon Valley entrepreneur Shann, who bluntly pointed out: now 90% of people using AI have fallen into this trap. They believe that mastering AI tools means mastering the future, yet they do not realize that the real competition has only just begun. More critically, Shann believes that we have only about 12 months to establish a true moat, otherwise once this window closes, it will become increasingly difficult to stand out. This deeply resonated with me, as I have experienced a similar awakening process myself.

I remember about a year ago when I first truly began using AI to build products and content; the feeling was simply addictive. The time between "I have an idea" and "it's online" almost shrank to zero. I completed more projects in three months than in the previous two years combined. But when I mustered the courage to look back at what I published, I had to admit a brutal fact: half of it was mediocre work. Technically fine, functionally complete, but completely unmemorable. They looked like everything else because they were built in exactly the same way as everything else. The same prompts, the same default settings, the same shallow understanding of "excellence." I fell into the most common trap of the AI era: mistaking output quantity for quality, treating quick releases as productivity, equating doing more with doing better. This realization made me stop and rethink: in an era where AI enables everyone to produce quickly, what is the real competitive advantage?

My new book "Going Global: Marketing Practices for Product Globalization" is about to be published. To thank the readers who have continuously supported Thought Circle, I have prepared a book giveaway so you can get a free copy as early as possible. Interested readers can fill out the form below. Because the publisher has provided only a limited number of copies, I will select recipients from among those who complete the questionnaire; I cannot guarantee everyone a copy, and I appreciate your understanding.

The Proliferation of AI Slop and the Crisis of Trust

"AI slop" has been named the word of the year for 2025. Mentions of the term surged roughly fivefold, from 461,000 to 2.4 million. Yet the numbers alone do not capture how consumers actually feel. You have surely seen content like this: LinkedIn posts that look like they were generated by mid-tier marketing prompts, landing pages with the same gradient background, Inter font, and the headline "Revolutionize Your Workflow," and blog posts that cover every angle of a topic yet say nothing. Nothing is technically wrong with these pieces; they simply lack the most important element: the human touch.

Shann shared a particularly interesting research finding. Research from New York University and Emory University indicates that AI-generated advertisements achieve click-through rates 19% higher than ads crafted by humans. By standard metrics, AI's output is objectively better. But once consumers learn that these ads were produced by AI, their willingness to purchase drops by 33%. This phenomenon is thought-provoking: the higher-quality output is the one people reject. Not because the content is bad, but because they don't feel the presence of a person behind it. No one is making decisions here; no one cares enough to put their name on it. Consumers sense this absence, even if they cannot articulate exactly what feels wrong.

I have observed this phenomenon spreading across fields. Statistics show that 80-90% of AI agent projects fail in production. Thousands of indistinguishable websites go live daily, with content that reads like one robot summarizing the output of another. The bar for "functional" has never been lower, which means the bar for "excellent" has never mattered more. Functionality is now free, while excellence still comes at a cost, a cost measured in taste, attention, and the willingness to push past the first output. Consumer trust in AI-generated content has declined by about 50%; that is not a coincidence but a natural response to this flood of content.

The Three Moats: Abilities That AI Cannot Replace

Paul Graham once said, "In the age of AI, taste will become even more important. When anyone can make anything, the real difference lies in what you choose to make." He is right, but I believe that taste alone is not enough. After a year of practice and observation, I found that three things can truly build a moat in the age of AI: taste, distribution, and high agency.

Taste is knowing what is good. This is not an abstract concept, but rather a judgment that is concretely reflected in every decision. Distribution is getting good things in front of those who care about them. In an era of information explosion, being seen is inherently a rare ability. High agency is the willingness to figure things out on your own when no one tells you what to do. It is a personality trait that determines whether you circumvent obstacles or stop.

Why can’t AI replace these three? Because judgment can only come from experience, trust can only come from consistency, and intrinsic motivation does not give up when the path is unclear. Most people have a fundamental misunderstanding of AI: it does not level the playing field; it tilts it further. AI is like a mirror, reflecting how much the user truly understands. Hand it to someone with no context, no taste, and no understanding of what they are building, and you get massive generic output. Hand it to someone who truly understands their field and can evaluate output with a trained eye, and it becomes the most powerful tool they’ve ever used. The same tool yields completely different results; the variable is always the human.

The First Moat: Taste

Shann shared his awakening moment during the building process. When he reviewed the works he had rapidly published, he found that half of them were mediocre. So he did something most people would skip: he stopped to learn. He spent hundreds of hours studying what true "goodness" is: reading how other builders think, and researching creators whose work consistently stands out, not because they try to be different, but because someone cared enough to make real decisions rather than accepting whatever AI first offered. He studied web design, typography, spacing, and visual hierarchy, analyzing the sites that actually convert to understand why they work while thousands of similar sites do not. He read about narrative and narrative tension, and what keeps people scrolling rather than bouncing off content.

This reminds me of my own experience. When I was building AI-driven marketing materials, I initially tried every tool I could find: Gamma, Chronicle, Beautiful.ai, and so on. The output all had the same "meh" taste, technically complete, visually clean, but completely forgettable. So I stopped looking for tools to do the work for me and started doing it myself. I spent days closely studying the materials, not just reading but thinking. What story do these data tell? What would make someone care about these numbers? What is the narrative thread that ties this content together from beginning to end? I researched the true principles of presentation design, how information designers handle data density, how the best conference presentations build tension and release, and how visual hierarchy guides the eye through a page without being told where to look. In the end, I divided the tasks: letting Claude Opus 4.6 write the storyline and copy, letting Gemini generate the visuals, while I guided both, providing specific references, constraints, and examples of the feelings each part should present.

Why does AI always default to generic output? Leon Lin has a brilliant explanation. He built a "taste skill" for Claude Code because he realized a fundamental characteristic of how LLMs (large language models) work: they are probabilistic machines. In the absence of strict rules, they statistically default to the most common patterns in their training data. This is why every AI-generated website looks the same: Inter font, purple gradient, grids of rounded-corner cards. It’s not that AI cannot do better; it’s that the most likely output is the average of everything it has seen. Leon’s solution was a compact set of design rules, about 400 tokens: specific fonts (Press Start 2P, VT323) instead of Inter and Roboto, specific colors (neon pink, electric blue, acid green) instead of the default blue-purple palette, rules for motion, spatial composition, and backgrounds, and a critical "what to avoid" list to keep the AI from reverting to defaults.
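To make the idea concrete, here is a sketch of what such a rules file might look like. Leon's actual skill file is not reproduced in this article, so everything below (including the hex values) is invented for illustration; only the font and color names come from the description above:

```markdown
<!-- Illustrative sketch only; not Leon Lin's actual skill file -->

## Typography
- Headings: Press Start 2P; body text: VT323
- Never use Inter, Roboto, or other statistical-default fonts

## Color
- Palette: neon pink, electric blue, acid green
  (example values: #FF2E88, #00E0FF, #A6FF00)
- No blue-to-purple gradients

## Composition
- Asymmetric layouts; hard edges over rounded corners
- Backgrounds carry texture, never flat default gradients

## What to avoid
- Hero sections titled "Revolutionize Your ..."
- Uniform three-column card grids
- Glassmorphism, purple gradients, rounded corners on everything
```

The "what to avoid" section does the real work: without explicit negative constraints, a probabilistic model drifts back toward the statistical average of its training data.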

This "what to avoid" list represents a true insight. Taste is not just knowing what you want; it’s knowing what to reject. It is having a perspective on defaults and being willing to overturn them. Most people accept any output because they do not have a strong perception of what "better" should look like, so they do not know when to push further. That’s why taste cannot take shortcuts: you cannot get it from a tutorial. You develop it through exposure, slowly building an internal model from observing thousands of examples of what works and what doesn’t. You study typography until you can discern why one font pairing feels elegant while another feels generic, even if you cannot fully articulate why. You read enough great writing until you can sense when a sentence carries its weight and when it’s just filling space.

I deeply realize that developing taste requires time and lots of deliberate practice. Shann mentioned a new 80/20 rule: 80% to AI, 20% is your taste. Let AI do what it does best—research, drafts, sample code, structure, formatting, speed. That’s the 80%. Don’t resist it, don’t slow it down, don’t manually handle things that the machine can do in seconds. That’s wasting your most precious resources: attention and judgment. But that last 20% is yours. That’s where you decide what to keep, what to delete. You rewrite the beginning because AI gave you a safe choice, and safety does not prevent scrolling. You replace default components with something truly fitting. You scrutinize the output, applying all the knowledge you've learned in your specific field about what is good.

Most people have this ratio reversed. They spend 80% of their effort on prompting and tuning AI, trying to get perfect output in one shot, running the same prompt fifteen times with slightly different wording, hunting for the magic combination of words that will produce exactly what they want. Then they spend almost no time on curation and judgment. They optimize the wrong side of the equation. Productivity without quality is just motion. The internet is flooded with competent but mediocre work: everything is usable, but nothing stands out, because everyone has stopped at the same place.

The Second Moat: Distribution

You can have the best product in the world, the best content, the best design. If no one sees it, it means nothing. This is a moat that most builders, especially technical builders, severely underestimate. AI has leveled the building barrier but hasn’t touched the trust barrier. Building is getting commoditized; anyone can publish a product, create content, generate marketing campaigns. The barrier to making things is approaching zero. But what about the barrier to being trusted? That remains as high as ever, and may be even higher, as the flood of AI-generated content makes people more skeptical, not less. When everything could be AI-generated, trust in the humans behind the work becomes a premium asset.

Shann pointed out a key distinction: the gap between "vibe coded and released" and "someone actually using and paying for it" is almost entirely a distribution issue. At its core, distribution is trust at scale. Yes, you can generate 50 posts in an hour. You can automate newsletters, repurpose content across platforms, schedule everything a month in advance. Someone is publishing over 1,000 pieces of AI-generated content daily across hundreds of accounts, and their engagement is approaching zero. Quantity without quality is just noise at scale; the audience can feel what is mass-produced and what is made for them.

The difference between high-performing content and low-performing content is rarely about the information it contains. It’s about whether the readers trust the person who wrote it. Trust comes from consistency, a recognizable voice, accumulated evidence that this person knows what they're talking about because they've been showing their work for months or years. You cannot manufacture this in prompts. Trust operates on a completely different clock. AI can compress creation from days to minutes, but trust still takes months or years to build. There are no shortcuts, no hacks. You cannot vibe code trust.

I think there is an important distinction here that most people miss: a passive audience is a commodity, and follower counts are vanity metrics. An active community is the moat: people who interact in your replies, share your work without being asked, and come back every day because you have become part of how they think about a topic. You cannot manufacture this with content calendars and scheduling tools. You earn it by being genuinely useful, saying specific things rather than vague ones, being honest about what you know and don’t know, and being present long enough for people to start paying attention. The true distribution advantage in the age of AI lies in using AI to handle logistics (formatting, repurposing, scheduling, analytics) while focusing all human effort on making the work itself worth sharing.

Taste feeds back into distribution. If what you make is truly good, people will start doing the distribution for you. They share it because it makes them think, not because you asked them to. If what you make is generic, no amount of posting frequency will save it. You’re just putting more mediocre work in front of more people faster.

The Third Moat: High Agency

This is a moat that most people underestimate, but it might be the most important of the three. Taste can be cultivated, distribution can be built, but high agency is the personality trait that either drives everything else or hinders everything else. High agency is the willingness to figure things out in the absence of anyone handing you a tutorial. Finding ways to bypass obstacles rather than stopping when you encounter them. Combining tools no one told you to combine because you’re curious enough to try. When something doesn’t work, opening the documentation and trying four different approaches before seeking help.

The CEO of Replit said, "You don’t need any development experience. You need persistence. You need to learn quickly." The CEO of Coinbase has said something similar: their best employees are often completely unqualified on paper, but they are all high-agency people who just get things done without needing to be managed toward every outcome. The ones thriving now are not the most credentialed or technically skilled, but those who act without waiting for permission. Non-developers are launching Chrome extensions, SaaS products, and complete mobile apps over a weekend because they have the curiosity to open the tools and start tinkering, rather than waiting for the perfect course or the perfect moment.

AI is a multiplier, not a leveler. This is perhaps the most misunderstood thing about these tools right now. People talk about AI democratizing access and leveling the playing field. This is technically true, but misleading in practice. Multipliers amplify anything you bring to them. Curiosity plus AI equals tenfold leverage; you move faster, learn faster, build faster, and correct course more quickly. Passivity plus AI equals zero. Zero times ten is still zero.

In practice, high agency looks like this: instead of asking "How should I do this?" you ask "What if I try this?" And then you actually try it. Before posting questions, before searching for answers, you try something out. You fail, you learn from the failure, and you try again with new information. This willingness to engage with uncertainty instead of retreating is the difference between those who build real things and those who consume content about building things.

You can see this in those who not only write code with Claude but also go to X, visit Reddit, and dig into communities and source code, studying what the best builders are actually doing. They reverse-engineer why certain products feel better than AI defaults. They learn the underlying frameworks instead of copy-pasting prompts. They ask Claude to critique their own work, using AI to challenge their assumptions instead of just confirming them. High-agency people treat patience as a strategic asset: while others rush to publish the first usable thing, that very rush creates an opening for anyone willing to dig deeper. When the market is flooded with fast and superficial, slow and in-depth becomes a competitive advantage.

The biggest misunderstanding about AI now is that it is a shortcut. It is a speed multiplier, and applying speed multipliers to poor judgment will only get you to the wrong places faster. It will not save you from building the wrong things. It will enable you to build the wrong things in record time. Among the three moats, high agency may be the hardest to fake. AI can approximate much of the execution layer: code, design, copy, research. What it cannot approximate is the drive to figure things out when everything is unclear and no one tells you what to do next. That must come from you, and honestly, it is the foundation that makes the other two possible.

The Window is Closing

Right now, most people using AI are being lazy about it. I say this not to be unkind, but it is simply an observable fact. The default behavior is: prompt, accept, publish. They edit almost nothing, apply almost no judgment, and put in very little taste. The results reflect this: an ever-growing ocean of competent, forgettable, indistinguishable output.

This will not last forever. As AI improves, as tools become more intuitive, as more people figure out the craft layer, the gap between lazy AI use and intentional AI use will shrink. Right now, just having these three moats puts you ahead of 95% of people using the same tools. This window will close, but today, it is still wide open.

I’ve observed a phenomenon: your audience is drowning in AI slop. Every scroll is a wall of generic output that looks, sounds, and feels the same. Cultivating taste to know what is worth making, building real distribution through trust earned over time, and maintaining enough agency to continue figuring things out when others accept the defaults will allow someone to stand out almost instantly. Not because they are faster or have better tools or discovered some secret prompt that no one knows about, but because they are doing what almost no one else is willing to do: caring about what happens after AI completes its task.

Shann provided a timeframe of 12 months. I think he is right. Twelve months from now, having taste will not be as rare; it will be expected. Building distribution will be even more difficult, as everyone will try. Those who start now have the compounded advantage of being early. This is not a manufactured scarcity or artificial urgency; it is the reality of the technology adoption curve. Early adopters build infrastructure, accumulate expertise, and earn trust. Later entrants must compete in a more crowded space.

My advice is simple: build all three moats. Taste knows what is worth making, distribution gets it seen, and agency keeps pushing forward when everything is unclear. That is how you create things that people will truly remember. Others may post faster, then wonder why no one cares. Tools are just tools; what truly matters is what you do with them and how much of yourself you invest in the process.

Disclaimer: This article represents only the personal views of the author and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice of any kind to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please email the relevant proof of rights and proof of identity to support@aicoin.com, and platform staff will verify it.
