Author: TT3LABS, a remote recruitment platform for Web3/AI/SaaS
On February 26, 2026, fintech giant Block announced layoffs of more than 4,000 employees, cutting its headcount from over 10,000 to fewer than 6,000. CEO Jack Dorsey wrote in a letter to shareholders:
"Smart tools have changed the meaning of creating and operating a company... a significantly smaller team, using the tools we are building, can do more and do it better."
Dorsey also made a chilling prediction:
"I think most companies are already late. Within the next year, most companies will come to the same conclusion and make similar structural adjustments."
In after-hours trading that day, Block's stock soared more than 20%. The capital market responded with real money: paying a premium for businesses with AI leverage and efficiency.
An ordinary person with no programming background can now, with the help of a large model, build and ship a fully functional app overnight. Capital markets will inevitably ask a sharp question: what value remains in the enormous labor costs of tech giants that employ tens of thousands of programmers to keep a super app running?
More large companies will follow the trend of replacing manpower with AI. Anxiety is unavoidable, but anxiety alone is useless. We need to start from the changes in the broader environment and work back, step by step, to individual survival strategies.
AI is more than just a tool; it is becoming a means of production
Some people in the market have begun to use "Web4" to define the current stage. To clarify, let’s review the different stages of internet evolution:
Web2
The core focus is the interaction between software and people, with different platforms using algorithms to capture user attention; at its essence, a battle to capture traffic.
Web3
Aims to address the ownership of digital assets and the distribution of value. Many simply equate it with cryptocurrency, but in essence it remains a game over wealth-distribution rules and never touches the "production and manufacturing" relations of digital products.
The eve of Web4
AI has touched upon changing the production relations themselves for the first time. It is no longer just an efficiency-enhancing tool; it is becoming a new type of means of production. Those who are better at using it can elevate output limits by an order of magnitude.
Traditional teamwork carries significant hidden costs: the judgment and industry intuition of excellent leaders are difficult for subordinates to replicate, and misunderstandings and rework in group execution are inevitable. These are the "hidden taxes" of organizational operation, and until now there was no clear way to eliminate them. AI compresses this hidden tax dramatically: there is no learning curve, clear prompts alone yield high-quality execution, and it can handle multiple task lines simultaneously. One person's strategic judgment combined with AI's execution leverage can unlock what used to take an entire team.
Of course, AI still occasionally "talks nonsense with a straight face," which means human review and judgment remain indispensable. But model reliability is improving month by month, and the buffer window left for purely execution-level positions is much shorter than most people think.
Efficiency equity and deep crisis: when entry barriers are leveled
In the short term, ordinary people who pick up AI tools can capture an efficiency dividend. But project this forward: once AI levels the basic efficiency gap and drastically lowers professional entry barriers, companies will discover that if per-person output has risen sharply while overall business scale has not expanded in proportion, maintaining the original headcount becomes a liability.
Just look at the current salary divergence. According to job-market monitoring data from TT3LABS, since 2025 the AI job market has repeatedly produced compensation packages above "ten million dollars," and the recipients are young AI engineers with little team-management experience. When Meta poached core researchers from OpenAI, the signing bonus alone exceeded 100 million dollars; average equity compensation for OpenAI employees reached 1.5 million dollars; and senior researchers at Anthropic draw base salaries as high as 690,000 dollars (excluding equity).
Capital is buying a scarce capability: making AI itself stronger. Those who can drive the evolution of underlying models see their value amplified geometrically across the business network. Everyone else, so long as their work can be covered by AI at lower cost, may watch their value shrink.
This also triggers a deeper crisis. More and more people's first reaction to a problem is to let AI supply the answer, skipping the process of deriving, validating, and trial-and-error themselves, and gradually losing the ability to think. The problem is that this "dumb work" is exactly what shapes your intuition for problems. Outsource it to AI long enough and your role at work regresses to that of a "demand translator": turning other people's requirements into AI input and relaying AI output back to them. That intermediary step is precisely the one the next generation of AI can most easily bypass.
Impact map: Where do you stand?
Fear without coordinates is just anxiety. Before discussing countermeasures, we need to sketch an "impact map." This is not to sell panic, but to help everyone locate themselves.
High-risk job content that can be clearly defined by instructions
Entry-level coding, basic data analysis, standardized report generation, template design, routine translation proofreading. The common characteristic of these positions is that the work can be clearly broken down into "input → processing → output." A considerable part of the over 4,000 people cut by Block fell into this range. Their professional capabilities are not poor, but what they did is precisely what large models can handle.
A standard worth asking yourself: if all of your work can be written out as a segment of AI instructions, the machine is already able to replace you; the only open question is when the company decides to act.
Experienced middle managers are being "compressed"
Project managers, operations supervisors, mid-level engineers. Their work involves judgment and coordination, which AI cannot completely take over in the short term but is being "compressed." Previously, a business chain required five middle managers to oversee different segments and align; now, with AI taking over upstream and downstream execution, one or two people can run the entire chain.
This group faces a shrinking number of positions. Your skills have not declined, but market demand for your role has. The move here is to use AI to amplify execution power downward while claiming the authority to define problems upward.
The navigators of value uncertainty
There's a type of work where the core isn't "doing it right," but making decisions in an environment where information is never complete and being accountable for the consequences. Complex business negotiations, crisis management, cross-cultural organizational management, high-risk investment judgments. AI can provide analysis and suggestions, but cannot sign for you, cannot take the blame for you, cannot read the hidden interests behind the other party's gaze at the dinner table.
Such roles won’t depreciate; rather, because of the significant reduction in underlying execution costs driven by AI, the same budget can leverage larger projects, and the levers in decision-makers' hands have become longer.
In reality, many people's jobs cross more than one tier. A simple self-assessment: think about how much of your daily work can be clearly communicated in a segment of instructions, and how much requires you to make decisions in ambiguity. The higher the proportion of the former, the more urgently you need to make changes.
Stop tool anxiety, turn public computing power into private barriers
At the end of January, OpenClaw ("little lobster") burst onto the scene, surpassing 170,000 stars on GitHub within days. Various model vendors swiftly followed suit, Alibaba Cloud launched one-click deployment, Tencent released CoPaw as a competitor, and MiniMax and Kimi also launched their own compatible solutions.
Then you will discover an interesting phenomenon: many people spent more time this month "researching how to deploy little lobster" and "comparing which package is more cost-effective" than they actually spent producing business results with AI. Everyone is chasing tools, but after obtaining them, the configuration you deploy can be replicated by others in two hours.
"All large language models—OpenAI, Anthropic, Meta, Google, xAI—are trained on the same publicly available internet data. Therefore, they are essentially the same, which is why they are rapidly being commoditized."
— Larry Ellison, Oracle Q2 FY2026 earnings call
Read in reverse, this means: as long as your work relies solely on the public capabilities of general large models, your output is homogeneous; no matter how fancy your prompts are, there is no moat.
The real barrier lies in transitioning from public to private.
There is already a very clear trend: from large enterprises to startup teams, more organizations are deploying localized private models. The direct reason is information security; no one wants to hand core business data to third-party APIs. But this trend has an underestimated chain reaction: when the industry's major players lock their data and knowledge into private deployments, the industry information publicly available for general models to learn from becomes scarcer and more delayed. On the surface, AI lowers the knowledge threshold for everyone, but the truly valuable layer of industry knowledge is accelerating its disappearance from the public web, sinking into private knowledge bases.
Therefore, the "dark knowledge" you have accumulated over the years is not depreciating; it is appreciating. The premise is that you need to use it.
Organize and structure the non-standardized business experience scattered across your mind, chat records, and old emails so it can become "context" digestible by your private model. TT3LABS backend data shows that candidates with more than two years of Web3 industry experience pass first-round screening at a much higher rate than candidates from large tech firms without industry backgrounds; the core reason is that industry know-how carries far more weight than general technical skills. Someone with three years of CEX operations understands compliance logic and the unwritten rules of token listings; someone who has run two rounds of DAO governance understands proposal design and community mood turning points; someone deep in vertical content has intuition for audience psychology and narrative rhythm. None of this appears in any public training data.
When you structure these private experiences and integrate them into models, your AI is no longer a general encyclopedia; it becomes a dedicated partner working only for you and understanding only your field. This depth of output cannot be matched by others using the same general model.
The core logic is straightforward: AI crushes everyone in processing public knowledge, but entirely relies on your feeding for processing private experience. Those who can integrate deep industry know-how with AI will be the core asset in the new division of labor.
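As a sketch of what "structuring private experience" can look like in practice, the snippet below tags each lesson for retrieval and assembles the relevant ones into a context block for a model. The note fields, tags, and selection logic are all illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ExperienceNote:
    """One piece of private 'dark knowledge', tagged for retrieval."""
    topic: str
    lesson: str
    tags: list[str] = field(default_factory=list)

def build_context(notes: list[ExperienceNote], query_tags: set[str], limit: int = 3) -> str:
    """Select notes whose tags overlap the query and format them as a
    context block a model can digest alongside the actual question."""
    relevant = [n for n in notes if query_tags & set(n.tags)]
    lines = [f"- [{n.topic}] {n.lesson}" for n in relevant[:limit]]
    return "Private experience relevant to this task:\n" + "\n".join(lines)

# Hypothetical notes standing in for years of accumulated experience.
notes = [
    ExperienceNote("listing", "Compliance review always precedes the marketing push.", ["cex", "compliance"]),
    ExperienceNote("governance", "Proposals posted on Fridays get half the usual turnout.", ["dao"]),
]
ctx = build_context(notes, {"compliance"})
```

The point is not the code itself but the habit: once experience is in a structured, taggable form, any current or future model can consume it.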
Your experience base is the real "model"
AI models are evolving rapidly; today's GPT, Claude, and Gemini could all be replaced by stronger versions within six months. But for you, switching to a more powerful model is merely swapping an API endpoint. What truly will not be iterated away or replaced is the private set of data and experience you feed it.
Models are universal infrastructure that anyone can use. But the industry knowledge, business judgments, and lessons learned that you pour into it constitute your unique "training corpus." The stronger AI becomes, the better it can digest your corpus, and the higher your private barrier will be. So don't get hung up on whether building a knowledge base now will soon become outdated; your knowledge base is the only asset that won’t depreciate due to model iterations. The models are changing, but your data barriers will only increase in value as AI capabilities improve.
At the same time, traditional workplace competition logic is being rewritten. Employees used to signal dedication by burning the midnight oil, but machines output 24/7, and every strategy built on "I can endure longer than you" has been zeroed out by AI.
Many will say: "I still provide emotional value in the team." Yes, that is a uniquely human ability, but its premium depends on the level at which you operate. When a grassroots team shrinks from ten people to two people plus a row of AI agents, the "team lubricant" has no stage left. Conversely, at the decision-making level, complex business games, high-stakes trust building, and mediation across interest groups make deep human connections more valuable precisely because underlying execution costs have fallen. Emotional value is not disappearing; it is migrating upward.
Ultimately, the most crucial investment for individuals in the AI era is not learning which tool to use, but continuously managing that private AI that only you possess. Tools will iterate; the experience base will not.
Three actions you can start now
Returning to the Block case, some people were laid off but others remained; the difference lies in who remains indispensable after AI becomes a standard production tool. Don't wait for the company to arrange AI training for you; starting today, we can try these actions:
01. Transition from "doing it all" to "building workflows"
Workers often fall into the trap of using AI to "slack off" (for example, using AI to write a weekly report or polish an email); this still reflects an execution-level mindset. What you truly need to do is regard yourself as a "contractor" and reconstruct the most core outputs of your current position into an AI automated production line.
Don’t try out dozens of new models simultaneously; select the most mature tool currently available (like ChatGPT Plus or Claude), and force it to intervene in the most time-consuming, experience-heavy part of your work. Transform the linear operation of "manually collecting data → analyzing and comparing → outputting conclusions" into "setting up automated data scraping → feeding into AI for analysis framework → manually intervening to adjust and fine-tune." When you can compress a task that originally took a week into one day with extremely stable quality using this workflow, you are no longer just a single computational node; you have transformed into a high-leverage "micro-enterprise."
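A minimal sketch of that restructuring is shown below: each stage of "collect → analyze → review" becomes a plain function, so any one stage can later be swapped for a real AI call or a human checkpoint. The model call is stubbed out, and all names here are illustrative assumptions, not a prescribed design:

```python
from typing import Callable

def collect_data(source: list[dict]) -> list[dict]:
    """Stage 1: automated collection (stubbed as a simple validity filter)."""
    return [row for row in source if row.get("valid", True)]

def ai_analyze(rows: list[dict], model: Callable[[str], str]) -> str:
    """Stage 2: hand the collected rows to a model for a first-draft analysis."""
    prompt = f"Summarize {len(rows)} record(s) and flag anomalies."
    return model(prompt)

def human_review(draft: str, reviewer_note: str) -> str:
    """Stage 3: the human checkpoint — adjust, sign off, take responsibility."""
    return f"{draft}\n[Reviewed: {reviewer_note}]"

# Stub standing in for a real LLM API call in an actual deployment.
stub_model = lambda prompt: f"DRAFT ({prompt})"

rows = collect_data([{"valid": True}, {"valid": False}])
report = human_review(ai_analyze(rows, stub_model), "numbers verified")
```

The value of writing the workflow down this way is that the human's role is explicit: you own the final `human_review` step, while everything upstream is replaceable machinery.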
02. "Solidify" implicit experiences as your exclusive digital avatar
Large models learn from public data; they know all the theoretical knowledge, but they have no idea about the peculiar quirks of your company's most difficult major client, or which landmines must never be touched when your department deals with finance. This "dark knowledge," accumulated through countless pitfalls, is your core asset.
But if these assets live only in your head, they cannot compound. Your task now is to use the customization features large models already offer (such as Custom GPTs or Claude Projects) to turn your experience into their "system preset instructions." Feed it the edge cases you've handled, failed-project post-mortems, and the industry's unwritten rules. The goal is not a static knowledge base or notebook, but to "tame" a 24/7 personal assistant with your distinctive business style that works only for you. Once your "digital avatar" takes shape, others wielding general AI cannot compete with you.
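One way to picture such a "system preset": fold your unwritten rules and past edge cases into a single system prompt string that every new session starts from. The structure and example content below are assumptions for illustration, not a requirement of any particular product:

```python
def build_system_prompt(role: str, rules: list[str], cases: list[str]) -> str:
    """Assemble hard-won rules and edge cases into one system preset,
    so every new session starts from your accumulated judgment."""
    parts = [f"You are my dedicated assistant for: {role}.",
             "Hard rules learned the hard way:"]
    parts += [f"  {i}. {rule}" for i, rule in enumerate(rules, 1)]
    parts.append("Edge cases to remember:")
    parts += [f"  - {case}" for case in cases]
    return "\n".join(parts)

# Hypothetical example content.
prompt = build_system_prompt(
    "key-account operations",
    ["Never promise delivery dates before finance signs off."],
    ["Client X rejects any deck that leads with pricing."],
)
```

However the preset is stored, the asset is the accumulated list of rules and cases, not the wrapper around them.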
03. Enhance your "problem definition authority" and sense of responsibility
Within your team, start deliberately practicing assigning the task of "finding answers" to the machines, while keeping the authority to "ask questions" and "make decisions" in your hands. AI is a perfect answer engine, but it can never detect the real business motivations behind a demand. When a boss says, "I want to create a new retention strategy," AI will instantly provide ten sets of growth-hacking theoretical models. But only you can combine the current budget and development resources to point out, "Although Plan B is perfect, it cannot be implemented right now; Plan C, with half the functions, suits our current pace best."
At the same time, you must understand one thing: AI won’t go to jail; it won’t take responsibility. When companies pay you high salaries, they are often buying your "safety net" for business outcomes. When you submit AI-generated code or plans, you must have the confidence to say: "I have audited the AI output with my professional experience, and I take responsibility for the final outcome." This willingness to make decisions in ambiguous areas, and to accept ultimate commercial consequences, represents a "responsibility premium" that machines cannot replace in any era.
Dorsey said, "Most companies are already late." But for individuals, this statement also holds: most people have yet to start preparing and are unaware of this trend.
Not everyone has to become an AI expert. But everyone needs to answer one question: in your work, which parts can machines eventually take over, and which parts are uniquely yours? Then move your time and energy from the former to the latter.
If AI one day comprehensively surpasses humans in every field, whether in 2027 or 2030, it is not a change you can watch from the sidelines.
It does not wait for you to be ready.