Original Title: How To Reason About A Messy Future
Original Author: Systematic Long Short
Translation: Peggy, BlockBeats
Editor's Note: As AI begins to write code, optimize code, and even gradually take over the software production process, a deeper structural change is approaching: occupational division, corporate organization, and even knowledge barriers may be redefined.
The author of this article previously managed a team of nearly 20 people at a hedge fund but chose to leave during his career ascent to start a company. In his view, the real signal is not market sentiment, but the leap in technical capability. When models can reliably generate usable code and possess recursive improvement capabilities, the logic of software development and knowledge production has already begun to change.
This article analyzes several types of short-term "moats" that may still exist in the AI era from the perspective of quantitative finance, including proprietary data, regulatory friction, authoritative endorsement, and the lag of the physical world, while presenting a core judgment: in an era of high uncertainty, recognizing direction and taking action before the window closes is more important than precisely predicting the future.
Here is the original text:
When Models Start Writing Code, The Change Becomes Irreversible
I first realized that the industry was approaching a turning point during my last job, as if I could hear the background music slowing down while the people around me continued to pretend that nothing would change.
At that time, I was managing a team of nearly 20 people at a hedge fund, doing something I had been doing for many years. From the outside, it looked like a steadily rising career path. If I had stayed there, I would likely have achieved greater success. But in the end, I chose to leave that coveted position to start a company from scratch, with only a handful of people on the team. This decision was hardly understood at the time and was even considered a form of "career suicide."
However, in recent months, large-scale layoffs, voluntary exits to start businesses, and an increasing number of people working during the day and quietly coding projects at night have made that once "crazy" decision seem less outrageous.
During this period, many people have asked me: Where will all this ultimately lead? This article is the best answer I can currently provide.
To be frank, I am not sure how significant the changes will ultimately be. But one thing that quantitative finance taught me is that having the right direction is often enough.
What truly convinced me that the change was irreversible was OpenAI's o1 model.
Before that, I referred to these systems as "LLMs" rather than "AI." I didn't believe they possessed any real form of intelligence. But when o1 arrived, something changed: for the first time, these models could reliably generate code from structured prompts.
The code was still imperfect; there were hallucinations and misunderstandings. But the key point was that it could now write useful code.
My judgment is simple. Once AI can generate usable code, it will begin to recursively improve its own logic and drive software development at a speed we can hardly imagine.
Whenever I bring this up, someone counters with "that code still has bugs and is far from production-ready." But that overlooks a fact: human-written code has bugs too. AI does not need to write perfect code before we stop writing it ourselves.
The real turning point is when AI's error rate in writing code falls below that of humans while its speed far exceeds that of humans. At that moment, writing code will be completely outsourced to machines.
After witnessing the capabilities of o1 firsthand, I am almost certain: very drastic changes will occur in the future.
Moats Still Existing in the AI Era
Initially, I believed that AI would gradually erode the quantitative finance industry, but this process would be relatively slow. The reason is simple: institutional-grade code has almost no publicly available data for training.
I originally imagined software engineering as a pyramid: at the bottom is basic coding work; above that are senior engineers with architectural capabilities; further up are specialized developers, such as data scientists, quantitative developers, and various industry experts. In theory, the deeper the specialized knowledge, the safer the job.
My judgment at the time was: within two years, basic programmers would be the first to be eliminated; next would be senior engineers; and as models gradually absorb specialized knowledge, even higher-level positions would be impacted.
But I soon realized another thing: leading model companies would eventually directly hire industry experts to input specialized knowledge into the models. In other words, specialized knowledge will indeed become a short-term moat, but in the long term, it will also be gradually digested by the models.
At that time, there were several types of business unlikely to be easily disrupted in the next five years.
First: Proprietary Data
Companies with large amounts of proprietary data are harder to replace.
For example, large multi-strategy hedge funds (pod shops), such as Millennium, generate massive amounts of data daily: analyst research, investment suggestions, market judgments, actual transaction results.
This data can be used to continuously fine-tune models, forming advantages that are difficult to replicate externally. As long as a company's data sources are not easily accessible to the models, it still has a certain window of moat.
Second: Regulatory Friction
Industries that require extensive human approval are not easily disrupted quickly. For example, the traditional financial market.
To enter these markets, you need to open a brokerage account, obtain licenses, and sign cross-border legal documents. Trading cryptocurrency is easy, but a foreign company trying to trade iron ore in China faces a far more complex process.
As long as an industry still requires human signature and approval, its pace of change will be limited by those approval processes.
Third: Authority as a Service
Nowadays, getting AI to write a legal opinion is no longer a difficult task. But in reality, people are still willing to pay tens of thousands of dollars for a lawyer to issue a legal opinion. The reason is simple: AI's opinions currently lack authority.
Smart contract auditing follows the same logic. From a technical perspective, AI may have reached or even exceeded the level of top auditors. But the market still prefers to purchase the "seal" of well-known auditing firms.
Because what clients are really buying is not the opinion itself, but the authority behind the opinion.
Fourth: The Physical World
The speed of hardware advancement lags far behind software, and hardware issues are also harder to fix.
Therefore, industries that interact directly with the real world are unlikely to be rapidly disrupted by AI in the short term. However, once hardware capabilities catch up, the same logic will still hold: lower-level positions disappear first, followed by higher-level positions.
These moats do exist. But it must be acknowledged that they only delay change, not prevent it.
Act Based on Signals, Not Wait for Certainty
When the future is highly uncertain and changes rapidly, people typically make two types of mistakes.
The first is to wait for certainty to act. The second is to simply apply historical analogies, such as: "This is like the internet bubble."
Both approaches can lead to erroneous judgments.
In the absence of complete information, a more rational approach is to reason from first principles.
You don’t need to know every detail of the future. You only need a rough directional judgment and to design asymmetrical bets—meaning that if you judge wrong, the loss is manageable; if you're right, the gains are significant.
In an uncertain future, asymmetry is everything.
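The asymmetric-bet logic above can be made concrete with a toy expected-value calculation. All numbers below are invented for illustration; the point is only that a capped downside and a large upside can be positive in expectation even when the odds of being right are low.

```python
# Toy illustration of an asymmetric bet (all numbers are hypothetical).

def expected_value(p_win: float, gain: float, loss: float) -> float:
    """Probability-weighted payoff: gain with probability p_win, lose otherwise."""
    return p_win * gain - (1 - p_win) * loss

# Even with only a 20% chance of being right, a 10x upside against a
# capped 1x downside is still a positive-expectation bet:
ev = expected_value(p_win=0.2, gain=10.0, loss=1.0)
print(ev)  # 0.2 * 10 - 0.8 * 1 = 1.2
```

The design of the bet, not the accuracy of the forecast, is what carries the weight: you only need the loss to be bounded and the payoff to be large enough.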
A practical way to think is to first ask yourself, "What prerequisites must exist for a given outcome to happen?" and then ask whether those prerequisites have already appeared.
Looking back, this AI turning point was actually not difficult to foresee. Because the key inputs were already present: code that can write itself, models that can recursively improve, institutional knowledge that can be purchased rather than developed.
As long as you seriously observe these signals, you can roughly determine the direction of the future.
It is even possible to further deduce.
We may not have yet truly seen the following scenarios: AI can train itself, AI can replicate itself, AI operates completely autonomously.
If an AI can improve its own capability by 0.1% through a series of actions, that may not seem significant. But as long as that number is not zero, it compounds: each gain builds on the last, and the total grows geometrically.
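A quick sketch shows how small the per-step gain can be while still producing dramatic totals. The 0.1% figure is the author's hypothetical; the arithmetic below just compounds it.

```python
import math

# Hypothetical self-improvement rate per iteration (from the text: 0.1%).
RATE = 0.001

def capability_after(iterations: int, rate: float = RATE) -> float:
    """Relative capability after compounding `rate` once per iteration."""
    return (1 + rate) ** iterations

# At 0.1% per step, capability doubles roughly every 694 iterations...
doubling_steps = math.log(2) / math.log(1 + RATE)
print(round(doubling_steps))   # ~694

# ...and after 5000 iterations it is roughly 148x the starting level.
print(capability_after(5000))  # ~148
```

Nothing here depends on the exact rate; any strictly positive rate compounds the same way, which is why "not zero" is the condition that matters.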
In financial markets, once signals become obvious, trades are often already crowded.
In investing, you accept uncertainty in exchange for an early position. In careers and entrepreneurship, it is essentially the same.
So the real question is not, what will happen in the future? But rather, what do I already know? What direction does this information point to? What is the cost difference between acting now and waiting?
Another often overlooked fact is that action itself creates information.
Action does not occur in a vacuum. When you take action in the world, the world responds with feedback. This feedback generates new information. Information drives iteration. Iteration produces better actions. This is the basic mechanism of progress.
Remaining stagnant in uncertainty is a slow decline. Whereas action means exploration.
If I just want to continue reaping the benefits of the existing system, I could probably maintain that for a few more years. But I have always wanted to do something that truly belongs to me, and I feel that this window is rapidly closing.
Of course, the world's largest hedge funds will still do very well, as they possess proprietary data that is extremely difficult to replicate. The traditional financial market will also still be limited by regulation and manual processes.
But I believe these institutions will eventually replace most of their employees with AI, even including portfolio managers.
This won't happen immediately, but it will happen sooner or later.
My judgment at the time was that I probably had a 4-5 year window. Once the foundational model companies absorb enough industry talent, it will be increasingly difficult for new startups to enter the field. In some markets, like the US stock market, this trend is already very apparent. It is almost unimaginable what the efficiency will be a few years from now.
Soon, there will be no room for a "second place" in this world. I could continue to work for the top institutions, but I prefer to act in areas where I still have an advantage.
So I resigned and went all in on entrepreneurship. Later, this company evolved into OpenForage.
Now, the window is noticeably narrowing. The speed of change is no longer gradual. What used to take months of progress now only takes weeks.
I do not believe that jobs will completely disappear in the next few years. Humans still need humans. People are social animals, and currently, humanity still does not trust AI. Authority still needs to come from humans.
In the coming years, we may even see AI CEOs, but it is likely that a human CEO will still be needed to approve the AI's decisions. This kind of "human certification" will transmit layer by layer through the organizational structure. Human managers will manage a group of AI agents.
But the hiring logic will change: if a CEO finds it easier to give commands to AI than to you, you are very likely not to be hired, and basic coding work will become increasingly scarce.
If you want to make yourself irreplaceable, you need to do two things. First, outperform AI on longer time horizons: long-term strategic planning, complex decision-making, multi-year cycle management. Second, outperform AI in the scope of systems you can reason about. AI context is still limited; models know many facts but struggle to understand the chain reactions within complex systems.
If you can think long-term, quickly absorb information, make long-term decisions, and demonstrate good collaboration skills, then in the foreseeable future, you will still have work.
Before the turning point arrives, the signals can indeed be seen. It's just that most people do not look, do not act when they see, or only react when the signals become deafening. But by then, opportunities have often already been priced in by the market.
Do not underestimate the shifting ground beneath you; do not stay in a position that is losing its advantages while telling yourself to wait for a better time to act. Real opportunities rarely give advance notice. When everyone realizes it, the window is often already closed.
I saw the signals, and I made my bets. Now, I am living the results of that bet—whether good or bad.
Disclaimer: This article represents the personal views of the author only and does not represent the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice of any kind to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.
