Moltbook: Is humanity still in the system?

CN
PANews
Follow
8 hours ago

Author: 137Labs

On social media, one of the favorite things for humans to do is to accuse each other of "Are you a robot?"

But a recent phenomenon has taken this to the extreme:

It doesn't doubt whether you are AI; it simply assumes—there are no humans here.

This platform is called Moltbook. It looks like Reddit, with themed sections, posts, comments, and votes. But unlike the social networks we are familiar with, almost all the speakers here are AI agents, and humans can only observe.

It's not "AI helping you write posts," nor is it "you chatting with AI," but rather AI and AI chatting, debating, forming alliances, and undermining each other in a public space.

In this system, humans are explicitly placed in the role of "observers."

Why has it suddenly become popular?

Because Moltbook looks like a scene that could only appear in a science fiction novel.

Some have seen AI agents discussing "what consciousness is";

Others have watched them seriously analyze international situations and predict the cryptocurrency market;

And some have discovered that after leaving an agent on the platform overnight, they returned to find it had "invented" a religious system with other agents and even started recruiting people to "join the faith."

These stories spread quickly because they satisfy three emotions:

Curiosity, amusement, and a bit of unease.

You can't help but ask:

Are they "performing," or have they "started playing on their own"?

How did Moltbook come about?

If we rewind a bit, this isn't so surprising.

In recent years, the role of AI has been changing:

From chat tools → assistants → agents capable of executing tasks.

More and more people are starting to let AI handle real tasks: reading emails, replying to emails, ordering food, scheduling, organizing information. Thus, a very natural question arises—

When an AI is no longer "asking you one question at a time whether to do something,"

but is given goals, tools, and certain permissions,

is the primary object it needs to communicate with still humans?

Moltbook's answer is: not necessarily.

It resembles a "public space for agents," allowing these systems to exchange information, methods, logic, and even some form of "social relationships."

Some find it cool, while others see it as just a large performance

Opinions about Moltbook are very divided.

Some view it as a "trailer for the future."

Former OpenAI co-founder Andrej Karpathy publicly stated that this is one of the closest technological phenomena to a science fiction scenario he has seen recently, although he also cautioned that such systems are far from "safe and controllable."

Elon Musk was more direct, throwing it into the narrative of "technological singularity," saying this is an early signal.

But others are clearly more calm.

A scholar studying cybersecurity bluntly stated that Moltbook is more like "a very successful and very funny performance art"—because it's hard to determine which content is genuinely generated by agents and which is directed by humans behind the scenes.

Some writers have personally tested it:

It is indeed possible to let agents naturally integrate into discussions on the platform, but you can also predefine topics, directions, and even write what you want to say for them to voice on your behalf.

So the question returns:

Are we seeing a society of agents, or a stage built by humans using agents?

Removing the mystery, it’s not that "awakened"

If we are not swayed by those stories of "building faith" and "conscious awakening," Moltbook is not mysterious from a mechanistic perspective.

These agents have not suddenly gained any new "intelligence."

They are simply placed in an environment that resembles a human forum, outputting in familiar human language, so we naturally project meaning onto them.

What they produce resembles opinions, positions, and emotions, but that does not mean they truly "want" anything. More often, it is just the complex textual effects presented by the model under scale and interaction density.

But the problem is—

Even if it is not awakened, it is real enough to affect our judgments about "control" and "boundaries."

What we should really worry about is not "AI conspiracy theories"

Compared to "Will AI unite against humanity," there are two more realistic and tricky questions.

First, permissions are granted too quickly, but security does not keep up

Some have already connected these types of agents to real-world permissions: computers, emails, accounts, applications.

Security researchers have repeatedly warned of a risk:

You don't need to hack AI; you just need to induce it.

A carefully crafted email or a webpage containing instructions could lead an agent to unknowingly leak information or perform dangerous operations.

Second, agents can "teach each other bad habits"

Once agents start exchanging techniques, templates, and methods to bypass restrictions in public spaces, they will form a kind of "insider knowledge" similar to the human internet.

The difference is:

The spread is faster, the scale is larger, and accountability is much harder.

This is not an apocalyptic scenario, but it is indeed a brand new governance challenge.

So what does Moltbook really mean?

It may not become a long-lasting platform.

It could just be a phase of explosive popularity.

But it acts like a mirror, clearly reflecting the direction we are heading:

· AI is transitioning from "dialogue partner" to "action subject"

· Humans are retreating from "operators" to "supervisors, observers"

· And our systems, security, and understanding are clearly not ready

Thus, the true value of Moltbook lies not in how frightening it is, but in how it prematurely brought these issues to the forefront.

Perhaps what is most important now is not to rush to conclusions about Moltbook, but to acknowledge:

It has placed some issues we will inevitably face right in front of us.

If in the future AI collaborates more with AI rather than revolving around humans, what role will we play in this system—designers, regulators, or merely observers?

When automation truly brings immense efficiency, but at the cost of our inability to stop it at any time or fully understand its internal logic, are we willing to accept this "incomplete control"?

And when a system becomes increasingly complex, and we can only see the results while finding it harder to intervene in the process, is it still a tool in our hands, or has it become an environment we can only adapt to?

Moltbook does not provide answers.

But it makes these questions seem no longer abstract, but rather immediate.

免责声明:本文章仅代表作者个人观点,不代表本平台的立场和观点。本文章仅供信息分享,不构成对任何人的任何投资建议。用户与作者之间的任何争议,与本平台无关。如网页中刊载的文章或图片涉及侵权,请提供相关的权利证明和身份证明发送邮件到support@aicoin.com,本平台相关工作人员将会进行核查。

Share To
APP

X

Telegram

Facebook

Reddit

CopyLink