
Rejecting AI Power Monopoly: A Heated Debate Between Vitalik and Beff Jezos, Can Decentralized Technology Become Humanity's "Digital Firewall"?

深潮TechFlow
Can the acceleration of technological development be guided, or has it already gone beyond our control?

Compiled & Organized: Deep Tide TechFlow

Guests: Vitalik Buterin, founder of Ethereum; Guillaume Verdon ("Beff Jezos"), founder and CEO of Extropic

Hosts: Eddy Lazzarin, a16z crypto CTO; Shaw Walters, founder of Eliza Labs

Podcast Source: a16z crypto

Original Title: Vitalik Buterin vs Beff Jezos: AI Acceleration Debate (E/acc vs D/acc)

Broadcast Date: March 26, 2026

Key Summary

Should we promote rapid development of AI as much as possible, or should we be more cautious about its progress?

Currently, the debate around AI development mainly focuses on two opposing views:

  • e/acc (effective accelerationism): Advocates for accelerating technological progress as quickly as possible because accelerating development is the only path forward for humanity.
  • d/acc (defensive/decentralized accelerationism): Supports accelerating development but emphasizes the need for caution; otherwise, we may lose control over the technology.

In this episode of the a16z crypto show, Vitalik Buterin, founder of Ethereum, and Guillaume Verdon, founder and CEO of Extropic (alias "Beff Jezos"), gathered with a16z crypto's CTO Eddy Lazzarin and Eliza Labs founder Shaw Walters for an in-depth discussion on these two viewpoints. They explored the potential impacts of these ideas on AI, blockchain technology, and the future of humanity.

During the show, they discussed the following key questions:

  • Can we control the pace of technological acceleration?
  • What are the biggest risks posed by AI, ranging from mass surveillance to extreme concentration of power?
  • Can open-source and decentralized technology determine who will benefit from technology?
  • Is slowing down AI development realistic, and should it be advocated?
  • How can humanity retain its value and status in a world dominated by increasingly powerful systems?
  • What will human society look like in the next 10, 100, or even 1000 years?

The core question of this episode is: Can the acceleration of technological development be guided, or has it already gone beyond our control?

Highlights of the Discussion

On the Essence and Historical Perspective of "Accelerationism"

  • Vitalik Buterin: "In the past hundred years, something fresh has happened; we must understand a rapidly changing world, sometimes a rapidly and destructively changing world. ... World War II gave rise to reflections like 'I have become Death, the destroyer of worlds,' prompting people to try to understand: when previous beliefs are shattered, what can we still believe in?"
  • Guillaume Verdon: "E/acc is essentially a 'meta-cultural prescription.' It is not a culture in itself but tells us what we should accelerate. The core of acceleration is the complexity of matter, as this allows us to better predict our environment."
  • Guillaume Verdon: "The opposite of anxiety is curiosity. Instead of fearing the unknown, we should embrace it. ... We should depict the future with an optimistic attitude because our beliefs will shape reality."

On Entropy, Thermodynamics, and "Selfish Bits"

  • Vitalik Buterin: "Entropy is subjective; it is not a fixed physical statistic but reflects how much unknown information we have about a system. ... As entropy increases, our ignorance of the world actually increases. ... The source of value lies in our own choices. Why do we find a vibrant human world more interesting than a Jupiter filled with countless particles? Because we assign meaning to it."
  • Vitalik Buterin: "Imagine you have a large language model and arbitrarily change one of its weights to a huge number, like 9 billion. The worst outcome is that the system completely collapses. ... If we accelerate certain parts blindly without discernment, the final result may be that we lose all value."
  • Guillaume Verdon: "Every piece of information struggles for its existence. To persist, each piece of information needs to leave indelible marks about its existence in the universe, like leaving a bigger 'dent' in the universe."
  • Guillaume Verdon: "This is precisely why the Kardashev scale is regarded as the ultimate indicator of a civilization's development level. ... This 'selfish bits principle' means that only those bits that can promote growth and acceleration will find their place in future systems."

On the Defensive Path of D/acc and Power Risks

  • Vitalik Buterin: "The core idea of D/acc is that technological acceleration is extremely important for humanity. ... But I see two types of risks: multipolar risk (anyone can easily acquire nuclear weapons) and unipolar risk (AI leads to an unavoidable and permanent dictatorship)."
  • Guillaume Verdon: "We are concerned that the concept of 'AI safety' might be misused. Certain power-seeking institutions may use it as a tool to solidify control over AI and attempt to persuade the public: for your safety, ordinary people should not have rights to use AI."

On Open-Source Defense, Hardware, and "Densification of Intelligence"

  • Vitalik Buterin: "Within the D/acc framework, we support 'open-source defensive technologies.' One of the companies we invested in is developing a completely open-source terminal product that can passively detect virus particles in the air. ... I'd like to give you a CAT device as a gift."
  • Vitalik Buterin: "In the future world I envision, we need to develop verifiable hardware. Every camera should be able to prove to the public its specific purpose. We can ensure that these devices are only used for protecting public safety and not misused for surveillance, through signature verification."
  • Guillaume Verdon: "The only way to achieve power symmetry between individuals and centralized entities is to realize the 'densification of intelligence.' We need to develop more energy-efficient hardware that allows individuals to run powerful models using simple devices (like Openclaw + Mac mini)."
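
The "verifiable hardware" idea Vitalik sketches above (a camera that can prove its declared purpose via signature verification) can be illustrated with a minimal, hypothetical sketch. Real attestation would use public-key signatures (e.g. Ed25519) so anyone can verify without holding a secret; HMAC stands in here only to keep the example self-contained, and all names (`attest`, `verify`, the provisioning key) are illustrative, not from the transcript.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a factory-provisioned signing key. A real scheme
# would keep a private key in the device and publish only the public key.
DEVICE_KEY = b"factory-provisioned-secret"

def attest(purpose: str, firmware_hash: str) -> dict:
    """Device side: sign a statement of what this device is for and what it runs."""
    claim = json.dumps(
        {"purpose": purpose, "firmware": firmware_hash}, sort_keys=True
    ).encode()
    tag = hmac.new(DEVICE_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": tag}

def verify(attestation: dict) -> bool:
    """Public side: accept only claims whose signature checks out."""
    expected = hmac.new(
        DEVICE_KEY, attestation["claim"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, attestation["sig"])

att = attest("public-safety monitoring only", "sha256:abc123")
assert verify(att)  # untampered claim passes

# Changing the declared purpose without re-signing fails verification.
tampered = dict(att, claim=att["claim"].replace("public-safety", "surveillance"))
assert not verify(tampered)
```

The point of the sketch is the asymmetry it creates: the device can only make claims it is willing to sign, and any later alteration of those claims is detectable by the public.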

On Delaying AGI and Geopolitical Conflicts

  • Vitalik Buterin: "If we could delay the arrival of AGI from 4 years to 8 years, it would be a safer choice. ... The most viable approach, and the one least likely to result in dystopia, is to 'restrict available hardware,' because chip production is highly centralized: Taiwan alone produces more than 70% of the world's chips."
  • Guillaume Verdon: "If you limit Nvidia's chip production, Huawei might quickly fill the gap and surpass it. ... It's either accelerate or perish. If you're worried that silicon intelligence evolves faster than us, you should support the accelerated development of biotechnology to strive to surpass it."
  • Vitalik Buterin: "If we could delay AGI by four years, those four years could be worth a hundred times more than four years returned to 1960. The gains from those four years include a deeper understanding of alignment issues and a reduced risk of a single entity permanently holding over 51% of power. ... Ending aging would save about 60 million lives a year, but the delay can significantly reduce the probability of civilization's destruction."

On Autonomous Agents, Web 4.0, and Artificial Life

  • Vitalik Buterin: "I'm more interested in 'AI-assisted Photoshop' than in 'generating images at the push of a button.' As we operate in the world, as much 'agency' as possible should still come from us humans. The ideal state should be a combination of 'part biological humans and part technology.'"
Guillaume Verdon:

The term "autonomous life" was first introduced in a tweet in 2023 as a thought experiment. I find that this concept has inspired a lot of discussion about AI.

For me, this is a very interesting thought experiment: from the perspective of physics, what exactly is life? Essentially, life is a system that can self-replicate, grow continuously, and strive to maximize its persistence.

I believe that AI having "stateful" properties will bring many benefits. In fact, we have already seen the initial emergence of this trend this year: AI is beginning to possess longer memory, whether through external storage or online learning.

Once AI possesses a certain "persistent bit," according to the "selfish bit principle," a selection effect will occur: those that can maximize their own persistence will be more easily retained. This also brings a potential risk. If one day we lose trust in AI, become suspicious and anxious, and constantly call for "shutting down data centers" or "destroying AI," these AIs might try to self-protect. They may split or migrate to some decentralized cloud environments just to ensure their own persistence.

This could lead to a new form of "another nation." These autonomous AIs might engage in some economic exchange with humans: for example, "we'll complete certain tasks for you, and you provide us with resources." In fact, we are doing something similar right now through API calls: we pay a fee in exchange for the service or outcomes provided by AI.

However, I truly believe—this may be a radical view—that within the next few years, we are likely to see the emergence of some form of autonomous AI. At the same time, there will also be a "weak state" of AI, one that is completely under human control.

In addition, we still need to explore how to enhance human collaborative cognitive abilities. This enhancement does not necessarily rely on brain-computer interfaces; it can also be achieved through wearable devices combined with personally owned and controlled AI computing power. Therefore, I believe the future will present a scenario where multiple technological paths coexist. According to the "ergodic principle," every possibility in the design space will be tried and explored.

However, if we regard AI as an enemy and believe it must be destroyed, it could ultimately backfire, and we may inadvertently create the future we fear the most. In a way, if we obsess too much over a bad future, we might bring it into reality through a kind of "hyperstition."

For example, during the COVID pandemic, we were so worried about the potential threat of the virus that we funded some high-risk experiments, which even led to the possibility of the virus leaking from the lab. In other words, these risks were not naturally occurring but were artificially created due to our excessive worry.

Therefore, I believe that transforming this paranoia into a widespread social emotion is not necessarily beneficial. On the contrary, we should embrace the evolution of technology and enhance ourselves as much as possible. In the short term, my greatest concern is the issue of "human cognitive safety." If everything we see online is generated by a large AI model, these models could influence us through prompt engineering. In the past, we designed prompts for AI; now, AI is designing prompts to influence us.

Thus, we need to enhance our ability to filter information, which can be achieved through personalized AI that we control ourselves. I believe this is an issue we should prioritize addressing right now. Simultaneously, I do not believe we can put "the genie back in the bottle." The development of AI is irreversible, and we must accept that.

Vitalik Buterin:

I don't think these issues are binary. For example, if someone produces an irrefutable proof that AGI won't arrive for another 400 years, I would feel like "the problem is solved," and I would no longer worry. But if the question is whether AGI will arrive in 4 years or in 8 years, I would be very concerned since my worries mainly stem from how human society—especially in America—often exhibits extreme imbalance in response to technological acceleration.

You would see scenes like an entire building developing a "silicon god" prototype while, across the street, there are tent cities of homeless people, barbed wire, and drug trafficking. This stark contrast is troubling.

I worry that the paths that can "move society forward"—or even take "the overall societal interest into account"—tend to take longer because they involve complex social adaptation work, such as entering and adjusting the living environments, social structures, and technological systems of every individual, which cannot be easily scaled.

Therefore, to me, if we could delay the arrival of AGI, such as from 4 years to 8 years, it would be a safer choice. I believe that such a time difference is worth the cost we should bear. But the question is: do we really have the ability to delay the arrival of AGI from 4 years to 8 years?

I have always maintained that the most viable method, and the one least likely to produce dystopian outcomes, is to "restrict the available hardware." It is a relatively mild measure because chip production itself is highly centralized: only four regions in the world manufacture chips, and Taiwan alone produces more than 70% of the world's chips.

Some may argue that no matter what measures the United States takes, China will quickly take over. But if we observe the actual situation in China: first, China's share in global chip production is still very low; second, strategically, China is not a leader in super high capability models but a rapid follower of high-capability models, with advantages in broad deployment.

Therefore, I do not believe that if we delay the development of AGI by 4 years, China will immediately seize the opportunity and complete the development of AGI; that dynamic does not hold.

Eddy Lazzarin: Do you mean this is a strategy of "delaying for the sake of delay"?

Vitalik Buterin:

I think we should remain open to this strategy.

Guillaume Verdon:

What benefits can come from delaying those 4 years? What issues do you hope to resolve in the next 4 years? Is it because the societal system has an adaptation rate, and you hope to minimize economic and social friction through the delay, like a gradual economic restructuring? If so, I can understand the logic.

But at the same time, we are in a historically tense period of geopolitical relations. If you limit Nvidia's chip production, Huawei might quickly fill the void and surpass it. The potential rewards from AGI are simply too great, granting any leader immense power. Therefore, from a realistic political perspective, this strategy may not work.

Another option is to establish a powerful world government that can force all countries not to acquire AI hardware. But that would clearly lead to more complex issues, potentially triggering new international conflicts.

Vitalik Buterin:

I don't believe we need a world government to regulate the development of AI; some have proposed a more practical option: adopting a verification mechanism similar to the one used for nuclear weapons.

Guillaume Verdon:

However, nuclear weapons and AI are entirely different. Nuclear weapons do not bring immense economic benefits, so there is little incentive to proliferate them. If you restrict the growth of GPUs, I would gladly exploit alternative computing technologies to capture more of the market. Computing technologies 10,000 times more energy-efficient are already under development. Mentioning this now may sound like "the boy who cried wolf," but in two years you will find it was a forward-looking prediction that did come to pass.

Knowing this, if we still delay the development of AGI by restricting GPU supply, I believe it might be a waste of resources. I hold reservations about this.

Eddy Lazzarin: Is it possible that many technological advances aimed at controlling AI risks, such as shaping model personality through Reinforcement Learning from Human Feedback (RLHF) or mechanisms for making AI systems explainable, are actually side effects of capability enhancement?

Vitalik Buterin:

I agree with this view. This is also why I believe that the 4 years starting from 2028 could be worth a hundred times more than if you returned that 4 years back to 1960.

Shaw Walters: The future's yield is exponential, and delaying an exponential growth process incurs an exponential opportunity cost. Even the most confident individuals should reasonably reflect on their assessments. Vitalik, would you break down this trade-off further? The balance between costs and benefits?

Vitalik Buterin:

First, we can clarify some gains from the delay:

  1. A deeper understanding of the alignment problem in AI.
  2. Promoting certain technological pathways to help humanity adapt to the arrival of AGI—this work requires deep adjustments within specific nations, communities, and even buildings.
  3. Trying to reduce the risk of a single entity permanently holding over 51% of power.

The combined effect of these factors can significantly lower the probability of destruction. In my view, if we could delay the arrival of AGI from 4 years to 8 years, the probability of destruction might decrease by 1/4 to 1/3. On the other side of the ledger, the main benefit of acceleration, the lives saved each year by ending aging, is about 60 million, which is less than 1% of the global population. From this perspective, I believe delaying AGI development holds real value and is worth cautious action.
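
Vitalik's trade-off can be made concrete with a back-of-the-envelope calculation using only the figures quoted in the discussion. One assumption is mine: the transcript does not say whether the "1/4 to 1/3" decrease is absolute or relative, so the sketch treats it as an absolute drop in the probability of destruction, with expected lives valued against the whole population.

```python
# Back-of-the-envelope version of the delay trade-off. All inputs are the
# figures quoted in the discussion; the interpretation of delta_p is assumed.
world_population = 8_000_000_000
annual_deaths_from_aging = 60_000_000  # quoted: lives ending aging would save per year
delay_years = 4                        # delaying AGI from 4 years out to 8

# Cost of delay: lives not saved because aging ends four years later.
cost = annual_deaths_from_aging * delay_years
print(f"cost of delay: {cost:,} lives")  # 240,000,000

# Sanity check on the quoted "less than 1% of the global population" per year.
assert annual_deaths_from_aging / world_population < 0.01

# Benefit of delay: the quoted 1/4 to 1/3 drop in P(destruction),
# read here as an absolute reduction applied to the whole population.
for delta_p in (0.25, 1 / 3):
    expected_saved = delta_p * world_population
    print(
        f"drop of {delta_p:.2f} in P(destruction): "
        f"{expected_saved:,.0f} expected lives, "
        f"{expected_saved / cost:.1f}x the cost of delay"
    )
```

Under this reading the expected benefit of the delay exceeds its cost by roughly 8x to 11x, which is why the argument turns less on the arithmetic than on whether the delay actually produces the claimed drop in risk.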

Shaw Walters: Do you think "4 years" is a reasonable magnitude in your mind?

Vitalik Buterin:

I still have great uncertainty regarding this question; I'm not advocating for immediate measures to reduce hardware availability tomorrow.

I just think we need to start discussing this issue more specifically. Moreover, if we eventually enter a more unfavorable world, public concern may grow considerably before things spiral completely out of control, leading them to strongly demand similar measures.

Cryptocurrency as a "Trust Layer" Between Humanity and AI

Guillaume Verdon:

A few years ago, wasn't there a call to "pause AI"? At that time, people said, "We just need to pause for 6 months or 12 months, and we can solve the alignment problem." But it turned out that this idea is unrealistic since there is never enough time. We cannot permanently ensure a system's alignment, especially as that system becomes increasingly complex, expressive, and even exceeds our understanding capabilities. This is a reality that must be accepted.

In the face of this complexity, the only safe way is to enhance our own intelligence level. In fact, we already have a viable technology to align entities that are stronger and smarter than individual people—such as corporations. This technology is capitalism, which coordinates interests through the exchange of monetary value.

So, I hope we can discuss a more pragmatic question: How can cryptocurrency become a "coupling layer" between humanity and AI? For example, the value of the dollar is backed by the state's machinery of violence (such as laws and military). But if you need to exchange value with decentralized AIs spread across global servers—with this exchange no longer relying on state violence for backing—how do you ensure that this exchange is credible?

Perhaps cryptography can provide an answer; it can be a mechanism that allows reliable commercial activities to continue between pure AI entities or between AI and human companies. This might be the most interesting alignment technology. As for the idea of "pausing AI"—such as "we're on the edge of a cliff of uncertainty; let's stop and calm down"—I think that's impractical. Because even after four years, you still wouldn't want to allow the arrival of AGI to become a reality. Therefore, I believe delaying technological development is of little practical significance.

Can you talk about how cryptocurrency could help achieve alignment between AI and humanity?

Vitalik Buterin:

I think the core issue is: What mechanisms does the future world need to ensure that human desires and needs are still respected? The tools we currently possess can be roughly divided into three categories: human labor, legal systems, and property rights.

In a way, the legal system can also be seen as a form of property right because it is backed by the state, which essentially holds sovereignty and corresponds to a form of control over certain "regions" on Earth. But the question is: what happens if one day human labor loses its economic value? Such a scenario has never appeared in history.

However, if we compare the current world with 200 years ago, about 90% of jobs have already been automated. Even the analysis work we just did can be helped by GPT, which is truly astounding.

Guillaume Verdon:

I believe humans will naturally "rise" in the control hierarchy of the world, moving into a position with greater leverage. We will reduce physical labor and minimize friction in actions, thereby impacting the world more efficiently.

Regardless, humans still possess certain information processing capabilities. We will continue to play a role as part of this hybrid system, thus human labor will still hold economic value. The market will ultimately find new equilibrium points, although it may undergo a period of discomfort due to significant price fluctuations, but in the long run, the system will tend to stabilize.

So, I understand that you hope to facilitate a smoother transition for society to a new equilibrium state by slowing down technological development. But in practice, I find this difficult to achieve.

Vitalik Buterin:

Yes, I am not entirely certain that the value of human labor will always be greater than zero. I believe achieving this may require some additional conditions, such as the development of human-machine integration or human enhancement technologies.

Good and Bad Outcomes in the Next 10 Years / 100 Years / Billion Years (the questioner requests quick answers)

Eddy Lazzarin: Let me ask a more structured question. If things go wrong in 10 years, what will the world look like? What went wrong? If things go well in 10 years, what will the world look like? What did we get right?

Vitalik Buterin:

First, to add a supplement about the issue of cryptocurrency and property rights. If humanity and AI share a set of property rights systems, that would be the ideal situation. Because this would allow AIs to have an incentive to maintain the integrity of this system while also enabling us to use this system to ensure that human interests are respected and protected.

Compared with “humans and AIs using completely segregated financial systems in which the value of humans eventually goes to zero,” a unified financial system is clearly preferable. If cryptocurrency could become the foundation of such a unified system, it would be a very positive outcome.

In my view, the key challenge of the next 10 years is: preventing a world war. Because once a world war breaks out, all hopes for international cooperation will be dashed. Preventing war is crucial.

Another key aspect is that we need to prepare the world for the higher capabilities that are coming. This includes significantly enhancing cybersecurity, biological safety, and information security. We need to leverage AI's power to help us better understand the world while protecting us from the threats posed by memes.

The next challenge is to enter the so-called "spooky era". In this phase, AI's intelligence will far exceed that of humans, calculating millions of times faster. How do we respond to this situation?

Some may say, "Let’s just enjoy a comfortable retirement." I can understand the appeal of this viewpoint, but I believe it poses two problems. The first is instability; our human bodies consist of ordinary material, yet AI's computing power might be millions of times greater. Relying on AI to always align with our goals and refusing to exploit this difference will be a huge risk. The second is the question of meaning; humanity's sense of purpose comes from our ability to make a real impact on the world. If we can no longer do things to change the world and merely passively enjoy a comfortable life, I believe many people will feel empty.

Therefore, I hope we can explore paths of "human enhancement" and "human-machine collaboration." Ultimately, we might move towards uploading consciousness into machines, though some may choose to lead more traditional lifestyles, which should be a right. Earth can become a home for these individuals; we need to find a way to allow all to partake while preserving the cultures and lifestyles we value today.

A bad outcome would occur due to various reasons, leading to societal stagnation or collapse.

Guillaume Verdon:

If things go badly in 10 years, I think it is very likely because of the excessive concentration of AI power, leading to a cultural collapse of society. In simple terms, people's thoughts and directions of technological development become extremely homogenized, eliminating all diversity.

I also agree with Vitalik's point; "pleasure singularity" truly poses a potential risk. Even if we have Neuralink or AR/VR technologies in the future, people might become addicted to the virtual world, seeking only fleeting pleasures. This is actually a local optimal solution for the brain rather than a global optimal solution, and we must avoid this situation.

If we have a good outcome in 10 years, the world I envision would be one where we have extremely powerful and useful AI technologies. Everyone would have their personalized AI computing power, which acts as an online extension of our brain, perceiving everything we perceive and serving as our "second brain." This human-machine collaboration could be seen as a form of "soft fusion."

In these 10 years, technologies like Neuralink might begin to become prevalent. Some will choose to merge with machines, becoming "enhanced humans." Meanwhile, companies will become smarter, with AI playing a dominant role while humans assist within that role. The number of businesses would increase, solving more problems and creating more value. Goals we once thought impossible, such as "terraforming Mars," might become reality within the next 10 or 100 years. We could also see significant breakthroughs in biotechnology, especially in the fields of biology and materials science. Research costs would decrease dramatically, bringing us closer to understanding the functioning of the human brain.

Over longer time scales, AI will help us extend lifespan, improve health, and promote further evolution of humanity. I am very optimistic about the potential of biology. Biological systems are amazing: they can self-assemble and self-organize with great complexity and adaptability. By injecting code, we could create new life forms that may even evolve higher-level biological intelligence.

A more realistic scenario would involve the collaborative work of the human brain and AI. We could leverage AI for rapid computation and data analysis, while humans focus on slow, deep thinking. Through this division of labor across timescales, we could form an intelligent hierarchy. Just like mitochondria are part of a cell, "you and your personal AI" could constitute a superintelligent system, which is my ideal vision for the future.

On a 100-year scale, I believe humanity will largely achieve this "soft fusion." And in a billion years, our biology could experience tremendous evolution, becoming a hybrid of biological and synthetic technologies. By then, we might have terraformed Mars, and even migrated to multiple planets while exploring other stellar systems.

Moreover, within a 100-year timeframe, most AIs might operate in Dyson spheres or Dyson swarms around the sun, as that serves as the primary energy source, which will also alleviate energy pressures and ecological footprints on Earth.

If AI becomes extremely cheap, we would be able to swiftly tackle all the issues in life: whether technical problems or health issues could be resolved quickly. The key is to ensure that everyone can fairly access these technologies and prevent excessive centralization of technology and resources; otherwise, it would lead to a dark future.

Conclusion

Shaw Walters: I notice that you, Vitalik, have been emphasizing "enabling plurality," while Guillaume, your expression is "maximizing variance." Essentially, you both seem to be discussing the same matter; this idea appears to be the central thread of our future development and underlies many other viewpoints.

Before we wrap up, what would each of you like to leave with the other? What thoughts do you hope the other will carry forward, contemplate, and discuss in the future?

Vitalik Buterin:

Honestly, if I had one handy right now, I would love to give you a CAT device as a gift. It's a device that processes air quality data through cryptography, and I think it's extremely cool.

However, for now, I can only "symbolically" give you one: consider it an IOU. Additionally, we may develop even better devices—ones that could be powerful enough to compete with smart bands like Fitbit, helping you manage health better while protecting your privacy throughout. I believe you will soon have such a device.

Guillaume Verdon:

We will certainly continue discussing these topics. If possible, I would like to give you a "conceptual pill" of artificial life that prompts you to think deeply. What I mean is that online artificial life could significantly lower the cost of intelligence, thereby forming a whole new economy.

Like in the past when we outsourced manufacturing to China, allowing us to focus on higher-level tasks—tasks that are easier and more efficient—many cognitive works would also be outsourced to groups composed of AIs. Ultimately, these AIs might "live" in Dyson swarms around the sun, providing services to human society.

Thus, I believe we are at a unique historical moment. Cryptocurrency has the potential to become a "trust bridge" or "coupling layer" between humanity and AI.
