L1 revenue and value accumulation, L2, blob fees, L1 gas limit targets, the risk of large companies taking over Ethereum, Pectra upgrade…
Compiled & organized by: KarenZ, Foresight News
On February 25, the Ethereum Foundation research team held the 13th AMA on Reddit. Foresight News reviewed over 300 comments, compiling and summarizing the main points from Vitalik Buterin and members of the Ethereum Foundation research team. The discussions mainly covered L1 revenue and value accumulation, L2, blob fees, L1 gas limit targets, the risk of large companies taking over Ethereum, and the progress of the Pectra upgrade, among other future plans.
Fees
Question: The blob fee model seems somewhat unsatisfactory, as it overly simplifies the situation by setting the minimum fee to the Ethereum minimum (1 Wei) present in the protocol. Considering how the pricing mechanism of EIP-1559 works, we might see no blob fees for a long time during the aggressive expansion of blobs. This seems less than ideal; we should incentivize the use of blobs but not allow them to be used for free on the network. Given this, is there a plan to rebuild the blob fee model? If so, in what way? What alternative fee mechanisms or adjustments are being considered?
Vitalik Buterin: I do think we should keep the protocol simple, avoid over-adapting to short-term situations, and coordinate the logic of gas and blobs in the gas market. Ethereum Improvement Proposal 7706 (EIP-7706) has this as one of its two main focuses (the other focus is to add an independent gas dimension for calldata).
Ansgar Dietrichs: Max Resnick proposed a possible solution in EIP-7762. This proposal suggests setting the minimum fee at a sufficiently low level so that it effectively amounts to zero cost during non-congested periods, but high enough to allow for quicker fee increases when demand rises. This proposal was brought up relatively late in the development cycle of the Pectra hard fork, and implementing it could risk delaying the hard fork. We submitted this matter to RollCall #9 to assess whether the issue is serious enough to justify a potential delay in the hard fork; related content can be found at: https://github.com/ethereum/pm/issues/1172. The feedback we received indicates that L2 no longer sees this as an urgent issue. Based on this feedback, we decided to maintain the current model in the Pectra hard fork. However, if there is sufficient demand in the ecosystem, this could still be a viable feature option for future hard forks.
Dankrad Feist: Concerns about blob fees being too low are greatly exaggerated and are short-sighted. However, in the short term, I do think setting a higher minimum price for blobs would be a better choice.
Justin Drake: Yes, EIP-7762 could increase MIN_BASE_FEE_PER_BLOB_GAS from 1 wei to something higher, like 2**25 wei.
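To make the floor concrete, here is a small Python sketch of the EIP-4844 blob base fee formula, using the `fake_exponential` helper and constants from that EIP. This is illustrative only, not client code:

```python
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei; the floor EIP-7762 would raise
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # controls how fast the fee reacts

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), from EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int, floor: int = MIN_BASE_FEE_PER_BLOB_GAS) -> int:
    return fake_exponential(floor, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)

# With a 1 wei floor, fees stay negligible until excess blob gas accumulates
# over many consecutive above-target blocks:
print(blob_base_fee(0))               # 1 wei
# A 2**25 wei floor shifts the whole curve up, so congestion pricing starts
# from a non-trivial base:
print(blob_base_fee(0, floor=2**25))  # 33554432 wei, about 0.03 gwei
```

Because the fee grows exponentially from the floor, a near-zero floor means many blocks of sustained congestion must pass before blob fees become non-negligible, which is exactly the concern EIP-7762 targets.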
Question: What are the Ethereum Foundation's plans for improving scalability and reducing main network transaction fees in the coming years?
Vitalik Buterin:
Expand L2: More blobs (e.g., PeerDAS in Fusaka).
Continue improving interoperability and cross-L2 user experience (e.g., see the recent Open Intents framework).
Moderately increase L1 gas limits: The basic rationale is outlined here.
Ethereum's Value Accumulation and Price Issues
Question: L2 scaling has led to significant losses in value accumulation for L1, which has also affected ETH. Besides "L2 will ultimately destroy more ETH and conduct more transactions," what plans do you have to address this issue?
Justin Drake: Blockchains (whether L1 or L2) typically have several sources of revenue. The first is congestion fees, i.e., "base fees." The second is competitive fees, i.e., "MEV" (maximal extractable value).
Let's first discuss competitive fees. In my view, as modern applications and wallet designs evolve, MEV will increasingly flow upstream and be recaptured by applications, wallets, and/or users. Ultimately, almost all MEV will be recaptured by entities closer to the traffic initiators, while downstream infrastructures like L1 and L2 can only obtain a small portion from competitive fees. In other words, in the long run, chasing MEV may be futile for L1 and L2.
What about congestion fees? For Ethereum L1, the historical bottleneck has been EVM execution. Constraints on consensus participants, such as disk I/O and state growth, are the key drivers for keeping execution gas limits small. With modern blockchain designs that scale using SNARKs or fraud-proof games, we will increasingly live in a world where execution is no longer scarce. The bottleneck then shifts to data availability (DA), which is inherently scarce because Ethereum validators run on limited home internet connections, and DAS can only provide a roughly linear increase in scalability of about 100 times, unlike fraud proofs and SNARKs, which offer nearly unlimited scalability improvements.
Therefore, let's delve into DA economics, which I believe is the only sustainable source of revenue for L1. EIP-4844 significantly increased DA supply through blobs, which took effect less than a year ago. The chart titled "Average Number of Blobs per Block" in the dashboard clearly shows the growth of blob demand over time (which I believe is primarily driven by induced demand), with demand gradually increasing from 1 blob per block to 2 blobs, and then to 3 blobs per block. We are now saturating blob supply, but we are only at the early stages of blob price discovery, with low-value "garbage" transactions being gradually squeezed out by economically denser transactions.
If DA supply remains unchanged for a few months, I expect hundreds of ETH to be burned daily due to DA. However, Ethereum L1 is currently in "growth mode," and the Pectra hard fork (to be launched in a few months) will increase the target number of blobs per block from 3 to 6. This surge in DA supply should suppress the blob fee market, and demand will take months to catch up again. With the full implementation of danksharding in the coming years, there will be a cat-and-mouse game between DA supply and demand.
What will long-term equilibrium look like? My argument has not changed since the 2022 Devcon talk "Super Robust Money." In the long run, I expect DA demand to exceed supply. In fact, supply is fundamentally limited by consensus participants running on home internet connections, which I believe is equivalent to about 100 home internet connections' worth of DA throughput, insufficient to meet global demand, especially since humans always find creative ways to consume more bandwidth. In about 10 years, I expect Ethereum to reach 10 million TPS (about 100 transactions per person per day), which would yield $1 billion in revenue daily, even with each transaction costing as little as $0.001.
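The arithmetic behind the $1 billion figure is straightforward; as a sanity check, using the numbers from the estimate above:

```python
# Back-of-the-envelope check of the long-term DA revenue estimate.
tps = 10_000_000         # projected transactions per second in ~10 years
seconds_per_day = 86_400
fee_per_tx_usd = 0.001   # "as little as $0.001" per transaction

daily_revenue_usd = tps * seconds_per_day * fee_per_tx_usd
print(f"${daily_revenue_usd:,.0f}/day")  # $864,000,000/day, roughly $1B
```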
Of course, DA revenue is just one part of ETH's long-term value accumulation. Two other important considerations are issuance and monetary premium.
Dankrad Feist: All blockchains face value accumulation issues, and there is no perfect solution. The execution layer performs slightly better than the data layer because it can extract priority fees that reflect transaction urgency, while the data layer only charges fixed fees. My answer to value accumulation is first to create value. While creating value, we should maximize those future charging opportunities. This means maximizing the value of Ethereum's data layer, enhancing Ethereum's overall value, thus eliminating the need for alternative data availability (alt DA); scaling L1 to allow high-value applications to truly run on L1; and encouraging projects like EigenLayer to expand the use of Ethereum as (non-financial) collateral.
Question: If Ethereum's price falls below a certain level, will the economic security of ETH be threatened?
Justin Drake: If we want Ethereum to truly withstand attacks (including those from nation-states), high economic security is crucial. Currently, Ethereum has about $80 billion in economic security (slashable), the largest among all blockchains (33,644,183 ETH staked, with the current ETH price at $2,385). In contrast, Bitcoin's economic security is about $10 billion (non-slashable).
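The quoted security figures multiply out as stated; a quick check of the numbers above:

```python
# Verifying the quoted slashable economic security figure.
staked_eth = 33_644_183
eth_price_usd = 2_385
security_usd = staked_eth * eth_price_usd
print(f"${security_usd / 1e9:.1f}B")  # $80.2B, the "about $80 billion" cited
```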
Question: What is the ticker?
Justin Drake: At least for me, it's ETH. I also hold some BTC, mainly for sentimental reasons, as a collectible.
L2 Aspects
Question: Regarding L2 interoperability, many websites (like Aave, Uniswap) and wallets (like MetaMask, Trust Wallet) now have increasingly long dropdown menus to select different L2 networks, leading to a poor user experience. When can we expect to see these dropdown menus completely disappear?
Vitalik Buterin: I hope that chain-specific addresses can reduce the need for such dropdown menus in many scenarios. You could paste an address like eth:ink:0x12345…67890, and the application would immediately know you want to interact with Ink and execute the corresponding actions in the backend. In many scenarios, this is a more application-specific issue that requires finding best practices to minimize user exposure to this complexity. Another long-term possibility is better cross-L2 interoperability, allowing more DeFi applications to run on just one major L2.
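As an illustration of how a wallet might consume such addresses, here is a small parser sketch. The three-part eth:chain:account format is an assumption based on the example above, not a finalized standard:

```python
def parse_chain_address(addr: str) -> tuple[str, str, str]:
    """Split an address like 'eth:ink:0x1234abcd' into (namespace, chain, account).

    The colon-separated format here is illustrative only; wallets would use
    whatever chain-specific address standard the ecosystem settles on.
    """
    parts = addr.split(":")
    if len(parts) != 3 or not parts[2].startswith("0x"):
        raise ValueError(f"not a chain-specific address: {addr!r}")
    namespace, chain, account = parts
    return namespace, chain, account

# A wallet receiving this pasted address knows to route the action to Ink:
print(parse_chain_address("eth:ink:0x1234abcd"))  # ('eth', 'ink', '0x1234abcd')
```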
Question: Given the sentiment in the Ethereum community, do you still firmly believe that focusing on L2 solutions is the winning choice? If you could go back in time, would you make any changes?
Ansgar Dietrichs: In the long run, Rollup remains the only principled method to scale blockchain to the level required for a global economic foundation. Looking back, I believe we have not invested enough energy in the path to achieving this ultimate goal and the intermediate user experience. Even in a Rollup-centric world, L1 still needs to scale significantly (as Vitalik recently outlined). We should recognize that advancing L2 work while simultaneously pushing forward L1 scaling paths will provide better value for users during the transition period.
My view is that Ethereum has become somewhat complacent due to not facing strong competitors for a long time. The more intense competition we see now highlights some misjudgments and forces us to deliver a better overall "product" (rather than just theoretically correct first-principle solutions). But to reiterate, some form of Rollup is crucial for achieving the "scaling endgame." The specific architecture is still evolving—for example, Justin's recent explorations of native Rollup indicate that specific methods are still changing—but the overall direction is clearly correct.
Dankrad Feist: I disagree with this answer in some respects. If you define Rollup as merely "validated scaling of DA and execution," then how is it different from execution sharding? In fact, we view Rollup more as "white-label Ethereum." To be fair, this model has unlocked a lot of energy and funding; if we had focused solely on execution sharding in 2020, we would not have made the progress we now see in zkEVM and interoperability research. Technically, we can build anything we want right now: a highly scalable L1, a more scalable sharded blockchain, or a foundational layer for Rollup. In my view, Ethereum's best option is to combine the first and third.
Future Plans and Discussions
Question: What types of applications will be designed for Ethereum in the short term (less than 1 year), 1 to 3 years, and beyond 4 years?
Ansgar Dietrichs: This is a broad question, so I will provide a (very) partial answer, focusing on broader trends.
I firmly believe that we are currently at a critical turning point in crypto history. We are emerging from a long "sandbox" phase, during which cryptocurrencies primarily focused internally—building internal tools, creating infrastructure, and developing foundational modules like DeFi, but with limited connections to the real world. All of this is very important and valuable, but the impact on the real world has been minimal.
The current moment aligns with both technological maturity (though there is still work to be done, we have roughly mastered how to build infrastructure that supports billions of users) and a positive shift in the regulatory environment of the largest market (the U.S.). Overall, I believe it is time for Ethereum and cryptocurrency as a whole to emerge from the sandbox phase.
This transition will require a fundamental shift across the entire ecosystem. The best articulation of this challenge I have encountered is the "Real World Ethereum" vision proposed by DC Posch: https://daimo.com/blog/real-world-ethereum. The core theme is to focus on building real products for people in the real world, using cryptocurrencies as facilitators rather than the selling point itself. Importantly, all of this still retains our core crypto values.
Currently, the main type of real-world product is stablecoins (which started earlier due to fewer regulatory restrictions), along with some smaller "real-world impact" success stories like Polymarket. In the short term, I expect stablecoins to leverage this first-mover advantage to further scale and gain importance.
In the medium term, I expect real-world activities to diversify further: other real-world assets (such as stocks, bonds, and any assets that can be represented on-chain). In addition to assets, I predict we will see many new types of activities and products (for example, mapping business processes onto the chain, governance, and further new mechanisms like prediction markets).
All of this will take time, but the energy invested here will pay off in the long run. Focusing too much on continuing "sandbox" activities (e.g., meme coins) may show more appeal in the short term, but as real-world Ethereum takes off, it may risk being left behind.
Carl Beekhuizen: Overall, we focus on scaling the entire tech stack rather than designing for specific applications. The overarching theme is scaling: how do we build the most powerful platform while maintaining decentralization and censorship resistance?
In the short term (1 year), the main focus is on launching PeerDAS, which will allow us to significantly increase the number of blobs in each block. We are also improving the EVM: we hope to launch EOF as soon as possible. A lot of research is being invested in statelessness, EOF, gas repricing, and ZK-ifying (zero-knowledge proofing) the EVM, among other areas.
In the next 1 to 3 years, we will further scale blob throughput and launch some of the research projects listed earlier, including further developing the zkEVM (zero-knowledge proof EVM) initiative, such as ethproofs.org.
Looking ahead to 4 years and beyond, our vision is to add a series of extensions to the EVM (L2 will also adopt and accelerate), blob throughput will increase significantly, and we will improve censorship resistance (e.g., through FOCIL), and further accelerate everything through some ZK (zero-knowledge proofs).
Question: There is a view that the Ethereum mainnet should one day be ossified, with innovation happening at the L2 level, yet we keep seeing new research (execution tickets, APS, one-time signatures, and so on) that the Ethereum Foundation is also promoting, which is great: the competitive environment keeps changing, and in my experience digital products are "never finished." In other words, how likely is it that adjustments will still be needed after Vitalik's roadmap and the beacon chain are fully implemented?
Vitalik Buterin: Ideally, we can separate the parts that can be solidified from those that need to evolve continuously. We have already done this to some extent by separating execution and consensus (with bolder advances in consensus, including Justin Drake's recent proposal for a comprehensive upgrade to the beacon chain). I expect these specifications to continue to evolve. Furthermore, I believe that for many technical issues, we have seen "light at the end of the tunnel," as the pace of research has indeed slowed compared to about 5 years ago, with recent focus more on incremental improvements.
Question: Vitalik commented in a recent article about Verge: We will soon face a decision point, choosing between the following three options: (i) Verkle trees, (ii) STARK-friendly hash functions, (iii) conservative hash functions. Has a decision been made on which path to take?
Vitalik Buterin: Discussions are still ongoing. My personal impression is that, over the past few months, the atmosphere has slightly leaned towards (ii), but no decision has been made yet. I also think it is worth considering these options in the context of the overall roadmap they will become a part of. Specifically, the most realistic options for me seem to be:
Option A:
2025: Pectra, possibly EOF
2026: Verkle
2027: L1 execution optimization (e.g., delayed execution, multidimensional gas, repricing)
Option B:
2025: Pectra, possibly EOF
2026: L1 execution optimization (e.g., delayed execution, multidimensional gas, repricing)
2027: Initial push for Poseidon.
2028: Increasingly more stateless clients over time.
Option B is also compatible with conservative hash functions; however, in this case I still prefer a gradual rollout, because even if the hash function is less risky than Poseidon, the proof system will still carry higher risk at the start.
Justin Drake: As Vitalik said, discussions are still ongoing. That said, the long-term fundamentals clearly point to (ii). In fact, (i) lacks post-quantum security, while (iii) is inefficient.
Question: What recent progress has been made regarding VDF?
Dmitry Khovratovich: A paper in 2024 revealed potential attacks on the candidate VDF MinRoot, indicating that computations can be accelerated on multi-core machines, breaking its sequentiality. Currently, there is a lack of efficient and secure VDF solutions (efficiency refers to the ability to compute on small hardware, and security means it cannot be accelerated), and there is also a lack of reliable VDF candidates. Therefore, research and application of VDF have been temporarily shelved.
Question: Is there a willingness to scale Ethereum 100x next year? How acceptable are simple parameter adjustments to the protocol, for example cutting block time to a third, doubling block limits, raising gas targets, or increasing the number of blobs?
Francesco D'Amato: Expanding Ethereum overall by 100 times is unrealistic, but I believe that expanding blob throughput by 100 times compared to before 4844 is possible. EIP-4844 has already brought about a 3 times expansion, Pectra is expected to bring another 2 times expansion, and Fusaka aims for a 4 to 8 times expansion. This means we still need to expand by another 2 to 4 times. I believe we definitely have ways to achieve this.
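Those multipliers compose as follows; a quick sanity check of the 100x path described above:

```python
# Cumulative blob-throughput scaling relative to the pre-EIP-4844 baseline,
# using the multipliers from the answer above.
baseline = 1
after_4844 = baseline * 3      # EIP-4844: ~3x
after_pectra = after_4844 * 2  # Pectra: blob target 3 -> 6, ~2x
after_fusaka = [after_pectra * 4, after_pectra * 8]  # Fusaka aims for 4-8x

print(after_fusaka)  # [24, 48] times the pre-4844 baseline
# Reaching 100x from there needs roughly another 2x to 4x:
print([round(100 / x, 1) for x in after_fusaka])  # [4.2, 2.1]
```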
Question: What features will the Fusaka and Glamsterdam upgrades include?
Barnabé Monnot: Fusaka seems to primarily focus on PeerDAS, which is crucial for scaling L2, and few people want to delay the delivery of Fusaka for other features. Personally, I am very hopeful to see FOCIL and Orbit in Glamsterdam, which will pave the way for us to move towards SSF (single-slot finality). The above content focuses more on the consensus layer (CL) and data availability (DA), but in Glamsterdam, the execution layer (EL) should also strive to push L1 scaling, and there are currently many discussions ongoing about which feature set is most suitable.
Question: Is it possible to "force" L2s to adopt Stage 1 (or even Stage 2) decentralization through an EIP, given their slow progress in decentralizing?
Vitalik Buterin: Native Rollup (such as EXECUTE precompile) has achieved this to some extent. L2 can still freely choose to ignore this feature and write its own code, even adding its own backdoors, but they will have access to a simple and highly secure proof system that is directly part of L1, so those seeking EVM compatibility for L2 are likely to choose this option.
Question: After Fusaka/Glamsterdam, which research might be ready for development upgrades?
Toni Wahrstätter: PeerDAS is being worked on intensively, along with some proposals such as EOF, FOCIL, ePBS, SECP256r1 precompile, and delayed execution. PeerDAS is now ready to be included in the Fusaka upgrade, and there seems to be broad consensus on its urgency. The other proposals mentioned above may all be candidates for the Glamsterdam upgrade, but which specific EIPs will be included in the upgrade has not yet been finalized.
Question: Vitalik has written about proposed measures to take in the event of a quantum emergency. How will we determine that we are in a quantum emergency?
Vitalik Buterin: In reality, it combines media, expert opinions, and market predictions from Polymarket regarding when a "real" (i.e., capable of breaking 256-bit elliptic curve encryption) quantum computer will appear. If the timeline is within 1-2 years, that would definitely count as an emergency; if it's around 2 years, while not urgent, it would still be pressing enough to make us set aside other priorities on the roadmap to integrate all anti-quantum technologies into the live protocol.
Question: What is the Gas limit target for L1 in 2025?
Toni Wahrstätter: There are many different views on the Gas limit, but it ultimately boils down to a key question: should we scale Ethereum L1 by raising the Gas limit, or should we focus on L2 and enable more data blobs through advanced technologies like DAS (Data Availability Sampling)?
Vitalik recently published a blog post discussing the possibility of moderately scaling L1, where he listed reasons why raising the Gas limit might make sense. However, increasing the Gas limit also comes with trade-offs: higher hardware requirements; growth of state and historical data; bandwidth.
On the other hand, Ethereum's Rollup-centric scaling vision aims for greater scalability without increasing node hardware requirements. Technologies like PeerDAS (short-term) and full DAS (medium-term) are expected to release significant scaling potential while keeping resource demands reasonable.
That said, if after the Pectra hard fork in April, validators push the Gas limit up to 60 million, I wouldn't be surprised. But in the long run, the main focus of scaling may shift to DAS-based solutions rather than just increasing the Gas limit.
Question: If the Ethereum beam client experiment (or whatever it ends up being called) is successful, and we have several operational Ethereum beam client implementations in 2-3 years, will we need to go through a phase where the current PoS and beam PoS run in parallel, both earning staking rewards, similar to the PoW + PoS parallel we experienced before the PoS transition?
Vitalik Buterin: I think we can proceed with an immediate upgrade. The reason we needed two chains to run in parallel during the merge is:
PoS was overall untested, and we needed time for the entire PoS ecosystem to start and run long enough to have confidence in switching to it.
PoW could experience reorgs, and the switching mechanism needed to be robust against this.
By contrast, PoS has finality, and most of the infrastructure (like staking) will carry over. Therefore, we can directly perform a large hard fork to switch the validation rules from the beacon chain to the new design. Perhaps at the exact moment of the switch the economic finality guarantees will not be fully met, but to me that is a small and acceptable cost.
Question: The Ethereum Foundation has launched a $2 million academic grant program for 2025. What specific research areas are prioritized? How does the Foundation plan to integrate academic research outcomes into the broader Ethereum development roadmap?
Fredrik Svantes: Here is a wish list: https://www.notion.so/efdn/17bd9895554180f9a9c1e98d1eee7aec.
Some research directions of interest to the protocol security team include:
P2P Security: Many vulnerabilities we have found relate to denial-of-service attack vectors at the network layer (e.g., libp2p or devp2p), so improving security in this area will be very valuable.
Fuzzing: We are currently fuzzing the EVM, consensus layer clients, etc., but there are definitely more areas to explore (e.g., network layer).
Understanding the risks of Ethereum's current reliance on supply chains.
How to leverage LLMs (large language models) to improve protocol security (e.g., code audits, automated fuzzing tools, etc.).
Others
Question: What applications do you most want to see in the Ethereum ecosystem?
Toni Wahrstätter: In my view, Ethereum application developers are doing an excellent job of identifying and meeting users' actual needs—even if L1 or L2 may not yet be fully prepared to support certain applications. I am particularly interested in those applications that combine self-custody with privacy, and there are already some fantastic solutions. Two standout examples are Umbra and Fluidkey, both of which leverage stealth addresses to bring more privacy to everyday user interactions. Additionally, applications like Railgun, Tornado Cash, and Privacy Pools provide significant value by enhancing on-chain privacy. Returning to your question, I hope to see more wallets making privacy the default setting rather than requiring users to opt-in, while still maintaining an excellent user experience (which is harder than people think).
Question: Aren't you concerned about the risk of large companies taking over Ethereum?
Vitalik Buterin: Yes, this is definitely an ongoing concern, and I believe the role of the Ethereum Foundation should be to actively address these risks. The goal is to maintain the neutrality of Ethereum, not the neutrality of the Ethereum Foundation—usually, the two are aligned, but sometimes they can be misaligned, and when that happens, we should prioritize the former. The main risks I see currently are concentrated in the L2 and wallet layers, as well as staking and custodial service providers. The Ethereum Foundation has recently begun to intervene in the first two areas, promoting the adoption of interoperability standards. That said, we absolutely have the opportunity to more actively mitigate risks, and we are exploring various options.
Question: Why is the Ethereum Foundation (EF) always so opaque? There is a lack of transparency and accountability to the community.
Justin Drake: What would you like to know? The Ethereum Foundation research team has two AMAs each year and provides a complete list of 40 researchers on Research.Ethereum.Foundation. Our research is public, such as on Ethresear.ch.
Question: What are your thoughts on the future of hardware wallets?
Justin Drake: In the future, most hardware wallets will run in mobile enclaves (rather than as standalone devices like Ledger USB). Through account abstraction, infrastructure like passkeys can already be utilized. I hope to see native integration (e.g., in Apple Pay) within ten years.
Vitalik Buterin: Hardware wallets need to be "truly secure" in several key areas:
Secure hardware: Built on an open-source and verifiable hardware stack (e.g., see IRIS) to reduce the following risks: (i) deliberately set backdoors; (ii) side-channel attacks.
Interface layer security: Hardware wallets should provide sufficient transaction information to prevent the connected computer from deceiving you into signing something you do not want to sign.
Broad availability: Ideally, we can manufacture a device that is both a cryptocurrency hardware wallet and a secure device for other uses, which will encourage more people to actually buy and use it rather than forgetting about its existence.