Vitalik, 12/6: "Endgame" (the endgame of blockchain scaling)

2024-02-05 12:50
Tags: blockchain, scaling, endgame

This post reprints Vitalik Buterin's 12/6 essay "Endgame", on the endgame of blockchain scaling, as a reference for interested developers.

Endgame (original text)

Special thanks to a whole bunch of people from Optimism and Flashbots for discussion and thought that went into this piece, and Karl Floersch, Phil Daian and Alex Obadia for feedback and review.

Consider the average “big block chain” - very high block frequency, very high block size, many thousands of transactions per second, but also highly centralized: because the blocks are so big, only a few dozen or few hundred nodes can afford to run a fully participating node that can create blocks or verify the existing chain. What would it take to make such a chain acceptably trustless and censorship resistant, at least by my standards?

Here is a plausible roadmap:

  • Add a second tier of staking, with low resource requirements, to do distributed block validation. The transactions in a block are split into 100 buckets, with a Merkle or Verkle tree state root after each bucket. Each second-tier staker gets randomly assigned to one of the buckets. A block is only accepted when at least 2/3 of the validators assigned to each bucket sign off on it. (A toy sketch of this scheme, together with the availability sampling from the third bullet, follows this list.)
  • Introduce either fraud proofs or ZK-SNARKs to let users directly (and cheaply) check block validity. ZK-SNARKs can cryptographically prove block validity directly; fraud proofs are a simpler scheme where if a block has an invalid bucket, anyone can broadcast a fraud proof of just that bucket. This provides another layer of security on top of the randomly-assigned validators.
  • Introduce data availability sampling to let users check block availability. By using DAS checks, light clients can verify that a block was published by only downloading a few randomly selected pieces.
  • Add secondary transaction channels to prevent censorship. One way to do this is to allow secondary stakers to submit lists of transactions which the next main block must include.
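
To make the first and third bullets concrete, here is a minimal Python sketch (mine, not from the essay): transactions split into 100 buckets with a root computed after each, second-tier stakers randomly assigned to buckets, a 2/3 sign-off rule per bucket, and a light client sampling random chunks for availability. The function names, the toy Merkle construction, and the seeded `random.Random` standing in for an unbiasable randomness beacon are all illustrative assumptions, not a real protocol.

```python
import hashlib
import random

BUCKETS = 100        # buckets per block, per the first bullet
THRESHOLD = 2 / 3    # fraction of assigned stakers that must sign off

def merkle_root(items: list[bytes]) -> bytes:
    """Toy Merkle root: hash leaves pairwise until one hash remains."""
    layer = [hashlib.sha256(x).digest() for x in items] or [b"\x00" * 32]
    while len(layer) > 1:
        if len(layer) % 2:                     # duplicate an odd last leaf
            layer.append(layer[-1])
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

def bucket_roots(txs: list[bytes]) -> list[bytes]:
    """Split the block's transactions into contiguous buckets and
    compute a state-root stand-in after each bucket."""
    size = max(1, -(-len(txs) // BUCKETS))     # ceiling division
    return [merkle_root(txs[i:i + size]) for i in range(0, len(txs), size)]

def assign_stakers(stakers: list[str], seed: int) -> dict[int, list[str]]:
    """Randomly assign each second-tier staker to one bucket."""
    rng = random.Random(seed)                  # stand-in for a beacon
    assignment: dict[int, list[str]] = {b: [] for b in range(BUCKETS)}
    for s in stakers:
        assignment[rng.randrange(BUCKETS)].append(s)
    return assignment

def block_accepted(signatures: dict[int, set[str]],
                   assignment: dict[int, list[str]]) -> bool:
    """Accept a block only if >= 2/3 of the stakers assigned to
    every bucket signed off on that bucket."""
    for bucket, assigned in assignment.items():
        if not assigned:
            continue                           # a real protocol would re-sample
        signed = len(signatures.get(bucket, set()) & set(assigned))
        if signed < THRESHOLD * len(assigned):
            return False
    return True

def das_check(request_chunk, total_chunks: int, samples: int = 30) -> bool:
    """Data availability sampling (third bullet): fetch a few random
    chunks. With erasure coding, a withheld block fails most samples,
    so ~30 successes give high confidence the block was published."""
    return all(request_chunk(random.randrange(total_chunks)) is not None
               for _ in range(samples))
```

The point of the per-bucket 2/3 rule is that no single bucket can be waved through by a handful of colluding stakers, even though no individual validator ever checks the whole block.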

What do we get after all of this is done? We get a chain where block production is still centralized, but block validation is trustless and highly decentralized, and specialized anti-censorship magic prevents the block producers from censoring. It’s somewhat aesthetically ugly, but it does provide the basic guarantees that we are looking for: even if every single one of the primary stakers (the block producers) is intent on attacking or censoring, the worst that they could do is all go offline entirely, at which point the chain stops accepting transactions until the community pools their resources and sets up one primary-staker node that is honest.

Now, consider one possible long-term future for rollups…

Imagine that one particular rollup - whether Arbitrum, Optimism, Zksync, StarkNet or something completely new - does a really good job of engineering their node implementation, to the point where it really can do 10,000 transactions per second if given powerful enough hardware. The techniques for doing this are in-principle well-known, and implementations were made by Dan Larimer and others many years ago: split up execution into one CPU thread running the unparallelizable but cheap business logic and a huge number of other threads running the expensive but highly parallelizable cryptography. Imagine also that Ethereum implements sharding with data availability sampling, and has the space to store that rollup’s on-chain data between its 64 shards. As a result, everyone migrates to this rollup. What would that world look like?
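
As a rough sketch of that split (not Dan Larimer's actual implementation), the pattern looks like this: the expensive but embarrassingly parallel signature checks fan out over many threads, while the cheap, order-dependent business logic runs serially on one. The transaction format and the placeholder validity check are assumptions; a real node would call native crypto libraries that release Python's GIL, or avoid Python entirely.

```python
from concurrent.futures import ThreadPoolExecutor

def verify_signature(tx: dict) -> bool:
    """Expensive but highly parallelizable work (placeholder for real
    signature verification done by a native, GIL-releasing library)."""
    return tx.get("sig") == "valid"

def apply_tx(state: dict, tx: dict) -> None:
    """Cheap but order-dependent business logic: must run serially."""
    state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]

def execute_block(state: dict, txs: list[dict]) -> None:
    # Phase 1: verify all signatures in parallel across many threads.
    with ThreadPoolExecutor() as pool:
        ok = list(pool.map(verify_signature, txs))
    # Phase 2: apply surviving transactions on one thread, preserving
    # the block's transaction order.
    for tx, valid in zip(txs, ok):
        if valid:
            apply_tx(state, tx)
```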

Once again, we get a world where block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented. Rollup block producers have to process a huge number of transactions, and so it is a difficult market to enter, but they have no way to push invalid blocks through. Block availability is secured by the underlying chain, and block validity is guaranteed by the rollup logic: if it’s a ZK rollup, it’s ensured by SNARKs, and an optimistic rollup is secure as long as there is one honest actor somewhere running a fraud prover node (they can be subsidized with Gitcoin grants). Furthermore, because users always have the option of submitting transactions through the on-chain secondary inclusion channel, rollup sequencers also cannot effectively censor.
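
The two validity models in that paragraph can be reduced to a schematic sketch; the field names and the seven-day challenge window below are illustrative assumptions, not any particular rollup's parameters.

```python
from dataclasses import dataclass

CHALLENGE_WINDOW = 7 * 24 * 3600   # e.g. a 7-day window, in seconds

@dataclass
class Batch:
    kind: str            # "zk" or "optimistic"
    posted_at: int       # unix time the batch was posted on-chain
    proof_valid: bool    # result of SNARK verification (zk only)
    fraud_proven: bool   # set if anyone submitted a fraud proof

def batch_is_final(batch: Batch, now: int) -> bool:
    """When can a user trust a rollup batch?"""
    if batch.kind == "zk":
        # Validity rollup: a SNARK proves correctness up front, so
        # finality does not depend on anyone watching the chain.
        return batch.proof_valid
    # Optimistic rollup: valid by default, but only after the challenge
    # window passes with no fraud proof -- which is why one honest
    # fraud prover somewhere is enough for security.
    return (now >= batch.posted_at + CHALLENGE_WINDOW
            and not batch.fraud_proven)
```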

Now, consider the other possible long-term future of rollups…

No single rollup succeeds at holding anywhere close to the majority of Ethereum activity. Instead, they all top out at a few hundred transactions per second. We get a multi-rollup future for Ethereum - the Cosmos multi-chain vision, but on top of a base layer providing data availability and shared security. Users frequently rely on cross-rollup bridging to jump between different rollups without paying the high fees on the main chain. What would that world look like?

It seems like we could have it all: decentralized validation, robust censorship resistance, and even distributed block production, because the rollups are all individually small and so easy to start producing blocks in. But the decentralization of block production may not last, because of the possibility of cross-domain MEV. There are a number of benefits to being able to construct the next block on many domains at the same time: you can create blocks that use arbitrage opportunities that rely on making transactions in two rollups, or one rollup and the main chain, or even more complex combinations.
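
A toy numerical example of that advantage: a builder who constructs the next block on both domains can execute both legs of an arbitrage atomically at the top of each block, bearing no inventory or latency risk between them. The prices, size, and fees below are invented for illustration.

```python
def cross_domain_arb_profit(price_a: float, price_b: float,
                            size: float, fee_a: float, fee_b: float) -> float:
    """Profit from buying on the cheaper domain and selling on the
    dearer one, when the same builder makes *both* next blocks and
    can place the two legs atomically at the top of each."""
    buy, sell = min(price_a, price_b), max(price_a, price_b)
    return (sell - buy) * size - fee_a - fee_b

# A builder controlling only one domain bears execution risk on the
# second leg; a builder controlling both does not -- which is the
# centralizing pressure described above.
print(cross_domain_arb_profit(100.0, 101.5, 10.0, 1.0, 1.0))   # 13.0
```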

Hence, in a multi-domain world, there are strong pressures toward the same people controlling block production on all domains. It may not happen, but there’s a good chance that it will, and we have to be prepared for that possibility. What can we do about it? So far, the best that we know how to do is to use two techniques in combination:

  • Rollups implement some mechanism for auctioning off block production at each slot, or the Ethereum base layer implements proposer/builder separation (PBS) (or both). This ensures that at least any centralization tendencies in block production don’t lead to a completely elite-captured and concentrated staking pool market dominating block validation. (A toy auction sketch follows this list.)
  • Rollups implement censorship-resistant bypass channels, and the Ethereum base layer implements PBS anti-censorship techniques. This ensures that if the winners of the potentially highly centralized “pure” block production market try to censor transactions, there are ways to bypass the censorship.
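
A minimal sketch of the auction idea from the first bullet, with PBS reduced to its core: the proposer does not build the block itself, it just sells the slot to the highest-bidding specialized builder. Real PBS designs add commit-reveal, relays, and payment rules that this toy omits, and the names below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    builder: str      # a specialized block builder
    payment: int      # amount offered to the proposer for the slot
    block: bytes      # the block the builder wants included

def choose_block(bids: list[Bid]) -> Bid | None:
    """Proposer-side logic: accept the highest bid. Building can then
    centralize while proposing (and validation) stays decentralized."""
    return max(bids, key=lambda b: b.payment, default=None)

bids = [Bid("builder-a", 5, b"..."), Bid("builder-b", 9, b"...")]
winner = choose_block(bids)
print(winner.builder)   # builder-b: the highest payment wins the slot
```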

So what’s the result? Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.

So what does this mean?

While there are many paths toward building a scalable and secure long-term blockchain ecosystem, it’s looking like they are all building toward very similar futures. There’s a high chance that block production will end up centralized: either the network effects within rollups or the network effects of cross-domain MEV push us in that direction in their own different ways. But what we can do is use protocol-level techniques such as committee validation, data availability sampling and bypass channels to “regulate” this market, ensuring that the winners cannot abuse their power.

What does this mean for block producers? Block production is likely to become a specialized market, and the domain expertise is likely to carry over across different domains. 90% of what makes a good Optimism block producer also makes a good Arbitrum block producer, and a good Polygon block producer, and even a good Ethereum base layer block producer. If there are many domains, cross-domain arbitrage may also become an important source of revenue.

What does this mean for Ethereum? First of all, Ethereum is very well-positioned to adjust to this future world, despite the inherent uncertainty. The profound benefit of the Ethereum rollup-centric roadmap is that it means that Ethereum is open to all of the futures, and does not have to commit to an opinion about which one will necessarily win. Will users very strongly want to be on a single rollup? Ethereum, following its existing course, can be the base layer of that, automatically providing the anti-fraud and anti-censorship “armor” that high-capacity domains need to be secure. Is making a high-capacity domain too technically complicated, or do users just have a great need for variety? Ethereum can be the base layer of that too - and a very good one, as the common root of trust makes it far easier to move assets between rollups safely and cheaply.

But also, Ethereum researchers should think hard about what levels of decentralization in block production are actually achievable. It may not be worth it to add complicated plumbing to make highly decentralized block production easy if cross-domain MEV (or even cross-shard MEV from one rollup taking up multiple shards) make it unsustainable regardless.

What does this mean for big block chains? There is a path for them to turn into something trustless and censorship resistant, and we’ll soon find out if their core developers and communities actually value censorship resistance and decentralization enough for them to do it!

It will likely take years for all of this to play out. Sharding and data availability sampling are complex technologies to implement. It will take years of refinement and audits for people to be fully comfortable storing their assets in a ZK-rollup running a full EVM. And cross-domain MEV research too is still in its infancy. But it does look increasingly clear how a realistic but bright future for scalable blockchains is likely to emerge.
