I’ve been looking into celestia.org, and the protocol looks like a really interesting way of solving the data availability problem. It looks like for rollups, this could be used as-is to provide a way for the state Merkle tree to be reconstructed from transactions committed to Celestia. Users could simply send their transactions to Celestia, and the rollup service would just watch for events and create rollups periodically; this way you also get instant finality for your transactions. What do people think of this? I’m just wondering how mass exit could be implemented.
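For concreteness, here's a toy sketch of that watch-and-batch loop. Everything here is hypothetical: `NamespaceFeed` just stands in for whatever Celestia client API would stream blobs posted under the rollup's namespace, so this is the shape of the idea, not a real integration:

```python
from dataclasses import dataclass, field

@dataclass
class NamespaceFeed:
    """Stand-in for a stream of raw txs posted to a Celestia namespace (hypothetical)."""
    pending: list = field(default_factory=list)

    def submit(self, tx: bytes) -> None:
        # A user posts a tx directly to the DA layer.
        self.pending.append(tx)

    def drain(self) -> list:
        txs, self.pending = self.pending, []
        return txs

@dataclass
class RollupService:
    """Periodically pulls txs from the feed and seals them into rollup batches."""
    feed: NamespaceFeed
    batches: list = field(default_factory=list)

    def tick(self) -> None:
        txs = self.feed.drain()
        if txs:
            # A real service would also apply the txs to the state tree
            # and produce a proof here; we just record the batch.
            self.batches.append(txs)

feed = NamespaceFeed()
svc = RollupService(feed)
feed.submit(b"tx1"); feed.submit(b"tx2")
svc.tick()
feed.submit(b"tx3")
svc.tick()
print(svc.batches)  # [[b'tx1', b'tx2'], [b'tx3']]
```

The point of the shape: because the txs live on the DA layer first, anyone can re-run the same loop and rebuild the state tree, which is what makes the rollup operator replaceable.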
I’ve been looking at Celestia as well, but it’s still not clear to me how the DA problem on Mina is defined and how Celestia can help solve it. As an L1, Mina does not need to release all block data; it only needs to release a proof of DA, which it already does because all transactions are snarked (i.e. proved) before being put into blocks. Snapps are similar: the state changes will be sent as transactions, which will also be proved.
What’s a proof of data availability though? A proof that most of the network has that data? How does it work?
TL;DR: a data availability proof allows a node to have strong guarantees that the data behind a particular hash (think Merkle root of txs) has been published to the network in its entirety, without that node having to download the entire data.
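To get a feel for how strong that guarantee is, here's a back-of-the-envelope sketch. The ~1/4 hit probability follows the 2D Reed-Solomon construction in the LazyLedger/Celestia paper (a k×k data square extended to 2k×2k shares, so an adversary must withhold at least (k+1)² of the (2k)² shares to make the block unrecoverable); treat the exact parameters as my assumption, not something from this thread:

```python
# Toy model of data availability sampling confidence.
# Each uniformly random sample hits a withheld share with probability
# p >= (k+1)^2 / (2k)^2, which approaches 1/4 for large k. So after s
# samples, the chance an unavailable block goes undetected is <= (3/4)^s.

def min_samples(target_confidence: float, p_hit: float = 0.25) -> int:
    """Smallest s such that 1 - (1 - p_hit)**s >= target_confidence."""
    s = 0
    miss = 1.0  # probability all samples so far missed the withheld shares
    while 1.0 - miss < target_confidence:
        miss *= (1.0 - p_hit)
        s += 1
    return s

print(min_samples(0.99))      # 17 samples for ~99% confidence
print(min_samples(0.999999))  # 49 samples for six nines
```

So a light node downloading a few dozen random shares gets near-certainty that the whole block is out there, which is the entire trick.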
Why? Proving a tx is included in a block is easy: this can be done with Merkle proofs. Proving the absence of a tx, however, is not easy (and at the very least we need this to make sure there are no double-spends). For that we need to be aware of all txs that make up the tree, i.e. the complete list of txs. But we don’t want to download all txs because that’s resource-heavy. Data availability sampling allows nodes to download (in addition to the header, as they normally do) a tiny bit of the txs, while making sure the complete list of txs has been published and is therefore known to the network. This assures them that txs within the block haven’t been withheld, which in turn assures that anyone:
can access the complete list of txs (assuming the web never forgets valuable data) → reconstruct the state from this data → become a block producer → generate zk / fraud proofs for light nodes to verify
Now it’s no longer a problem if there is a single giant centralized block producer (zk or fraud prover), because light nodes are assured that new block producers can opt in and continue building the chain from where it left off.
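The "inclusion is easy" part above can be made concrete with a minimal Merkle proof sketch (plain SHA-256 over concatenated children; real chains add domain separation, namespaced leaves, etc., so this is just the core idea):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd tail node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
proof = merkle_proof(txs, 2)
print(verify(root, b"tx2", 2, proof))  # True
print(verify(root, b"txX", 2, proof))  # False
```

Note what this can and can't do: the proof shows `tx2` is under the root, but no similar short proof shows a tx is *absent* — for that you need the whole list, which is exactly why availability of the full data matters.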
For an in-depth understanding of Celestia, I’d suggest my report: https://members.delphidigital.io/reports/pay-attention-to-celestia/