Hello Mina Community! I’m Christian, a Protocol Engineer and Tech Lead at o1Labs. I’m here to share a new MIP proposal that’s part of the upcoming Mesa Upgrade.
This proposal introduces a fully automated hard fork mechanism for the Mina daemon. Instead of relying on manual steps and carefully timed coordination, the daemon would prepare migrated ledgers in advance and then generate the post-fork configuration automatically at the fork point. The goal is to make protocol upgrades smoother, less manual, and less error-prone.
What’s Changing?
Today, hard forks require manual steps and side-channel ledger distribution.
A new --hardfork-handling option will let operators choose between the current manual flow and the automated path.
In the automated mode, nodes continuously keep both old and migrated ledgers in sync.
At the hard fork point, the node will automatically generate the required post-fork configuration and transition seamlessly to the upgraded chain (see the sketch after this list).
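To make the two paths concrete, here is a minimal OCaml sketch of how such a mode switch could be wired up. Everything in it is illustrative; only the --hardfork-handling flag name comes from the proposal, and none of the identifiers are real Mina daemon code.

```ocaml
(* Minimal sketch, NOT the actual daemon code: how a --hardfork-handling
   switch could gate the two upgrade paths. All identifiers below are
   illustrative; only the flag name itself comes from the proposal. *)

type hardfork_handling =
  | Manual     (* current flow: operator migrates and restarts by hand *)
  | Automated  (* node keeps migrated ledgers in sync and self-configures *)

let handling_of_string = function
  | "manual" -> Manual
  | "automated" -> Automated
  | s -> failwith (Printf.sprintf "unknown --hardfork-handling value: %s" s)

(* Imagined per-slot hook. *)
let on_slot ~handling ~current_slot ~fork_slot =
  match handling with
  | Manual -> ()  (* operator performs the export and restart out of band *)
  | Automated ->
      if current_slot >= fork_slot then
        (* generate the post-fork config from the pre-migrated ledgers,
           then restart into the upgraded chain *)
        print_endline "generating post-fork configuration and restarting"

let () =
  on_slot ~handling:(handling_of_string "automated")
    ~current_slot:5_000 ~fork_slot:5_000
```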
Why Does This Matter?
Simpler operations: Reduces complexity for node operators.
More secure: Removes side-channel distribution risks and reduces human error from manual upgrades.
Faster and more predictable timing: Resource-heavy migrations are done in advance, so the actual transition takes seconds rather than minutes.
More decentralized: Each node generates what it needs locally, no external dependencies.
How YOU Can Get Involved:
We’d love to hear your input and questions on this proposal.
Hi everyone, we also collected some questions that we think may be raised:
How do we know that nodes will generate the fork configuration correctly before the hard fork happens?
Nodes running in automated mode will continuously maintain both the original and migrated ledgers while they operate. They will follow the same mechanism we used for the Berkeley hard fork, which has the blockchain run for 100 slots without transactions or SNARK work being included in blocks. This mechanism ensures that, with almost absolute confidence, we don’t expect any forks at depth and that there is established consensus over the last non-empty block. To bridge “almost absolute confidence” to full confidence, the o1Labs team will monitor the HF transition and be prepared for manual intervention in the extremely unlikely case that consensus over the last non-empty block isn’t established automatically, following the methods we prepared in anticipation of the Berkeley HF rollout. On top of that, the o1Labs team will run full dry runs and QA before the Mesa Upgrade to confirm that the configs are generated as expected.
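For readers unfamiliar with the Berkeley-style stop slots, here is a small OCaml sketch of the gating described above: after the transaction stop slot, blocks are produced empty, and after the chain stop slot, no blocks are produced at all. The record and function names are invented for this example; the real stop points are set in the runtime configuration.

```ocaml
(* Illustrative sketch (invented names, not the real implementation) of the
   stop-slot gating: after the transaction stop slot, blocks are produced
   empty; after the chain stop slot, no blocks are produced at all. *)

type fork_schedule = { slot_tx_end : int; slot_chain_end : int }

let may_include_transactions ~schedule ~slot = slot < schedule.slot_tx_end
let may_produce_block ~schedule ~slot = slot < schedule.slot_chain_end

let () =
  (* roughly 100 "quiet" slots between the two stop points, as in Berkeley *)
  let schedule = { slot_tx_end = 1_000; slot_chain_end = 1_100 } in
  List.iter
    (fun slot ->
      Printf.printf "slot %d: include_txs=%b produce_block=%b\n" slot
        (may_include_transactions ~schedule ~slot)
        (may_produce_block ~schedule ~slot))
    [ 999; 1000; 1099; 1100 ]
```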
What are the “vesting parameter updates” mentioned in the proposal?
When slot times are reduced, vesting schedules need to be adjusted so that the timing of token unlocks stays consistent in real-world time. These vesting/locked account updates don’t change balances or ownership — they simply make sure that accounts vest according to the same calendar as before, even though slots move faster. More details on this can be found in MIP-0006: Slot Reduction.
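As a rough illustration of that adjustment (not the actual migration code), assume the slot time halves and that cliff and vesting periods are counted in slots: slot-denominated durations then double, while amounts stay untouched. The record below is a simplified stand-in for Mina's timed-account fields.

```ocaml
(* Simplified stand-in for Mina's timed-account vesting fields; the real
   migration code lives in the daemon. *)
type vesting = {
  cliff_time : int;         (* slots until the cliff *)
  cliff_amount : int;       (* amount unlocked at the cliff (unchanged) *)
  vesting_period : int;     (* slots between incremental unlocks *)
  vesting_increment : int;  (* amount unlocked per period (unchanged) *)
}

(* If each slot becomes 1/ratio as long, ratio times as many slots fit in
   the same wall-clock interval, so slot-denominated durations are scaled
   up. Balances and ownership are untouched. *)
let migrate ~slot_time_ratio v =
  { v with
    cliff_time = v.cliff_time * slot_time_ratio;
    vesting_period = v.vesting_period * slot_time_ratio }

let () =
  let before =
    { cliff_time = 1_000; cliff_amount = 500;
      vesting_period = 10; vesting_increment = 100 }
  in
  let after = migrate ~slot_time_ratio:2 before in
  Printf.printf "cliff_time: %d -> %d slots (same calendar date)\n"
    before.cliff_time after.cliff_time;
  Printf.printf "vesting_period: %d -> %d slots; increment unchanged: %d\n"
    before.vesting_period after.vesting_period after.vesting_increment
```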
Will block producers or validators need to keep running their nodes during the fork window even if they aren’t producing blocks or earning rewards?
Yes, nodes should remain online until the chain stop slot. During the “freeze” window, block producers won’t be including transactions, SNARK work, or a coinbase in their blocks, so they won’t earn rewards during that period. The purpose is to ensure all nodes converge on the same fork state. After the chain stop slot, most block producers can restart in the new network. To support any nodes that need to catch up, a few seed nodes and supporting nodes (including some run by o1Labs) will remain online in legacy mode to serve old data.
What’s the likely timing for stop_tx_end and slot_chain_end - 100 blocks like Berkeley?
Also, what about the gap between slot_chain_end and when the new network starts (slot_delta)? This is more applicable here if anyone tries to do this manually: how much time are they looking at? I can’t recall Berkeley now, but I want to say it was 12 hours?
Overall this seems like a very welcome improvement. Presumably we can lower slot_delta, as BPs don’t need to perform the actual upgrade during the downtime. Although, if we actually want manual migrations to remain possible, slot_delta can’t be a trivially small number.
I don’t think this should be a MIP in the sense that it should be voted on by token holders. While it is an important improvement for block producers compared with handling the process manually, it is entirely optional if implemented, applies only to one implementation (the OCaml node), and does not need to be implemented by others.
While there should be a consensus among block producers on this before implementing, this can happen outside of the formal MIP process.
I don’t think it necessarily has to be a MIP either, but as you say it’s important to reach some consensus with block producers about this feature.
What’s the likely timing for stop_tx_end and slot_chain_end - 100 blocks like Berkeley?
Yes, we were planning on having that same difference. The security/network considerations (giving the network a good amount of time to settle into a static consensus before the switch) haven’t really changed since Berkeley. That was 100 slots though, not 100 blocks.
Also, what about the gap between slot_chain_end and when the new network starts (slot_delta)? This is more applicable here if anyone tries to do this manually: how much time are they looking at? I can’t recall Berkeley now, but I want to say it was 12 hours?
I will ask around to confirm, but I also think it was some fairly large gap in Berkeley. The Berkeley upgrade steps online estimated ~15 hours total for this gap plus the 100 slot gap (EDIT - sorry, that estimate was just for this delta), for what it’s worth. I know that the genesis timestamp was set so the new network would start roughly an hour after the hard fork package (with the Berkeley genesis constants bundled with it) was published, and there was some extra time built in for generating that packaging.
Presumably we can lower slot_delta, as BPs don’t need to perform the actual upgrade during the downtime. Although, if we actually want manual migrations to remain possible, slot_delta can’t be a trivially small number.
That is something that would be good to have community feedback on, actually. If we’re only considering the automatic upgrade path then it could be as little as 2 Berkeley slots, or 6 minutes; the node will front-load almost all the work needed to prepare itself for the upgrade, so we just need a small time buffer to allow the node to restart itself with the new genesis data.
That 6 minutes isn’t finalized, though - we’d like to minimize that gap, but as you said it doesn’t give a lot of time for manual upgrades.
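For reference, here is the back-of-the-envelope arithmetic behind those figures, assuming a 3-minute Berkeley slot and a 90-second post-upgrade slot (consistent with the 5-hour and 2.5-hour gaps mentioned in this thread).

```ocaml
(* Back-of-the-envelope check of the figures in this thread, assuming a
   3-minute Berkeley slot and a 90-second post-upgrade slot. *)
let berkeley_slot_min = 3.0
let mesa_slot_min = 1.5

let () =
  Printf.printf "100 Berkeley slots = %.1f hours\n"
    (100. *. berkeley_slot_min /. 60.);
  Printf.printf "100 Mesa slots     = %.1f hours\n"
    (100. *. mesa_slot_min /. 60.);
  Printf.printf "2 Berkeley slots   = %.0f minutes\n"
    (2. *. berkeley_slot_min)
```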
6 minutes seems absurdly low and doesn’t give any room for error. I’d err more on the side of caution for the first one, allowing for manual intervention. 1-2 hours would be my initial ballpark. Not least, as there are 5 hours of downtime preceding it anyway.
There is also tooling to consider, like explorers that use the archive node: they have required updates in that small transition window and would want to be working from the first block of the new network.
I find the flags quite unintuitive. What about --hardfork-migration enabled or --hardfork-migration disabled or something simpler and more obvious?
6 minutes seems absurdly low and doesn’t give any room for error. I’d err more on the side of caution for the first one, allowing for manual intervention. 1-2 hours would be my initial ballpark. Not least, as there are 5 hours of downtime preceding it anyway.
Good point. After this hard fork that gap will only be 2.5 hours, but as it stands an extra couple of hours isn’t going to be a huge increase in downtime compared to 5 hours.
The 6 minutes is definitely the lower bound. We were also considering something like an hour to give more of a buffer for the first time the auto upgrade was used, but didn’t want to guess what people might need.
I find the flags quite unintuitive. What about --hardfork-migration enabled or --hardfork-migration disabled or something simpler and more obvious?
The names in the MIP are a bit awkward. They were named like that because the MIP was written as a specification for the changes to the daemon as a single executable, if that makes sense. Currently the transition from the Berkeley executable to the Mesa one is going to be handled in the packaged release, so the name in the MIP makes it (probably too painfully) obvious what’ll happen if you build a single mina executable from source and enable it.
MIP Editors and O1 have agreed that we don’t need this improvement to be a formal MIP that needs to be voted on, as it does not require a protocol change, other than adjusting some configuration values related to the hardfork moment.
However, it is very important to have an early alignment with the community on how the hardfork is going to proceed.
A promise of this hardfork is that, for the vast majority of node operators, the actual switch from Berkeley to Mesa will happen automatically. Still, every node operator has a slightly different setup, so it makes sense to share the procedure in detail. We are aiming for consensus among block producers.
We’re targeting the existing specs as a maximum, and we’ve tested key components to make sure there aren’t regressions, but we’re still doing internal testing before making specific recommendations.