I’m George Agapov, Technical Lead at o1Labs, and I’m excited to introduce o1Labs’ first MIP towards the Mina Protocol upgrade this fall!
This MIP aims to reduce the slot time for Mina’s block production, thereby improving chain throughput and transaction responsiveness and marking our first step on the path toward achieving fast finality. We’ll also address the impact on vesting schedules and our proposed risk mitigation plan, along with the technical specifics.
You can read and comment on the full technical MIP on GitHub, and we’ll be engaging here with any questions or comments you have about the proposal and the process.
What’s Changing?
We propose the following changes:
Reduce slot times from 180s to 90s.
Update the coinbase constant from 720 to 360 MINA (a quick arithmetic sketch follows this list).
Update the account vesting schedule logic.
Remove the zkApp soft limit.
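To make the arithmetic behind the first two items concrete, here is a minimal Python sketch (illustrative only; the constants are exactly those quoted above) showing that halving the slot time while halving the coinbase keeps the MINA emission rate per hour unchanged:

```python
OLD_SLOT_TIME_S = 180   # current slot time
NEW_SLOT_TIME_S = 90    # proposed slot time
OLD_COINBASE = 720      # MINA per block (current)
NEW_COINBASE = 360      # MINA per block (proposed)

def emission_per_hour(slot_time_s: float, coinbase: float) -> float:
    """MINA minted per hour, assuming every slot produces a block."""
    slots_per_hour = 3600 / slot_time_s
    return slots_per_hour * coinbase

print(emission_per_hour(OLD_SLOT_TIME_S, OLD_COINBASE))  # 14400.0 MINA/hour
print(emission_per_hour(NEW_SLOT_TIME_S, NEW_COINBASE))  # 14400.0 MINA/hour
```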
Why Does This Matter?
The Mina Protocol has been carefully designed to scale without compromising on decentralization. So far, much of the work has been focused on developing the protocol and ensuring that the cryptography and security are robust.
Now, we must start to showcase how Mina’s design will allow it to scale quickly while maintaining Mina’s unique, succinct design.
The specific value of 90 seconds was chosen because it:
Doubles the potential throughput of the network
Remains safely within the performance capabilities of the improved system
How YOU Can Get Involved:
We are eager to hear your thoughts, feedback, and suggestions on this MIP.
Hi everyone, our team has collected some FAQs which we would like to share:
General
Will the current networking layer be able to handle the increase in performance?
Yes, the networking layer will handle the 2x increase in frequency of blocks. We don’t expect it to be a bottleneck in any meaningful way.
What changes in tooling for archive nodes, explorers, wallets and other services (e.g., Mina Time Machine & epoch calculator, staking pools, etc.) would the MIP necessitate?
Archive nodes - no changes required.
Slot times will be halved, so any interface that displays block creation times (or similar) may need an update. The coinbase amount will change, but it will be read from new blocks without any issues. Beyond these two points, we don’t anticipate any other changes.
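As a concrete example of the kind of display-layer update that might be needed, here is a hedged sketch of slot-to-timestamp conversion. The constant names and the fork-handling logic are purely illustrative assumptions, not actual Mina tooling APIs, and they assume global slot numbering simply continues across the fork:

```python
GENESIS_TIMESTAMP_MS = 1_615_939_200_000  # placeholder genesis time
OLD_SLOT_TIME_MS = 180_000                # pre-upgrade slot length
NEW_SLOT_TIME_MS = 90_000                 # proposed slot length

def slot_to_timestamp_ms(global_slot: int, fork_slot: int, fork_timestamp_ms: int) -> int:
    """Approximate wall-clock time of a slot, accounting for the slot-time change."""
    if global_slot < fork_slot:
        return GENESIS_TIMESTAMP_MS + global_slot * OLD_SLOT_TIME_MS
    return fork_timestamp_ms + (global_slot - fork_slot) * NEW_SLOT_TIME_MS

# An interface hard-coded to OLD_SLOT_TIME_MS would drift by 90 seconds per
# post-fork slot, which is why such displays may need an update.
```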
Vesting accounts
What are vesting accounts and locked accounts?
First of all, they are the same thing. Most of them were created at Mina’s mainnet launch in 2021, and those have now been paid out. However, vesting accounts can also be used as part of zkApp functionality.
Who is using this?
The accounts are baked into the protocol as a feature, but builders rarely use them. Still, we can’t simply assume they aren’t used: someone may submit a transaction that creates this type of account, and it would be bad practice to leave these accounts unaccounted for in the migration procedure (a short sketch of the underlying slot arithmetic follows).
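To illustrate why halving the slot time touches vesting schedules at all, here is a minimal arithmetic sketch. It only assumes that vesting parameters are denominated in slots; the actual migration rule is specified in the MIP itself.

```python
OLD_SLOT_TIME_S = 180
NEW_SLOT_TIME_S = 90

vesting_period_slots = 1_000  # example schedule: unlock every 1,000 slots

old_hours = vesting_period_slots * OLD_SLOT_TIME_S / 3600  # 50.0 hours between unlocks
new_hours = vesting_period_slots * NEW_SLOT_TIME_S / 3600  # 25.0 hours if left untouched

# To preserve the same wall-clock schedule after the fork, slot-denominated
# parameters (remaining cliff time, vesting period) would have to be doubled:
rescaled_period_slots = vesting_period_slots * OLD_SLOT_TIME_S // NEW_SLOT_TIME_S  # 2,000
```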
Who else might be affected?
We anticipate that nobody else will be affected.
Snarkwork
Re: require at least some of the community members running the SNARK workers to run multiple (at least 4) SNARK workers per coordinator to ensure timely processing of SNARK work for the heaviest transactions.
What will be the incentive to provide SNARK work? Are snark fees expected to increase?
The Snarketplace is driven by market forces, so we can only speculate.
However, if transactions on Mina increase or become much more sophisticated, we may need more snark work. If there is a higher demand for the snark work, the fee will naturally start reflecting that so that demand and supply meet.
What would a snark coordinator and 4 or more workers require in terms of server specs and cost?
For the most sophisticated transactions to be sent at maximum scale, we need at least some snark work operators to run more than 4 workers per coordinator. If they were previously running one coordinator and one worker and will now run four workers, server costs will rise proportionally to the number of extra CPUs that need to be allocated. At least one snark work operator already runs many tens of workers on a single coordinator; for them, the cost won’t change.
Great and much needed proposal, and I like the new SNARK worker optimization scheme allowing for parallel proving.
If the coordinator uses many SNARK workers to calculate proofs in parallel, and one of those workers produces an invalid proof that cannot be merged because it cannot be verified, how will this edge case be handled? Will the entire proof‐creation process fail, or will there be a mechanism to reject and re‐calculate just the invalid proof?
Would the coordinator be able to start SNARK workers on demand (for example, by sending a POST request to a webhook provided via command‐line arguments) to obtain as many workers as needed for parallel proof calculation, and then send them a shutdown command as soon as the work is done?
In our design, the coordinator and its SNARK workers are assumed to be operated by the same entity and are launched from the same package/executable. This assumption simplifies trust boundaries and configuration expectations across the system.
Proof Validity Assumptions
Under this model:
Invalid proofs are not expected to be produced in normal operation.
If the same executable is used for both the coordinator and the workers, the only realistic cause of invalid proofs would be a misconfiguration or a bug.
Therefore, proof verification by the coordinator serves as a redundant sanity check, not as a primary defense mechanism.
The coordinator verifies the proof for the full transaction, not for individual account updates or sub-proofs. This decision reflects the underlying assumption that all components are operating within the same trusted system.
In the unlikely event that an invalid proof is generated (a minimal sketch of this handling follows the list below):
The coordinator will reject the full transaction and not forward it to the network.
This will be treated as a critical bug or a configuration issue, not a routine error.
Handling this failure at the account-update level is considered unnecessary, as such failures are not expected to happen in practice.
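As a minimal sketch of the behaviour just described, assuming placeholder callables for the node internals (none of these names are actual Mina APIs):

```python
from typing import Any, Callable

def handle_completed_work(
    transaction: Any,
    proof: Any,
    verify_proof: Callable[[Any, Any], bool],   # full-transaction verification
    broadcast: Callable[[Any, Any], None],      # forward to the network
    alert_operator: Callable[[str], None],      # surface a critical issue
) -> None:
    # Redundant sanity check: coordinator and workers run the same executable,
    # so an invalid proof points to a bug or misconfiguration, not an attack.
    if not verify_proof(transaction, proof):
        # Reject the whole transaction; no per-account-update retry is attempted,
        # because this failure is treated as a critical operational issue.
        alert_operator("invalid proof produced by a local SNARK worker")
        return
    broadcast(transaction, proof)
```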
Dynamic Worker Scaling
We haven’t yet implemented a mechanism to launch SNARK workers on-demand via API, but in principle, this is feasible. Currently:
The coordinator is designed to accept any number of worker connections, without needing reconfiguration.
This allows infrastructure-level logic (e.g., a watchdog or autoscaler) to deploy additional workers as needed.
For example, if CPU usage of existing workers is saturated, a monitoring service could spin up more workers to distribute the load.
This means no changes to the coordinator or Mina node code are needed to support dynamic scaling—only infrastructure scripting (e.g., Kubernetes, Docker, systemd) is required.
While this kind of dynamic worker dispatch hasn’t been the focus so far, it remains a viable direction if operational needs evolve.
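As one hypothetical illustration of that infrastructure-level scripting, here is a small watchdog sketch that launches extra worker processes when CPU usage saturates. The worker command line, thresholds, and the use of psutil are all assumptions; a real deployment would more likely rely on Kubernetes, Docker, or systemd.

```python
import subprocess
import time

import psutil  # third-party dependency: pip install psutil

# Placeholder invocation: substitute the actual SNARK worker command for your deployment.
WORKER_CMD = ["/path/to/snark-worker", "--coordinator", "HOST:PORT"]
MAX_WORKERS = 8
CPU_THRESHOLD = 90.0  # percent

workers: list[subprocess.Popen] = []

while True:
    cpu = psutil.cpu_percent(interval=5.0)
    workers = [w for w in workers if w.poll() is None]  # drop exited worker processes
    if cpu > CPU_THRESHOLD and len(workers) < MAX_WORKERS:
        # The coordinator accepts any number of worker connections, so we can
        # simply launch another process and let it connect.
        workers.append(subprocess.Popen(WORKER_CMD))
    time.sleep(25)
```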
I fully support this proposal. One thing that concerns me is propagation, which has always been an issue on MINA. Will there be significant networking improvements bundled into the hardfork?
Given “heavier” blocks and the potential for more missed slots, this encourages BPs to add even more redundancy and more block-producing nodes to increase their chances of any block being propagated. This further increases the level of gossip of blocks from the same producer, exacerbating the issue. As seen previously, these effects have been very hard to reproduce in any testing environment.
Limiting the number of nodes that a BP can run other than via social consensus seems problematic, given the lack of slashing.
While this release does not include networking-related updates, the issue you’re referring to is being addressed through significant protocol-level improvements.
Problem in Previous Versions
In earlier releases, block creation took over 100 seconds roughly every 10th block, and under load, block creation times were significantly longer than 100 seconds. This created a cascade of issues:
Blocks were propagated only to the creator’s immediate peers
Those peers often lacked sufficient time to verify and propagate the block before the next slot
Consequently, only a portion of the network could reliably include and build on that block, leading to inconsistencies or missed blocks
Improvements in This Release
The upcoming hard fork delivers substantial optimizations to block creation and processing:
Block creation time: under 45 seconds (preliminary numbers from pre-release versions)
These are preliminary performance metrics—we will confirm final numbers from late-stage cluster testing when all hard fork protocol changes are tested together
Expected final performance: consistently under 30 seconds for creation and processing
With the new 90-second slot time, this means:
Block creators can propagate blocks to their peers with ample time for:
Full block verification
Further propagation to their peers
Second-degree peers (neighbors of neighbors) will definitely receive blocks within the slot time and most likely will be able to verify them before their own block production turn
Compared to the previous state (180-second slots with 100+ second block times), the network now has a significantly higher likelihood that:
Every block reaches relevant block producers in time
Block production remains coherent across the network
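To make that comparison concrete, here is a back-of-the-envelope timing budget using the figures quoted above; the creation times are the preliminary/expected values from this post, and the rest is simple arithmetic rather than a measurement:

```python
scenarios = {
    "previous (180s slots)":                 {"slot": 180, "creation": 100},
    "this release, preliminary (90s slots)": {"slot": 90, "creation": 45},
    "this release, expected (90s slots)":    {"slot": 90, "creation": 30},
}

for name, s in scenarios.items():
    remaining = s["slot"] - s["creation"]
    print(f"{name}: {remaining}s left in the slot for verification and "
          f"propagation ({remaining / s['slot']:.0%} of the slot)")
# previous: 80s (44%); preliminary: 45s (50%); expected: 60s (67%)
```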
Future Networking Improvements
To completely solve the issue—particularly in large or high-diameter networks—we plan to implement header-based block synchronization:
Distribute block headers first
Require nodes to validate and forward only headers (computationally cheap) without waiting for full block verification—the full block will still be verified by every node, but nodes won’t wait for complete verification before propagating to peers
This allows the network to rapidly propagate block references, ensuring all potential block producers know the latest block in time
We explored this approach before Berkeley. While that effort was paused, we’re now considering reviving it or implementing a similar architecture.
The advantage: this doesn’t require a hard fork. Berkeley’s upgrade introduced a body_reference field in the header format, allowing us to implement the new networking protocol entirely through a soft fork when prioritized.
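A hedged sketch of the header-first flow described above, with placeholder callables standing in for node internals (these names are illustrative, not the actual Mina networking API):

```python
from typing import Any, Callable

def on_header_received(
    header: Any,
    validate_header: Callable[[Any], bool],        # cheap header-only checks
    forward_to_peers: Callable[[Any], None],
    fetch_body: Callable[[Any], Any],              # e.g. resolved via body_reference
    verify_full_block: Callable[[Any, Any], bool], # expensive full verification
    apply_block: Callable[[Any, Any], None],
) -> None:
    if not validate_header(header):
        return
    # Forward immediately: peers learn about the new block without waiting
    # for this node to finish full verification.
    forward_to_peers(header)
    # Every node still verifies the full block before building on it.
    body = fetch_body(header)
    if verify_full_block(header, body):
        apply_block(header, body)
```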
Conclusion
While this release doesn’t include networking changes, the protocol improvements significantly reduce block creation and verification time, improving network behavior under the new 90-second slot time.
Key points:
We are maintaining network stability while reducing slot time from 180 to 90 seconds
Despite the reduced slot time, the system is more robust due to protocol optimizations
A complete solution will come with future networking updates, but we’ve already substantially improved the status quo
I’d also like to follow up on one important point. Based on our observations, the networking protocol itself—specifically the challenge of transmitting large blocks (tens of megabytes)—has not been a bottleneck in our experiments. In other words, simply sending block bytes from one node to another has not posed a significant issue.
Instead, we consistently found that block verification time is the primary limiting factor. This is why our current focus has been on protocol-level improvements that reduce verification time, rather than networking-layer optimizations.
That said, if you’ve observed cases where block propagation of large payloads is noticeably delayed or degraded, please let us know. We haven’t encountered such issues in our testing, but if there’s evidence suggesting otherwise, we’d be happy to investigate further to ensure that this release does not introduce additional risk on that front.