Currently the minimum amount of Mina required to have any chance of winning a block is 66k. Long term, does anyone think this is (or will become) a barrier to greater decentralisation / adoption, and if so, how could it be addressed / solved?
E.g., could a mobile validating node one day be incorporated into a Mina wallet so holders play an active part in the network? Or could there be a way for community members to run nodes in a co-operative way?
Mina did a great job with their Genesis program of decentralising the network relatively fairly, which is a really difficult task for a PoS project, and I don't have many examples of better work on that front.
Right now the barrier to running a validator is high, but staking is easy and open to everyone. We also have a healthy number of validators thanks to the Delegation program.
That being said, I believe and agree that decentralisation is a permanent concern and an ongoing effort. It's really hard to run a viable validator right now, which could become an issue in the long run, and this needs to be thought about well before it becomes one.
As you said, one of the first things to work on would be the specs required to run a node; it's on the roadmap, so I'm not concerned, and it could be a game-changer, leveraging Mina tech to build towards decentralisation. As for validators, that depends on block rewards and tokenomics, so it will hopefully be discussed through MIPs.
Other than that, I think it's worth following the brainstorming being done on other big established projects, such as Ethereum, as there is no simple and easy solution. I personally believe that PoS has a tendency to centralise, so I'm sure people like Vitalik are actively thinking about how to break this. If there is documentation to share about it, I'd be interested to read it.
Has anyone seen quadratic voting at the protocol / block production level? It would require identity (and perhaps rewards adjusted to the amount staked so that returns are still linear with investment), but it could really open up a huge number of people running the network long term.
If the rewards are adjusted so that they are linear in stake, then why do you still need identity? I agree it would be a great incentive for people to run a BP even though the rewards will not increase - much of the frustration of running with low stake is not getting any slot for long periods from the cold-blooded VRF.
I was imagining that the number of blocks you produce could be proportional to the square root of the stake owned by an individual (i.e., 100 Mina = 10 weight, 10,000 Mina = 100 weight). That means a smaller producer would win a lot more blocks, getting around the current VRF frequencies.
But then the 10,000 Mina staker would only be getting 10x more tokens by default than the 100 Mina staker. Yield-wise, if the 100 Mina staker was getting a 5% yield, the 10,000 Mina staker would only be getting a 0.5% yield, so something would need to be done so yields are consistent (perhaps even adjusted in favour of smaller players?).
But it's all a bit early; it would require a good way of doing identity first, since there would be pretty large incentives to split stake to game the system, so it's probably pretty far out there.
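As a rough sketch, the square-root weighting idea above could look like the following (illustrative numbers only, not Mina's actual selection rule; `weight` and `reward_scale` are made-up names):

```python
import math

def weight(stake):
    """Selection weight, proportional to sqrt(stake): small producers
    win blocks far more often per unit of stake."""
    return math.sqrt(stake)

def reward_scale(stake, reference_stake=100):
    """Per-block reward multiplier so expected yield stays linear in stake.

    Expected blocks scale with sqrt(stake), so the per-block reward must
    also scale with sqrt(stake) for expected reward ~ stake.
    """
    return math.sqrt(stake / reference_stake)

for s in (100, 10_000):
    print(s, weight(s), reward_scale(s))
# 100 Mina -> weight 10, 10,000 Mina -> weight 100, matching the example above
```

Without the `reward_scale` correction, the 10,000 Mina staker earns 10x the tokens on 100x the stake, which is exactly the 5% vs 0.5% yield gap described above.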
I think your proposal makes a lot of sense and has got me thinking… how about the following.
circ = total circulating supply
s = amount staked with block producer
t = some threshold value of amount staked (a stake pool is considered a greedy whale above the threshold)
p(s) = probability of block production for block producer
We could consider some piecewise p(s) such that there are diminishing probability / rewards once s > t. For example, we could say that probability grows only logarithmically once stake > threshold.
import matplotlib.pyplot as plt
import math

circ = 10000                # total circulating supply
t = 2000                    # threshold stake
stake_range = range(10000)  # stake amounts to evaluate

# p(s): linear in stake up to the threshold, then only logarithmic
# growth above it, so large pools see diminishing probability
p_s = lambda s: s / circ if s <= t else (t / circ) + math.log((s + 2000) / (t + 2000), 100000)

p = [p_s(s) for s in stake_range]
plt.plot(stake_range, p)
plt.xlabel("s - stake amount")
plt.ylabel("p(s) - block production probability")
plt.show()
I think that under this scheme it would be fine to leave block rewards static. This would result in constant yield for stake pools in which s <= t and diminishing returns for stake pools in which s > t. This would ultimately discourage stake pools where s > t and therefore increase decentralisation. Just an idea, probably has holes in it…
I think the main issue is it would be relatively easy for a large pool to break into smaller ones, in which case the block producing are still mainly in their hands. It also feels like the larger pools are penalised for being large, which is not really fair. Probably we need to make the block producing chances quadratical to the stakes, but the rewards too, to keep the reward still linear to stake. The smaller BPs do not get more rewards, but at least their chances of producing a block get increased and fluctuation of rewards from epoch to epoch reduced.
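As a quick sanity check of the splitting concern, here is what happens under the threshold scheme from the earlier snippet (same toy numbers, circ = 10000, t = 2000) when a whale with 8000 Mina splits into four pools of 2000:

```python
import math

circ, t = 10000, 2000
# threshold probability function from the earlier snippet
p_s = lambda s: s / circ if s <= t else (t / circ) + math.log((s + 2000) / (t + 2000), 100000)

whale = 8000
print("one pool of 8000:  ", p_s(whale))          # ~0.28
print("four pools of 2000:", 4 * p_s(whale / 4))  # 0.8
```

Splitting nearly triples the whale's total selection probability, so the logarithmic penalty only bites for pools that can't (or don't bother to) split.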
Yeah, it would be possible to circumvent by creating additional pools, but there is then additional effort required to orchestrate, manage and market those pools, which could give other pool operators a better chance of getting a slice of the pie. Disincentivising large pools is a good thing for decentralisation IMO; I believe this is the model Ouroboros Praos is based on. But yeah, maybe the quadratic solution is better, I'm not sure.
Another possibility would be to reduce the block time, which I think would be beneficial anyway. This would increase the number of blocks produced per unit of time: even though each producer's probability of winning a slot would remain the same, their expected number of blocks per unit of time would increase.
One of the interesting things that I found completely different about Mina is that the minimum amount of Mina to produce a block is actually “greater than 0”. For instance, in this particular block, the block producer was able to produce a supercharged block with 1.7k Mina. http://minaexplorer.com/block/3NKwAnBrSWRRyjxpCBX5CCJ5JGQDa37X1t83EZRcFbCZ3e6BcRXr
Assuming a new method is adopted for selecting block producers that gives more weight to those with small stakes (based on the method Evan was talking about), I would assume that the number of people producing blocks on the network for a given slot would also increase.
For instance, if you look at the number of unique block producers per block here:
Could you elaborate on the plots you have provided, please? They look rich with insight, but more explanation would be welcome. Having such long forks isn't ideal; a deterministic finality gadget could potentially resolve this. Missed slots aren't great either, and compounded with the large block time they could result in poor user experience. BABE consensus (used in Polkadot) addresses this by introducing "secondary slots", which are allocated in a round-robin style. This means that if no validator is assigned a VRF slot, production defaults to the secondary round-robin slot. This ensures that slots are not missed and that block time is consistent.
I'm using Gareth's BigQuery API for his archive node data, so it is a bit limited: it's based on what his nodes have seen and sent to his BigQuery DB. Therefore, there is always the potential for missing blocks that were never seen on his nodes, etc.
Number of block producers:
Given block height X, look at all blocks received by the archive node, canonical and non-canonical, and count the unique block producers.
Insight: The number of unique block producers indicates how many block producers submitted a block to the network for the given slot.
Longest Fork:
For orphan block A, look at its hash and recursively walk back through parent hashes until reaching a hash in the canonical chain. The number of blocks walked is the length of the fork. There may be multiple smaller forks, but the fork of the longest length for that particular block height is what I cared about for what I was doing.
Insight: The longest fork measures the deepest divergence from the canonical chain. I have a theory that this is what causes most nodes to lose the 100% uptime that so many care about, which is actually outside the control of the sidecar.
Slot Difference:
For Block X and Block Y, look at the slot number and calculate the difference.
Insight: Empty slots imply that either no active block producer was selected for the slot, or the selected producer did not produce a block in the allocated time.
As for the quantile part you see on the plots, that’s just a moving quantile average over the blocks.
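The three metrics above can be sketched on a toy dataset like the following (the real data comes from the BigQuery archive node dataset; the record fields and values here are assumptions for illustration):

```python
blocks = [
    # toy block records: height, slot, producer, hash, parent, canonical flag
    {"height": 1, "slot": 1, "producer": "A", "hash": "h1",  "parent": "h0", "canonical": True},
    {"height": 2, "slot": 3, "producer": "B", "hash": "h2",  "parent": "h1", "canonical": True},
    {"height": 2, "slot": 3, "producer": "C", "hash": "h2x", "parent": "h1", "canonical": False},
    {"height": 3, "slot": 4, "producer": "C", "hash": "h3x", "parent": "h2x", "canonical": False},
]

# 1) Number of block producers: unique producers per height (canonical or not)
producers = {}
for b in blocks:
    producers.setdefault(b["height"], set()).add(b["producer"])

# 2) Longest fork: from an orphan, walk parent hashes back until a
#    canonical hash is reached; the number of steps is the fork length
by_hash = {b["hash"]: b for b in blocks}
canonical = {b["hash"] for b in blocks if b["canonical"]}

def fork_length(orphan):
    length, h = 0, orphan["hash"]
    while h in by_hash and h not in canonical:
        length += 1
        h = by_hash[h]["parent"]
    return length

longest = max(fork_length(b) for b in blocks if not b["canonical"])

# 3) Slot difference between consecutive canonical blocks (gap > 1 means empty slots)
chain = sorted((b for b in blocks if b["canonical"]), key=lambda b: b["height"])
slot_gaps = [y["slot"] - x["slot"] for x, y in zip(chain, chain[1:])]

print(longest)    # 2  (h3x -> h2x -> canonical h1)
print(slot_gaps)  # [2]
```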
Thank you very much for taking the time to detail the data. The slot difference is somewhat worrying… with a ~3 minute block time, a ~12 slot difference would mean no block for 36 minutes? Maybe the secondary slot mechanism + deterministic finality could help here. In either case, I'm hoping we can do something to reduce block time.
I believe the minimum amount needed to win a block should be raised, because it may ensure better service from block producers, which would be more beneficial to the network. However, people with smaller amounts can still delegate their tokens, and I think that is good enough.
Unfortunately, I think the 3 minute block time might be necessary for now, until block production and validation are sped up, for example via GPUs. Gareth suggested a while back that this should be investigated. I have seen times of 1 to 3 minutes between block creation start and receipt of the block.
Can you elaborate what you mean by “secondary slot mechanic + deterministic finality”?
On the ‘ez to implement’ side, we could apply supercharged block reward to validators’ concentration.
Slots won by addresses delegating to low concentration validators would get a higher coinbase reward.
Numbers should be adjusted to the targeted validators’ number. Let’s say we want 1000 validators, then the concentration target per validator would be 0.1%.
With the current inflation rate, 720 MINA is the default coinbase. We could set the supercharged coinbase reward to 1080 MINA, applying to slots won by addresses delegating to a validator that concentrates less than 0.1% of the network.
Then the incentive for stakers to decentralize is a 10.5% yield instead of 7%
Of course, I picked these numbers for the example.
I think this model is interesting, because
1/ it is easy to understand for holders
2/ we have two numbers to play with: the supercharged yield decides how much we incentivise holders to decentralise, and the concentration target manages the target number of validators.
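Checking the example numbers above, the boosted yield follows directly from scaling the base yield by the coinbase ratio:

```python
base_coinbase = 720           # default coinbase, MINA
supercharged_coinbase = 1080  # proposed reward below the 0.1% concentration target
base_yield = 0.07             # the 7% yield quoted above

# yield scales with the coinbase: 1080/720 = 1.5x
boosted_yield = base_yield * supercharged_coinbase / base_coinbase
print(f"{boosted_yield:.1%}")  # 10.5%
```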
It would be good to understand where the bottlenecks lie for block production. It would be ideal if it could be reduced without having to migrate over to GPUs or specialised hardware. If you want to read about secondary slots and deterministic finality, take a look at this description of Polkadot's setup: Polkadot Consensus · Polkadot Wiki.
Evan mentioned instant finality on Mina at the ChainSafe Con last week: indeed, I think it would be cool and much needed. The 3 min block time is due to ~1-2 min of block production time, with the rest for gossiping, but the team has plans to reduce that as well.
Thanks for the heads-up, I just watched the ChainSafe stream. It sounds like Evan and the team are well aware of these inefficiencies and already have some ideas for addressing them, which is very promising.
Sure. I think getting the conversation going is important - the finality issue might become a major limitation for Mina very soon, and battle-tested finality gadgets, as you mentioned, seem like good options.