POSDAO feedback and questions

#1

Hi there,

congrats on the paper! It was very interesting to compare POSDAO to the mPoS approach we’re taking for the Trustlines Blockchain. I’ve been told this would be a good place to provide feedback and ask some questions about it.

Bridge fees

Why are those paid to the chain validators? Wouldn’t it be more natural
to pay them to the bridge validators?

Staking token

In the reference implementation you use xDAI for transaction fees and
the native token for staking, arguing that price stability is important
for users but not so much for stakers. But couldn’t one also argue that
it’s exactly the other way around? Stakers will likely hold many more
tokens than users, even in a less liquid form, so they should be more
interested in a stable price.

Allocation strategies

I’m curious if you’ve done the math on various allocation strategies to
find if there’s an optimum and what it is (I’ve seen the simulations in
the appendix, but it should be possible to get an analytic solution).
Concretely:

  • What’s best for candidates if they have some additional tokens to spend: (1)
    increase their candidate stake, (2) get a new candidate slot, or (3)
    delegate to themselves? It should not be (3), because then there would
    likely not be many people willing to become validators. If it’s (2),
    you could get rid of dynamic stake sizes entirely and just have a fixed
    amount. Intuitively, (1) feels like the best outcome (but I don’t know
    if it actually makes a difference).

  • What’s best for delegators? Delegate to (1) “saturated” candidates (i.e.
    those for which the 1/3 clause is already in effect) or (2) “unsaturated”
    ones? I would guess (2), which could mean that delegators would attempt to
    pressure candidates into splitting their stake into multiple candidate slots.

Long range attacks

I think the only solution is explicit checkpointing (i.e. “weak
subjectivity”); the finality of the consensus protocol, which you seem to rely
on, doesn’t help. The problem is that a new node joining the network doesn’t
know which finalized fork appeared earlier.

RANDAO attacks

You say if validators don’t reveal in the last reveal round of a staking
period, they are treated as malicious. Doesn’t this run into censorship
attacks by the block proposers at the end of each round?

In general, RANDAO appears very problematic for validator sampling and
also hard to analyze (what if a few validators in a row collude? What if
they try to amplify their influence over time, i.e. to get more and
more slots?). Have you looked into other schemes, such as
DFinity-style threshold signatures or Ethereum PoW block hashes (with
their own pitfalls, of course)?

Accountability

POSDAO seems to face a similar challenge as our mPoS approach: while
everything works fine if the security assumption is met (some kind of
honest majority), there’s only limited accountability in case of
misbehavior (as it isn’t provided by the underlying consensus protocol).
Our approach is to solve this by adding a slashing mechanism, and also
by the socially plausible threat of forking, leaving the attacker’s stake
locked for a long time (similar to what you do in case of RANDAO
last-revealer attacks). I guess in your case, the fact that the attacker’s
stake is tied to the value of the token leads to some amount of
accountability. Do you have any thoughts on this? Are you considering
adding a slashing mechanism later, e.g. in the Honey Badger implementation,
or do you think this is not necessary?

Bridge

Especially for small chains, bridge validators will have a lot of power,
as they control the token supply both on the side chain and on the main
chain. I think this is dangerous. E.g., they can (threaten to) stop doing
their job, and the token value, and with it the security of the chain,
will plummet. In case of a fork, they control what the canonical chain is
(the chain their main-chain bridge contract is referencing and that is thus
tracked by exchanges). They can also set arbitrary restrictions on who
can transfer staking tokens between the chains, and thereby restrict
who can and who can’t become a validator. One could argue that this
makes the security model of the chain fall back to PoA.

I don’t see an easy solution for this, though. One option is to have only a main→side bridge using the same validator set for both bridge and chain, and nothing in the other direction. Depending on the chain this can be sufficient, but I see of course why it is not a viable solution in general.

reportMalicious

If I understand correctly, this allows a 50% majority to remove other
validators. Doesn’t this reduce the security model from 2/3 honest
majority to 1/2? Do you log any proof of misbehavior, in order to allow
other nodes/validators to check why someone is reported and if this was
legit?

Parity

What are the changes you’ve made to Parity, in particular to Aura? Just
the finality condition (1/2 -> 2/3) or is there more? Do you think they
will be merged back into the official Parity repo at some point?

#2

@jannikluhn, thanks for your questions!

Bridge fees
Why are those paid to the chain validators? Wouldn’t it be more natural to pay them to the bridge validators?

Initially, we didn’t plan to have staking token inflation, so the bridge fee was the only way we could reward the pools (that’s why we decided to give the fees to the pools rather than to the bridge validators). But later we decided to add an inflation mechanism (the staking token is minted and accrued to validator pools), so maybe we will redirect the bridge fees so that they accrue to the bridge validators. I’m personally not sure about that; maybe @igorbarinov has other considerations about this.

In the reference implementation you use xDAI for transaction fees and the native token for staking, arguing that price stability is important for users but not so much for stakers.

Yes, a stable price of the native coin (like xDai) is important for those who, e.g., want to build games on smart contracts, because the players want predictable asset values and fees when performing actions (there can, of course, be other similar cases, not only games).

As for stakers, that depends on the price volatility of the staking token: we wouldn’t expect it to be high. That’s my personal opinion; I would look to the answers of other team members on this good question :slight_smile:

I’m curious if you’ve done the math on various allocation strategies to find if there’s an optimum and what it is (I’ve seen the simulations in the appendix, but it should be possible to get an analytic solution).

We haven’t done this yet. For now we only have the NetLogo model outlined in Appendix D. As an example, anybody can also play with the spreadsheet https://docs.google.com/spreadsheets/d/1AQna5b24euqPb2fowN8j9oXUicbst2kVS6j2IYFUFq4/edit#gid=1995349155 for a concrete case and see which way is more profitable: increasing their own stake, getting a new slot, etc. I agree that maybe we should implement a separate simulation for that.

Long range attacks
I think the only solution is explicit checkpointing (i.e. “weak subjectivity”)

Yeah, probably. Have you implemented such a solution in your mPoS?

RANDAO attacks
You say if validators don’t reveal in the last reveal round of a staking period, they are treated as malicious. Doesn’t this run into censorship attacks by the block proposers at the end of each round?

The validators must reveal their secrets many times during a staking epoch, increasing the entropy of the random seed that is used for validator selection at the end of the epoch. In the last reveal round each validator can decide whether or not to reveal their secret, to try to influence the outcome (the “last actor” problem). But if they don’t reveal, they are punished (removed from the validator set and their stake frozen for a long time, or even forever; we haven’t decided yet whether it will be forever). For a few colluding validators to carry out this kind of attack, they would need to be positioned right at the end of the validator set, without gaps, because if there is even one honest validator between two malicious validators, that honest validator will change the random seed unpredictably. We expect that it’s hard to organize the group that way, and it is expensive for a validator not to reveal their seed.
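The commit-reveal flow described above can be sketched roughly like this (a toy model, not the actual POSDAO contract logic; the names and the exact mixing rule are illustrative):

```python
# Toy commit-reveal scheme: each validator commits to a secret, then reveals
# it; every valid reveal re-mixes the shared seed unpredictably.
import hashlib


def commit(secret: bytes) -> bytes:
    """Commitment is the hash of the secret."""
    return hashlib.sha3_256(secret).digest()


class RandomSeed:
    def __init__(self):
        self.seed = 0
        self.commitments = {}

    def collect(self, validator, secret):
        # Collection round: only the hash of the secret is published.
        self.commitments[validator] = commit(secret)

    def reveal(self, validator, secret):
        # The reveal is valid only if it matches the earlier commitment.
        assert commit(secret) == self.commitments[validator]
        # Mixing: hash the secret together with the current seed. A single
        # honest reveal between two colluding ones changes the seed
        # unpredictably, which is why the attack needs a gapless group.
        digest = hashlib.sha3_256(secret + self.seed.to_bytes(32, "big")).digest()
        self.seed ^= int.from_bytes(digest, "big")
```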

Accountability
Do you have any thoughts on this? Are you considering adding a slashing mechanism later

I think we would do the following: if a validator is deemed malicious, their own stake is frozen forever, but the stakes of their delegators are frozen only temporarily (say, for 90 days). I guess that would be strict enough for the malicious validator and less risky for their delegators, because the latter cannot be sure they are staking on an honest validator. This is not complicated to implement, so it could be done in the first version of the network. @igorbarinov do you agree?
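As a toy illustration of that rule (names are hypothetical; `None` stands in for “forever”):

```python
# Hypothetical penalty sketch: the malicious validator's stake is frozen
# forever, delegators' stakes only for a fixed period.
FOREVER = None
DELEGATOR_FREEZE_DAYS = 90


def freeze_periods(validator, delegators):
    """Return the freeze duration (in days; None = forever) per staker."""
    periods = {validator: FOREVER}
    for d in delegators:
        periods[d] = DELEGATOR_FREEZE_DAYS
    return periods
```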

#3

Especially for small chains, bridge validators will have a lot of power as they control both the token supply on the side and on the main chain. I think this is dangerous.

Yes, that’s one of the reasons why we decided to separate the bridge validators from the consensus validators: we don’t know the consensus validators in advance. I guess we will have some trusted setup for the bridges.

reportMalicious
If I understand correctly, this allows a 50% majority to remove other validators.

Yes, it is supposed to be changed to 2/3 for the reportMalicious as well.

Do you log any proof of misbehavior, in order to allow other nodes/validators to check why someone is reported and if this was legit?

The misbehaviour can be discovered in two cases:

  1. By the contract, when a malicious validator doesn’t reveal in the last round or failed to reveal too often during the collection rounds. This info is stored in the contract and can thus be read by anyone.

  2. By the Parity engine, when some validator produces an invalid block. In this case reportMalicious is called by every honest validator with the proof parameter, but AFAIK this is empty for Aura, so we don’t save it in the contract. Am I right, @andreas?

Parity
What are the changes you’ve made to Parity, in particular to Aura? Just the finality condition (1/2 -> 2/3) or is there more?

The changes we made for AuRa are listed in section 9.3, “Modified Parity client for AuRa”, of the paper.

Do you think they will be merged back into the official Parity repo at some point?

No, we’re not sure this will happen :slight_smile:

#4

Payout to chain validators is an optional reward introduced in the reference implementation. It can be used or not, depending on the use case.

Rationale:

  • Bridge fees paid to chain validators create an additional incentive for holders of the staking token
  • In a future iteration, we might propose inheriting the bridge validators from the chain validators. This requires replicating the list of chain validators to the bridge contracts on the mainnet side, and more thought on the security model of the bridge.

On the other hand, the option to pay the bridge validators is already supported by the existing TokenBridge functionality and is used in the WETC TokenBridge between Ethereum Mainnet and Ethereum Classic (see “Introducing ETC Bridge and wrapped Ethereum Classic (WETC)”). There is a 0.2% fee on exit from either side of the bridge, paid to the bridge validators.
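For illustration, the 0.2% exit fee works out as follows (a trivial sketch, not actual TokenBridge code; the function name is made up):

```python
# Illustration of a 0.2% exit fee paid to bridge validators.
EXIT_FEE = 0.002  # 0.2%


def bridge_exit(amount):
    """Return (amount received on the other side, fee paid to validators)."""
    fee = amount * EXIT_FEE
    return amount - fee, fee


received, fee = bridge_exit(10_000)  # exit 10,000 tokens
```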

I think we would make the following: if some validator is treated as malicious, their own stake is frozen forever but the stakes of their delegators are frozen temporarily (say, for 90 days). I guess, that way it would be strict enough for malicious validator and less risky for their delegators because the latter cannot be sure they stake on the honest validator. This is not complicated to implement, so this could be done in the first version of the network. @igorbarinov do you agree?

The idea of lifetime freezing, which is burning per se, is not part of the current implementation. The protocol can be adjusted in the future to introduce different penalties.
Removal from the validator set is the minimal default action the protocol should implement to handle faulty validators. From my experience with POA and xDai over the last two years, most faults come from:

  • validators not following hardware instructions/specs
  • validators not using, or misusing, the provided playbooks
  • infrastructure faults of cloud providers
  • validators’ billing issues

I expect that on xDai DPOS the most frequent consensus fault will be a candidate being selected to participate in the protocol as a validator without being prepared for it.

We have exactly this situation on the xDai Chain with BurnerWallet and AlphaWallet, which were voted in as validators by governance, were not prepared for the role, and are thus causing consensus faults by not producing blocks.

On the xDai Chain we don’t have a feature for removing faulty validators from the validator set by the consensus itself. It feels evil to propose ballots to remove voted-in validators in order to resolve consensus faults. :grimacing: I’d prefer to have this managed by the consensus, with very high reputational risks, namely:

  • inability to participate in consensus while the stake is frozen
  • freezing of delegators’ tokens for 90 days, with delegators losing the benefits of delegation.

P.S. Byzantine fault tolerance and Sybil attacks are usually overestimated in complex systems, e.g. those with human interactions in multiple roles. This was covered in humorous form by James Mickens in his hilarious article “The Saddest Moment”: https://scholar.harvard.edu/files/mickens/files/thesaddestmoment.pdf

#5

Yes, you’re right, someone who controls a supermajority of any past validator set could perform this attack. We should probably add published checkpoints as a remedy to the document; but it’s kind of an “external” solution: there’s not much in the algorithm itself that would need to be done for this.

In some cases this isn’t necessary. E.g. the evidence for failing to reveal the randomness secret is on chain, so it’s already public. In that case, even the reporting call happens inside a smart contract.

In other cases it isn’t feasible, e.g. proving that you received the same message twice. Here all you can do is “vote” that the sender is faulty. (And also only in an implementation where messages can’t get duplicated accidentally.)

But in many cases a proof will be possible, and it’s simply something that we haven’t implemented yet. As @varasev already mentioned, the proof field is currently left empty by Aura. And the FaultKind enum variants in hbbft don’t have any fields yet; for those where it makes sense, we will probably add a field containing the proof.

#6

We consider the bridge an external entity with a different security model, one that has been proven in the wild by different projects and use cases. The staking token and native token may have different sets of validators and governance.

The TokenBridge security model is used by the POA20 Bridge, xDai Bridge, and WETC Bridge by the POA team, and by:

  • Colu
  • Ocean Protocol
  • Sentinel Chain
  • Swarm City

The POA20 Bridge has been in production since May 2018. It holds $585,000 worth of POA20 tokens on each side (price data as of 2019-04-04T08:45:00Z).
The bridge was audited by several companies:

  • LevelK
  • BlockchainLabsNZ
  • MixBytes
  • Digital Security
  • PepperSec
  • Secureware
  • Harris, Wiltshire & Grannis LLP (legal audit)

The security model of the bridge allows separation of validators from governance. Here is more info about it TokenBridge roles: Administrators, Validators and Users

For example,

  • validators can be professional independent validators under the PoA model. Their motivation is a percentage of exits from the staking bridge.
  • governance of the validator set can be managed by majority voting of DPoS token holders on Mainnet via a governance framework, e.g. Aragon One.
  • governance of fees/limits can be managed by the foundation, or by a multisig of elected community members AND the foundation, as is done with Aragon.
  • governance of upgradability of the smart contracts can be managed by an offline custodian service and used only for subjective resolutions or black-swan events.
#7

Thank you all for your detailed replies! Most of my questions are answered.

Yeah, probably. Have you implemented such a solution in your mPoS?

Parity provides this out of the box via the “forkBlock” and “forkCanonHash” fields in the chain spec file. So the checkpoints can be distributed via the normal update mechanism, and we’ll probably just add a new checkpoint whenever the validator set is updated anyway.
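For readers unfamiliar with those fields, the relevant chain spec fragment looks roughly like this (block number and hash are illustrative placeholders):

```json
{
  "params": {
    "forkBlock": "0x4c4b40",
    "forkCanonHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
  }
}
```

Nodes then refuse to follow any chain whose block at that height has a different hash, which is what makes it act as a checkpoint.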

On the last reveal round each validator can make a decision: reveal their secret or not, to try to influence the outcome (the “last actor” problem). But if they don’t, they will be punished

Yes, I get this, but what happens if the validator tries to reveal, but the block producer doesn’t include the reveal-transaction? They can just claim to have never seen it and the validator will be punished even though they did nothing wrong.

#8

Yes, I get this, but what happens if the validator tries to reveal, but the block producer doesn’t include the reveal-transaction? They can just claim to have never seen it and the validator will be punished even though they did nothing wrong.

This case is not possible, because the reveal transaction is included in the block by the revealing validator themselves. I.e., a validator reveals their secret only in a block for which they are the block producer. Moreover, the reveal transaction is forcibly included at the beginning of the block, so there cannot be a case where someone creates a lot of dummy transactions to fill the block and thereby delays the reveal transaction. We modified Parity so that the validator node works this way.
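The ordering rule can be sketched like this (an illustrative toy model, not the actual Parity patch; the names and dict shapes are made up):

```python
# Toy sketch of block assembly where the producing validator's own reveal
# transaction is forcibly placed at the head of the block, ahead of any
# pending transactions, so it can never be crowded out or censored.
def assemble_block(reveal_tx, pending_txs, gas_limit):
    """Order transactions for the block this validator is producing."""
    block = [reveal_tx]            # forcibly included first
    gas_used = reveal_tx["gas"]
    for tx in pending_txs:         # ordinary txs fill the remaining space
        if gas_used + tx["gas"] > gas_limit:
            break
        block.append(tx)
        gas_used += tx["gas"]
    return block


txs = [{"id": i, "gas": 21_000} for i in range(3)]
blk = assemble_block({"id": "reveal", "gas": 50_000}, txs, gas_limit=100_000)
# The reveal tx comes first no matter how many pending txs exist.
```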
