Celo Discord Validator Digest #8
Oracle fixing and long-range attacks, vote distribution at celovote.com, tracking attestation status, useful info and community tools.
One of the challenges we have faced as a Celo validator is keeping up with all the information that surfaces in the Celo Discord discussions. This is especially true for smaller validators whose portfolios span several networks. To help everyone stay in touch with what is going on in the Celo validator scene, and to contribute to the validator and broader Celo community, we have decided to publish the Celo Discord Validator Digest. Here are the notes for 8-14 June 2020.
Discussions
Oracle fixing and long-range attacks
Community member jon-chuang asked how the network could resist price manipulation:
... Can I ask for the cUSD price stability mechanism, are there plans to conduct some analysis on its resistance to price manipulation, especially where an attacker has a large amount of the stake (say 10-30%)?
@zviad | WOTrust | celovote.com: ... It depends on what kind of price manipulation you are asking about. There is a talk here: https://docs.celo.org/celo-codebase/protocol/stability by @asa | cLabs where he talks about all the simulations that they have run. However, all of that is about the stability of cUSD/USD pricing. cGLD/USD pricing can definitely be manipulated by large holders of USD, cUSD or cGLD, since it is a free market. Also, whoever controls the Oracle reporters can cause massive havoc. While the stability protocol can protect itself if Oracles are down, it can't really protect itself if oracles are publishing malicious values. Afaik, it would be possible to fully drain the reserve if you get control over a majority of oracles and start reporting malicious exchange rates to the system.
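To make zviad's point concrete, here is a toy model of why control of a majority of oracles is decisive. We assume the on-chain rate is simply the median of the whitelisted reporters' values (the real protocol's aggregation is more involved); all numbers below are invented for the example:

```python
from statistics import median

def reported_rate(reports):
    """Toy model: the on-chain exchange rate is the median of oracle reports."""
    return median(reports)

# 10 whitelisted oracles; assume the true cGLD/USD rate is 4.0.
honest = [4.0] * 6
malicious = [40.0] * 4

# A minority of malicious reporters cannot move the median:
print(reported_rate(honest + malicious))        # -> 4.0

# But once the attacker controls a majority (6 of 10), the median
# becomes whatever they report, and the reserve trades at a bogus rate:
print(reported_rate([4.0] * 4 + [40.0] * 6))    # -> 40.0
```

This is why the threat model centers on majority control of the reporter set rather than on any single compromised oracle.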
The same community member was curious about the long-range attacks:
... Can I ask how Celo is resistant to long range attacks?
...
... I am thinking of the standard scenario where, say a few years down the line, when many of the earlier validators have left the game and hence no longer have skin in the game, it would be a possibility that, for some given epoch, an attacker could convince more than 2/3 of the validators to share their secret keys via a huge monetary reward. These validators have plausible deniability as to the use of their keys by claiming they may have been hacked. Then, an attacker could carry out a man-in-the-middle attack and, using Plumo or otherwise, convince an end user at some point in the future that a different validator set than the one used by the real network, even one completely controlled by the attacker, is in play. Then, since the entire security of the chain rests in the 2/3 consensus, they can convince the user of anything regarding the state of the chain.
@alchemydc | zanshindojo: Sounds like the best defense against that attack vector is to incentivize a diverse and decentralized set of validators that are presumably more resistant to collusion simply by being numerous and independent.
@zviad | WOTrust | celovote.com: The long range attack that @jon-chuang is describing targets former validators, not currently active ones, i.e. people who were validators once but are no longer part of the network. So in this case numerousness would actually be counterproductive, because you have a higher chance that 2/3 of the validators for a particular epoch are no longer active and potentially susceptible to bribery to give up their now defunct keys. (That particular epoch can even be a few years in the past.) I am no expert in this, so I would definitely wait for one of the @Protocol Engineers to chime in. However, as far as I can tell, the "light syncing" protocol has a checkpointing mechanism, so it's not that simple to just man-in-the-middle it with an alternate-history chain. I am not exactly sure how the trusted checkpoints make it into the light clients or how often they get updated, though. But as long as they get updated at least monthly, that type of risk would decrease exponentially.
@victor | cLabs: ... So at the moment there is no protection in effect, but what @zviad | WOTrust | celovote.com is describing could be put into effect fairly easily, and that's what some of us at cLabs have been discussing. Checkpointing, say once per month, would ensure that, as long as the validator set does not change too fast, long-range attacks are infeasible, because the unelected set will no longer be able to create a block that will be accepted by nodes with the checkpoint. Checkpoint distribution is always a bit of an interesting challenge, but adding the checkpoint to the source code is a common and workable method.
@jon-chuang: ... This solves the problem for nodes that are already running. However, what about users that hope to sync from scratch, e.g. new users? Will they try to check checkpoints with other existing users they know?
@victor | cLabs: ... Adding the checkpoint to the source code is exactly designed to help users syncing from scratch. The logic is that if the user already trusts the code, as they must, then they will trust the checkpoint included. After they fully sync at least once, checkpoints can be maintained on-chain or via any other distribution channel. Users who know and trust someone else operating a node could certainly ask that trusted person for a checkpoint, or even a full chain data dump.
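Victor's source-code checkpointing idea can be sketched in a few lines. This is a hypothetical model, not the celo-blockchain implementation: we assume a checkpoint is a (block number, block hash) pair compiled into the client, and any candidate chain that disagrees with it is rejected, so old leaked validator keys cannot rewrite history before the checkpoint:

```python
# Hypothetical checkpoint shipped with the client source code.
# The number and hash here are invented for illustration.
TRUSTED_CHECKPOINT = (1_000_000, "0xabc123")

def accepts_chain(chain):
    """Reject any candidate chain whose block at the checkpoint height
    does not match the compiled-in hash. `chain` is modeled as a simple
    mapping of block number -> block hash."""
    number, expected_hash = TRUSTED_CHECKPOINT
    return chain.get(number) == expected_hash

honest_chain = {1_000_000: "0xabc123"}
forged_chain = {1_000_000: "0xdeadbeef"}  # alternate history built from leaked keys

print(accepts_chain(honest_chain))  # -> True
print(accepts_chain(forged_chain))  # -> False
```

A node syncing from scratch trusts the checkpoint transitively, because it already trusts the code it is running; a long-range fork, however well signed by defunct keys, can never reproduce the checkpointed hash.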
Vote distribution at celovote.com
There was a discussion, started by syncnode (George Bunea), about how votes cast using the celovote.com tool are distributed to validators. Currently, 85-90% of the votes go to three to four predefined groups, without this being explicitly mentioned on the website:
@syncnode (George Bunea): What percentage goes to the predefined validators and what percentage is randomly distributed? Because if you're assigning 50% or more of the votes to the preferred validators, then you should clearly specify this!
@zviad | WOTrust | celovote.com: Most of the votes do go to priority groups, but the key criterion is always that groups have to pass an estimated APY threshold (more about that metric here: https://celovote.com/scores). It doesn't matter how much priority a group has; if its estimated APY drops below the threshold, its votes shift to other groups. From the users' perspective, they care about maximizing returns, so the actual list of groups that are voted for doesn't really matter as long as they are getting groups that have close to the maximum estimated APYs (but the groups are still all visible in the "votes" tab once votes are cast). As the FAQ mentions, I do plan on adding a separate page that goes more in depth into current vote priorities and vote distributions, but that page wouldn't really be that useful for the users (no actual user has cared about it), though it could be useful for other validator groups. It hasn't been a priority to add this, since the total number of votes in the Celovote service is still small compared to what is required to have any substantial effect on the election landscape.
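As an illustration of the rule zviad describes, here is a hypothetical sketch of priority-based vote distribution with an APY threshold. The threshold value, group names, and APY figures are all invented for the example; this is not Celovote's actual code:

```python
# Assumed rule: a group is eligible only if its estimated APY is within
# a fixed fraction of the best available APY; among eligible groups,
# priority order decides where votes go.
APY_THRESHOLD = 0.98  # must be within 2% of the best estimated APY

def eligible_groups(groups):
    """groups is a list of (name, estimated_apy) pairs."""
    best = max(apy for _, apy in groups)
    return [name for name, apy in groups if apy >= best * APY_THRESHOLD]

def pick_group(priority, groups):
    """Route votes to the highest-priority group that still passes the
    threshold; fall back to any near-maximum group otherwise."""
    ok = eligible_groups(groups)
    for name in priority:
        if name in ok:
            return name
    return ok[0]

groups = [("A", 0.060), ("B", 0.059), ("C", 0.050)]
# C is top priority but misses the threshold, so votes shift to A:
print(pick_group(["C", "A", "B"], groups))  # -> "A"
```

The point of the mechanism is that priority only matters among groups already delivering close to maximum returns; an underperforming priority group is skipped automatically.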
@syncnode (George Bunea): Then you should mention it like that on your tool...
Your preferred validators now take more than 85% of the votes, but you're giving the impression that the votes are somehow randomly distributed.
@zviad | WOTrust | celovote.com: ... We have tried to outline things pretty clearly in the FAQ, and I strongly believe this is one of the best setups for users who are mainly interested in getting maximum returns. Afaik, no other individual validator group would actually notify their users or prompt them to change votes if they are having issues and not delivering maximum returns, whereas in Celovote that would happen automatically, and no groups would remain voted for if they aren't actually delivering close to maximum returns.
Also, all votes and information are clearly visible to a user as soon as they sign up, and they can un-signup at any time if they are dissatisfied with what they are getting.
I will still take this feedback and see if we should reword a few sections in the FAQ. But I do feel that is a pretty unfair assessment overall. The primary goal of Celovote is to promote self-custody and offer users a real alternative that is just as easy to use as giving up full custody of their coins to one of the large custody solutions (i.e. Coinlist, Coinbase, etc.). There is no lock-in, and everything is orders of magnitude more transparent compared to those custody solutions, so I definitely feel it is uncalled for to call this setup "shady" in any way.
@syncnode (George Bunea): The tool is basically distributing votes to a predefined list of validator groups as long as they are performing well enough in terms of uptime, so you should state it that way, instead of: "Celovote tracks validator performance and automatically distributes votes to the top groups on your behalf."
@nambrot | cLabs: I'm just answering in my capacity as an individual community member (and not cLabs), but I agree that with the current wording, it does feel a bit misleading. I think clarifying "preferred" group on the home page would go a long way toward addressing that concern. I do believe that services such as Celovote are critical in aiding the goal of decentralizing Celo (which is why we as developers have added authorized voter keys!). In fact, Celovote can (and should!) be used in conjunction with custodians even, i.e. you could keep the account key with a custodian but trust something like Celovote for election and governance voting.
Tracking attestation status
If you are curious to know how the Celo Foundation will check validators' attestation status, the following conversation might have an answer:
@nambrot | cLabs: In the CLI, we have celocli identity:current-attestation-services, which I imagine the foundation would use in the beginning, in the absence of any actual attestation requests. Of course, once attestation requests are happening, I imagine they'd use those.
@mbay2002 | Qoor: What is that command supposed to do? I get

celocli identity:current-attestation-services
Failed to initialize libusb.
Fetching currently elected Validators... !
FetchError: request to http://35.200.18.149/status failed, reason: connect ETIMEDOUT 35.200.18.149:80
    at ClientRequest.<anonymous> (~/.nvm/versions/node/v10.20.1/lib/node_modules/@celo/celocli/node_modules/node-fetch/lib/index.js:1455:11)
35.200.18.149 does not belong to our group.
@nambrot | cLabs: Ah yeah, the current CLI version does not include some fixes yet; my understanding is that a new CLI version is imminent. The command basically lists all elected validators and their attestation service status.
@Peter [ChainLayer.io]: What does it call to verify that status?
@nambrot | cLabs: It basically just looks for the metadata registration, checks that an attestation service URL is claimed, that one can reach the /status endpoint, and that the account is correctly configured.
@Peter [ChainLayer.io]: Ah ok, so if test-attestation works, that should definitely work, right?
@nambrot | cLabs: Yeah.
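The check nambrot describes can be sketched roughly as follows. This is a toy model, not celocli's actual code: the fetch function is injected so the example runs offline, and the shape of the /status response (an "ok" field) is an assumption made for the example:

```python
import json

def check_attestation_service(claimed_url, fetch):
    """Rough model of the per-validator check: the validator must claim an
    attestation service URL in its metadata, and the service's /status
    endpoint must report a healthy, correctly configured account.
    `fetch(url)` returns the response body or raises OSError."""
    if not claimed_url:
        return "no attestation service claimed"
    try:
        body = fetch(claimed_url.rstrip("/") + "/status")
    except OSError:
        return "unreachable"  # e.g. the ETIMEDOUT case seen above
    status = json.loads(body)
    return "ok" if status.get("ok") else "misconfigured"

# Simulated responses instead of live HTTP:
print(check_attestation_service(None, lambda u: "{}"))                        # -> no attestation service claimed
print(check_attestation_service("http://1.2.3.4", lambda u: '{"ok": true}'))  # -> ok
```

The ETIMEDOUT error mbay2002 hit corresponds to the "unreachable" branch: the validator claimed a URL, but nothing answered on /status.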
Useful info
Turns out that if a validator fails to propose a block, this will lead to longer block times:
@zviad | WOTrust | celovote.com: ... I just want to confirm my understanding: when a validator is fully down and fails to propose a block when it's their turn, that will essentially lead to a block time of 10 seconds, right? So, for example, if a validator misses 1000 consecutive blocks, they fail to propose ~10 blocks, so the epoch change will shift ~50 seconds into the future, instead of happening around the same time as the day before.
@victor | cLabs: ... That is correct. A failed proposal will result in a round change and a longer block time. The timeout is at 8s, so a little over 8s is the typical block time you'll see when the proposer is down for a block. It's also true that some epochs are longer than others, and the epoch boundary will shift forward over time for that reason. Each epoch is at least 24 hours.
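The arithmetic in this exchange checks out against a quick model. The constants below come from the discussion itself (a 5-second normal block time, roughly 10 seconds for a block whose proposer is down) plus an assumed round-robin of 100 elected validators; the real timings and set size may differ:

```python
BLOCK_TIME = 5        # seconds, normal block time
MISSED_PROPOSAL = 10  # seconds, approximate block time after a round change

def epoch_shift(missed_blocks, validators=100):
    """Seconds the epoch boundary drifts when one validator is down for
    `missed_blocks` consecutive blocks in a `validators`-sized round robin.
    The downed validator is the proposer for about 1 in `validators` blocks,
    and each missed proposal adds the round-change delay on top of the
    normal block time."""
    proposals_missed = missed_blocks // validators
    return proposals_missed * (MISSED_PROPOSAL - BLOCK_TIME)

# zviad's example: 1000 missed blocks -> ~10 missed proposals -> ~50s drift
print(epoch_shift(1000))  # -> 50
```

This also shows why the drift is small in practice: even a full day of downtime by one validator out of a hundred shifts the epoch boundary by minutes, not hours.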
cLabs has proposed the release process for deploying regular software updates:
@victor | cLabs: I just opened a PR to publish the release process we are building to start deploying regular software updates. It includes one document for the process of releasing new versions of the blockchain client (geth) and one for smart contract upgrades. Comments greatly appreciated! https://github.com/celo-org/celo-monorepo/pull/4045
Community
Thecelo.com has a new optimization that simplifies the link to a group's web page: the group page can now be accessed via domain, keybase name, or address, for example:
https://thecelo.com/group/<domain>
> https://thecelo.com/group/bi23
https://thecelo.com/group/<keybase name>
> https://thecelo.com/group/sunxmldapp
https://thecelo.com/group/<address:0x000...>
> https://thecelo.com/group/0x07fa1874ad4655AD0C763a7876503509be11e29E
Some simplified page links were also added:
https://thecelo.com/groups
https://thecelo.com/governance
https://thecelo.com/validators
The Celo Cauldron by PretoriaResearchLab has seen a new major release with some cool features:
Realtime block-by-block signer visualization is here!
https://cauldron.pretoriaresearchlab.io/block-map
New features
Single-block updates as they arrive rather than chunks of 100
Dark mode; it looks far better, so I didn't even leave an option for light
Can search for any historical block using the box
Missed blocks are now correctly listed against the block prior to the one they were reported on
Favourites, scale size, and stay at head selections will cache locally so you don't have to set every time you visit
There may be some performance issues as AWS auto-scales read/write provisions in the backend, which I'm in the process of optimising. Please try a refresh first if there are any hard errors, and report any lingering issues on GitLab.
Thanks for all your suggestions and support. Up next - subscribe for email updates and more complex statistic reporting! (Best displayed on desktop, not really super useful on mobile at this point.)
Like what we do? Support our validator group by voting for it!