Celo Discord Validator Digest #10

Stability protocol and cUSD transfers, security audit for Master Validators applying for Celo Foundation votes, missing signatures (again!), useful info and community tools.

One of the challenges we have faced as a Celo validator is keeping up with all the information that comes up in the Celo Discord discussions. This is especially true for smaller validators whose portfolios include several networks. To help everyone stay in touch with what is going on in the Celo validator scene and contribute to the validator and broader Celo community, we have decided to publish the Celo Discord Validator Digest. Here are the notes for the period of 22-28 June 2020.

Discussions

Stability protocol and cUSD transfers

Last week, cLabs revealed a plan to begin voting on the stability protocol and cUSD transfers activation. This was the last step necessary for the network to function as intended; however, the validator community raised concerns about the protocol's parameters and possible negative consequences for validators trying to cash out their rewards:

@zviad | WOTrust | celovote.com: Hey, this is great news overall, but I was a bit surprised to see even further reduction to reserveFraction value. With reserveFraction set to 0.001, as far as I understand that basically means having just 30k CELO buckets every 5 minutes for on-chain exchange uniswap-like mechanism. It makes sense that low reserveFraction value protects reserve itself from losing CELO in arbitrage scenarios, but doesn't it basically shift some of that cost instead to individuals who would trade on-chain to cash out their cUSD? (or to even convert cUSD -> CELO to invest further into network). It seems like such low reserveFraction would make it very difficult (or very expensive) for validators who earn cUSD to convert that cUSD to CELO to either reinvest in their validators, or to cash out and sell it on exchange. Just liquidating 20k cUSD that is paid out per epoch in epoch rewards would be difficult without incurring spread costs >1 percent. I am no expert on this, so I definitely might be missing something. But from my understanding, it feels like such low reserveFraction wouldn't actually decrease arbitrage opportunities or arbitrage gains necessarily, it would just shift the cost of that from reserve to individuals who need to trade on-chain (and right now these would be all the validators who earn cUSD as epoch rewards).

...

... It will be very difficult for anyone to provide cUSD -> USD conversion without putting >1% markup on it. (this is of course before we have real use cases for cUSD other than it being a vehicle to buy CELO or earn as rewards. Once there is organic demand for cUSD for other use cases, things will change somewhat.)

...

... The exchange rate that you get does depend on the amount you trade. And that is where the "unfrozen reserve balance" and "reserveFraction" come into play. When they are low, that means that trading even, say, 1000 CELO can mean that you will get 1-2% worse rate compared to the "market rate".

@Roman | cLabs: The reserveFraction of 0.1% was chosen with the tradeoff between reserve protection and on-chain liquidity in the early days after stability protocol activation in mind. Given that the oracles were quite recently deployed, CELO market liquidity is still evolving, arbitrage bots first need to be deployed and potentially calibrated by users, additional CELO exchange listings might occur and so forth, it seemed most prudent to focus on reserve protection in the early days and to increase the reserveFraction parameter and therefore on-chain liquidity as soon as a more steady state is reached. As you pointed out @zviad | WOTrust | celovote.com, this means that selling large chunks of cUSD early on can lead to significant slippage and it is therefore recommended in the governance proposal to spread out larger trades over time. For a validator who is looking to convert cUSD to CELO, the worst case scenario right after stability protocol activation would likely be that no Hummingbot instances are running and that no user is willing to sell CELO for cUSD on chain which leaves only the reserve on the other side of cUSD sales. With the proposed parameter setting (reserveFraction of 0.1%, bucket updateFrequency of 5 minutes) and under the above worst-case scenario, 100 cUSD could be sold every 5 minutes incurring a slippage of approximately 0.35% for each trade (estimated given the current CELO/USD price level etc.). This gives 28,800 cUSD sold per day with this slippage which is more than the daily total validator earnings of about 20.5k cUSD.

If additionally, arbitrage bots are running that bring the on-chain price roughly back inside the on-chain spread after every block, trades of such size (~100 cUSD) could basically be executed every block without incurring a big price disadvantage (<1%) which means that in theory (and overstating the reality as this argument for example ignores gas fees), a multiple of 12*24 of the above 28,800 cUSD could be sold by validators every day pretty cost efficiently as long as trades are well spread out over time. I would argue that, while the stability protocol is looking for a more steady state, this is a justifiable downside given the additional reserve protection that this reduced reserveFraction parameter provides. I would be very interested in learning more about different perspectives on this though!
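Roman's worst-case arithmetic above can be sanity-checked with a small constant-product sketch. The bucket sizes and the roughly $1/CELO price below are illustrative assumptions (and the sketch ignores the protocol's spread, which is why it lands slightly under the quoted ~0.35%), not the actual Mento parameters:

```python
import math

# Rough sanity check of the worst-case slippage numbers above.
# Assumes a Uniswap-style constant-product exchange between a cUSD
# bucket and a CELO bucket that refreshes every 5 minutes.
# Bucket sizes (~30k CELO at roughly $1/CELO) are illustrative assumptions.

CUSD_BUCKET = 30_000.0
CELO_BUCKET = 30_000.0

def sell_cusd(amount, cusd_bucket=CUSD_BUCKET, celo_bucket=CELO_BUCKET):
    """Return (CELO received, slippage vs. spot) for one cUSD sale."""
    k = cusd_bucket * celo_bucket
    celo_out = celo_bucket - k / (cusd_bucket + amount)
    spot = celo_bucket / cusd_bucket          # CELO per cUSD before the trade
    slippage = 1 - (celo_out / amount) / spot
    return celo_out, slippage

# Selling 100 cUSD into a fresh bucket:
_, slip = sell_cusd(100)
print(f"slippage per 100 cUSD trade: {slip:.2%}")   # ~0.33% (no spread)

# One 100 cUSD trade per 5-minute bucket refresh:
trades_per_day = 24 * 60 // 5                       # 288 refreshes per day
print(f"cUSD sold per day: {trades_per_day * 100}") # 28,800
```

The numbers line up with the quote: ~0.33% slippage per 100 cUSD trade before spread, and 28,800 cUSD per day sold in 5-minute intervals, comfortably above the ~20.5k cUSD of daily validator rewards.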

@zviad | WOTrust | celovote.com: ... I wasn't quite sure if there really were any practical dangers for reserve with previously set reserveFraction, but if there are indeed some worries, makes sense to be safer in the beginning just in case. Worst case scenario, even with current parameters I think it should be possible to go from cUSD -> USD at ~2-2.5% loss (including spread, slippage, exchange fees, etc), which is definitely not the end of the world, especially in these early stages.

I think one part in your response that I am not fully convinced by is that arbitraging entities will help with on-chain price (they will help with volume but make price worse for the average individual trading by hand). Entities that do arbitrage can't just accumulate cUSD infinitely either, they need to liquidate it back too. So I actually think because of that, entities that do arbitrage with a reasonable setup will actually be the ones that take the best cUSD -> CELO rates in those 5 minute blocks, so regular validators trading by hand will actually get even worse average rates overall. (so there will be higher volume available on-chain but probably even worse average rates). But as you mentioned, the tradeoff for initial safety of the reserve does make sense, until we can all see what the stable state actually looks like with everything turned on.

@zviad | WOTrust | celovote.com also voiced concerns about the price oracle code not being open sourced and about the quality of exchanges that trade CELO:

Afaik, reporting oracle code is still not open source. This seems like a pretty big issue. If there is still some waiting for a security audit or something like that it would be useful to know at least. Or just some more information about what the hold up is to have the code open source would be useful. It is quite problematic that code for this very important part of the network is still not publicly visible. Since Oracles are centralized for now, it would definitely help to know more details about oracle setup, and access restrictions. This article doesn’t have much detail around access control: https://medium.com/celoorg/an-introduction-to-celo-oracles-fd1a534669bb. Important questions in my mind would be:

  • Who/how many people have access to Azure account where HSM keys are stored?

  • Who/how many people have direct access to oracle machines?

  • Who has access to deploy new code/image to oracle machines?

  • What does internal or external auditing of this system look like? Is there an audit trail for all potential actions that might cause changes in Oracle operation?

There is still a concern about the number and quality of exchanges that have picked up CELO, but this is less of a concern for now compared to the two above. Bittrex is a reasonable exchange but with medium to low volume overall. OkCoin has even less real volume, and also a history of a fair bit of questionable events in its past. It's unfortunate that these are the only two options for now; it isn't the end of the world, but it will definitely be concerning if this is still the case 2-3 months from now.

@kp | conclave: ... Besides the concerns above - which will take time to be wholly addressed - I'd appreciate thoughts (from anyone) about whether there are valid reasons for voting “no” on proposal #7 (unfreezing the stability protocol), as having it successfully pass would take us one (massive) step closer towards Celo being accessible to everyone, everywhere. Enabling CELO-cUSD exchange would also be the first opportunity for validators to partially offset the expenses that they/we have been floating these past months.

While I agree that transparency around the oracle software is important, based on the cLabs team's track record, I think we can presume that they have legitimate, serious reasons for not open-sourcing code immediately. Also, I'd love to hear different takes on this, but I can't seriously consider the idea of them obscuring code in order to pull off something shady, since they would absolutely be caught and the consequences would be ruinous.

If the concern is highly unfavorable rates due to low liquidity, then - until that's addressed - users can at least protect themselves by setting the "minimum amount received" parameter. It doesn't seem like a strong reason to withhold the option altogether, though.

@syncnode (George Bunea): All the proposals submitted so far, except the one for changing the cGLD name, were already known and planned even before mainnet launch, so there was more than enough time to discuss them; for me there have been no surprises so far. We also discussed the oracles situation and status weeks ago, when I asked whether it's possible to decentralize the oracles as well, and hopefully we will see that happening in the future. So basically, with proposal 7 being accepted, we simply follow the plan that was shared and, in theory, known by most of us. Like @zviad | WOTrust | celovote.com suggested, it would be great to know more about the oracles (source code, audits, setup, etc.). Besides the confidence that may bring, we will also have a clearer picture of potential attack vectors.

The answers to some of these questions were given on Celo forum here: https://forum.celo.org/t/governance-proposal-to-activate-celo-stability-protocol-and-enable-cusd-transfers/547/4


Security audit for Master Validators applying for Celo Foundation votes

There were questions from the validator community about whether winners of the Master Validator badge, who underwent an independent security audit during the Great Celo Stake-Off competition in February, need to redo the audit from scratch. The short answer is no; more details below:

@Jay | Cypher Core: Had this question while filling out the foundation vote application. If one had participated in the security audit in stakeoff and won a master validator seal, would s/he have to go through the checklist again? seems redundant, no?

@claire | cLabs: If you've gone through the security audit for the stake off my sense is you should be good to go but let me check with the security team.

@Deepak Nuli|cLabs/MultiSig: I agree with Claire. We introduced a few more checks for people running validators in a cloud environment.

^ That should not really affect your current scores/seals.

@Jay | Cypher Core: Understood. Though what I was asking was that if one had won the seal, would he have to do the checks all over again for the validator due diligence in cohort 3?

@claire | cLabs: If you [won] the master validator seal in stake off no need to re-run audit though recommended to do a checkup every so often.

@Jay | Cypher Core: That's good to know. So when asked to submit a completed checklist while filling out the cohort 3 application, do I simply skip it? Or include something that can prove my validator seal?

@Deepak Nuli|cLabs/MultiSig: How about make a note about that in one of the answers somewhere so we see it?


Missing signatures

The missing signatures saga continues with more input from the community:

@BisonD: We have been investigating a particular issue with missing signatures that @chris-chainflow reported here, and we found something peculiar. In istanbul.valEnodeTableInfo the signer is advertising an IP address that is not valid, possibly because it has changed. This invalid entry seems to be consistent across every node I checked; however, only 1 other node seems to be missing signature submissions when it proposes a block. Further, the validator behind this proxy node does not seem to be having issues with the external IP that was set in the proxy pair.

First, for milestone planning: it would be great to have DNS entries accepted as valid proxy addresses.

There are situations where controlling IP address and affinity is challenging or could risk resilience.

...

Today, the proxy addresses used by the validator are IP addresses, and these can change. Allowing a DNS entry as the proxy source would help abstract this and prevent cases of changed IPs (which I think is one of the issues we are seeing).

But back to the missing signatures, I suppose a question is how and when does the istanbul.valEnodeTableInfo get updated? Is it only when a proxy restarts? Another question: If a validator starts with the public IP address of its proxy as one thing and this changes for some reason, what should we expect to happen?

Also, I know this might be a silly question, but is there a way to get a running validator's proxy enode URL pair that was passed in via --proxy.proxyenodeurlpair?

^ geth --exec "admin.peers" attach

The result of this call implies that the public IP used in the proxy pair is not used?

@victor | cLabs: It looks like the answer is no; that RPC is a TODO item [1] in the code that will be fixed with multi-proxy [2]

[1] https://github.com/celo-org/celo-blockchain/blob/b3ec9b9eabe6629bc3fc10cfe3cb0c5aae561a3b/consensus/istanbul/backend/api.go#L172-L180

[2] https://github.com/celo-org/celo-blockchain/pull/1026/files#diff-fcdc

@BisonD: How is the public IP in the proxy pair used by the validator?

It may be possible that the public IP of our proxy has changed due to a platform rescheduling, yet it still seems to be mostly communicating with the validator and signing/proposing blocks.

@victor | cLabs: The public IP is what the validator communicates out to the world as where it can be reached (via proxy). When the validator crafts its signed announce message to share its enode with other validators, that is the IP that is included.

@BisonD: What is the recommended approach to re-announcing to the network that a validator's publicly accessible IP address has changed?

@Rob | Polychain: Anecdotally, we've rotated public IPs of our proxy without immediately relaunching our validator and they've continued to operate just fine.

@victor | cLabs: This is a really good observation. The IP address in the enode table is used when making an outbound connection, so if a validator is broadcasting an incorrect / out-of-date IP address, no other validator will be able to initiate a connection to it. The validator broadcasting the incorrect IP will still be able to initiate connections, though. So if two validators both have incorrect IPs in their enode table entries, they will be pairwise unconnected, since neither can make a connection to the other. If ceiling(1/3 * N) + 1 validators have incorrect IP entries, then that set would not be able to successfully propose a block, because they could not send it to the other validators with incorrect entries. Validators with a correct entry still could, because all validators can still initiate a connection to them. In the extreme case, as long as 1 honest validator has a correct IP address in the table, blocks can be produced, but the network would experience a lot of round changes.

This could potentially explain the issues we've been seeing with validators missing blocks proposed by specific validators. If both validators have incorrect / out-of-date IP addresses in the enode table, then they will not be able to communicate.
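Victor's threshold argument above can be sketched in a few lines. This is a simplification, assuming Istanbul BFT's usual commit quorum of ceil(2N/3) and that a stale-IP validator can dial out to every correct-IP peer but cannot be dialed by anyone:

```python
import math

# A validator advertising a stale IP can still initiate outbound
# connections, but nobody can initiate a connection to it. Two
# stale-IP validators are therefore pairwise disconnected.

def stale_proposer_can_commit(n_validators, n_stale):
    """Can a stale-IP proposer still gather a commit quorum?

    It reaches every correct-IP validator (by dialing out) plus itself,
    but none of the other stale-IP validators.
    """
    quorum = math.ceil(2 * n_validators / 3)       # assumed IBFT commit quorum
    reachable = (n_validators - n_stale) + 1       # correct-IP peers + itself
    return reachable >= quorum

n = 100
threshold = math.ceil(n / 3) + 1   # the "ceiling(1/3 * N) + 1" from above
print(stale_proposer_can_commit(n, threshold - 1))  # still live
print(stale_proposer_can_commit(n, threshold))      # stale proposers stall
```

Under these assumptions the model reproduces the quoted bound: with N = 100, proposals from stale-IP validators start failing exactly when 35 (= ceiling(100/3) + 1) of them have incorrect entries.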

@BisonD: I have spent some time looking at validators that seem to have this pairwise missing-signature pattern, and it looks like 2 of 8 have incorrect IP entries.

The IP associated with these signers is stale or unreachable: 0x0d560fc0dc2e1a6befa45e72546373e1eef8e334, 0x59f7b67e6beae0223ddc91eec010b670c553e8e0

It seems that at least these additional signers are locked in some pairwise missing signatures (this is from last week and may have changed): 0x26b6cc3529828aa83b8ad2c7cf8cd55811b954f6, 0x43882141555003b3e71110f567373b59ac4cb0bd, 0xbf9ac3fd4c1a8530580d4d7aa07d235e20de5d7d, 0x16b528a3d3e88456a9586ac06ecdbb9f1bf5bcf0, 0xffbcf262c1d5c4392ef469ba79f2cd195d2affda, 0x0610b8b4e6f5c3241d53ed3374ddca8969cd053c

I can say that in at least one of these pairings all participants have reachable IP addresses in the validator table: 0x26b6cc3529828aa83b8ad2c7cf8cd55811b954f6 => [0x43882141555003b3e71110f567373b59ac4cb0bd, 0xbf9ac3fd4c1a8530580d4d7aa07d235e20de5d7d]

Also to note, the 2 signers with unreachable IP addresses are also in a pairwise interaction.

I suppose it's possible that these 2 signers with stale IP addresses are provoking the larger cluster, but in this mini-network the stale-IP signers are not fully connected to the other nodes, nor does there appear to be a path through them that connects all the nodes missing signatures in this pattern.

... it could totally be there though.


Useful info

  • Hummingbot.io has launched its arbitrage trading strategy for Celo that "allows any CELO holder to earn arbitrage profits while contributing to cUSD stability".


Community

Hi folks, fixed a bug on https://cauldron.pretoriaresearchlab.io/block-map that wasn't showing epoch crossovers correctly when you rotate keys. Now showing as expected: one validator with two signers over the key-rotation boundary (see image). Additional changes:

  • Added "Favourites Only" switch so you can view only those validators you are interested in

  • Removed Baklava faucet and block-map while forno server is undergoing maintenance in preparation for network reset


Like what we do? Support our validator group by voting for it!