Celo Discord Validator Digest #11
Celo-stealing bot and Docker settings, downtime slashing, attestation requests, and community tools.
One of the challenges we have faced as a Celo validator is keeping up with all the information that surfaces in Celo's Discord discussions. This is especially true for smaller validators whose portfolios span several networks. To help everyone stay in touch with what is going on in the Celo validator scene, and to contribute to the validator and broader Celo community, we have decided to publish the Celo Discord Validator Digest. Here are the notes for the period of 29 June - 5 July 2020.
Discussions
Celo-stealing bot and Docker settings
Last week, a community member noticed a strange movement of funds on her Baklava account:
@suvis: this seems sort of weird. my account has transferred to 0x352828EF008Cf9f63Cc79ec002449E2d8B16370a, which has collected many 1 celo from different addresses.
@victor | cLabs: ... It does indeed look like the key for 0x7955Fb4F3225cbC4D570798654098D641CB1e5bB is compromised and some bot is draining it of funds. I transferred 1 CELO to the address, 1 CELO appeared on the account and shortly after was transferred away to 0x352828EF008Cf9f63Cc79ec002449E2d8B16370a
...
So probably a bot scanning the internet for nodes with open and exposed RPC ports. @suvis You may want to check your open ports.
@syncnode (George Bunea): @suvis you can check with:
netstat -an | grep 8545 && lsof -i:8545
to see if the port is being used, and what is using it. Also, you can use telnet to ping that port from another machine (desktop, laptop, etc.) and see if it's open for connections.
If you've launched Docker with the 8545 port open to everyone, this will bypass the firewall restrictions and allow connections from outside on the RPC port.
You should restrict it to localhost only.
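To make syncnode's advice concrete, here is a minimal sketch of checking the port and of binding it to localhost when running under Docker; the image name and flags are placeholders for whatever you actually run:
# who, if anyone, is listening on 8545, and which process owns it
sudo lsof -i :8545
# from a different machine: is 8545 reachable from the outside?
nc -vz <your-node-ip> 8545
# "-p 8545:8545" publishes the port on 0.0.0.0 and bypasses ufw/INPUT rules;
# prefixing a host address binds it to localhost only
docker run -p 127.0.0.1:8545:8545 <your-celo-node-image> ...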
Soon after, it became evident that the bot had also stolen CELO from several mainnet validators who had port 8545 exposed to the outside world:
@Sami Mäkelä: @cryptodad | LetzBake! check your validator, it's sending funds to a bot address https://explorer.celo.org/address/0x35d96442d4d1199926a3dbb228532011643fc046/transactions
also @chorus-joe @Felix | chorus.one https://explorer.celo.org/address/0x606311948f7426ddfd23c1521b15eddb52e83b29/transactions
@daithi | Blockdaemon was affected a few days ago, but now it's stopped https://explorer.celo.org/address/0x39ec4f2a82f9f0f39929415c65db9ea5df54e41d/transactions
@chorus-joe: It is exactly what you describe.
An open 8545 port, through which ofc you can use the unlocked key to sign txs.
It is the VALIDATOR_SIGNER_ADDRESS key, so it can be rotated and will be in a couple of hours.
I cannot see that you can extract the key in any way, so [the attacker is] unable to obtain the BLS consensus [key].
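For context on why an exposed RPC port plus an unlocked key is all the bot needs: eth_sendTransaction asks the node itself to sign with whatever account it has unlocked, so the private key never has to leave the victim's machine. A hedged sketch of the kind of call involved (the addresses are hypothetical placeholders):
# 0xde0b6b3a7640000 wei = 1 CELO, matching the 1-CELO drains seen above
curl -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_sendTransaction","params":[{"from":"0x<compromised-signer>","to":"0x<attacker>","value":"0xde0b6b3a7640000"}],"id":1}' \
  http://<victim-node-ip>:8545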
@trevor | cLabs: Is there a way you could change firewall rules to not allow external traffic to that port?
https://github.com/chaifeng/ufw-docker#solving-ufw-and-docker-issues
this works
@Thylacine | PretoriaResearchLab: My iptables config only allows SSH (configured on a non-standard port) and 8545 from localhost, and explicitly drops 8545 from everywhere else. Docker then manually configures its own forwarding chains relevant to the networking settings of the containers you start, and it all seems to be working. Explicitly dropping 8545 from outside the machine was a requirement during the TGCSO security audit, so it's probably worth doing that if anyone hasn't.
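As background to the ufw-docker link and Thylacine's setup: traffic to published container ports is handled in the FORWARD path, not INPUT, and Docker's documentation points to the DOCKER-USER chain as the place for your own filtering. A minimal sketch, assuming eth0 is the external interface:
# drop external packets aimed at the RPC port before Docker forwards them
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 8545 -j DROP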
@cryptodad | LetzBake!: Problem solved. I put the node behind an Nginx reverse proxy on a separate instance that only allows connections from specified computers on my private Nebula network. The proxy forwards requests to another proxy on the accounts node that only allows connections from the reverse proxy and forwards all RPC requests to localhost.
...
This is the IP of the attacker, but it's almost certainly a hijacked server: 217.227.187.219. A whois query points to the German "Deutsche Telekom AG". I already informed their abuse team to take it down.
...
This is the Celo address of the attacker: https://explorer.celo.org/address/0x352828ef008cf9f63cc79ec002449e2d8b16370a/transactions
The attacker was mostly only able to steal micro-amounts in the area of 0.0000x CELO, as in my case, but there are also transactions with higher value. Since their issue is already solved, I'm not naming the affected validator.
...
How the incident could happen:
1) I was not aware that Docker bypasses the INPUT rules in iptables, so I opened the RPC port to the public on my accounts node while only allowing access via the private network, which was completely ineffective in this case.
2) I rotated the validator signer key to another server to update the currently active server pair. The mistake here was that I copied this key to the accounts node and unlocked it, although I later realised it is not needed at all, since the proof of possession is done directly on the validator for security reasons. I also assumed that unlock would automatically revert to locked after some time, which apparently is not the case without the --duration flag.
3) I transferred some minor funds (like 0.1 CELO) to the new signer address, as I was not sure if some gas is needed to sign blocks, which I now know is not required.
With my iptables rules bypassed and the signer address unlocked, the attacker had full access to the funds on this address. All other addresses I use were protected, as the keys were either on Ledgers or locked with 40-character-long random passwords.
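On the re-locking point: with the personal API enabled, an account can be unlocked for an explicit number of seconds, after which the node locks it again. A hedged sketch over local RPC (address and passphrase are placeholders; note that geth-lineage nodes refuse HTTP unlocking unless explicitly allowed, precisely because of incidents like this one):
# unlock for 300 seconds only; the key re-locks automatically afterwards
curl -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"personal_unlockAccount","params":["0x<signer-address>","<passphrase>",300],"id":1}' \
  http://127.0.0.1:8545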
Downtime slashing
Last week, we again saw some validators missing blocks for hours, which once more sparked a discussion about downtime slashing not being active:
@warfollowsme | Celomap.io: DowntimeSlasher still not working?
@zviad | WOTrust | celovote.com: For slashing to work, a new contract needs to be deployed. So it will have to go through the governance process afaik, even after the code is complete. So we are still pretty far away from slashing working.
The TDLabs validator has been down for >13,960 blocks, which is definitely a new record. If slashing were enabled, it would have been slashed twice for this.
@warfollowsme | Celomap.io: Yeah, already figured it out. I got a gas error when calling slash, and when I started to look into why, I found this pull request: https://github.com/celo-org/celo-monorepo/pull/3806. So yes, validators can sleep peacefully for the next few months, I guess. Unless the Celo team manually regulates this behavior, for example by removing votes, because in fact this is a violation of their requirements.
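For scale, a back-of-the-envelope conversion of that outage into wall-clock time, assuming Celo's ~5-second block time:
# 13,960 missed blocks at ~5 s per block
echo $(( 13960 * 5 ))   # 69800 seconds, i.e. roughly 19.4 hours of downtime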
Attestation requests
Meanwhile, mainnet validators started receiving attestation requests and earning their fees:
@cmcewen | cLabs: Very very very excited to report that we have successfully verified a few folks on mainnet and sent the first transaction utilizing the lightweight identity protocol!
@zviad | WOTrust | celovote.com: Looks like we get paid ~0.000323 cUSD for attestation requests, which is quite a bit smaller than what it costs to send an SMS over something like Twilio. But I guess the validator payout is more than enough to cover this extra cost.
@victor | cLabs: Hmmm, that's odd. If you check out network parameters the attestation fee is set to 0.05 cUSD.
@Sami Mäkelä: Looks like the attestation contract is keeping the fees...
Hmm, you need to withdraw them, hopefully there's a method for that in the release gold contract.
@zviad | WOTrust | celovote.com: Ah wild, I just checked the contract and indeed there is a setup to withdraw fees later on. (even though I got paid some fees in cUSD, I guess that was gas fees paid in cUSD).
But there is no celocli command or release gold contract implementation of it yet. (but I am guessing it will take a while before the amount becomes relevant).
At the same time, it turned out that if the person requesting an attestation doesn't click the link they receive, the attestation is marked as failed for the validator:
@Peter [ChainLayer.io]: I have the same.. one validator shows 1/1 the other 1/nothing. Both received an attestation and both were answered according to the logging..
...
Logging for 0x4F.. shows an SMS sent successfully, sounds like something's buggy somewhere then
...
Ok so if the person registering doesn't click the link, the attestation counts as unsuccessful and we won't get the fee?
@Sami Mäkelä: From the description, it looks like the person registering should post it, so probably it's like you said: "if the person registering doesn't click the link, the attestation counts as unsuccessful and we won't get the fee".
...
Well, the person who requested attestations will just lose the fee then.
@Peter [ChainLayer.io]: Yes and the validator pays money for the text and has a failed attestation in his name ...
The sms fee doesn't worry me too much (unless this starts happening 100s of times per day), but failing attestations when you are not failing them sounds kinda wrong to me.
@nambrot | cLabs: I appreciate everyone sharing their concerns about missing attestations. I'd imagine in the beginning, the foundation will be rather forgiving as we will figure out the kinks of the identity protocol.
It is indeed the case that while a validator might operate the attestation service appropriately, if a user does not actually complete the attestation by submitting the complete transaction, the stats will look worse for the validator and the validator will not be compensated. However, in that case, the user will also be adversely affected, since attestations for their account will look worse as well.
So in general, we will have aligned incentives between honest validators and the requesting user.
Nevertheless, the inability to figure out who is really to blame (did the validator not send the SMS / did the carrier fail to transmit it / did the user not submit the TX) is the reason why it is difficult to, for example, find slashable conditions for this part of the protocol.
Since attestations are uniformly randomly selected, we expect the "natural failure rate" to be uniformly distributed as well, so I wouldn't consider the occasional missed attestation (especially in the beginning) as a source of concern.
Even with the small sample size right now, we do see some cause for concern, with some validators missing all attestations, and that's something I'll be looking at today.
Incomplete attestation could not be found is happening a lot, it seems, and it might indicate an outdated attestation service (since we updated it to account for the new privacy changes after the RC1 launch). I'll add some more debugging info that can hopefully be deployed today.
In addition, all validators should make sure they are running the latest attestation service image on their attestation nodes; otherwise they might fail attestation requests:
@Francesco | Simply VC: I see 19 of these
attestationRequest: {"account":"0xe3eb8D848E30d0Cdb39cb7bD55edF7a789ca53C9","phoneNumber":"+4915xxxxxxxxx","issuer":"0xc2023E0D13F53d181f29B99D829CD75628337D15","salt":"wJzf6Kar3ivkt","smsRetrieverAppSig":"GH+4Okn6nOW"
and 19 incomplete attestation found between 16:12:32 UTC yesterday and 16:47:30. Has anybody got any incomplete attestation found errors on mainnet? Have seen some talk about this in previous testnets, but never on mainnet. Our Celo attestations node is synced and the test-attestation command works successfully.
attestation-service[61] INFO: No incomplete attestation found err: Error: No incomplete attestation found at AttestationRequestHandler.<anonymous> (./lib/requestHandlers/attestation.js:152:35) at step (./lib/requestHandlers/attestation.js:33:23) at Object.next (./lib/requestHandlers/attestation.js:14:53) at fulfilled (./lib/requestHandlers/attestation.js:5:58) at process._tickCallback (internal/process/next_tick.js:68:7) req_id: "c720fd8e-a671-4305-9f6f-ee194421addb"
Each of them has a similar error to the one I sent above.
Actually since my latest deploy pulled the updated image of 11 May, clearly what we were running was prior to the 11 May update...
@nambrot | cLabs: Yeah, my suspicion is a bunch of validators are in your position where they are not actually running the latest version.
...
Now that there have been a few attestations on Mainnet, I want to draw attention to some of the early lessons that I think we should address:
Yes, this is an early phase, and missed attestations do not necessarily indicate fault on the validator's side. That being said, the Celo network is currently completing less than 50% of attestations. That is unacceptable, as it breaks one of the key user-experience improvements Celo is claiming. This can't be completely explained by faulty clients.
We have two current hypotheses about issues on the validator side:
1) Validators running an outdated Docker image. When RC1 was stood up, the privacy service was not yet stood up. Once that work was completed, the attestation service image was updated, as the older one is no longer valid. We know of several instances, so please re-pull the image (us.gcr.io/celo-testnet/celo-monorepo:attestation-service-rc1 is the "old" tag; I also tagged the newest release with us.gcr.io/celo-testnet/celo-monorepo:attestation-service-1-0-1). We also included https://github.com/celo-org/celo-monorepo/pull/4313 in the latest update, which adds a bunch of helpful debugging probes to avoid these issues in the future.
2) There are a few validators who do not even seem to have an appropriately registered attestation service online. I'm going to occasionally call people out for it. Please don't make me call you out.
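For anyone re-pulling, a minimal sketch of refreshing the image, assuming your container is named attestation-service and is recreated with your usual flags; the final check assumes a build new enough to expose a status endpoint on the service's default port 3000:
docker pull us.gcr.io/celo-testnet/celo-monorepo:attestation-service-1-0-1
docker stop attestation-service && docker rm attestation-service
# re-create the container from the new tag with your usual flags, then verify:
curl -s http://localhost:3000/status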
Community
thecelo.com now has an Exchange tab where you can see detailed info about Celo's exchange rate and recent orders.
chris-chainflow has created a new simple Telegram bot that sends an alert when a new proposal has been submitted on Celo: https://github.com/Chainflow/celo
Adrian | moonlet.io provided an update on the Moonlet wallet for Celo:
🔷 ACHIEVEMENTS: 🔷
✅ We finalised the UI/UX part. The goal was to make it as abstract as possible in order to be able to reuse it in different contexts. See the video with the Quick Vote flow.
✅ We covered all CELO operations. However, we are facing some challenges in displaying the Reward balance, as posted here: https://discordapp.com/channels/600834479145353243/600840423958904842/727445624450056192
✅ We developed a Notification Center that will allow any user to stay informed about transaction status, vote activation, and withdrawals.
✅ We built an API to help us display relevant info about user and validator status, and thus keep the content relevant.
🔷🔷 NEXT STEPS: 🔷🔷
Our goal is to launch a testing version in August, once we have a stable build and all user flows are covered. It would be great to have as many of you on board during this testing phase.
🔷🔷🔷 OUR ASK 🔷🔷🔷
Hopefully our initiative will add value for the whole community, both CELO holders and validators. However, this requires a lot of development effort on our side. Our ask, therefore, is to help us gather around 205,636+ CELO votes so that our second validator gets elected http://thecelo.com/group/moonlet and we can keep the ball 🏀 rolling.
Thanks so much!
Like what we do? Support our validator group by voting for it!