One tree for every epoch that the GROOT staking pool is running. Over 200 are already towering like oaks.
+1 tree per epoch with >0 minted blocks, +2 per epoch with >20.
GROOT is a performance-oriented staking pool for the Cardano project. Planting a smaller or bigger forest solely from pool profit will be a special gift.
Pool ID 4687b887da2d05e326e1d28b7c424f2ce5c9a933809f10c870bb8e34
Elements that need to be considered in order to build a highly performant and resilient Cardano stake pool.
Finding a cost-effective combination of services, preferably without using the main three cloud providers and their services.
Pruning a Black Roses shrub, some work, passion. The result is one of, if not the, most efficient clusters inside the Cardano network.
Delegate to The Best Cardano Pool – We Are GROOT
Friendly note: When the total amount delegated is below 3 million ADA or over 60 million ADA, the annualized interest can be significantly affected.
Every epoch is a lottery in the first case, and usually not a bonanza. Rewards are considerably reduced in the second.
The Brave Way: your stake won't change things much and you know it, but your altruistic approach is about principle and concepts, not about money. In the long run the pool can benefit if more heroes choose it, and the immediate benefit is that the long run becomes shorter every time a person acts like you. You already know what you're doing. We respect you!
The Flower Power Way: you have enough stake to make a difference for a pool with low stake. It is a great and appreciated gesture, and profitability can also be higher: the blocks produced after a pool has met and exceeded the "threshold" make bigger profits for delegators and stake pool operators in the first phase, then things equalize. It also helps the entire Cardano ecosystem. We love you too!
The Easy Way: find a pool in the 15-40 million delegated range and let Cardano take care of your future. You can find useful websites in the next sections.
Note: If the total number of minted blocks is below 100 for the pool you are interested in, the stats on the pages below are not really relevant yet, as the amount of information is limited. Check the pool website, description and any other information you may find useful to assist with your decision.
The Any Way: whichever way you choose, it is important to keep an eye on the pool you delegated to; if not once every two weeks, at least once in a while. Any change to a pool takes effect after 2 epochs, and one epoch lasts 5 days.
Bookmark one of the pages below, or anything similar with your pool information on it, and the check takes less than a minute.
If you don't check, there are risks: the pool can get saturated (saturation affects all delegators, not only the last), the owner might suddenly change the fees, the pool might get retired, the pool may change its name to something you don't like, etc.
Note: All calculations are approximations to the closest digit and the 340 fixed fee is ignored; as long as more than 6 blocks/epoch are minted, it's negligible.
Stake, or delegated stake, is the most important variable in the block assignment equation at the moment; see the friendly note above.
Used to notify delegators randomly when we are close to saturation.
Both Cardano delegators and stake pool operators (SPOs) need to use some of the following resources at one point. All the metrics websites display the same information, taken from the Cardano blockchain explorer, but there are differences in how it is presented. Check and use whichever you like.
Useful staking pool metrics for both delegators and pool operators in Cardano Network
Cardano blockchain technically related content
Telegram channels for Cardano Stake Pool Operators and Cardano Developers
For cold operations use a computer which is not connected, and will never be, to the internet.
Initial pool setup takes fewer than five data transfers between the cold and hot environments. Key rotation happens every three months, four times a year. Call it 5 or 10 transfers a year; that's all the effort, and your funds are secure.
Note: Have Money - Love Story! No Money - I'm Sorry!
When we install any computer, VPS/dedicated or a Raspberry Pi in the kitchen, don't leave it hanging around; secure it as fast as possible after deployment/installation. Turn it off if there's no time for that, but don't leave it exposed.
If your SSH keys are not in place, either create a new pair on the newly installed computer or, if you have a set prepared, copy them to the new computer via ssh-copy-id, rsync, WinSCP or whatever you'd like.
Test that your keys are working by opening a new SSH session without closing the current one; if it doesn't work, try again, you've missed something.
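If you need to create the pair first, a minimal sketch looks like this. The demo writes into a temporary directory so it is safe to run anywhere; on a real setup you would use ~/.ssh and your actual user@host, which are placeholders below:

```shell
# Create a fresh ed25519 pair (demo path; use ~/.ssh/id_ed25519 for real)
demo=$(mktemp -d)
ssh-keygen -t ed25519 -f "$demo/id_ed25519" -N "" -q

# On the real setup, push the public key and then test it in a NEW session
# before closing the current one (user/host below are placeholders):
#   ssh-copy-id -i ~/.ssh/id_ed25519.pub user@your.server.ip
#   ssh -i ~/.ssh/id_ed25519 -o PasswordAuthentication=no user@your.server.ip
ls "$demo"   # shows id_ed25519 and id_ed25519.pub
```

An empty passphrase (`-N ""`) is used only to keep the demo non-interactive; on a real key, set a passphrase.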
If you can't decide which port to use, have a look at a list of the 1,000 most scanned ports and avoid them. You can also consult iana.org for a detailed description of the port you choose, to see whether it is already assigned.
Restart sshd, then test.
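The edit itself is one Port line in sshd_config. As a sketch, here it is demonstrated on a scratch copy so it can be run safely (725 is only an example port; on the real server point the same sed at /etc/ssh/sshd_config):

```shell
# Demo on a scratch copy; on the real server run the same sed (with sudo)
# against /etc/ssh/sshd_config, then: sudo sshd -t && sudo systemctl restart sshd
cfg=$(mktemp)
printf '#Port 22\nPermitRootLogin no\n' > "$cfg"

# Uncomment/replace the Port directive; 725 is just an example
sed -i 's/^#\?Port .*/Port 725/' "$cfg"
grep '^Port' "$cfg"   # -> Port 725
```

Running `sshd -t` before restarting catches a broken config while your current session is still open, so you don't lock yourself out.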
Before the upgrades we can optionally set the language and time zone. The language may save you some time on various future updates. If you go this route, pay attention when you generate en_US (or whatever you prefer): some providers generate en_GB, so change the second line to GB if that's the case. Configure your time zone or, better, set it to UTC.
During the upgrades you might be asked whether to keep some current config files or replace them; sshd_config can be one of them. Keep the current version on all of them if you don't know which is better; replacing a file will implicitly require reconfiguring it.
# 6. Before upgrades
sudo locale-gen en_US.UTF-8
sudo update-locale LANG=en_US.UTF-8
#timezone config
sudo dpkg-reconfigure tzdata
# 7. Upgrades and clean
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get autoremove
sudo apt-get autoclean
#Optionally you can enable unattended-upgrades
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
After the upgrades reboot the computer; this reboot is not optional, and sometimes it's better to run the upgrades again after the restart as some images/packages can be old. If we skip it, system performance might be affected. Optionally there is the unattended-upgrades package; the upgrades from there are only stable versions, but if you don't like it just leave it out.
You can skip the following part, but I recommend a set of limits on connections coming to your server: limits per Cardano node port and on the global number of incoming connections. For this we need to add rules to the before.rules config file in UFW; because these are before rules (before user rules), your IP should also be added here. We use before rules so that we also have a summary view of the other settings and don't configure rules that will be subsequently ignored.
Two important aspects: first, if we write a rule inside the UFW rules config and mess something up, UFW will get and stay disabled when reloading the rules, with a message about which line is broken; second, the rules are read in order.
If the first rule allows port 6005 and a denied IP comes after it, the deny won't block that IP from connecting to port 6005. If you want to deny an IP, use ufw insert 2 deny from 184.108.40.206; use insert 2 or 3 (assuming you have at least 2 or 3 rules) for easy checking/removal afterwards, and avoid position 1 as you use that one for critical allows.
We'll use masking. Basic masking is very easy to understand for UFW block/allow on IPv4; advanced masking might be a bit confusing.
The basics are like this, for an example IP starting with 104: mask 8 matches only the first octet (104.*.*.*), mask 16 the first two octets, mask 24 the first three, and mask 32 the exact IP. So when you allow/deny with mask 8, any IP starting with 104 will be allowed/denied.
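A quick way to sanity-check what a given mask would match, as a toy shell function (the helpers and sample IPs below are mine for illustration, not UFW commands):

```shell
# Toy helpers (not UFW commands) to check which IPs a mask would match
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a<<24) | (b<<16) | (c<<8) | d ))
}
in_block() {  # usage: in_block IP NETWORK MASKBITS
  local ip net bits
  ip=$(ip_to_int "$1"); net=$(ip_to_int "$2"); bits=$3
  # compare only the first MASKBITS bits of both addresses
  if (( (ip >> (32-bits)) == (net >> (32-bits)) )); then echo match; else echo no-match; fi
}

in_block 104.26.15.9 104.0.0.0   8    # -> match    (first octet only)
in_block 104.26.15.9 104.26.15.0 24   # -> match    (first three octets)
in_block 104.27.15.9 104.26.15.0 24   # -> no-match
```

This mirrors what UFW does internally with the mask: only the leading bits are compared, everything after them is ignored.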
# 8. UFW install
sudo apt-get install ufw
# 9. Add your IP, node port and enable
sudo ufw allow from xx.xx.x.xxx
sudo ufw allow 6005/tcp
sudo ufw enable
#10. Add your IP in before.rules
# Limit connections coming in
sudo nano /etc/ufw/before.rules
-I ufw-before-input -s 220.127.116.11 -j ACCEPT
-A ufw-before-input -p tcp --syn -m connlimit --connlimit-above 3 --connlimit-mask 32 -j DROP
-A ufw-before-input -p tcp --syn --dport 6005 -m connlimit --connlimit-above 60 -j DROP
-A ufw-before-input -p tcp --syn -m connlimit --connlimit-above 20 --connlimit-mask 24 -j DROP
-A ufw-before-input -p tcp --syn -m connlimit --connlimit-above 400 --connlimit-mask 0 -j DROP
First rule is to allow our IP.
Maximum 3 connections per IP, whatever the port. The node needs only one per IP; let's say two, in case the other party has problems connecting and their server initiates another connection without closing the first one. So I set 3, because I feel pretty.
Maximum 60 connections coming in via port 6005, the node port. This should be the third rule (or second in the blocking category) to avoid the unintended mess that can be caused by a legit but troubled node trying to connect chaotically. The first rule caps any IP at 3 connections, and then anything Cardano-node related is free up to 60. Placing this rule after the 400 one could make our node useless for new Cardano connections in the face of a concerted random attack; let's not forget that we are here to run Cardano.
Maximum 20 of the 254 potential IPs behind a mask-24 block; note that the number of computers behind those 254 addresses can be bigger.
Maximum 400 total incoming TCP connections to this server, using mask 0. We don't need 400, but let's be selfish.
Considering the above rules, the maximum needed is 180 for operations, plus a couple more if we upgrade or watch a movie at the same time. This kind of configuration is protection in case something hard to anticipate happens: we still want to be operational as a Cardano stake pool and keep Ouroboros running, but at the same time not get knocked out.
You can tailor the numbers however you consider reasonable. I didn't use masks 16 and 8, as the IP distribution around the world is not equal; there's no benefit in configuring useless rules for those. Making such rules useful takes some work, and the efficacy would still be questionable. After you add any rules straight to *.rules, make sure to run ufw reload.
While in before.rules you can check and adjust for your needs: the 5th set of rules relates to your server's ping response. We can DROP it from here or from sysctl; I recommend this one, as you and the IPs allowed above will still be able to ping the server for various checks or monitoring, but nobody else can. In sysctl it is disabled for everyone.
UFW provides limits on the maximum connection attempts in a defined period; the default is 6 times/30 seconds if you enable it. The command is:
ufw limit 725/tcp — this will deny any IP that tries more than 6 times in 30 seconds on port 725.
All the rules you input by command are added for both IPv4 and IPv6; if you actively use UFW, adding and removing rules, and only need the IPv4 ones, you've probably noticed it adds rules for both protocols. Changing IPV6 to no in the first set of lines in sudo nano /etc/default/ufw will block all IPv6 and only allow loopback communication; no IPv6 rules will be added, and the status/rules will be easier to read.
If we don't have a static IP and our ISP can't give us one, the simplest and safest way is to allow the entire range; with mask 8 allowed we are around 200 times less exposed, without locking ourselves out.
#11. Additional UFW before.rules checks
#DROP ping response from UFW not sysctl
# ok icmp codes for INPUT
-A ufw-before-input -p icmp --icmp-type destination-unreachable -j DROP
-A ufw-before-input -p icmp --icmp-type time-exceeded -j DROP
-A ufw-before-input -p icmp --icmp-type parameter-problem -j DROP
-A ufw-before-input -p icmp --icmp-type echo-request -j DROP
#12. UFW allow class for dynamic IPs
# if you want to use UFW ALLOW user rules
sudo ufw insert 1 allow from 18.104.22.168/8
# 22.214.171.124/8 has the same effect
Allowing 10 IPs with mask 8 instead of opening the SSH port wide is still around 200 times less exposed, so don't lock yourself out; use whichever mask you are sure will meet your needs. If you know for sure that only the last set of digits of your IP will change, use mask 24. This rule should be added as the first one (see the UFW discussion above) in before.rules, right after the "End of required lines", or with ufw insert 1 allow from if you use the command line for user.rules.
If you don't have a fixed IP or class of IPs, don't want to add an IP to your firewall, or have to connect from an unknown location, use simple port knocking. You can go for advanced schemes, but they are more annoying.
#13. Port knocking
sudo nano /etc/ufw/before.rules
-A ufw-before-input -m state --state NEW -m tcp -p tcp --dport 725 -m recent --rcheck --name SSH -j ACCEPT
-A ufw-before-input -m state --state NEW -m tcp -p tcp --dport 21856 -m recent --name SSH --remove -j DROP
-A ufw-before-input -m state --state NEW -m tcp -p tcp --dport 21857 -m recent --name SSH --set -j DROP
-A ufw-before-input -m state --state NEW -m tcp -p tcp --dport 21858 -m recent --name SSH --remove -j DROP
The knocking works like this: before being able to connect to our SSH port 725, we first initiate any connection (we can use Putty with a changed port, or any means of sending one data packet) to port 21857, for example. Nothing visible happens, as the packet is dropped, but port 725 is now open for us, so we can connect via SSH and do the autumn ploughing. After we disconnect, we initiate a connection to 21856 or 21858 to close the SSH listening. The ports ending in 6 and 8 are there so that a sequential port scanner which opens the door (by hitting the one ending in 7) closes it again straight away, leaving nothing exposed.
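The client side of the knock can be scripted; the helper below is a hypothetical sketch (the hostname is a placeholder, the ports are the examples from the rules above) with a dry mode that only prints what it would send:

```shell
# Hypothetical knock helper; 'dry' mode prints the command instead of running it
knock() {  # usage: knock HOST PORT [dry]
  if [ "${3:-}" = dry ]; then
    echo "nc -z -w1 $1 $2"
  else
    nc -z -w1 "$1" "$2"   # one TCP packet is enough; the rule drops it anyway
  fi
}

knock my.server.ip 21857 dry   # opens 725 for us
# ssh -p 725 user@my.server.ip ... do the work, then close the door:
knock my.server.ip 21858 dry
```

Any tool that sends a single packet works in place of nc; the server never answers, so a timeout on the knock itself is expected and harmless.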
Very important 7: don't use the Google, Ubuntu and Fedora NTP servers; they are inaccurate. Google used to be good; right now it is at best nonsense, with a low stratum and very good latency but hilarious time. Note: time.nist.gov seems a reliable source in many US locations.
Very important 9: use minsources 3 (a global setting). It compares the times of at least 3 sources before deciding, and automatically eliminates the bad one. Without this option the accuracy might suffer if, for example, there is an NTP server very close but inaccurate and some accurate ones far away; chrony will just synchronize with the closest, considering the others affected by latency.
The easy option is to use pools from ntp.org: choose 0/1/2.country.pool.ntp.org and 2/3/4.continent.pool.ntp.org, or whatever you'd like. iburst should be present on all servers and pools; for pools I go with maxsources between 4 and 8.
#14. Chrony settings
sudo apt-get install chrony
sudo nano /etc/chrony/chrony.conf
pool 1.sg.pool.ntp.org iburst maxsources 7 maxpoll 8
pool 1.asia.pool.ntp.org iburst maxsources 7 maxpoll 8
server ntp.xtom.com.hk iburst maxpoll 8
maxupdateskew 5.0
rtcsync
makestep 0.1 -1
leapsectz right/UTC
local stratum 10
minsources 3
We don't use minpoll, and our maxpoll is 8, not less than 6 in any case. A poll of 8 means that at most every 256 seconds we do a new synchronization with the NTP servers; poll 7 is 128 seconds, poll 6 is 64, and so on. There are some compulsive disordered myths about using maxpoll 2, 3, or even 0 or below.
If we are in a position of making more than one request per minute, something else in our configuration is really not fit for purpose. You may use such values when you have your own set of 3 NTP servers.
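The arithmetic behind the poll values, for reference:

```shell
# The poll value is an exponent: interval = 2^poll seconds
for poll in 6 7 8; do
  echo "poll $poll -> $((2**poll)) seconds"
done
# -> 64, 128 and 256 seconds for polls 6, 7 and 8
```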
Some of the reasons against:
The recommended way is to search for Stratum 1 and 2 servers near your server location and use them. If your Google search is currently broken, there's another solution:
#15. Start Chrony only with IPv4 (add -4)
sudo nano /etc/systemd/system/chronyd.service
ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh -4
Common issues: chrony is listening on or giving you IPv6 sources but IPv6 is disabled; edit the chronyd service and start it with the -4 option, IPv4 only. If none of the servers is working, check your firewall, then check with your provider to see if they force their own servers.
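To verify things after a restart, `chronyc sources -v` should show one source marked with `*`, and `chronyc tracking` reports the current offset. Pulling the offset out of the tracking output can be done like this (the sample line below is illustrative; on a live server pipe the real command in):

```shell
# On a live server:
#   chronyc tracking | grep '^System time' | awk '{print $4}'
# Sample of the line that command matches:
sample='System time     : 0.000123456 seconds fast of NTP time'
echo "$sample" | awk '{print $4}'   # -> 0.000123456
```

An offset of a few milliseconds or less is what you want to see once the sources have settled.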
For improved performance and security, I've prepared a set of rules that can be added to any node. Check the last part if you are running multiple virtual nodes on one host, as some of the marked settings might interfere with your virtualized hosts; if you run a single instance of Linux, all the settings are just copy/paste.
#16. Sysctl settings
sudo nano /etc/sysctl.conf
#add following lines
# Ratio tcp/app
net.ipv4.tcp_adv_win_scale = 2
# Latency over Throughput
net.ipv4.tcp_low_latency = 1
# Long live the King
net.ipv4.tcp_slow_start_after_idle = 0
# No route save
net.ipv4.tcp_no_metrics_save = 1
# Send data in first packet
net.ipv4.tcp_fastopen = 1
# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
# BBR
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
# Hansel and Gretel
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# ASLR
kernel.randomize_va_space = 1
# SYN attacks
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syn_retries = 4
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1
# Ignore ICMP broadcast request and bogus response
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Reboot 10 seconds after fatality on Kernel, avoid downtime
kernel.panic = 10
vm.panic_on_oom = 1
# Swap use
vm.swappiness = 10
# Use the below set carefully if you run multiple machines on one host
# No routing and redirects
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv6.conf.default.accept_source_route = 0
# Martians (only if you are interested in looking at the logs)
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
# No pongs to pings, don't enable if you use ping
net.ipv4.icmp_echo_ignore_all = 1
net.ipv6.icmp.echo_ignore_all = 1
It would be counterproductive to explain every rule here, but there is a small description before each; if interested, feel free to search for more.
The last rule is the response to ping. As we previously added our IP to before.rules and set the other requests to DROP, it is not necessary to activate this option, and it would stop any response at all. However, if ping is not needed, go for it.
IPv6 is faster and better than v4, mostly because it lacks NAT and gateways. In a couple of years, when providers are better prepared, we'll have a noticeable improvement in latencies; at the moment, however, we keep it disabled when running only the Cardano node.
Regarding security, there are some flying poetries around narrating the flaws and other epic fails; my suggestion is to do your own research and remember that both IPv4 and IPv6 are transport protocols, and both need to be protected.
Sysctl might not be fully applied if some kernel modules are not loaded at the time sysctl runs. In this case an easy option is to add it as a cron job 33 seconds after restart; you can tweak the time, as no more than 5 seconds are needed in 99% of situations.
#17. Delays and synchs
#Delay the cardano-node service with 60 seconds
#usually called cardano-node.service or cnode.service
sudo nano /etc/systemd/system/cardano-node.service
#add the below line before ExecStart
ExecStartPre=/bin/sleep 60
#Edit crontab and add these lines
sudo crontab -e
@reboot sleep 33; /sbin/sysctl --load=/etc/sysctl.conf
@reboot sleep 35; chronyc -a 'burst 4/4'
@reboot sleep 53; chronyc -a makestep
@reboot sleep 55; /sbin/hwclock --utc --systohc
We handle these two together because we have to edit the same file, fstab, to add both entries. A swap file is better to have even if the amount of memory is more than enough. Securing the shared memory means that no programs can be executed from there.
#18. Create swap file and secure shared memory
#Check if swap is on and you're happy with the size
swapon -s
#Disable if not happy
sudo swapoff -a
#Set size (2G = 2GB)
sudo fallocate -l 2G /swapfile
#Secure the swap
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
#Make it
sudo mkswap /swapfile
#Enable it
sudo swapon /swapfile
#Make sure it's there
swapon -s
#Add the following lines in fstab
sudo nano /etc/fstab
#1st is swap, 2nd is shared memory security
/swapfile none swap sw 0 0
tmpfs /run/shm tmpfs defaults,noexec,nosuid 0 0
Download all of the snippets in one TXT file.
With these settings our server is ready to run a Cardano stake pool securely and very efficiently.
In regards to security there are two valid statements:
In regards to resiliency, and life in general, there is one simple thing: it's not about IF but WHEN. Don't keep all your eggs in one basket, and be prepared to lose the battle but win the war... IF it is worthy!
Your server, cold wallet, EEPROMs, biometrics, etc. can be stolen, hacked or destroyed at any point. Stories about some mofos having five-factor authentication, guesting themselves on their own servers to protect some imaginary roots, and using quantum-photonic encryption in order to be imba are mostly BS.
Any layer is prone to failure or penetration; it's a matter of time and/or who's on the other side, assuming that the other side's distinctive advantage is not bending their knees backwards but a passion for hacking into 30-dollar servers.
Each layer adds complexity to a system, which does not automatically convert to security. The weakest link and the human factor will remain as they are, but the easiness of restoring/replacing a simple system won't. Focus and achieve the right balance for you.
My suggestion is to never use more layers than you can handle, and if you believe that something is not safe, it isn't. Take it easy, get more knowledge, understand what's happening and whether your time, effort or investment makes sense for those specific circumstances. If in doubt, don't do it!
Keep your bloody cold keys always COLD and have backups.
Copy/paste those lines of code, talk with God, use some sorceries but keep them fucking COLD.
Those keys are everything that matters in our case here; everything else, losing a server or all of them, the pool shut for a week or a month, your relays hacked by some IRC fans, another set of lunatics sending PayPal phishing from your producer… trivial details.
One of the fastest and most resilient stake pools doesn't automatically mean a profitable one; in fact, one has nothing to do with the other. My reason to do it is a combination of being part of the Cardano project, a globally optimized network with a defined scope, helping the blockchain and everyone in it, improving my handsomeness, etc.
The budget, and those couple of trees planted for this project, were set in order to have everything settled, with peace of mind, and not feel the need to spend my next two years nurturing servers and frantically checking whether I got one block or ten. This was also one of the reasons why I didn't want to start with 2-3 servers and scale up, with a substantial number of "what if" variables to think about. It's simple: do it right or don't do it at all. Right for me is what I currently configured; for other people it can be totally different.
The plan was to have relays on 6 out of 7 continents and to cover as many countries connected to the intercontinental submarine network cables as possible; I consider that global decentralization in networking terms. I had servers in most of the planned locations, but I didn't keep them as they had no utility, yet.
There are elements to be considered, and differences, between creating an autonomous, truly resilient network that performs great as a staking pool and randomly deploying 20 servers around the globe.
It is hardly possible to achieve resiliency for a stake pool when the servers meet any of these characteristics:
With these in mind I started drinking. One important aspect was the cost, as I will run the stake pool for a minimum of two years and evaluate after.
One of the easiest solutions is to use Google, Amazon and Microsoft together, as their networks have one of the most varied usages of Tier 1 and 2 carriers, so in terms of performance it should be good; it's actually not. First there was the moral dilemma about using their monopolistic positions for a project that explicitly wants otherwise; beyond that, their prices are pretty comical for our needs and didn't make any sense.
Note: I've tested GC, AWS and Azure against the affordable providers in the same locations, and the performance was in the same range. If you plan to use the big three, there is a slight overall difference as follows: AWS performs slightly better in many European locations, GC slightly better in the US ones, and Azure slightly better in East Asia. If I had to use them, this would be my distribution.
The tests were all about slot fetching between relays and block producers; latency and time synchronization were the most important variables.
Being around on the testnet, I noticed this was the biggest issue. Right now everyone is happy and unconcerned about it, as stake is the most important variable in stake pool profitability, so most of the effort needs to go into attracting delegators.
Currently, to miss a block you need to have something misconfigured, whereas in the ITN you needed everything configured just to win the block. Going forward, latency and time synchronization are still the most important aspects for the nodes and Cardano network performance, in my opinion. IOHK is currently developing Ouroboros Chronos, which provides time synchronization without NTP and classic synchronization as we know it; even after Chronos is deployed we still need to perform great. Depending on how much flexibility we'll have with Chronos, there is a chance that if our pool is not properly configured the protocol will just reject us every time; the number of transactions will also increase.
Let me illustrate what I understand by networking performance in Cardano's decentralized system. Say we have one block producer and one relay in Belgium; all the blockchain information is fetched from that one relay to the producer. When we have two relays, one BE and one NY, the fetching per 24 hours should be 50-70%/50-30% in BE's favour; if the ratio is 80% or more in favour of BE, the NY server is almost useless. The producer chooses to get the information from the fastest source, and every other pool operator's servers connected to the network do the same.
If a relay is not exchanging information with the producer, it is currently no help for the Cardano blockchain and no use for our stake pool or other staking pools. It is almost useless, always arriving after the party with some goodies; the only theoretical use is if all the others go down and it's the last one standing.
Not feeding the producer, or doing it only a couple of times a day, happens for two reasons:
Before we continue: are all these necessary? It depends on what you want to achieve, but for a pool to simply run: no, they aren't. Minimum requirements: one BP and one relay in the same location. If you go with this setup, don't place your relay somewhere else, as this might affect performance and actually increase the downtime; with the two machines in two different locations, the "when will it fail" answer is expected to happen twice. There's no benefit in doing that; don't do it.
Minimum with redundancy: one BP in one location, one relay close to it but on another exchange, one relay on another continent.
Note that this works well between the US and Europe; Asia will be detrimental to performance. Also, in this setup you can configure the relays to be ready to pick up the block producer functions.
Note: For the above and below examples we can have a sensibly longer distance, but not more than 40 ms round-trip time between BP and R1 (standard ping shows round-trip time, RTT).
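Checking that a BP-R1 pair stays under that 40 ms budget is a one-liner over ping's summary line; the hostname and the sample summary below are placeholders for illustration:

```shell
# On a live pair you would run:  ping -c 10 r1.example.pool
# and read the avg (second value) from the final rtt summary line:
summary='rtt min/avg/max/mdev = 31.1/34.6/39.8/2.3 ms'
avg=$(echo "$summary" | awk -F'[/ ]' '{print $8}')
echo "$avg"   # -> 34.6
awk -v a="$avg" 'BEGIN { print ((a <= 40) ? "OK for BP-R1" : "too far") }'
```

Run it at a few different times of day; a single ping at an idle hour can look much better than the pair's typical RTT.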
Example:
• NY – Kansas, LA – Seattle
• NL, UK, FR – DE, CH, BE
In this way we can increase performance and resiliency.
If we go with setups like:
• NY, Kansas, Miami – LA, Seattle
• ES, FR – DK, FI, SE, AT
we will get worse performance without necessarily increasing our resilience.
Minimum level of resilience: the above setup plus one more relay in a location not too far from R2.
Note: Being resilient is at least partially ambiguous for our use case, as we should define what an acceptable level of service is. Is missing a block the end of the world? No; Cardano is resilient by design, and delegators probably aren't affected if the pool is generally good. There is a question of ethics, as we know there is a chance, but not a certainty, of missing blocks if we trade resiliency or performance; we also know the chances of failure are below 1% or even 0.01%. What one considers unacceptable doesn't mean the other should consider the same; both can be right.
True level of resilience, and also some performance gain: one BP and five relays grouped in twos, with a latency below 40 ms RTT:
With this setup of 6 servers in total, we can say that whatever the requirements for our resilient stance, we fulfil them, probably. In any situation we have the capacity to move operations across the three continents that perform well.
Beyond resilience, and true performance: one or more BPs and nine relays strategically placed in threes across AS, EU, US.
With nine relays placed over the main positions on those three continents, we can say that in terms of performance we are good; we help the Cardano network, we do a great job.
If we want even more performance we still have to deploy some more. Anyway, at this point our own network of relays can receive most of the produced blocks very fast, no matter where they are produced, and when we produce, our block propagates through the network en fanfare.
So, nine is a professional setup prepared to mitigate most potential failures. Going over 9, the performance gain is mostly for the entire Cardano blockchain rather than for our own pool.
The some more:
We solved most of the performance and resiliency concerns, but if we missed some countries with a good geopolitical stance or good land for crops, we may add them even if that's the only gain.
All the locations were tested with the Cardano node actually running, and dropped once I had the guarantee that there is no information to be exchanged between the Ouroboros protocol and those locations, or that the amount of information is currently way too low to be worth the investment.
When selecting the providers, I tested the latencies and traceroutes between various locations to make sure I didn't have a ridiculous location or provider; most locations were tested with two or three different providers.
I created 5 pools, considering that simply running a producer without a certificate is not a valid test. We need reference pools, in the same and in different locations, to see what a normal number of blocks and TXs looks like and what doesn't.
After all these tests I decided which locations are useful for the Cardano network (hint: the ones above), for a staking pool and for the blockchain, and I keep them.
There are differences between providers, mainly related to those Tier 1 and Tier 2 routes, as we anticipated in the beginning; some of them also have bad configurations in place.
If you want to choose a provider with good peering, use tools like Radar on qrator.com; you'll find out how many networks are peered with your specific ASN. But don't rely only on that, as 1 peer can be more valuable than 4-5.
Then check the pings at various times of the day; use tools like SmokePing for monitoring. If the ping is unstable and there is big variation, that provider should be excluded.
Between 7-10 AM in every country the traffic increases, and sometimes the providers have to reroute part of it; usually this is done for new connections and the old ones are not affected. Other times the old connections are instructed to use a new route, or the same route on other fibre, which translates into changes in latency for 6-10 hours. If the changes are small or in your favour, everything is good; if they go against you, happen every day or too many times a day, you might consider looking around. Also, if their balancers are improperly configured there will be missing traffic, with spikes in latency for up to 30 minutes; those have to be avoided.
More than 90% of blocks are produced in Europe, North America and Asia. Keeping a node running in Mongolia is a waste of resources; there are big chances that Cardano will change Mongolian people's lives in the next couple of years, but not now. Being Mongolian and running one of your nodes there is a matter of national pride, and I'd do it if I were one, but not otherwise. Same for South America, Africa, the Middle East, other parts of Asia, etc.
Feel free to ask everything about anything.
I'll get back to you within 5 days!