Staking on AWS on the cheap

So, you decided to run your staking node on AWS? That’s great: AWS is probably the biggest cloud provider, it’s very reliable, and it has a decent interface and feature set, but it’s not the cheapest. Let’s see how we can stake on AWS without spending a lot of money. There are two decisions to make: the instance type and the payment plan. Let’s look at instance types first.

Avalanche recommends a 2-core, 2.0+GHz CPU with 4GB of RAM as the minimum for running a validator. Looking at the AWS instance types, there is a mind-boggling number of different machine configurations. We don’t need specialized hardware, beefier networking, or machine-learning accelerators, so we can concentrate on the General Purpose and Compute Optimized types.

The ‘default’ configuration (and the one usually recommended in tutorials and how-tos) is the current generation of Intel-based instances, the C5 family. Within it, c5.large is the one with two cores at up to 3.5GHz and 4GB of RAM. Add, say, 20GB of disk space. Let’s see how much that will set us back.

The default option for running an instance is On Demand: it starts when we tell it to, stops when we tell it to, and we pay at the end of the month for the time the instance was running. On Demand pricing for c5.large is $0.085 per hour, which comes out to $64.05 monthly. Ouch, not exactly cheap.

But c5.large is an Intel-based machine, running on Xeons, and those are expensive. AvalancheGo, on the other hand, can run on other system architectures too. So, let’s see what other options there are.

Looking through the list, we can see the c6g instance types. Those are the 6th generation of compute types, with the ‘g’ designating Graviton CPUs, that is to say Amazon’s custom ARM-based CPUs, running at 2.5GHz. Those are cheaper.

On Demand cost for c6g.large is $0.068 per hour, coming out at $51.64 monthly. OK, that’s better: some 20% cheaper, about $150 in savings on a yearly basis. But maybe we can do better.

c6g is the second generation of Amazon’s ARM CPUs, but there’s also the a1 family, with gen-1 Graviton CPUs running at 2.3GHz. So, also above spec. Since AvalancheGo is not all that CPU-hungry (it even runs fine on below-spec Raspberry Pis!), we might go with that.

On Demand cost for a1.large (also 2 cores, 4GB RAM) is $0.051 per hour, or $39.23 per month! Now we’re getting somewhere: a further 20+% cheaper! This is almost 40% cheaper than the c5 instance, almost $300 saved yearly.
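As a quick sanity check, the hourly-to-monthly conversion can be sketched in a few lines of Python (assuming AWS’s customary ~730 billing hours per month; the exact monthly figures quoted above differ slightly, likely due to a different hour count or regional pricing):

```python
# Approximate monthly cost of a continuously running instance,
# using AWS's customary ~730 hours per month (24 * 365 / 12).
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float) -> float:
    """Convert an hourly On Demand rate into an approximate monthly cost."""
    return hourly_rate * HOURS_PER_MONTH

for name, rate in [("c5.large", 0.085), ("c6g.large", 0.068), ("a1.large", 0.051)]:
    print(f"{name}: ${monthly_cost(rate):.2f}/month")
```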

But On Demand is also the costliest payment option. You can get significant savings with Reserved Instances, where you commit to using the instance for a long period (1 to 3 years). If we get a Reserved Instance for a period of one year, our a1.large drops from $0.051 to $0.0321 per hour, coming out at $25.43 monthly, 35% cheaper than On Demand. And you still pay monthly. You can get further savings if you pay upfront, but those are not as dramatic (10-15%).

Then there are Spot Instances: instances with dynamic pricing that depends on demand, which can also bring substantial savings. You set a maximum price you’re willing to pay per hour for an instance, and if the price goes above it, your instance is stopped (enters the Stop state). With Avalanche currently demanding 60% online time, that might be an option too. Setting that up is more involved, and the savings are harder to estimate in advance, so we’ll leave that as an exercise for the reader. :slight_smile:
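To get a feel for how much interruption a Spot Instance could tolerate, here’s a back-of-the-envelope calculation using the 60% online-time requirement mentioned above (treat the result as an upper bound; you’d want a healthy margin in practice):

```python
# How many hours per month can a validator be down while still
# meeting a 60% online-time requirement?
HOURS_PER_MONTH = 730
REQUIRED_UPTIME = 0.60

max_downtime_hours = HOURS_PER_MONTH * (1 - REQUIRED_UPTIME)
print(f"Tolerable downtime: {max_downtime_hours:.0f} hours/month "
      f"(about {max_downtime_hours / 24:.1f} days)")
```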

Also, pricing varies from datacenter to datacenter, and since Avalanche is a global network, the location of your node is irrelevant, so it’s worth checking out pricing in other Amazon regions.

In conclusion: if you’re running on AWS, make sure you research the available options before launching an instance; otherwise you might be paying 2-3x as much as you need to. For my money, I’d say a1.large with the 1-year Reserved Instance option is the way to go.

p.s. If you’re running on ARM CPUs, make sure you download arm64 builds of the AvalancheGo binary!
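A quick way to check which build you need on a running instance (the release-page pointer below is the only specific; pick the actual archive name from there):

```shell
# Graviton (ARM) instances report "aarch64" here; Intel/AMD report "x86_64".
uname -m

# On aarch64, download the linux-arm64 archive for your version from the
# AvalancheGo releases page: https://github.com/ava-labs/avalanchego/releases
```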


BTW, I’d love to see similar option breakdown for other cloud service providers, so please chime in if you’re familiar with what other services offer.

How does 5.68 €/month sound?

One major difference from AWS is that there is no firewall configurable from the console; you need to use ufw.
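For reference, a minimal ufw setup for a validator might look like this (a sketch, assuming AvalancheGo’s default staking/P2P port 9651 and SSH on port 22; adjust for your setup, and allow SSH before enabling the firewall so you don’t lock yourself out):

```shell
# Allow SSH first, so enabling the firewall doesn't cut off your session.
sudo ufw allow 22/tcp

# Allow the AvalancheGo staking/P2P port (default 9651).
sudo ufw allow 9651/tcp

# Note: the HTTP API port (default 9650) is deliberately left closed
# to the outside world.

sudo ufw enable
sudo ufw status verbose
```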


Great write-up, jpop! AWS is indeed the biggest cloud provider.
I opted for the t3a.medium, which was the smallest instance type that met the required 2 CPUs and 4GB of RAM. T3 instances work by providing a baseline level of CPU performance for common workloads, with the ability to burst above the baseline when more performance is required. The validator node doesn’t appear to be a “bursty” application, so it may not seem like an obvious choice; however, the t3a.medium is $0.0376 per hour, which is slightly cheaper than the a1.large ($0.0510).
The baseline performance of a t3a.medium is 20%, and so far, in my experience, running a validator gets nowhere near that baseline, so you won’t need to use CPU credits. However, my recommendation would be to turn off Unlimited mode (which is on by default) so that there is no danger of getting charged for any additional CPU credits, should the node instance spike above the baseline.
A couple of other tips I wanted to add: US East is the cheapest region (at least right now), but it also tends to be AWS’ test bed for new services, so it is more prone to occasional minor outages.
Also, compute costs are only one factor. The most significant cost factor is data being transferred to and from your node; data transfer and networking costs can add up to a lot. Based on what I have seen with my node, I am getting about 7.5GB in and 7.5GB out per day. My numbers might need checking, but I believe AWS provides 15GB per month of data transfer in its free tier… so something else to be cognizant of.

Yep, data costs are a significant factor I neglected. AWS charges only for bytes served, not bytes received, so just half the traffic is charged for.
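To put a rough number on it, using the ~7.5GB/day outbound figure from above (the ~$0.09/GB egress rate is an assumption based on AWS’s typical first pricing tier; check current pricing for your region):

```python
# Rough monthly egress cost estimate for a validator node.
# Assumptions (verify against current AWS pricing for your region):
#   - inbound traffic is free; only outbound is billed
#   - ~$0.09 per GB in the first egress pricing tier
EGRESS_RATE_PER_GB = 0.09
GB_OUT_PER_DAY = 7.5
DAYS_PER_MONTH = 30

egress_gb = GB_OUT_PER_DAY * DAYS_PER_MONTH
cost = egress_gb * EGRESS_RATE_PER_GB
print(f"~{egress_gb:.0f} GB out per month, roughly ${cost:.2f}/month")
```

Even at free-tier-exhausting volumes like this, the transfer bill can rival the instance bill, so it is well worth including in any comparison.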

BTW, AWS has a pricing calculator that allows for easy experimentation with different parameters.

It involves quite a bit of clicking around, but it is very informative.