Cloud Compute Pricing 2026: AWS EC2 vs Azure VMs vs GCP Compute Engine
Compute is 60-70% of most cloud bills. This is the ledger: list pricing for general purpose, compute-optimised, memory-optimised, ARM, and GPU instances across all three providers in each provider's primary US region. Verified April 2026.
Headline finding
On-demand list pricing for equivalent compute specs differs by less than 10%, and often less than 5%, across providers in most US regions. The real lever is discount strategy, not list shopping.
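To make the headline concrete, here is a back-of-envelope comparison using the 8 vCPU / 32 GB list rates from the tables below. The 60% commitment discount is an assumed round number for illustration, not a quoted rate.

```python
# List-price spread across providers vs the spread a commitment creates.
# Rates are the 8 vCPU / 32 GB row from the tables below.
aws_list, azure_list, gcp_list = 0.3264, 0.3360, 0.3160  # $/hr on-demand

# Spread between priciest and cheapest list rate
list_spread = (max(aws_list, azure_list, gcp_list) /
               min(aws_list, azure_list, gcp_list) - 1)
print(f"list spread: {list_spread:.1%}")  # about 6%

# A 3-year commitment at an ASSUMED 60% discount, on the priciest provider
committed = azure_list * (1 - 0.60)
vs_cheapest_list = 1 - committed / gcp_list
print(f"commitment vs cheapest list: {vs_cheapest_list:.1%}")  # well over 50%
```

The discount strategy moves the effective rate by tens of percent; switching providers on list price moves it by single digits.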
General purpose, on-demand
| Spec | AWS | Azure | GCP |
|---|---|---|---|
| 4 vCPU / 16 GB | $0.1632/hr m7i.xlarge | $0.1680/hr D4s v5 | $0.1580/hr n2-standard-4 |
| 8 vCPU / 32 GB | $0.3264/hr m7i.2xlarge | $0.3360/hr D8s v5 | $0.3160/hr n2-standard-8 |
| 16 vCPU / 64 GB | $0.6528/hr m7i.4xlarge | $0.6720/hr D16s v5 | $0.6320/hr n2-standard-16 |
| 32 vCPU / 128 GB | $1.3056/hr m7i.8xlarge | $1.3440/hr D32s v5 | $1.2640/hr n2-standard-32 |
| 64 vCPU / 256 GB | $2.6112/hr m7i.16xlarge | $2.6880/hr D64s v5 | $2.5280/hr n2-standard-64 |
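A quick sketch of turning these hourly rates into monthly figures, using the common 730-hours-per-month convention. Instance names and rates are taken from the 8 vCPU / 32 GB row above.

```python
# Monthly cost at on-demand list rates, 730 hours/month convention.
HOURS_PER_MONTH = 730

rates = {
    "AWS m7i.2xlarge": 0.3264,     # $/hr, from the table above
    "Azure D8s v5": 0.3360,
    "GCP n2-standard-8": 0.3160,
}

for name, hourly in rates.items():
    print(f"{name}: ${hourly * HOURS_PER_MONTH:,.2f}/month")
# AWS $238.27, Azure $245.28, GCP $230.68 -- a spread of under $15/month
```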
Compute optimised, on-demand
| Spec | AWS | Azure | GCP |
|---|---|---|---|
| 4 vCPU / 8 GB | $0.1530/hr c7i.xlarge | $0.1690/hr F4s v2 | $0.1648/hr c3-standard-4 |
| 8 vCPU / 16 GB | $0.3060/hr c7i.2xlarge | $0.3380/hr F8s v2 | $0.3296/hr c3-standard-8 |
| 16 vCPU / 32 GB | $0.6120/hr c7i.4xlarge | $0.6760/hr F16s v2 | $0.6592/hr c3-standard-16 |
Memory optimised, on-demand
| Spec | AWS | Azure | GCP |
|---|---|---|---|
| 4 vCPU / 32 GB | $0.2016/hr r7i.xlarge | $0.2080/hr E4s v5 | $0.1967/hr n2-highmem-4 |
| 8 vCPU / 64 GB | $0.4032/hr r7i.2xlarge | $0.4160/hr E8s v5 | $0.3934/hr n2-highmem-8 |
| 16 vCPU / 128 GB | $0.8064/hr r7i.4xlarge | $0.8320/hr E16s v5 | $0.7868/hr n2-highmem-16 |
GPU instances, on-demand
GPU pricing changes faster than other categories. Always verify directly with the provider. List rates below are approximate as of April 2026. Reserved and committed-use pricing for GPUs is typically negotiated outside public price lists.
| Spec | AWS | Azure | GCP |
|---|---|---|---|
| 1x H100 80GB | approx $4.50/hr p5 (per-GPU share) | approx $4.10/hr NC H100 v5 | approx $3.93/hr a3-highgpu (per H100) |
| 8x H100 80GB | approx $32-40/hr p5.48xlarge | approx $30-36/hr ND H100 v5 | approx $29-34/hr a3-highgpu-8g |
| 1x A100 40GB | approx $3.06/hr p4d (per GPU) | approx $3.40/hr NC A100 v4 (per GPU) | approx $2.93/hr a2-highgpu (per GPU) |
| 1x L4 | approx $0.71/hr g6.xlarge | approx $0.65/hr NV4 v3 | approx $0.59/hr g2-standard-4 |
ARM and custom silicon
| Offering | On-demand | Notes |
|---|---|---|
| AWS Graviton 4 (m8g) | $0.1469/hr m8g.xlarge | approx 10% cheaper than m7i with similar performance; up to 30% better price-performance for many workloads |
| Azure Cobalt 100 (Dpsv6) | approx $0.1512/hr Dpsv6 4 vCPU | approx 10% cheaper than D-series equivalent; optimised for cloud-native workloads |
| GCP Tau T2A (Ampere Altra) | $0.0840/hr t2a-standard-4 | approx 47% cheaper than n2-standard; lower per-thread performance, best for scale-out |
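The "better price-performance" claim is cost per unit of work, not cost per hour. A sketch of the arithmetic, where only the two list rates come from above and the 15% throughput advantage is an assumed, workload-dependent number:

```python
# Price-performance: dollars per unit of work done, not per hour.
x86_rate, arm_rate = 0.1632, 0.1469  # $/hr: m7i.xlarge vs m8g.xlarge (above)
x86_throughput = 1.00                 # normalised baseline
arm_throughput = 1.15                 # ASSUMED 15% faster on this workload

x86_cost_per_work = x86_rate / x86_throughput
arm_cost_per_work = arm_rate / arm_throughput
saving = 1 - arm_cost_per_work / x86_cost_per_work
print(f"price-performance gain: {saving:.0%}")  # about 22%
```

A 10% price cut compounds with any throughput advantage, which is how a 10% cheaper instance lands in a 20-40% price-performance band.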
Interruptible compute
| Provider | Discount | Interruption | Use cases |
|---|---|---|---|
| AWS Spot | 60-90% off list | 2-minute warning, 5-10% per-month interrupt rate typical | ECS, EKS data plane, batch, training, CI runners, Fargate Spot. |
| Azure Spot VMs | 60-90% off list | 30-second warning, eviction-based on capacity or price | AKS spot pools, Azure Batch, low-priority VM scale sets. |
| GCP Spot VMs | 60-91% off list | 30-second warning, no 24-hour limit | GKE spot node pools, Dataproc, Cloud Build workers. |
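A simple way to reason about Spot economics is an expected blended rate for a fleet that runs mostly interruptible but falls back to on-demand after eviction. The discount and fallback share below are assumptions for illustration, not provider quotes:

```python
# Expected blended $/hr for a mostly-Spot fleet with on-demand fallback.
on_demand = 0.3264        # $/hr list (m7i.2xlarge, from the tables above)
spot_discount = 0.70      # ASSUMED 70% off list (inside the 60-90% band)
on_demand_share = 0.10    # ASSUMED 10% of instance-hours run on-demand

spot_rate = on_demand * (1 - spot_discount)
blended = on_demand_share * on_demand + (1 - on_demand_share) * spot_rate
print(f"blended: ${blended:.4f}/hr, {1 - blended / on_demand:.0%} below list")
```

Even with a tenth of the hours falling back to full list price, the fleet still lands around 63% below on-demand in this sketch.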
Same instance, different region
Region selection can be the largest single compute cost lever. Local hardware, power, and tax drive material differences. Approximate premiums vs the provider baseline US East region.
| Region | AWS | Azure | GCP |
|---|---|---|---|
| US East (Virginia / east-us / us-central1) | baseline | baseline | baseline |
| Europe West (Ireland / west-europe / europe-west1) | +5% | +5% | +10% |
| Asia Pacific (Singapore / southeast-asia / asia-southeast1) | +15% | +10% | +15% |
| South America (Sao Paulo / brazil-south / southamerica-east1) | +30% | +15% | +25% |
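Applying the approximate premiums above to a baseline-region rate is a one-line multiplier; the baseline rate here is the GCP n2-standard-8 figure from the general purpose table:

```python
# Regional premium applied to a baseline-region hourly rate.
baseline = 0.3160  # $/hr: GCP n2-standard-8, us-central1 (baseline region)
premiums = {
    "us-central1": 0.00,
    "europe-west": 0.10,
    "asia-southeast": 0.15,
    "southamerica-east": 0.25,
}

for region, p in premiums.items():
    print(f"{region}: ${baseline * (1 + p):.4f}/hr")
```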
FAQ
Which cloud has the cheapest compute?
On-demand list pricing for equivalent specs is typically within 5-10% across AWS, Azure, and GCP in most US regions. GCP Compute Engine n2-standard tends to be a few cents per hour cheaper than AWS m7i and Azure D-series, but the difference is too small to drive provider selection. The effective rate after Sustained Use Discounts (GCP) or Savings Plans (AWS, Azure) varies far more than the list rate.
Are ARM instances really 20-40% cheaper?
On price-performance, yes for compatible workloads. AWS Graviton 4 instances list around 10% below x86 equivalents and deliver 20-40% better price-performance for general workloads. The catch is compatibility: most managed services (RDS, Lambda) support ARM natively and Linux containers generally run unmodified, but third-party agents, proprietary binaries, and Windows workloads do not always have ARM builds.
How much do GPU instances actually cost?
An H100 80GB instance lists around $3.93-4.50 per hour per GPU on-demand across providers in 2026. An 8-GPU H100 node lists at $29-40 per hour. Spot pricing can drop these by 30-60% with eviction risk. Long-term reservations (1-3 year commitments) for GPU capacity often require negotiation outside the public price list.
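These per-hour figures compound quickly over a training run. A sketch using the 8x H100 list band from above; the two-week duration and the 50% Spot discount are hypothetical assumptions:

```python
# Rough cost envelope for a training run on one 8x H100 node.
node_rate_low, node_rate_high = 29.0, 40.0  # $/hr list band (table above)
hours = 24 * 14                              # ASSUMED two-week run

print(f"on-demand: ${node_rate_low * hours:,.0f} - "
      f"${node_rate_high * hours:,.0f}")
# A mid-band Spot discount (ASSUMED 50%), ignoring eviction restarts:
print(f"spot: ${node_rate_low * hours * 0.5:,.0f} - "
      f"${node_rate_high * hours * 0.5:,.0f}")
```

The on-demand envelope is roughly $9.7k-13.4k for the run, which is why Spot with checkpointing, or a negotiated reservation, dominates GPU cost planning.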
Why does the same instance cost more in Sao Paulo?
Local hardware, power, networking, and tax all factor into regional pricing. South American regions list 15-30% above US East. Asia Pacific runs 10-20% above US East. Europe is typically 5-10% premium. For workloads where region is not a regulatory requirement, deploying in a baseline region can be the largest single cost lever.
Should I commit to RIs or Savings Plans on compute?
For stable production compute, both AWS and Azure offer up to 72% discount on 3-year all-upfront commitments. Compute Savings Plans on AWS and Savings Plans on Azure are flexible across families. Reserved Instances lock you to a configuration for slightly more discount. The rule of thumb: cover 60-80% of steady-state with commitments, leave the spiky top on on-demand or Spot.
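The 60-80% coverage rule reduces to a blended-rate calculation. In this sketch only the list rate comes from the tables above; the 50% commitment discount and 70% coverage are assumed round numbers:

```python
# Blended effective rate under partial commitment coverage.
list_rate = 0.3264       # $/hr on-demand list (m7i.2xlarge, above)
commit_discount = 0.50   # ASSUMED savings-plan discount
coverage = 0.70          # ASSUMED 70% of steady-state hours committed

blended = (coverage * list_rate * (1 - commit_discount)
           + (1 - coverage) * list_rate)
print(f"effective: ${blended:.4f}/hr, "
      f"{1 - blended / list_rate:.0%} below list")
```

Covering 70% of hours at half price takes 35% off the whole bill while leaving the spiky 30% free to scale down, land on Spot, or disappear.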