# GreenOps: Measuring the Carbon Footprint of Compute

*Tags: devops, sustainability*
Tech infrastructure produces significant carbon emissions. GreenOps applies observability and optimization principles to environmental impact. Here’s how to measure and reduce your cloud carbon footprint.
## The Scale

- Global data centers: ~1% of global electricity use
- Cloud computing: growing 20%+ per year
- A single large ML training run: can equal the lifetime emissions of five cars

This matters.
## Carbon Metrics

### Key Terms
| Term | Meaning |
|---|---|
| Carbon intensity | gCO2/kWh of electricity |
| PUE | Power Usage Effectiveness (facility overhead) |
| Scope 1 | Direct emissions (your generators) |
| Scope 2 | Indirect (purchased electricity) |
| Scope 3 | Supply chain (hardware manufacturing) |
### What You Measure

```text
Carbon = Energy × Carbon Intensity
Energy = (Compute + Storage + Network) × PUE
```

PUE is an overhead multiplier (always ≥ 1), so facility energy is IT energy *times* PUE, not divided by it.
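The arithmetic can be sketched as a small helper, assuming energy figures in kWh and grid intensity in gCO2/kWh (names and the default PUE are illustrative):

```python
def facility_energy_kwh(compute_kwh: float, storage_kwh: float,
                        network_kwh: float, pue: float = 1.5) -> float:
    """Total facility energy: IT energy scaled up by the PUE overhead."""
    return (compute_kwh + storage_kwh + network_kwh) * pue

def carbon_g(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Emissions in grams CO2e: energy (kWh) x grid carbon intensity."""
    return energy_kwh * intensity_g_per_kwh

# 130 kWh of IT load at PUE 1.5 on a ~400 gCO2/kWh grid:
total = facility_energy_kwh(100, 20, 10, pue=1.5)   # 195.0 kWh
emissions = carbon_g(total, 400)                     # 78000.0 g = 78 kg CO2e
```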
## Cloud Provider Tools

### AWS

```python
# AWS Customer Carbon Footprint Tool: available in the Cost Management console.
# Programmatic access to the underlying usage via Cost Explorer:
import boto3

ce = boto3.client('ce')
response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2024-01-01', 'End': '2024-02-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'REGION'}],
)
# Carbon data itself is surfaced in the sustainability console
```
### Google Cloud

```sql
-- GCP Carbon Footprint: exportable to BigQuery
SELECT
  project.id,
  SUM(carbon_footprint_total_kgCO2e) AS total_carbon
FROM `billing_export.carbon_footprint`
GROUP BY project.id
ORDER BY total_carbon DESC
```
### Azure

```bash
# Azure Emissions Impact Dashboard: integrated into Cost Management
# API access via the Carbon Optimization provider
az rest --method get \
  --uri "https://management.azure.com/providers/Microsoft.CarbonOptimization/..."
```
## Open Source Tools

### Cloud Carbon Footprint

```bash
# Install
npm install -g @cloud-carbon-footprint/app

# Configure cloud providers
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# Run
ccf
```
The dashboard breaks emissions down by service, region, and time.
### Kepler (Kubernetes)

```yaml
# Deploy Kepler for pod-level energy metrics (exported to Prometheus)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kepler
  namespace: kepler
spec:
  selector:
    matchLabels:
      app: kepler
  template:
    metadata:
      labels:
        app: kepler        # must match the selector above
    spec:
      containers:
        - name: kepler
          image: quay.io/sustainable_computing_io/kepler
```
### Scaphandre

```bash
# Rust-based power consumption monitoring
cargo install scaphandre

# Run with the Prometheus exporter (metrics served on :8080 by default)
scaphandre prometheus
```
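Scaphandre publishes metrics such as `scaph_host_power_microwatts` in the Prometheus text exposition format. A minimal sketch of pulling the host power draw out of that output (the metric name is the one Scaphandre documents; the parsing is deliberately naive):

```python
def host_power_watts(exposition: str) -> float:
    """Extract host power draw (watts) from Scaphandre's Prometheus output."""
    for line in exposition.splitlines():
        if line.startswith("scaph_host_power_microwatts"):
            # Exposition line format: "<metric>[{labels}] <value>"
            value = float(line.rsplit(None, 1)[-1])
            return value / 1_000_000  # microwatts -> watts
    raise ValueError("scaph_host_power_microwatts not found in exposition")

sample = (
    "# HELP scaph_host_power_microwatts Host power usage\n"
    "scaph_host_power_microwatts 42000000\n"
)
print(host_power_watts(sample))  # 42.0
```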
## Reduction Strategies

### 1. Region Selection

Carbon intensity varies widely by region (approximate figures):

- AWS us-west-2 (Oregon): ~300 gCO2/kWh (hydro-heavy grid)
- AWS us-east-1 (Virginia): ~400 gCO2/kWh (mixed grid)
- AWS ap-south-1 (Mumbai): ~650 gCO2/kWh (coal-heavy grid)

Choose low-carbon regions when latency allows.
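The "when latency allows" trade-off can be sketched as a tiny selector; the intensity and latency numbers are illustrative, not measurements:

```python
def pick_region(intensity: dict[str, float], latency_ms: dict[str, float],
                max_latency_ms: float) -> str:
    """Lowest carbon-intensity region whose latency stays under the cap."""
    candidates = [r for r in intensity if latency_ms[r] <= max_latency_ms]
    if not candidates:
        raise ValueError("no region meets the latency requirement")
    return min(candidates, key=lambda r: intensity[r])

intensity = {"us-west-2": 300, "us-east-1": 400, "ap-south-1": 650}
latency = {"us-west-2": 80, "us-east-1": 20, "ap-south-1": 250}

print(pick_region(intensity, latency, max_latency_ms=100))  # us-west-2
print(pick_region(intensity, latency, max_latency_ms=50))   # us-east-1
```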
### 2. Right-Sizing

```python
# Over-provisioned = wasted energy. Find idle resources:
def find_oversized_instances():
    """Find instances with <10% average CPU over 30 days."""
    # Use CloudWatch/Prometheus data
    pass

# Kubernetes: the Vertical Pod Autoscaler (VPA) can right-size automatically
```
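The idle check itself is simple once the utilisation samples are fetched; a sketch operating on CPU percentages you would pull from CloudWatch or Prometheus, using the same 10% threshold:

```python
def is_oversized(cpu_samples: list[float], threshold_pct: float = 10.0) -> bool:
    """True if average CPU utilisation over the window is below the threshold."""
    return bool(cpu_samples) and sum(cpu_samples) / len(cpu_samples) < threshold_pct

print(is_oversized([2.0, 5.0, 3.0]))   # True  -> candidate for downsizing
print(is_oversized([50.0, 60.0]))      # False -> leave it alone
```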
### 3. Spot/Preemptible Instances

```yaml
# Lower carbon: use excess capacity that would otherwise sit idle
# Kubernetes spot node pool (field names vary by provider)
nodePool:
  name: spot-pool
  provisioningModel: SPOT
```
### 4. Time-Shifting Workloads

```python
# Illustrative sketch: the client library and schedule_at() are placeholders
from electricity_maps import ElectricityMaps

api = ElectricityMaps(api_key="...")

def schedule_job(region: str):
    forecast = api.get_carbon_intensity_forecast(region)
    # Find the lowest-carbon hour in the forecast window
    best_hour = min(forecast, key=lambda x: x['carbonIntensity'])
    # Schedule the job for that time
    return schedule_at(best_hour['datetime'])
```
Run batch jobs when the grid is greenest.
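For a job that runs several hours, picking the single cheapest hour isn't enough: you want the lowest-average window. A sketch over an hourly forecast (plain gCO2/kWh values, index = hours from now; assumptions, not an API):

```python
def greenest_window(forecast: list[float], hours: int) -> int:
    """Index of the start of the lowest-average-intensity run of `hours` hours."""
    if hours > len(forecast):
        raise ValueError("forecast shorter than job duration")
    window_sums = [sum(forecast[i:i + hours])
                   for i in range(len(forecast) - hours + 1)]
    return window_sums.index(min(window_sums))

forecast = [500, 400, 200, 250, 600]  # gCO2/kWh, next 5 hours
print(greenest_window(forecast, hours=2))  # 2 -> start in two hours
print(greenest_window(forecast, hours=3))  # 1 -> start in one hour
```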
### 5. Efficient Code

```python
# Inefficient: recompute every time (more compute = more carbon)
result = [expensive_operation(x) for x in items]

# Efficient: cache repeated work
result = [cached_operation(x) for x in items]

# Profile and optimize hot paths first
```
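Caching like this comes for free with `functools.lru_cache`; a sketch showing that repeated inputs stop costing compute (the squaring is a stand-in for real work):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def expensive_operation(x: int) -> int:
    global calls
    calls += 1      # counts real executions, not cache hits
    return x * x    # placeholder for the expensive work

result = [expensive_operation(x) for x in [1, 2, 1, 2, 1]]
print(result)  # [1, 4, 1, 4, 1]
print(calls)   # 2 -- five lookups, only two computations
```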
### 6. ARM Architecture

```yaml
# ARM instances: 40-60% more energy efficient per unit of work
# Examples: AWS Graviton, Azure Ampere Altra, GCP Tau T2A
platform:
  os: linux
  arch: arm64
# Rebuilding for ARM = significant energy savings
```
## Kubernetes Optimization

### Carbon-Aware Scheduling

Kubernetes does not ship a carbon-aware scheduler, but pods can opt into a custom one via `schedulerName` (names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  # Custom scheduler that factors grid carbon intensity into placement
  schedulerName: carbon-aware-scheduler
  containers:
    - name: job
      image: batch-job:latest
```
### Pod Resource Limits

```yaml
# Proper requests/limits = efficient bin packing = less hardware
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"
```
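Why accurate requests translate into fewer nodes can be shown with a tiny first-fit packing model (millicore numbers and the node capacity are assumptions for illustration):

```python
def nodes_needed(pod_requests_m: list[int], node_capacity_m: int) -> int:
    """First-fit decreasing: count nodes required to place pods by CPU request."""
    nodes: list[int] = []  # free millicores remaining on each node
    for req in sorted(pod_requests_m, reverse=True):
        for i, free in enumerate(nodes):
            if free >= req:
                nodes[i] -= req
                break
        else:
            nodes.append(node_capacity_m - req)  # open a new node
    return len(nodes)

# Four pods on 1000m nodes: honest 500m requests pack onto 2 nodes;
# padded 1000m requests force one node per pod.
print(nodes_needed([500, 500, 500, 500], 1000))     # 2
print(nodes_needed([1000, 1000, 1000, 1000], 1000)) # 4
```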
## Reporting

### GRI Standards

Report:

- Total energy consumption (GJ)
- Energy intensity (GJ per unit of revenue)
- GHG emissions (Scope 1, 2, 3)
- Emission intensity
- Reduction targets and progress
### Dashboard Example

```text
# Grafana dashboard metrics (Prometheus series)
carbon_emissions_kg{service="api"}
carbon_emissions_kg{service="ml-training"}
energy_kwh{cluster="production"}
carbon_intensity_gco2_kwh{region="us-west-2"}
```
## CI/CD Integration

### Carbon Budget

```yaml
# .github/workflows/carbon-check.yml
- name: Estimate carbon impact
  run: |
    eco-ci estimate --workflow ${{ github.workflow }}
- name: Fail if over budget
  run: |
    if [ "$CARBON_ESTIMATE" -gt "$CARBON_BUDGET" ]; then
      exit 1
    fi
```
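The same gate as a script, for pipelines that prefer Python over inline shell (the env-var names mirror the workflow above and are assumptions):

```python
def check_budget(estimate_g: float, budget_g: float) -> bool:
    """True when the estimated pipeline emissions fit the carbon budget."""
    return estimate_g <= budget_g

# In CI: read CARBON_ESTIMATE / CARBON_BUDGET from the environment and
# exit non-zero when check_budget(...) is False to fail the step.
print(check_budget(80.0, 100.0))   # True  -> pass
print(check_budget(120.0, 100.0))  # False -> fail the build
```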
### Green Testing

```python
# Run expensive tests less frequently
import pytest

@pytest.mark.carbon_intensive
def test_ml_model_training():
    # Only runs on merge, not on every push (select/deselect with -m in CI)
    pass
```
## Quick Wins
| Action | Effort | Impact |
|---|---|---|
| Shut down dev environments at night | Low | 30% savings |
| Right-size oversized instances | Medium | 20-50% |
| Choose green regions | Low | 20-60% |
| Use ARM where possible | Medium | 40-60% |
| Spot/preemptible for batch | Low | Uses excess capacity |
## Final Thoughts

GreenOps isn't just about the environment; it correlates directly with cost optimization. Less compute means less money and less carbon.

Start by measuring (you can't improve what you don't measure), then optimize the biggest contributors.

Sustainable computing: good for the planet, good for the budget.