AI infrastructure is growing faster than our ability to measure its impact. This project connects the dots between hardware efficiency, energy mix, and carbon output to surface where the real leverage is.
Key conclusion: Facility efficiency (PUE), grid carbon intensity, and GPU generation each contribute independently to the total carbon footprint. Because the three factors multiply, optimizing all of them together produces dramatically better outcomes than any single lever alone; hardware improvements by themselves won't close the gap while AI workloads keep growing at 10× the pace of efficiency gains.
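To make the compounding concrete, here is a minimal sketch of the multiplicative model implied above; the function name, units, and example framing are illustrative assumptions, not code from this project:

```python
def carbon_footprint_kg(it_energy_kwh: float, pue: float,
                        grid_kgco2_per_kwh: float) -> float:
    """Total carbon = IT energy × facility overhead (PUE) × grid carbon intensity.

    PUE scales IT energy up to total facility energy; the grid's carbon
    intensity then converts that energy into emissions. GPU generation
    enters through it_energy_kwh: newer GPUs need less energy per unit
    of work, shrinking the first factor.
    """
    return it_energy_kwh * pue * grid_kgco2_per_kwh
```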
How do energy consumption patterns, electricity source, and GPU hardware characteristics interact to determine the most energy-efficient strategies for scaling AI data centers?
The data shows that the best-practice combination of a high-efficiency facility (PUE ~1.1), a low-carbon grid, and latest-generation GPUs can cut the total carbon footprint by more than 10× relative to a worst-case deployment. Location and hardware choice are the most underutilized levers.
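A back-of-the-envelope comparison under assumed parameters (the per-unit GPU energy, PUE values, and grid intensities below are illustrative, not measurements from this dataset) shows how the best/worst gap can clear 10×:

```python
# Illustrative parameters only, chosen to show the shape of the effect.
WORK = 1_000.0  # arbitrary units of AI workload

# Worst case: older GPUs (~2 kWh/unit), PUE 1.6, fossil-heavy grid (0.6 kgCO2/kWh)
worst = (WORK * 2.0) * 1.6 * 0.6

# Best case: latest GPUs (~1 kWh/unit), PUE 1.1, low-carbon grid (0.1 kgCO2/kWh)
best = (WORK * 1.0) * 1.1 * 0.1

print(f"worst/best emissions ratio: {worst / best:.1f}×")  # ≈ 17.5× under these assumptions
```

Because the levers multiply, even modest per-lever differences compound into an order-of-magnitude spread.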