AI Water Usage


Training and running AI models require billions of gallons of water for cooling — a hidden environmental cost that falls disproportionately on drought-stressed communities with little say in where data centers are built.


  • Researchers estimated that training GPT-3 consumed approximately 700,000 liters (185,000 gallons) of water for cooling — and inference (everyday use by millions of people) adds substantially more over time.
  • Google's total water consumption increased nearly 20% in a single year, a jump the company attributed directly to AI workloads at its data centers.
  • Many AI data centers are being sited in already water-stressed regions — including the American Southwest, parts of Latin America, and Southern Europe — where the additional water draw exacerbates existing scarcity.
  • Tech companies are legally required to report energy use in some jurisdictions but face no comparable requirements for water disclosure, making independent assessment of AI's water footprint extremely difficult.

Data centers generate substantial heat — primarily from the chips performing calculations — that must be removed continuously to prevent hardware failure. The dominant cooling method in hyperscale data centers is evaporative cooling: water is used to absorb heat and then evaporated into the atmosphere, removing that heat from the building. This is highly efficient from an energy standpoint but consumes water that is effectively lost to the local water cycle — unlike water used in a closed loop, it is not returned to the watershed. A single large data center can consume millions of gallons of water per day. At the scale of hyperscale AI infrastructure, this becomes a significant and often invisible regional resource draw.
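The "millions of gallons per day" figure can be sanity-checked from first principles: each kilogram of evaporated water carries away its latent heat of vaporization, roughly 2.26 MJ/kg. The sketch below applies that physics to a hypothetical 100 MW facility; the load figure and the assumption that all heat is rejected evaporatively are illustrative, not measurements from any real data center.

```python
# Order-of-magnitude estimate of evaporative cooling water draw.
# Assumptions (illustrative, not from any specific facility):
#   - a 100 MW IT load running continuously
#   - all heat rejected by evaporating water (an upper bound;
#     real sites mix cooling methods)

LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water, ~2.26 MJ/kg
KG_PER_LITER = 1.0              # 1 liter of water is ~1 kg
LITERS_PER_US_GALLON = 3.785

def evaporative_water_use_liters(it_load_watts: float, seconds: float) -> float:
    """Liters of water evaporated to reject `it_load_watts` of heat for `seconds`."""
    heat_joules = it_load_watts * seconds
    kg_evaporated = heat_joules / LATENT_HEAT_J_PER_KG
    return kg_evaporated / KG_PER_LITER

# Hypothetical 100 MW facility over one day:
liters_per_day = evaporative_water_use_liters(100e6, 24 * 3600)
gallons_per_day = liters_per_day / LITERS_PER_US_GALLON
print(f"{liters_per_day:,.0f} L/day (~{gallons_per_day:,.0f} gal/day)")
```

Under these assumptions the result lands around a million gallons per day, consistent with the scale described above; actual draw depends on climate, cooling design, and load.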

The specific water costs of AI are difficult to measure precisely because tech companies rarely disclose water usage at the facility level, and the water footprint is split between two sources: 'on-site' water (used directly at the data center for cooling) and 'off-site' water (used at power plants to generate the electricity the data center consumes). A widely cited 2023 study by researchers at UC Riverside estimated that training GPT-3 consumed approximately 700,000 liters of water, and that each ChatGPT conversation consumes roughly 500 milliliters — about the volume of a standard water bottle. These figures depend heavily on the energy mix and cooling method of the specific data centers used and should be understood as order-of-magnitude estimates, not precise accounting.
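The on-site/off-site split described above is often expressed through two metrics: on-site Water Usage Effectiveness (WUE, liters of water consumed per kWh of IT energy) and an off-site electricity water intensity factor (liters per kWh of grid electricity). A minimal sketch of how they combine is below; the numeric values and the 1,000 MWh workload are illustrative assumptions in the ranges discussed in the literature, not disclosed figures from any provider.

```python
# Order-of-magnitude water footprint for an AI workload, split into
# on-site (cooling) and off-site (electricity generation) components.
# Both factors below are assumed illustrative values, not operator data.

ON_SITE_WUE_L_PER_KWH = 1.8     # liters evaporated on-site per kWh of IT energy (assumed)
OFF_SITE_EWIF_L_PER_KWH = 3.1   # liters consumed per kWh of grid electricity (assumed)

def water_footprint_liters(energy_kwh: float) -> dict:
    """Split a workload's water footprint into on-site and off-site liters."""
    on_site = energy_kwh * ON_SITE_WUE_L_PER_KWH
    off_site = energy_kwh * OFF_SITE_EWIF_L_PER_KWH
    return {"on_site": on_site, "off_site": off_site, "total": on_site + off_site}

# Hypothetical training run consuming 1,000 MWh (1,000,000 kWh):
fp = water_footprint_liters(1_000_000)
print(f"on-site: {fp['on_site']:,.0f} L, off-site: {fp['off_site']:,.0f} L, "
      f"total: {fp['total']:,.0f} L")
```

Because both factors vary with climate, cooling design, and the local energy mix, any such estimate inherits the order-of-magnitude uncertainty noted above.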

The siting of water-intensive data centers in drought-stressed regions is a well-documented pattern. Mesa, Arizona — in one of the driest metropolitan areas in the United States, in a state managing a decades-long water crisis driven by Colorado River depletion — hosts major data centers for Google, Microsoft, and Meta. Companies are often legally permitted to draw water from municipal supplies under existing contracts, but those contracts predate the current scale of AI workloads by years or decades. Communities face a difficult negotiation: the economic incentives for hosting data centers are real, but the water consumed is drawn from the same aquifers and river allocations that support agriculture, residential use, and ecosystem health.

Water disclosure is a significant gap in AI accountability frameworks. Energy reporting requirements for data centers exist in some jurisdictions — notably the EU's Energy Efficiency Directive — but water use reporting is far less common. Most major AI companies report aggregate global water use in annual sustainability reports, but these figures are often not broken down by facility, use type, or watershed. This makes it impossible for affected communities to assess local impacts or for regulators to set appropriate conditions on data center permits, effectively insulating corporate decisions from democratic accountability.

Water scarcity is among the most acute resource challenges of the coming decades. The UN estimates that by 2025, two-thirds of the world's population could face water-stressed conditions; by 2050, as many as 5.7 billion people may face freshwater shortages at least one month per year. Against this backdrop, the installation of water-intensive industrial cooling infrastructure in already-stressed regions represents a genuine allocation conflict. Data centers are competing for water with agriculture — which accounts for roughly 70% of global freshwater withdrawals — and with residential use, both of which have considerably less political power than large technology corporations in negotiations with local governments.

The water problem compounds the energy problem in ways that resist easy solutions. AI data centers increasingly pursue 'free cooling' — using outside air to cool servers when ambient temperatures are low enough — to reduce water consumption. But this strategy only works in cooler climates or cooler seasons, and as global temperatures rise, the window of viable free cooling shrinks. Simultaneously, the power plants generating electricity for data centers in water-rich regions also consume water for cooling — so reducing on-site water use may simply shift the water footprint to the electricity supply chain. The two resource challenges are deeply interconnected and cannot be addressed independently.

The communities most affected by data center water consumption often have the least information about it and the least power to contest it. Environmental impact assessments for data center construction vary dramatically by jurisdiction, and water use is inconsistently evaluated. Companies typically negotiate water use agreements with municipal utilities rather than with affected communities directly, and those agreements are often treated as commercial secrets. Advocacy organizations and investigative journalists have been the primary sources of public information about data center water use — a gap that reflects systemic failures of disclosure and democratic accountability at every level of government.

The water cost of AI is not a fixed feature of the technology — it is a consequence of design choices and policy environments. More efficient chip designs, better waste heat recovery, closed-loop cooling systems, and thoughtful geographic siting could all substantially reduce water consumption per unit of compute. The EU's Code of Conduct for Data Centres and emerging water disclosure requirements in some U.S. states are early attempts to create accountability. Whether these regulatory approaches develop fast enough to shape the current buildout — rather than merely catching up to its consequences — is a genuinely open question.


Sources & Further Reading

  1. Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models, arXiv / UC Riverside (2023)
  2. Google's AI Poses a Challenge to Its Pledge to Be Carbon Free by 2030, The Guardian (2023)
  3. Big Tech Is Draining Water in Arizona During a Drought, AZ Central (2022)
  4. The Water Footprint of Electricity from Large Hydropower Projects, Nature Sustainability (2021)
  5. Water Scarcity, United Nations (2023)
  6. Energy Efficiency Directive, European Commission (2023)