What Is an AI Data Center? An Interactive Map of Every Active AI Data Center in the World


AI Data Centers — Global Infrastructure Map

Major AI compute facilities worldwide · Click any marker for details


Last updated February 2026. Sources: Oxford University, S&P Global, Bain & Company, World Economic Forum.

Somewhere in the West Texas flatlands, on land that used to grow cotton, a building the size of 15 football fields is humming with the sound of 100,000 computers thinking. There are no windows. The cooling systems alone consume more electricity than most small towns. And this is just one of the facilities in a global building spree that is quietly reshaping the world’s economy, geography, and balance of power.

This is the AI data center race, and it is moving faster than almost anyone predicted. The major technology giants are in a gold rush to emerge as the dominant player of the AI era.

So who is winning? The short answer is complicated, because it depends on what you’re counting.

Microsoft has built the most individual facilities, with 10 operational AI data centers spread across 7 countries. Amazon AWS has planted its flag in more countries than anyone else, with infrastructure running across 8 nations from Virginia to São Paulo to Singapore. Google matches Amazon almost facility for facility, with 9 locations anchored by decades of networking experience.

But then there’s OpenAI’s Stargate project, which is playing an entirely different game. Instead of building many medium-sized facilities, Stargate is constructing a small number of campuses so large they have their own dedicated power substations. The flagship in Abilene, Texas is already running. The total investment committed through 2028 is $500 billion, making it the single largest private infrastructure investment in human history.

Counting facilities, Microsoft wins. Counting raw power, Stargate is in a league of its own.

| Company | Facilities | Countries | 2025 Spend | Key AI Model |
| --- | --- | --- | --- | --- |
| Microsoft / Azure | 10 | 7 | $80 billion | GPT-4o, Copilot |
| Amazon AWS | 9 | 8 | $100+ billion | Bedrock, Claude |
| Google / DeepMind | 9 | 6 | $75 billion | Gemini, AlphaFold |
| Meta AI | 7 | 5 | $60–65 billion | Llama 3.x |
| OpenAI / Stargate | 8 sites | 4 | $500B by 2028 | GPT series, o3 |
| Oracle Cloud | 6 | 5 | Stargate partner | OCI AI platform |
| xAI | 2 | 1 | Undisclosed | Grok 3 |

Walk into a traditional data center and you’d find rows of servers quietly processing emails, storing files, and running company websites. Walk into an AI data center and the first thing you’d notice is the heat, followed immediately by the noise.

These are not the same machines. AI chips, particularly NVIDIA’s H100 and H200 GPUs, generate extraordinary amounts of heat while doing the mathematical work of training and running large models. A traditional server rack draws around 5 to 10 kilowatts of power. An AI rack pulls 40 to 130 kilowatts today, with projections reaching 250 kilowatts by 2027. That difference is not just technical, it changes everything about how the building is designed, cooled, and powered.
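To make that density difference concrete, here is the rack arithmetic for a hypothetical 100 MW IT load (the facility size is an assumption for illustration; the per-rack power figures are the ones cited above):

```python
# Rack-count math for a hypothetical 100 MW IT load, using the
# per-rack power figures cited in the article.
IT_LOAD_KW = 100_000  # 100 MW of IT load (illustrative, not a specific site)

traditional_racks = IT_LOAD_KW / 10   # ~10 kW per traditional server rack
ai_racks_today = IT_LOAD_KW / 130     # high end of today's AI racks
ai_racks_2027 = IT_LOAD_KW / 250      # projected 2027 rack density

print(f"Traditional (10 kW/rack): {traditional_racks:,.0f} racks")
print(f"AI today (130 kW/rack):   {ai_racks_today:,.0f} racks")
print(f"AI 2027 (250 kW/rack):    {ai_racks_2027:,.0f} racks")
```

The same building that once held ten thousand racks of email servers holds only a few hundred AI racks at the same power draw, which is why the floor plan, the cooling loops, and the electrical switchgear all have to be redesigned from scratch.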

Most new AI facilities now use liquid cooling, where water or specialized fluid is piped directly to the chips. It is 3,000 times more efficient than blowing cold air, and at this scale, efficiency is everything. Running a single large AI training job, the kind that produces a model like GPT-4, consumes roughly as much electricity as 1,000 American homes use in a year.

And yet, despite the enormous cost of running them, AI data centers generate $12.50 in revenue per watt annually, nearly three times the $4.20 that traditional facilities produce. That gap explains why Microsoft, Amazon, and Google are collectively spending over $250 billion on AI infrastructure in 2026 alone, and why Wall Street is funding it enthusiastically.
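That revenue gap is easiest to see at facility scale. A quick back-of-the-envelope calculation, using the per-watt figures above and assuming a hypothetical 100 MW facility for illustration:

```python
# Annual revenue for a hypothetical 100 MW facility at the
# per-watt figures cited in the article.
FACILITY_WATTS = 100e6  # 100 MW (illustrative facility size, an assumption)

ai_rev = 12.50 * FACILITY_WATTS          # $12.50 per watt per year
traditional_rev = 4.20 * FACILITY_WATTS  # $4.20 per watt per year

print(f"AI facility:          ${ai_rev / 1e9:.2f}B / year")
print(f"Traditional facility: ${traditional_rev / 1e9:.2f}B / year")
print(f"Ratio: {ai_rev / traditional_rev:.1f}x")
```

Roughly $1.25 billion a year against $420 million for the same power footprint: at those economics, a multi-billion-dollar construction bill can pay back quickly, which is the Wall Street logic in one line.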

The announcement came in January 2025, at the White House, with cameras rolling. OpenAI, SoftBank, Oracle, and NVIDIA were committing $500 billion to build AI infrastructure across the United States. President Trump stood alongside Sam Altman and called it a national priority. The project had a name: Stargate.

What made Stargate different from every previous data center announcement was not just the money, it was the scale of individual sites. The Abilene campus in Texas is already operational. The Milam County campus, being built by SoftBank’s energy division, is targeting 1.5 gigawatts of capacity to be delivered over 18 months, a construction pace that engineers described as historically unprecedented.

By the time all announced Stargate sites are complete, the total compute capacity will exceed that of most national governments. The Abilene campus alone, according to infrastructure analysts, has more AI processing power than the combined AI infrastructure of most G20 countries.

The project is also going global. A campus in Abu Dhabi is being built in partnership with UAE-based G42, Oracle, and NVIDIA, with an operational target of 2026. In Patagonia, Argentina, a $25 billion investment is creating the first hyperscale AI campus in Latin America, using AMD Instinct GPUs and drawing power from the region’s renewable energy resources.

“Data center and AI-related investments accounted for 80% of the increase in US private domestic demand in the first half of 2025.” — S&P Global Market Intelligence

Look at where these facilities are built and a clear logic emerges. Every location is a calculated answer to the same four questions: Where is the power? Where is the water? How much does the land cost? And is the political environment stable enough to justify a 20-year investment?

This is why Northern Europe has become one of the most important AI infrastructure regions in the world. Meta built its Luleå campus in Swedish Lapland specifically because Arctic air provides free cooling for most of the year. Google chose Hamina, Finland because a former paper mill gave it access to cold sea water for the same purpose. Cooling can represent 30 to 40 percent of a data center’s operating costs, so a location that essentially eliminates that expense is worth an enormous premium.

It explains why Iowa hosts some of the largest campuses in the US despite being far from any major tech hub. Wind power is cheap, land is plentiful, and the state government has been actively courting data center investment for years. Google’s Council Bluffs campus and Microsoft’s West Des Moines facility are both there, and Meta’s Prineville campus in rural Oregon follows the same logic: go where the electricity is affordable and reliable.

China presents a different picture. Oxford University’s research confirms 22 AI-specialized data centers, concentrated in Beijing, Hangzhou, Guizhou, and Tianjin, making it the second most infrastructure-rich nation after the US. But US export controls on NVIDIA’s most advanced chips have forced Chinese operators to increasingly rely on domestic alternatives from Huawei and Cambricon. The race continues, just with different hardware.

And then there is the Middle East, where enormous sovereign wealth funds and a strategic desire to be at the center of the AI era are combining to create something new. The UAE alone has attracted over $20 billion in announced hyperscale investments, and at the February 2025 Paris AI Summit the UAE pledged up to €50 billion to build an AI campus in France. These are not just technology investments. They are geopolitical ones.

Here is the uncomfortable reality underneath all of this building: nobody has figured out the electricity problem yet.

The AI industry is consuming power at a rate that is straining grids across the United States and Europe. Bain & Company has identified power availability, not GPU supply and not construction costs, as the single biggest bottleneck constraining AI data center expansion right now. Ten gigawatts of new data center capacity broke ground globally in 2025, enough to power roughly 7.5 million homes. And demand is still outpacing supply.

xAI’s Memphis Colossus runs continuously at over 150 megawatts. That is approximately what it takes to power a city of 100,000 people, dedicated entirely to running one company’s AI systems. Multiply that by dozens of facilities across a dozen companies and the scale of the challenge becomes clear.
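The grid figures above can be sanity-checked with one assumption: that an average US home draws roughly 1.33 kW continuously (about 11,600 kWh per year, an estimate not taken from the article):

```python
# Sanity-checking the grid comparisons, assuming an average US home
# draws ~1.33 kW continuously (an assumption for illustration).
AVG_HOME_KW = 1.33

colossus_mw = 150  # xAI Memphis Colossus continuous draw (from the article)
homes_equiv = colossus_mw * 1000 / AVG_HOME_KW
print(f"150 MW ≈ {homes_equiv:,.0f} homes")

new_capacity_gw = 10  # capacity that broke ground globally in 2025
homes_total = new_capacity_gw * 1e6 / AVG_HOME_KW
print(f"10 GW ≈ {homes_total / 1e6:.1f} million homes")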

This is why Microsoft, Google, and Amazon have all begun investing seriously in small modular nuclear reactors. It is why Microsoft signed a deal to restart the Three Mile Island nuclear plant in Pennsylvania. The AI race has quietly become an energy race, and the companies that secure reliable, clean, large-scale power sources in the next five years may have a structural advantage that is very hard for competitors to overcome.

Most technology infrastructure stories stay inside the technology industry. This one has escaped.

S&P Global’s economists noted that data center and AI-related spending accounted for 80 percent of the growth in US private domestic demand in the first half of 2025. That is a staggering figure. It means that the AI building spree is now a significant driver of the broader US economy, affecting construction jobs, electrical engineering, steel production, and commercial real estate in ways that reach far beyond Silicon Valley.

Countries that fall behind in AI infrastructure are increasingly aware of the long-term consequences. France’s €109 billion AI investment announcement in early 2025 was fundamentally about not ceding the infrastructure race to the US and China. India’s $15 billion partnership with Google, announced in late 2025, follows the same logic. Governments that once worried about semiconductor supply chains are now equally worried about who controls the buildings where AI actually runs.

Oxford University’s research found that only 32 nations currently have any AI-specialized data center infrastructure at all. For the other 160-plus countries, the question is not whether to join the race, it is whether they will ever be able to.

  1. The United States hosts 5,427 operational data centers, roughly 45 percent of all facilities worldwide.
  2. Only 32 nations have AI-specialized data centers, per Oxford University research.
  3. The average cost per AI rack reached $3.9 million in 2025.
  4. xAI’s Memphis Colossus houses 100,000 NVIDIA H100 GPUs under one roof.
  5. Meta raised $62 billion in debt since 2022, nearly half of it in 2025, to finance AI expansion.
  6. Global data center investment hit a record $61 billion in 2025, with debt issuance nearly doubling to $182 billion.
  7. AI data centers generate $12.50 per watt in annual revenue versus $4.20 for traditional facilities.
  8. 70 percent of all global data center capacity will be dedicated to AI workloads by 2030.
  9. The AI data center market will grow from $236 billion in 2025 to $934 billion by 2030, a compound annual growth rate of 31.6 percent.
  10. Stargate’s $500 billion commitment is the largest private infrastructure investment ever recorded.
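The growth rate in point 9 can be checked directly from the two endpoint figures; small differences come only from rounding:

```python
# Verifying the market-growth figure in point 9:
# $236B (2025) -> $934B (2030) over 5 years.
start, end, years = 236e9, 934e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
```

The computed rate is about 31.7 percent, matching the cited 31.6 percent compound annual growth rate to within rounding of the endpoint figures.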

Which country has the most AI data centers? The United States leads with 26 AI-specialized facilities. China follows with 22. The European Union collectively hosts 28 across its member states, though no single EU country comes close to the US total. Only 32 nations worldwide have AI-specialized infrastructure at all, according to Oxford University.

What is the world’s largest AI data center? As of early 2026, the Stargate Abilene campus in Texas is among the most powerful AI-dedicated facilities ever built. xAI’s Memphis Colossus, with 100,000 NVIDIA H100 GPUs, is the largest single-company AI training cluster. Microsoft’s Wisconsin campus, currently under construction on a 1,030-acre site, will be among the largest by physical footprint when complete.

How much does it cost to build an AI data center? The average AI rack cost $3.9 million in 2025. A mid-scale facility running 50 to 100 megawatts costs between $1 billion and $3 billion to build. Hyperscale campuses like Stargate run into tens of billions. Global investment in data center construction hit a record $61 billion in 2025.

Why are AI data centers mostly in the Northern Hemisphere? Cold climates dramatically reduce cooling costs, which represent 30 to 40 percent of operating expenses. Add in the concentration of power grid infrastructure, fiber-optic networks, and technology capital in North America, Europe, and East Asia, and the geographic clustering makes obvious economic sense.

What is Project Stargate? A $500 billion joint venture announced in January 2025 between OpenAI, SoftBank, Oracle, and NVIDIA to build AI data center infrastructure across the United States and internationally. It is the largest private infrastructure investment in history. The flagship campus in Abilene, Texas is already operational, with additional sites under construction across the US, UAE, and Argentina.

Sources: Oxford University AI Data Center Geography Report · S&P Global Market Intelligence · Bain & Company Data Center Forecast 2030 · World Economic Forum · MarketsandMarkets · Fortune Business Insights · Company investor filings (Microsoft, Amazon, Google, Meta, OpenAI, Oracle, xAI) · WEF Paris AI Summit, February 2025
