AI Data Center Stakeholders
Building AI infrastructure requires 30+ specialized companies working in concert. A hyperscaler like Microsoft finances, builds, and operates its own facilities. A neocloud like CoreWeave leases colocation space, raises private credit, and buys GPU servers from OEMs. An enterprise buys through resellers and colocates in someone else's building. As scale decreases, intermediaries increase.
Smaller clusters, more middlemen
Which stakeholders each project needs
Some stakeholders, like NVIDIA and colo providers, touch every project type. Others are unique to one. Private credit backs neocloud builds but barely exists for the others. The VAR defines enterprise procurement but is invisible at hyperscale.
How a hyperscaler builds a campus
Hyperscaler projects start with power. Securing 100+ MW can take 2-4 years when new substations or transmission lines need to be built. The hyperscaler negotiates directly with the utility, often locking in 10-15 year terms and signing power purchase agreements (PPAs) with renewable energy developers. For example, Microsoft signed a 20-year PPA with Constellation Energy to reopen Three Mile Island's Unit 1 nuclear reactor. [1]
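A rough sense of why power comes first: even a mid-sized training cluster draws at utility scale. The figures below (cluster size, per-server draw, PUE) are illustrative assumptions, not measurements from any specific facility:

```python
# Back-of-envelope facility power for a GPU campus.
# All inputs are illustrative assumptions, not vendor specifications.

GPUS = 100_000              # assumed target cluster size
KW_PER_8GPU_SERVER = 10.2   # assumed draw per 8-GPU server, CPUs and NICs included
PUE = 1.3                   # assumed power usage effectiveness (cooling, conversion losses)

servers = GPUS / 8
it_load_mw = servers * KW_PER_8GPU_SERVER / 1000  # IT load in megawatts
facility_mw = it_load_mw * PUE                    # total facility draw

print(f"IT load: {it_load_mw:.0f} MW, facility: {facility_mw:.0f} MW")
```

At these assumed numbers, a 100,000-GPU campus lands well past the 100 MW threshold that triggers multi-year utility negotiations.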
Hyperscalers self-finance from operating cash flow and bond issuances. For instance, Microsoft raised $11.5 billion in bonds in October 2024, its largest offering since 2017, partly for AI data center expansion. [2] No private credit funds, no venture capital. No external stakeholders impose covenants on GPU utilization rates.
GPU servers are ordered in tens of thousands from OEMs (original equipment manufacturers) like Dell, HPE, and Supermicro, or from ODMs (original design manufacturers) like Foxconn and Quanta that build custom designs. GPU allocation is negotiated directly with NVIDIA.
How a neocloud launches a GPU cloud
Neoclouds occupy other people's buildings and use other people's money. Every external dependency a hyperscaler might avoid, a neocloud relies on. The neocloud leases space from a colocation provider like Equinix or QTS.
In 2024, CoreWeave raised $7.5 billion in debt from Blackstone and other lenders, plus $1.1 billion in Series C equity at a $19 billion valuation. [3][4] The debt is collateralized by GPU hardware and forward customer revenue.
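That collateral structure can be sketched as a borrowing-base calculation, the mechanism lenders commonly use for asset-backed facilities. The advance rates and dollar figures below are invented for illustration and are not the terms of CoreWeave's facility:

```python
# Illustrative borrowing base for a GPU-collateralized loan.
# Every number here is an assumption for illustration only.

gpu_collateral_value = 5_000_000_000  # assumed resale value of the GPU fleet
contracted_revenue = 4_000_000_000    # assumed forward revenue under signed contracts

GPU_ADVANCE_RATE = 0.70       # lender advances 70% against hardware (assumed)
REVENUE_ADVANCE_RATE = 0.50   # and 50% against contracted revenue (assumed)

borrowing_base = (gpu_collateral_value * GPU_ADVANCE_RATE
                  + contracted_revenue * REVENUE_ADVANCE_RATE)
print(f"Maximum loan supported: ${borrowing_base / 1e9:.1f}B")
```

Because GPUs depreciate quickly, lenders lean on the contracted-revenue leg of the base, which is why forward customer commitments matter as much as the hardware itself.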
GPU supply is often a binding constraint for neoclouds. NVIDIA allocates across customers, and neoclouds compete with hyperscalers that have deeper relationships and larger orders. Lead times for the latest NVIDIA GPUs have ranged from 3-12 months depending on demand cycles. [5] NVIDIA does, however, have an interest in diversifying its customer base, and purposely carves out limited allocation for neoclouds.
How an enterprise deploys AI
Enterprise deployments are the smallest in scale and can involve the most intermediaries. Some skip hardware entirely and buy GPU cloud capacity from a hyperscaler or neocloud. The decision depends on data sensitivity, workload predictability, and whether the enterprise has staff to operate GPU infrastructure.
Enterprises that buy hardware often go through a broker or a VAR (value-added reseller). For deployments under 1,000 GPUs, this is the standard procurement channel. The VAR handles:
- OEM sourcing and allocation access
- Enterprise pricing and financing
- Hardware configuration and testing
- Logistics and delivery coordination
- Ongoing support contracts
(Almost) every role in an AI data center project
The roles below cover the major categories, though the list is not exhaustive. Bigger projects tend to add more specialized service providers around the edges: environmental consultants, permitting agencies, physical security firms. Smaller projects need fewer of those but add more intermediaries between themselves and their suppliers: brokers, resellers, and distributors that each add a hop between the GPU and the project.
References
- [1] Constellation Energy, "Constellation to Launch Crane Clean Energy Center, Restoring Jobs and Carbon-free Power to the Grid" (September 2024). 20-year PPA with Microsoft. https://investors.constellationenergy.com/news-releases/news-release-details/constellation-launch-crane-clean-energy-center-restoring-jobs
- [2] Bloomberg, "Microsoft Sells $11.5 Billion of Bonds in Year's Biggest Tech Deal" (October 2024).
- [3] CNBC, "AI startup CoreWeave raises $7.5 billion in debt, Blackstone leads" (May 2024). Debt facility led by Blackstone, Magnetar, and others. https://www.cnbc.com/2024/05/17/ai-startup-coreweave-raises-7point5-billion-in-debt-blackstone-leads.html
- [4] CNBC, "Nvidia-backed GPU cloud provider CoreWeave is worth $19 billion" (May 2024). $1.1 billion Series C at $19 billion valuation. https://www.cnbc.com/2024/05/01/nvidia-backed-gpu-cloud-provider-coreweave-is-worth-19-billion.html
- [5] Tom's Hardware / UBS, "Wait times for Nvidia's AI GPUs ease to three to four months" (February 2024). H100 lead times peaked at 8-11 months in late 2023 before improving to 3-4 months. https://www.tomshardware.com/tech-industry/artificial-intelligence/wait-times-for-nvidias-ai-gpus-eases-to-three-to-four-months-suggesting-peak-in-near-term-growth-the-wait-list-for-an-h100-was-previously-eleven-months-ubs
Frequently Asked Questions
How long does it take a hyperscaler to secure 100+ MW for AI data center power?
Hyperscaler projects start with power. Securing 100+ MW can take 2-4 years when new substations or transmission lines need to be built. The hyperscaler negotiates directly with the utility, often locking in 10-15 year terms and signing power purchase agreements (PPAs) with renewable energy developers.
How do hyperscalers finance AI data center expansion?
Hyperscalers self-finance from operating cash flow and bond issuances. Microsoft raised $11.5 billion in bonds in October 2024, its largest offering since 2017, partly for AI data center expansion. No private credit funds, no venture capital.
How did CoreWeave fund its GPU cloud buildout in 2024?
Neoclouds occupy other people's buildings and use other people's money. CoreWeave raised $7.5 billion in debt from Blackstone and other lenders, plus $1.1 billion in Series C equity at a $19 billion valuation, in 2024. The debt is collateralized by GPU hardware and forward customer revenue.
What is the standard procurement channel for an enterprise GPU deployment under 1,000 GPUs?
For a deployment under 1,000 GPUs, a broker or a VAR is the standard procurement channel. Enterprise deployments are the smallest in scale and can involve the most intermediaries. The VAR handles OEM sourcing and allocation access, enterprise pricing and financing, hardware configuration and testing, logistics and delivery coordination, and ongoing support contracts.