
AC Research
American Compute
The AC Research team publishes analysis on GPU economics, AI infrastructure financing, and hardware depreciation. Our work is informed by direct experience underwriting GPU residual value insurance.
Articles
Who Has NVIDIA's Blackwell GPUs: Market Size & Fragmentation (Q2 2026)
A market-level breakdown of who bought, deployed, and clustered NVIDIA's 3.2 million Blackwell GPU packages through the end of 2025, with segment allocation, named mega-clusters, and forward projections through 2027.
The GPU Black Market that Washington Can't Shut Down
The U.S. banned AI chip exports. Billions still reached China — moved by shell companies, Southeast Asian middlemen, and operators gaming the system.
Where Data Centers Are Built in the U.S. and Why
The history of US data centers from military bunkers to AI megacampuses, and why ten states, led by Virginia, Texas, and Georgia, dominate new construction.
Compute Offtake Agreements
How compute offtake MSAs work from the seller’s perspective: contract structure, pricing models (Reserved Instances, Bulk Credits, on-demand), payment enforcement, SLA exposure, liability caps, termination economics, and customer restrictions. Based on CoreWeave’s public MSAs with Microsoft, OpenAI, Meta, and NVIDIA, plus HPE/Soluna and NVIDIA DGX Cloud terms.
Natural Gas for Data Centers
Natural gas generates 40% of US electricity and is the fastest path to on-site power for data centers. How the supply chain works, combined cycle vs simple cycle plants, on-site generation trends, and the economic and carbon tradeoffs.
What AI Infrastructure Lobbyists Are Fighting Over
Federal lobbying filings mentioning "data center" grew 7x between 2021 and 2025. Utilities, construction contractors, regional coalitions, and foreign AI startups are all filing alongside tech companies on grid cost allocation, permitting speed, energy policy, and tax treatment.
What Powers the Grid for AI Data Centers
An overview of every energy source feeding the US grid, how they compare on cost, capacity factor, carbon intensity, and speed to deploy, and why the generation mix matters for data center operators.
How to Finance a GPU Cluster
The practical guide to financing a GPU cluster at the $5M-$100M scale. How to structure the deal, why utilization determines your terms, and how to layer equity, senior debt, and mezzanine to close the capital stack.
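As a rough illustration of the layering that guide walks through, here is a minimal capital-stack sketch. The 60/20/20 split, the rates, and the $20M cluster cost are hypothetical assumptions for illustration, not figures from the article.

```python
# Hypothetical capital stack for a $20M GPU cluster.
# The split and rates below are illustrative assumptions.
cluster_cost = 20_000_000

senior_debt = 0.60 * cluster_cost  # senior lender advances 60% of cost
mezzanine   = 0.20 * cluster_cost  # mezzanine fills the next 20%
equity      = cluster_cost - senior_debt - mezzanine  # sponsor funds the rest

senior_rate, mezz_rate = 0.09, 0.15  # assumed annual rates, interest-only

annual_interest = senior_debt * senior_rate + mezzanine * mezz_rate
print(f"equity: ${equity:,.0f}")
print(f"annual interest: ${annual_interest:,.0f}")
```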
Disaggregated Inference: How NVIDIA, AWS, and Cerebras Are Rethinking LLM Inference
Disaggregated inference started as a software technique for splitting prefill and decode onto separate GPU pools. NVIDIA, Groq, Cerebras, and AWS are now taking it further with chips purpose-built for each phase.
GPU Tech Refresh: When to Upgrade Your AI Cluster
H100 rental rates rebounded 40% to $2.35/hr. Blackwell is sold out through September 2026. For a 256-GPU fleet, holding generates $6.6M in 3-year net cash, while a full B200 refresh produces $2.3M after debt service on $3.5M of equity.
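The hold-vs-refresh comparison reduces to a cash-flow model. The sketch below shows only the shape of that calculation; every input is a placeholder chosen to land in the blurb's ballpark, not the article's actual model.

```python
# Hold-vs-refresh skeleton for a 256-GPU fleet. All inputs are
# placeholders, not the article's model.
GPU_HOURS_PER_YEAR = 256 * 8760
UTILIZATION = 0.85  # assumed

def net_cash(rate, opex, years, debt_service=0.0):
    """Net cash: utilization-adjusted revenue minus opex and debt service."""
    revenue = GPU_HOURS_PER_YEAR * UTILIZATION * rate * years
    costs = GPU_HOURS_PER_YEAR * opex * years + debt_service * years
    return revenue - costs

hold = net_cash(rate=2.35, opex=1.00, years=3)                         # keep H100s
refresh = net_cash(rate=3.00, opex=1.20, years=3, debt_service=2.3e6)  # B200 swap
print(f"hold: ${hold/1e6:.1f}M, refresh: ${refresh/1e6:.1f}M")
```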
Total Cost: Owning vs Renting GPU Clusters
A 256-GPU B200 cluster costs $2.31/GPU/hr to own, $2.50/GPU/hr on a long-term reserved contract, and $3.00/GPU/hr on-demand. Ownership wins when utilization stays high (above ~70%); otherwise you are paying for capacity you are not using.
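One way to see where the break-even sits: if $2.31/hr assumes a fully utilized cluster, the effective cost of each used GPU-hour is that figure divided by utilization. The simple fixed-cost sketch below lands near 77% against on-demand; the article's ~70% presumably reflects a fuller model in which some owning costs (such as power) scale with use.

```python
# Effective owning cost per *used* GPU-hour, treating the $2.31/hr
# ownership figure as fully fixed. A fuller model would split fixed
# vs variable costs, which pulls the break-even lower.
OWN_AT_FULL_UTIL = 2.31  # $/GPU/hr
ON_DEMAND = 3.00         # $/GPU/hr

print(f"break-even vs on-demand: {OWN_AT_FULL_UTIL / ON_DEMAND:.0%}")
for u in (0.5, 0.7, 0.9):
    print(f"at {u:.0%} utilization, owning costs ${OWN_AT_FULL_UTIL / u:.2f}/used GPU-hr")
```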
What Is an AI Factory, and Why Is NVIDIA Franchising Them Out
An AI factory is just a data center full of GPUs. NVIDIA coined the term to sell complete factory configurations, from chips to full data center blueprints, bundled with $4,500/GPU/year software licensing. The model is closer to McDonald's franchising than component sales.
Why Compute Is Not a Commodity
GPU compute is not fungible. Even identical GPU models trade at 2-8x price spreads depending on networking, location, and provider. Commodity exchanges require standardized grading that compute lacks. Financial products for compute work when structured like infrastructure bonds, not futures contracts.
The Modular Data Center Opportunity
Power scarcity is pushing AI infrastructure toward networks of small, modular data centers in remote locations. Finding 2-5 MW takes months. Finding 500 MW near a metro takes years. Modular deployment has precedent in oil refining, LNG, and nuclear, and NVIDIA is formalizing it through its AI factory franchise with Bechtel.
How Power Reaches an AI Data Center
How power physically reaches a data center from the grid, why the interconnection queue is the biggest bottleneck in AI infrastructure, and what operators are doing about it. Only 13% of interconnection requests since 2000 have reached commercial operation.
Memory for AI Accelerators
HBM, GDDR, and SRAM compared: how memory hierarchy, bandwidth, and capacity determine AI accelerator performance, cost, and which workloads each chip can serve.
Who Is Building Compute and Why Is It So Lucrative
Private equity firms, family offices, and entrepreneurial operators are building sub-$100M inference GPU clusters. Locked-in offtake agreements and GPU-backed debt make 2-3x MOIC achievable over a 3-5 year hold.
The Power Budget of an AI Data Center
Where every watt goes in a modern AI data center, from GPU TDP through server overhead, cooling, and power delivery losses. A 100,000-GPU cluster needs roughly 200 MW from the grid, but only 41% reaches the GPUs.
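The 41% figure is the product of stacked losses. A sketch with assumed stage efficiencies (the individual factors below are illustrative guesses; only the ~41% end-to-end result comes from the blurb):

```python
# Grid-to-GPU power cascade. Individual fractions are illustrative
# assumptions; the blurb anchors only the ~41% end-to-end result.
grid_mw = 200
delivery_eff = 0.93  # transformers, UPS, distribution losses
pue = 1.35           # cooling and facility overhead: IT load = total / PUE
gpu_share = 0.60     # GPUs as a share of server power (CPUs, NICs, fans take the rest)

it_mw = grid_mw * delivery_eff / pue
gpu_mw = it_mw * gpu_share
print(f"IT load: {it_mw:.0f} MW, GPUs: {gpu_mw:.0f} MW ({gpu_mw / grid_mw:.0%} of grid draw)")
```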
AI Cluster Cost Breakdown: OpEx, TCO, and Payback (2026)
Operating costs for AI clusters: power, cooling, colocation, staffing, licensing, maintenance, and total cost of ownership across different deployment scenarios.
Insurance for GPU Clusters
Four types of coverage protect a GPU cluster: all-risk property insurance, transit insurance, an OEM warranty, and residual value insurance. What each covers, how pricing works, and why standard business policies leave most of the hardware value unprotected.
How Data Moves in a Data Center
Every link in a data center follows the same physical pattern: chip, SerDes, cable, SerDes, chip. How SerDes, copper, and optics determine GPU cluster networking speeds from 100G to 1.6T.
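The lane math behind those speeds is simple multiplication: link bandwidth = lane count × per-lane SerDes rate. A quick sketch over the standard Ethernet generations:

```python
# Link speed = lanes x per-lane SerDes rate, per the usual Ethernet
# generations (e.g., 400G = 8 lanes of 50G PAM4).
links = {
    "100G": (4, 25),   # 4 x 25G NRZ
    "400G": (8, 50),   # 8 x 50G PAM4
    "800G": (8, 100),  # 8 x 100G PAM4
    "1.6T": (8, 200),  # 8 x 200G PAM4
}
for name, (lanes, rate) in links.items():
    print(f"{name}: {lanes} lanes x {rate}G = {lanes * rate}G")
```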
NVIDIA Software Ecosystem for AI
NVIDIA’s software stack, from CUDA through cuDNN, TensorRT, and NCCL to Dynamo and NIM, is the reason 90% of cloud AI workloads run on NVIDIA GPUs. What each layer does, how it connects to the hardware, and why switching is hard.
How to Read Colocation Contracts for GPU Clusters
A colocation contract is a license to place equipment, not a lease. The provider can pass through power cost increases, cap its own liability at three months of fees, take a security interest in your GPUs, and charge you the full remaining term if you leave early. Here is how to read the clauses that matter for GPU deployments.
Best OEMs for AI GPU Servers: Tier List (2026)
Every AI server runs the same NVIDIA silicon. The OEM you buy from determines pricing, lead times, and support. Dell, Supermicro, HPE, Lenovo, Cisco, Gigabyte, and ASUS ranked and compared, plus the Taiwanese ODMs that actually build them.
Liquid Cooling vs Air Cooling for GPU Servers
Air cooling blows air over heatsinks. Liquid cooling pumps coolant through cold plates on the GPU die. At Blackwell power levels, the cooling method determines rack density, facility requirements, and which hardware you can run.
NVLink and NVSwitch
NVLink is NVIDIA’s high-bandwidth GPU-to-GPU interconnect. NVSwitch is the routing chip that turns those links into a full mesh. How they work, six generations of specs, and when each one matters for training and inference.
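Per-GPU NVLink bandwidth is just links × per-link bandwidth. The sketch below uses the commonly published bidirectional figures for five of the generations the article covers:

```python
# Aggregate NVLink bandwidth per GPU = link count x per-link
# bidirectional GB/s (commonly published figures).
nvlink = {
    "NVLink 1 (P100)": (4, 40),
    "NVLink 2 (V100)": (6, 50),
    "NVLink 3 (A100)": (12, 50),
    "NVLink 4 (H100)": (18, 50),
    "NVLink 5 (B200)": (18, 100),
}
for gen, (links, gb) in nvlink.items():
    print(f"{gen}: {links} x {gb} GB/s = {links * gb} GB/s per GPU")
```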
GPU Cluster Networking 101
How GPU clusters are networked: NVLink within servers, InfiniBand or Ethernet between them, switches, topology, optics, and real costs from 16-GPU to 24,576-GPU scale.
Bare Metal for AI Compute
Bare metal means renting a physical server with no virtualization layer. For GPU compute, bare metal is becoming the default because the hardware is the product, the cloud premium is hard to justify at full utilization, and AI coding tools let any team build its own stack.
How to Underwrite AI Infrastructure Investments and Why GPU Financing Fails
As of 2026, demand for AI infrastructure is easy to secure. The real risk is deployment: power, permitting, construction, and hardware delivery. Here is how to evaluate schedule risk for data center builds and GPU cluster rollouts.
HGX, DGX, MGX: NVIDIA's Server Platforms
HGX is the GPU baseboard, DGX is the turnkey server, MGX is the modular reference architecture. How they relate, what OEMs change, and which platform fits your deployment.
NVIDIA AI GPU Differences from Volta to Blackwell
NVIDIA’s six flagship data center GPUs compared: V100, A100, H100, H200, B200, and B300. Specs, architecture changes, and which generation to buy in 2026.
Starting a Neocloud in 2026
What it takes to launch a GPU cloud business: hardware costs at three scales, GPU-backed debt structures, colocation constraints, pricing models, CoreWeave unit economics, and the five risks that kill neoclouds.
GPUs as Loan Collateral
What makes good collateral, how GPUs compare to aircraft, railcars, and other established asset classes, and what lenders should evaluate when underwriting GPU-backed loans.
Private Credit and Asset-Backed Securities for GPU Financing
How private credit, ABS, and SPVs became the primary funding mechanism for AI infrastructure. History from Ginnie Mae to GPU-backed bonds, with aircraft and taxi medallion precedents.
Data Center Tiers Explained
What data center tiers actually measure, how certification works, the history of the Uptime Institute standard, notable fraud cases, and what tiers miss about AI workloads.
Neocloud Business Model and Unit Economics
How neoclouds make money selling GPU-hours: contract vs on-demand pricing, cost structure with CoreWeave FY 2025 financials, debt financing mechanics, and what threatens the model.
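The unit economics reduce to GPU-hours sold against cost per GPU-hour. A toy P&L with entirely hypothetical inputs (the rates, mix, and debt figure are placeholders, not CoreWeave's numbers):

```python
# Toy neocloud P&L. Every input is a hypothetical placeholder;
# only the structure mirrors the article's framing.
gpus, hours = 1024, 8760
contract_rate, contract_share = 2.50, 0.70  # reserved contracts, $/GPU/hr
spot_rate, spot_util = 3.00, 0.40           # on-demand fills part of the rest

revenue = (gpus * contract_share * hours * contract_rate
           + gpus * (1 - contract_share) * hours * spot_util * spot_rate)
opex = gpus * hours * 1.10   # power, colo, staff, maintenance per GPU-hour
debt_service = 6_000_000     # annual payment on GPU-backed debt

print(f"revenue ${revenue/1e6:.1f}M, EBITDA ${(revenue - opex)/1e6:.1f}M, "
      f"after debt ${(revenue - opex - debt_service)/1e6:.1f}M")
```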
AI Data Center Stakeholders
Every stakeholder in an AI data center project: power providers, lenders, colos, OEMs, VARs, brokers, consultants, ITADs, and more. Three project lifecycles show how they assemble differently for hyperscalers, neoclouds, and enterprises.
NICs and DPUs for GPU Servers
A NIC connects your server to the network. A DPU is a NIC with its own CPU. Which one you need depends on what your cluster is doing besides training.
When a GPU Dies in Production
How GPU failures are detected, what causes them, what they cost in training and inference, and the full replacement workflow from RMA to validation.
SXM vs PCIe for GPU Servers
SXM and PCIe GPUs use the same silicon. The difference is the connector, and it determines bandwidth, power, cost, and flexibility. Here is how to choose.
AI Cluster Cost Breakdown: CapEx (2026)
What goes into the Bill of Materials for an AI cluster: GPU servers, InfiniBand networking, storage, infrastructure, and real BOMs at 16-GPU, 576-GPU, and 24,576-GPU scale.
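A BOM at any scale is a quantity-times-price sum. The 256-GPU sketch below uses hypothetical unit prices, not the article's BOM figures:

```python
# Toy 256-GPU BOM (32 servers x 8 GPUs). Unit prices are
# hypothetical placeholders, not the article's BOM.
bom = {
    "GPU servers (8x B200)": (32, 500_000),
    "InfiniBand switches":   (12, 180_000),
    "Optics and cabling":    (1, 1_500_000),
    "Storage":               (1, 1_500_000),
    "Racks, PDUs, misc":     (1, 1_000_000),
}
total = sum(qty * price for qty, price in bom.values())
print(f"total CapEx: ${total/1e6:.1f}M (${total / 256:,.0f} per GPU)")
```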
Where to Buy GPU Servers
OEMs, brokers, used vs refurbished, warranties, and what to verify before you write a $200K+ check for GPU hardware.
Every GPU Infrastructure Term You Need to Know
Every term you'll encounter when buying, building, or operating a GPU cluster, defined in plain English. From GPUs and NVLink to colocation and TCO.