
Brenden Reeves
COO, American Compute
Brenden leads operations and research at American Compute. Previously, he worked as an ML consultant for several Fortune 500 companies and as an ML engineer building NLP systems, graph-based RAG pipelines, and enterprise data platforms. His research background includes distributed graph neural network training for high-energy physics at CERN.
LinkedIn →

Articles
Natural Gas for Data Centers
Natural gas generates 40% of US electricity and is the fastest path to on-site power for data centers. How the supply chain works, combined cycle vs simple cycle plants, on-site generation trends, and the economic and carbon tradeoffs.
What Powers the Grid for AI Data Centers
An overview of every energy source feeding the US grid, how they compare on cost, capacity factor, carbon intensity, and speed to deploy, and why the generation mix matters for data center operators.
How Power Reaches an AI Data Center
How power physically reaches a data center from the grid, why the interconnection queue is the biggest bottleneck in AI infrastructure, and what operators are doing about it. Only 13% of interconnection requests since 2000 have reached commercial operation.
The Power Budget of an AI Data Center
Where every watt goes in a modern AI data center, from GPU TDP through server overhead, cooling, and power delivery losses. A 100,000-GPU cluster needs roughly 200 MW from the grid, but only 41% reaches the GPUs.
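The headline arithmetic in that teaser can be sketched directly. The 200 MW grid draw, 100,000-GPU count, and 41% figure come from the teaser; the individual stage efficiencies below are illustrative assumptions chosen to reproduce that 41%, not the article's actual breakdown.

```python
# Hedged sketch: reproduce the teaser's headline arithmetic with assumed
# per-stage loss factors (illustrative values, not the article's breakdown).

GRID_POWER_MW = 200.0     # grid draw for a 100,000-GPU cluster (from the teaser)
NUM_GPUS = 100_000

# Assumed multiplicative factors between the grid and the GPU die.
# The stages mirror the teaser (power delivery, cooling, server overhead),
# but the individual values are assumptions.
power_delivery_eff = 0.92  # transformers, UPS, PDU losses (assumed)
cooling_fraction   = 0.75  # share of facility power left after cooling (assumed)
server_overhead    = 0.60  # share of server power reaching GPUs: CPUs, NICs, fans (assumed)

fraction_to_gpus = power_delivery_eff * cooling_fraction * server_overhead
gpu_power_mw = GRID_POWER_MW * fraction_to_gpus
watts_per_gpu = gpu_power_mw * 1e6 / NUM_GPUS

print(f"{fraction_to_gpus:.0%} of grid power reaches the GPUs")
print(f"≈ {watts_per_gpu:.0f} W per GPU")
```

With these assumed factors the chain multiplies out to about 41%, leaving roughly 830 W per GPU, in the ballpark of a Blackwell-class TDP.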
How Data Moves in a Data Center
Every link in a data center follows the same physical pattern: chip, SerDes, cable, SerDes, chip. How SerDes, copper, and optics determine GPU cluster networking speeds from 100G to 1.6T.
Liquid Cooling vs Air Cooling for GPU Servers
Air cooling blows air over heatsinks. Liquid cooling pumps coolant through cold plates on the GPU die. At Blackwell power levels, the cooling method determines rack density, facility requirements, and which hardware you can run.
NVLink and NVSwitch
NVLink is NVIDIA’s high-bandwidth GPU-to-GPU interconnect. NVSwitch is the routing chip that turns those links into a full mesh. How they work, six generations of specs, and when each one matters for training and inference.
HGX, DGX, MGX: NVIDIA's Server Platforms
HGX is the GPU baseboard, DGX is the turnkey server, MGX is the modular reference architecture. How they relate, what OEMs change, and which platform fits your deployment.
Starting a Neocloud in 2026
What it takes to launch a GPU cloud business: hardware costs at three scales, GPU-backed debt structures, colocation constraints, pricing models, CoreWeave unit economics, and the five risks that kill neoclouds.
Neocloud Business Model and Unit Economics
How neoclouds make money selling GPU-hours: contract vs on-demand pricing, cost structure with CoreWeave FY 2025 financials, debt financing mechanics, and what threatens the model.
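The GPU-hour economics that both neocloud articles describe can be sketched as a back-of-the-envelope model. Every number below is an illustrative assumption, not a figure from CoreWeave's financials or the articles themselves.

```python
# Hedged sketch of per-GPU unit economics for a neocloud.
# All inputs are illustrative assumptions, not actual operator figures.

gpu_capex = 35_000.0        # all-in cost per GPU incl. server share (assumed, USD)
useful_life_years = 5       # straight-line depreciation horizon (assumed)
hours_per_year = 8760
utilization = 0.70          # fraction of hours actually billed (assumed)
price_per_gpu_hour = 2.50   # contracted rate (assumed, USD)
opex_per_gpu_hour = 0.60    # power, colocation, staff per billed hour (assumed, USD)
debt_rate = 0.11            # annual interest on GPU-backed debt (assumed)

billed_hours = hours_per_year * utilization
revenue = billed_hours * price_per_gpu_hour
opex = billed_hours * opex_per_gpu_hour
depreciation = gpu_capex / useful_life_years
interest = gpu_capex * debt_rate   # simplification: interest on full capex each year

annual_profit = revenue - opex - depreciation - interest
payback_years = gpu_capex / (revenue - opex)

print(f"annual revenue per GPU: ${revenue:,.0f}")
print(f"annual profit per GPU:  ${annual_profit:,.0f}")
print(f"simple payback: {payback_years:.1f} years")
```

Under these assumptions a GPU pays back its capex in about three years of cash flow, which illustrates why the model is sensitive to the risks the articles flag: utilization, GPU-hour pricing, and the cost of debt all enter the margin directly.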