How Data Moves in a Data Center

By Brenden Reeves

Data moves through a SerDes (serializer/deserializer), which encodes parallel data into a serial signal. The signal then travels over copper or fiber before another SerDes at the far end turns it back into data. This pattern is the same whether the link connects two GPUs inside a server or carries data across an entire building.

Short links stay on copper. Longer links move to optics. Data travels in lanes, where each lane is one independent path for the data, and there are usually a few lanes running in parallel. Total link speed comes from lane speed multiplied by lane count. G means gigabits per second, so the shift from 100G to 200G per lane doubles what each link can carry. [1][2]

A common path for cabled links

Sending side: chip → SerDes → PCB trace → connector → copper or fiber cable. Receiving side: connector → PCB trace → SerDes → chip.

How data gets from chip to cable

Inside a chip, data moves on wide parallel buses (groups of wires carrying many signals side by side) across short distances. That works within a single chip, but breaks down between chips: the signals start interfering with each other and arriving out of sync. [3]

SerDes solves this. On the sending side, the serializer takes data that was moving across many wires in parallel and sends it as a serial stream over a single pair of wires. On the receiving side, the deserializer spreads it back out so the chip can use it. The diagram above shows the full chain: chip, SerDes, board traces (the thin copper pathways etched into the circuit board), connector, cable, and then the same steps in reverse.
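The parallel-to-serial round trip can be illustrated with a toy sketch. This only shows the data-reshaping idea; a real SerDes also handles clock recovery, equalization, and line coding, none of which appear here.

```python
# Toy serializer/deserializer: a sketch of the parallel-to-serial idea,
# not a model of real SerDes hardware.

def serialize(words, width=8):
    """Flatten parallel words (each `width` bits wide) into one serial bit stream."""
    bits = []
    for word in words:
        for i in range(width - 1, -1, -1):  # most significant bit first
            bits.append((word >> i) & 1)
    return bits

def deserialize(bits, width=8):
    """Regroup the serial bit stream back into parallel words."""
    words = []
    for start in range(0, len(bits), width):
        word = 0
        for bit in bits[start:start + width]:
            word = (word << 1) | bit
        words.append(word)
    return words

data = [0xB4, 0x2F, 0x00]
assert deserialize(serialize(data)) == data  # the round trip preserves the data
```

The receiving deserializer simply inverts what the serializer did, which is why both ends must agree on lane width and bit order.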

Every piece in the chain weakens the signal a little: the board traces, the connectors, the cable itself. The faster the link, the less room there is for that signal loss: at higher speeds each bit occupies less time, so the signal changes faster, and copper attenuates higher-frequency signals more aggressively. A 25G copper cable can run 5+ meters. At 100G per lane, passive copper tops out around 2-3 meters. At 200G per lane, even 1 meter of passive copper is a challenge. [4]
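The shrinking-reach trend can be sketched numerically. This is a toy loss model: the constants (`SKIN_DB_PER_M`, `DIELECTRIC_DB_PER_M`, `LOSS_BUDGET_DB`) are illustrative assumptions, not measured cable specs, and real loss budgets are tighter. It only aims to show why raising the lane rate cuts reach.

```python
import math

# Toy model: copper loss grows with frequency (skin effect ~ sqrt(f),
# dielectric loss ~ f), and the signal's Nyquist frequency rises with
# the lane rate. All constants are illustrative assumptions.

SKIN_DB_PER_M = 0.8         # assumed skin-effect loss, dB/m at 1 GHz
DIELECTRIC_DB_PER_M = 0.12  # assumed dielectric loss, dB/m per GHz
LOSS_BUDGET_DB = 20.0       # assumed total loss the receiver can recover

def nyquist_ghz(lane_gbps, bits_per_symbol):
    """Highest fundamental frequency of the signal: half the symbol rate."""
    return (lane_gbps / bits_per_symbol) / 2

def max_reach_m(lane_gbps, bits_per_symbol):
    """Reach at which accumulated loss exhausts the budget."""
    f = nyquist_ghz(lane_gbps, bits_per_symbol)
    loss_per_m = SKIN_DB_PER_M * math.sqrt(f) + DIELECTRIC_DB_PER_M * f
    return LOSS_BUDGET_DB / loss_per_m

# 25G lanes used NRZ (1 bit/symbol); 100G and 200G lanes use PAM4 (2 bits)
for rate, bps in ((25, 1), (100, 2), (200, 2)):
    print(f"{rate}G/lane: roughly {max_reach_m(rate, bps):.1f} m of passive copper")
```

Even this crude model reproduces the qualitative pattern in the text: each doubling of the lane rate pushes the Nyquist frequency up and the usable copper length down.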

That shrinking reach is the reason faster links move from copper to optics.

Copper vs optical

Copper cables carry electrical signals. Optical cables carry light. The choice comes down to distance, power, and cost.

Passive DACs (Direct Attach Copper cables) are short copper cables with connectors on each end. They need no power and no electronics inside the cable. At 400G, passive DACs reach about 3 meters. At 800G (8 lanes of 100G), that drops to about 2 meters.

AECs (Active Electrical Cables) extend copper's reach by embedding small chips inside the cable that clean up and re-amplify the signal. Credo's HiWire AECs reach up to 7 meters at 800G. [5] They draw more power than passive DACs but less than optical transceivers.

Optical transceivers convert the electrical signal into light using a laser, send it through fiber, and convert it back to an electrical signal at the far end. The conversion adds latency and power, but fiber has far less loss per meter than copper, so it can cover much longer distances.

|                       | Passive copper (DAC) | Active copper (AEC) | Optical |
|-----------------------|----------------------|---------------------|---------|
| Typical reach at 400G | Up to 3 m | Up to 7 m | Room-scale to multi-km, depending on module |
| Typical reach at 800G | Up to 2 m | Up to 7 m | Room-scale to multi-km, depending on module |
| Power per port | ~0 W (passive) | ~10 W at 800G | ~15 W at 800G |
| Cost per link | Lowest | Medium | Highest |
| Where used | Within rack | Within row, short inter-rack | Between racks, rows, buildings |

Sources: OIF CEI-224G [4], Credo HiWire [5]

Representative ranges, not hard limits. Optical reach and power vary by module family.

How cabling changes at each hop

GPU (inside server) → PCB traces (centimeters) → network adapter (inside server) → passive copper DAC (1-3 m) → top-of-rack switch → active copper or short-reach optical (3-30 m) → spine switch (same row) → single-mode optical (30 m to 2+ km) → spine switch (another row or building)
Each hop uses the cheapest medium that works at that distance. Copper is cheapest and lowest power, but it runs out of reach as distances grow.

Four terms come up repeatedly when describing how data moves, and they are easy to confuse:

  • Bus: A group of parallel wires inside a chip. Data moves across many wires at once over very short distances.
  • Lane: One independent serial data path between chips. A single pair of wires carrying one stream of bits. This is what a SerDes produces.
  • Link: One connection between two devices, made up of one or more lanes bundled together. A 400G link might use 4 lanes of 100G each.
  • Port: The physical socket on a device where a cable plugs in to form a link.

Total link speed = lane speed × number of lanes. A 400G link uses 8 lanes of 50G or 4 lanes of 100G. An 800G link uses 8 lanes of 100G. A 1.6T link will use 8 lanes of 200G.
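That arithmetic can be written down directly; `link_gbps` is just a named helper for lane speed times lane count, checked against the combinations listed above.

```python
# Link speed = lane speed x lane count, verified for the combinations
# named in the text.
def link_gbps(lane_gbps, lanes):
    return lane_gbps * lanes

assert link_gbps(50, 8) == 400    # 400G: 8 lanes of 50G
assert link_gbps(100, 4) == 400   # 400G: 4 lanes of 100G
assert link_gbps(100, 8) == 800   # 800G: 8 lanes of 100G
assert link_gbps(200, 8) == 1600  # 1.6T: 8 lanes of 200G
```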

Lane speed depends on the encoding scheme. Older links use NRZ (Non-Return-to-Zero), which switches between 2 voltage levels: high or low, 1 or 0. Newer links use PAM4, which uses 4 voltage levels, so each pulse carries 2 bits instead of 1. That doubles the data rate without the signal needing to switch any faster. The tradeoff: those 4 voltage levels are packed closer together, which makes the signal harder to read cleanly and demands better SerDes electronics. [3]
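One way to see the difference is as a bit-to-symbol mapping. A minimal sketch, using a straight binary mapping of bit pairs to levels; real PAM4 links use Gray coding (so adjacent levels differ by one bit) and forward error correction, both omitted here.

```python
# NRZ: one symbol (level 0 or 1) per bit.
def nrz_symbols(bits):
    return bits[:]

# PAM4: each pair of bits selects one of 4 levels (0-3).
def pam4_symbols(bits):
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [2 * bits[i] + bits[i + 1] for i in range(0, len(bits), 2)]

bits = [1, 0, 1, 1, 0, 1, 0, 0]   # the 8 bits 10110100
print(nrz_symbols(bits))   # 8 symbols: [1, 0, 1, 1, 0, 1, 0, 0]
print(pam4_symbols(bits))  # 4 symbols: [2, 3, 1, 0]
```

The same 8 bits take 8 NRZ symbols but only 4 PAM4 symbols, which is exactly the throughput doubling at a fixed symbol rate.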

NRZ vs PAM4 encoding

The same 8 bits (10110100) take 8 NRZ pulses but only 4 PAM4 pulses. PAM4 uses 4 voltage levels to carry 2 bits per pulse, doubling throughput at the same pulse rate.

SerDes lane speed by generation

  • 10G NRZ: IEEE 802.3ae (2002). Links: 10G, 40G
  • 25G NRZ: IEEE 802.3by (2016). Links: 25G, 100G
  • 50G PAM4: IEEE 802.3cd (2018). Links: 50G, 200G, 400G
  • 100G PAM4: IEEE 802.3ck (2022). Links: 100G, 400G, 800G
  • 200G PAM4: IEEE 802.3dj (2026, still in development). Links: 200G, 800G, 1.6T

Higher lane speeds mean fewer lanes per link and simpler cabling. A 1.6T link at 200G per lane needs 8 lanes; at 100G per lane it would need 16.

All of these lanes converge at network switches, the devices that connect servers to each other. The switch chip determines how many ports the switch has and how fast each port can run. Broadcom's Tomahawk 5, for example, is a 51.2 Tbps switch chip. That total bandwidth can be split into 64 ports of 800G or 128 ports of 400G. The previous Tomahawk 4 generation was 25.6 Tbps, half as much. [6]
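The port-count split is simple division; the 51.2 and 25.6 Tbps figures come from the text above, and `port_count` is just a named helper for that arithmetic.

```python
# How many ports a switch chip's total bandwidth supports at a given port speed.
def port_count(chip_tbps, port_gbps):
    return round(chip_tbps * 1000) // port_gbps  # Tbps -> Gbps, then divide

assert port_count(51.2, 800) == 64    # Tomahawk 5 as 64x 800G
assert port_count(51.2, 400) == 128   # ...or 128x 400G
assert port_count(25.6, 400) == 64    # Tomahawk 4: half the bandwidth
```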

How this maps to GPU clusters

As of early 2026, GPU clusters connect servers using 400G network ports. Each 400G port is built from 4 lanes of 100G PAM4, the same SerDes generation described above.

GPU clusters use either InfiniBand or RoCE (RDMA over Converged Ethernet) for inter-server communication. InfiniBand dominates in large training clusters because of its low latency and built-in congestion management. RoCE runs over standard Ethernet switches and is cheaper to deploy, making it common in inference clusters and among hyperscalers building on their existing Ethernet infrastructure. Both run over the same physical cables and SerDes, so the choice comes down to protocol and switch hardware. As of early 2026, the widely deployed InfiniBand generations are HDR at 200G per port and NDR at 400G per port, with XDR at 800G per port arriving with Vera Rubin systems.

| Cluster generation | Network speed per port | Adapter | Ports per server |
|--------------------|------------------------|---------|------------------|
| A100 (2020-2022) | 200G InfiniBand (HDR) | ConnectX-6 | 8 [7] |
| H100 (2022-2024) | 400G InfiniBand (NDR) | ConnectX-7 | 8 [8] |
| B200 (2024-2026) | 400G InfiniBand (NDR) | ConnectX-7, BlueField-3 | 8+ [9] |
| GB200 NVL72 (2025) | 400G InfiniBand (NDR) | ConnectX-7, BlueField-3 | 8+ per tray [10] |
| Vera Rubin NVL72 (H2 2026) | 800G InfiniBand (XDR) | ConnectX-9, BlueField-4 | 8 per tray (2 per GPU) [15] |

Sources: NVIDIA DGX A100 [7], DGX SuperPOD H100 [8], DGX BasePOD B200 [9], GB200 NVL72 [10], Vera Rubin NVL72 [15]

Where data center interconnects are going

As of early 2026, the fastest deployed links run at 100G per lane using PAM4, giving 400G and 800G ports.

Three changes are coming next: 200G SerDes, co-packaged optics (CPO), and linear-drive pluggable optics (LPO). All three aim to keep bandwidth rising without letting link power grow just as fast.

200G SerDes

200G SerDes doubles the per-lane rate from 100G to 200G, still using PAM4. An 8-lane link at 200G per lane gives 1.6T, which is the next major port speed after 800G. The IEEE is developing the standard (P802.3dj) and expects to finalize it around 2026. [1]

The challenge is physical. At these speeds, the signal weakens so quickly that passive copper cables may not work beyond 1 meter. Board designers have less room for error: shorter traces, better materials, and fewer connection points between the chip and the cable. [4]

Co-packaged optics

In most switches, optical transceivers plug into the front panel, and the electrical signal has to travel 15-30 cm of board trace to reach them. At higher speeds, that trace eats into the signal. Co-packaged optics (CPO) moves the optics onto the switch chip's package itself, converting electrical to light right next to the chip. [11][12]

Broadcom's Bailly platform does this for its Tomahawk 5 switch chip. Broadcom claims about 5.5 watts per 800G port with CPO versus about 14 watts with traditional pluggable transceivers, a significant power saving across a switch with 64 ports. [13]
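The per-switch saving implied by those per-port figures is straightforward arithmetic, using the ~5.5 W and ~14 W numbers quoted above; actual totals vary by configuration.

```python
# Back-of-envelope switch-level saving from the Bailly per-port figures.
ports = 64           # 51.2T switch as 64x 800G ports
pluggable_w = 14.0   # ~W per 800G pluggable transceiver port (claimed)
cpo_w = 5.5          # ~W per 800G CPO port (claimed)

saving_per_port = pluggable_w - cpo_w
total_saving = saving_per_port * ports
print(f"~{total_saving:.0f} W saved per switch")  # ~544 W
```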

The tradeoff: if the optics fail, you replace the entire switch assembly instead of just swapping a transceiver.

Linear-drive optics

Traditional optical transceivers contain a chip (a DSP) that cleans up the signal digitally before converting it to light. Linear-drive optics (LPO) removes that chip and passes the signal through with minimal processing. That lowers power and latency, but it only works if the electrical signal arriving at the transceiver is already clean enough.

The industry is still defining the standards for linear-drive at 200G per lane. As of early 2026, this is a work in progress rather than a deployed technology. [14]

|                     | 200G SerDes | Co-packaged optics | Linear-drive optics |
|---------------------|-------------|--------------------|---------------------|
| What changes | Lane speed doubles from 100G to 200G | Optics move onto the switch package | DSP removed from the transceiver |
| Power impact | Higher per-lane SerDes power | Lower optical interconnect power | Lower module power if the host channel is clean enough |
| Status (early 2026) | IEEE P802.3dj in development [1] | Broadcom Bailly platform announced [13] | OIF CEI-224G-Linear project launched [14] |

References

  1. IEEE, "P802.3dj Task Force: 200 Gb/s, 400 Gb/s, 800 Gb/s, and 1.6 Tb/s Ethernet" (accessed March 2026). https://ieee802.org/3/dj/index.html
  2. Ethernet Alliance, "2026 Ethernet Roadmap" (2026). https://ethernetalliance.org/roadmap/
  3. Synopsys, "NRZ to PAM-4: 400G Ethernet Evolution" (accessed March 2026). https://synopsys.com/articles/pam4-400g-ethernet.html
  4. OIF, "CEI-224G Framework" (accessed March 2026). https://www.oiforum.com/technical-work/hot-topics/common-electrical-i-o-cei-224g/
  5. Credo, "HiWire Active Electrical Cables" (accessed March 2026). https://credosemi.com/products/hiwire-aec/
  6. Broadcom, "Broadcom Ships Tomahawk 5, Industry's Highest Bandwidth Switch Chip" (2022). https://investors.broadcom.com/news-releases/news-release-details/broadcom-ships-tomahawk-5-industrys-highest-bandwidth-switch
  7. NVIDIA, "DGX A100 User Guide" (accessed March 2026). https://docs.nvidia.com/dgx/dgxa100-user-guide/introduction-to-dgxa100.html
  8. NVIDIA, "DGX SuperPOD H100 Reference Architecture" (accessed March 2026). https://docs.nvidia.com/dgx-superpod/reference-architecture-scalable-infrastructure-h100/latest/network-fabrics.html
  9. NVIDIA, "DGX BasePOD B200 Reference Architecture" (accessed March 2026). https://docs.nvidia.com/dgx-basepod/reference-architecture-infrastructure-foundation-enterprise-ai/latest/core-components.html
  10. NVIDIA, "DGX GB Rack Scale Systems User Guide" (accessed March 2026). https://docs.nvidia.com/dgx/dgxgb200-user-guide/hardware.html
  11. Broadcom, "Third-Generation Co-Packaged Optics (CPO) Technology" (2025). https://investors.broadcom.com/news-releases/news-release-details/broadcom-announces-third-generation-co-packaged-optics-cpo
  12. Corning, "Broadcom TH5 Bailly Co-Packaged Optics System" (accessed March 2026). https://www.corning.com/optical-communications/worldwide/en/home/the-signal-network-blog/corning-contributes-to-broadcom-th5-baily-cpo-system.html
  13. Broadcom, "TH5 51.2T Bailly CPO (Co-Packaged Optics)" (2023). https://docs.broadcom.com/doc/th5-51.2t-bailly-cpo
  14. OIF, "CEI-224G-Linear Project Launch" (2024). https://www.oiforum.com/oif-q2-technical-and-mae-committees-meeting-wraps-with-cei-224g-linear-project-launch-new-cmis-white-papers-and-requirements-for-energy-efficient-interfaces/
  15. NVIDIA, "Inside the NVIDIA Vera Rubin Platform: Six New Chips, One AI Supercomputer" (2026). https://developer.nvidia.com/blog/inside-the-nvidia-rubin-platform-six-new-chips-one-ai-supercomputer/

Frequently Asked Questions

What is SerDes and why does it matter for data centers?

SerDes (serializer/deserializer) converts parallel data inside a chip to a serial signal for transmission over a wire or fiber. Every link in a data center, from GPU to GPU, server to switch, and switch to switch, uses SerDes at both ends. The SerDes lane speed, multiplied by the number of lanes, sets the total link speed.

What is the difference between copper and optical cables in a data center?

Copper cables carry electrical signals and work for short distances: up to about 3 meters at 400G with passive DACs, or 5-7 meters with active electrical cables. Optical cables carry light and reach from 100 meters to 10+ kilometers. Copper is cheaper and uses less power for short links. Optical is required for anything beyond a few meters at modern speeds.

What is PAM4 and why did data centers switch from NRZ?

PAM4 (Pulse Amplitude Modulation, 4 levels) encodes 2 bits per symbol using 4 voltage levels, compared to NRZ (Non-Return-to-Zero) which encodes 1 bit per symbol using 2 levels. PAM4 doubles the data rate at the same signaling frequency. Data centers adopted PAM4 at 50G per lane and above because NRZ at those speeds requires impractically high bandwidth from the electrical channel.

What networking speeds do A100, H100, and B200 GPU clusters use?

A100 clusters (2020-2022) use 200G InfiniBand or Ethernet per port with ConnectX-6 adapters. H100 clusters (2022-2024) use 400G InfiniBand (NDR) with ConnectX-7. B200 clusters (2024-2026) also use 400G InfiniBand with ConnectX-7 and BlueField-3 adapters. GB200 NVL72 racks use 800G optical modules for external connections.
