Nvidia's Silicon Photonics Switches Enhance Power Efficiency in AI Data Centers
In the era of AI, high-speed optical networking is crucial. Co-packaged optics eliminate the need for pluggable optics, streamlining network design and boosting resiliency.

Figure 1. Nvidia's Silicon Photonics Switches Boost Power Efficiency in AI Data Centers.
At Nvidia's annual GTC event, one major announcement stood out: the introduction of new silicon photonics network switches. These switches integrate the network optics directly into the switch package using a technique called co-packaged optics (CPO), replacing traditional external pluggable transceivers. While Nvidia hinted at cost savings, the key benefits are a significant reduction in power consumption and improved network resiliency. Figure 1 illustrates this power-efficiency gain.
Pluggable transceivers have long been the standard for connecting switches with optical cables, modulating light waves to transmit data more efficiently than copper. By integrating this capability into the switch itself, Nvidia eliminates connection points, improving performance and cutting power usage.
To illustrate, a single Nvidia photonics switch replaces 72 traditional transceivers and eliminates 432 lasers per switch. In large-scale AI data centers, which often contain thousands of switches, this could amount to hundreds of thousands, or even millions, of transceivers. Nvidia claims that CPO delivers a 3.5x improvement in power efficiency, 10x greater network resiliency, and 1.3x faster deployment. As a former network engineer who has replaced failed optics, I find these numbers plausible; if anything, the 1.3x deployment figure may be conservative, given how delicate transceivers are and how time-consuming installing them at scale can be.
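To put those per-switch figures in perspective, a quick back-of-the-envelope calculation shows how the savings compound across a fleet. This sketch uses only the numbers cited above (72 transceivers and 432 lasers per switch); the 5,000-switch fleet size is a hypothetical assumption for illustration.

```python
# Per-switch component savings cited for co-packaged optics (CPO).
TRANSCEIVERS_PER_SWITCH = 72   # pluggable transceivers replaced per CPO switch
LASERS_PER_SWITCH = 432        # discrete lasers eliminated per CPO switch

def cpo_savings(num_switches: int) -> dict:
    """Total components eliminated across a hypothetical fleet of CPO switches."""
    return {
        "transceivers_removed": num_switches * TRANSCEIVERS_PER_SWITCH,
        "lasers_removed": num_switches * LASERS_PER_SWITCH,
    }

# Hypothetical AI data center with 5,000 switches:
print(cpo_savings(5_000))
# {'transceivers_removed': 360000, 'lasers_removed': 2160000}
```

Even at this modest assumed scale, the fleet sheds hundreds of thousands of transceivers and over two million lasers, consistent with the article's estimate.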
Silicon photonics empowers Nvidia to build networks designed for AI, capable of supporting more users, increasing token counts across data centers, and accelerating time-to-market. While AI infrastructure has typically focused on servers and GPUs, the network has become an integral part of computing systems. As AI transforms data centers into what Nvidia calls “AI factories,” massive-scale connectivity across thousands of GPUs makes high-speed optical networking more crucial than ever.
Nvidia's photonics family of switches includes the Spectrum-X and Quantum-X series, each tailored to specific networking needs. The Spectrum-X series is designed for AI data centers using Ethernet, offering far greater bandwidth than traditional Ethernet setups. Its configurations include 128 ports of 800 Gbps or 512 ports of 200 Gbps, delivering a total bandwidth of 100 Tbps. For larger-scale deployments, Spectrum-X can scale to 512 ports of 800 Gbps or 2,048 ports of 200 Gbps, achieving a massive 400 Tbps. These Ethernet switches are expected to hit the market in 2026.
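The quoted totals follow directly from ports times per-port speed; a short sanity check makes the arithmetic explicit (the marketing figures round 102.4 Tbps down to 100 Tbps and 409.6 Tbps down to 400 Tbps).

```python
def total_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate switch bandwidth in Tbps (1 Tbps = 1,000 Gbps)."""
    return ports * gbps_per_port / 1_000

print(total_tbps(128, 800))    # 102.4, quoted as "100 Tbps"
print(total_tbps(512, 200))    # 102.4
print(total_tbps(512, 800))    # 409.6, quoted as "400 Tbps"
print(total_tbps(2_048, 200))  # 409.6
```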
The Quantum-X series, on the other hand, is built for InfiniBand, a high-speed networking technology widely used for AI model training and inference. With 144 ports of 800 Gbps and 200 Gbps SerDes for enhanced performance, Quantum-X also features liquid cooling for greater energy efficiency and stability under heavy workloads. It’s twice as fast and five times more scalable than previous InfiniBand switches, making it perfect for AI tasks that require high-speed communication between GPUs. Quantum-X switches will be available later this year.
Nvidia’s innovative switch design was developed in collaboration with multiple vendors, who contributed to the creation of lasers, packaging, and other components for the silicon photonics system. This effort also includes hundreds of patents, with Nvidia planning to license these innovations to partners and customers to accelerate scaling.
Key partners in Nvidia’s ecosystem include TSMC, which provides advanced chip fabrication and 3D chip stacking for integrating silicon photonics into Nvidia hardware, as well as Coherent, Eoptolink, Fabrinet, and Innolight, which are involved in transceiver development and manufacturing. Other collaborators include Browave, Corning Incorporated, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries, and TFC Communication.
AI has drastically changed how data centers are designed. During his keynote at GTC, CEO Jensen Huang emphasized that the data center is becoming the "new unit of compute," meaning that entire data centers must function as a single massive server. This shift has moved compute from being CPU-centric to GPU-centric. Consequently, the network needs to evolve to keep data flowing quickly to the GPUs, as any delays in the network waste valuable resources. The new co-packaged switches eliminate external components that previously added network overhead, an overhead that was a minor concern before AI but becomes critical as AI workloads demand faster, more efficient data movement.
Source: NETWORK WORLD
Cite this article:
Priyadharshini S (2025), Nvidia's Silicon Photonics Switches Enhance Power Efficiency in AI Data Centers, AnaTechMaz, p. 118.