Cisco Supercharges Silicon One Chip to Boost AI Data Center Connectivity

Priyadharshini S | October 11, 2025, 1:45 PM | Technology

Targeted at hyperscalers and large data center operators, Cisco’s 8223 routing system features the latest iteration of its Silicon One portfolio: the P200 programmable, deep-buffer chip. The system supports OSFP (Octal Small Form-Factor Pluggable) and QSFP-DD (Quad Small Form-Factor Pluggable Double Density) optical modules, enabling connectivity for geographically distributed AI clusters.

Figure 1. Cisco Powers Up Silicon One Chip for Next-Gen AI Data Center Networks.

“Power limitations and resiliency needs are driving hyperscalers, neoclouds, and enterprises to adopt distributed AI clusters spanning campus and metro regions,” wrote Rakesh Chopra, Cisco Fellow and SVP for Silicon One, in a blog post. “The Cisco 8223 is designed for large-scale, disaggregated fabrics both within and between data centers, giving customers the ability to scale AI infrastructure with unmatched efficiency and control.” The system is shown in Figure 1.

Cisco’s Silicon One processors are engineered for high network bandwidth and performance, with the flexibility to be customized for either routing or switching using a single chipset—eliminating the need for separate silicon architectures for each network function. At the heart of the Silicon One system is robust support for advanced Ethernet capabilities, including enhanced flow control and intelligent congestion detection and avoidance.

According to Chopra, a single P200-based Cisco 8223 system can handle the traffic previously requiring six 25.6 Tbps fixed systems or a four-slot modular system. Its 3RU, 51.2 Tbps configuration consumes roughly 65% less power than previous generations.

The 8223 supports 64 ports of 800G coherent optics and can process over 20 billion packets per second. Its deep buffering absorbs the massive traffic surges generated by AI training workloads. The P200 chip gives the router a full 512-port radix, allowing fabrics to scale to 13 petabits per second in a two-layer topology, or up to 3 exabits per second in a three-layer topology.
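Those scaling figures line up with standard fat-tree (folded-Clos) port-count formulas. The sketch below is a back-of-the-envelope check, not Cisco's actual fabric design: it assumes 512 ports per chip at 100 Gbps each (512 × 100G = 51.2 Tbps), with the textbook results that a k-radix fat-tree supports k²/2 end ports in two tiers and k³/4 in three.

```python
# Back-of-the-envelope check of the 13 Pbps / 3 Ebps scaling claims.
# Assumes 512-radix chips with 100 Gbps ports; formulas are the standard
# non-blocking fat-tree port counts, not Cisco's exact topology.

RADIX = 512       # ports per chip
PORT_GBPS = 100   # Gbps per port (512 x 100G = 51.2 Tbps per chip)

def fat_tree_capacity_gbps(radix: int, tiers: int, port_gbps: int) -> int:
    """End-port capacity (Gbps) of a k-radix non-blocking fat-tree.
    2 tiers -> k^2/2 end ports; 3 tiers -> k^3/4 end ports."""
    if tiers == 2:
        ports = radix ** 2 // 2
    elif tiers == 3:
        ports = radix ** 3 // 4
    else:
        raise ValueError("sketch covers 2- or 3-tier topologies only")
    return ports * port_gbps

two_tier = fat_tree_capacity_gbps(RADIX, 2, PORT_GBPS)
three_tier = fat_tree_capacity_gbps(RADIX, 3, PORT_GBPS)
print(f"2-tier: {two_tier / 1e6:.1f} Pbps")   # ~13.1 Pbps, matching the 13-petabit figure
print(f"3-tier: {three_tier / 1e9:.2f} Ebps") # ~3.36 Ebps, matching the 3-exabit figure
```

Running it gives roughly 13.1 Pbps for two tiers and 3.36 Ebps for three, consistent with the figures Chopra cites.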

While some argue deep buffers introduce jitter due to fill-and-drain cycles, Chopra explained to Network World that the real issue is poor congestion management and load balancing. “AI workloads are deterministic and predictable; flows can be proactively placed across the network to avoid congestion,” he said.

The 8223’s deep-buffer design temporarily stores packets during congestion or traffic bursts, a critical capability for AI networks with high-volume inter-GPU communication. Gurudatt Shenoy, VP of Cisco Provider Connectivity, noted that combined with its high-radix architecture, the 8223 enables more devices to connect directly, reducing latency, saving rack space, and lowering power consumption. The result is a flatter, more efficient network topology optimized for high-bandwidth, low-latency AI workloads.

For software, the 8223 initially supports the Linux Foundation’s SONiC (Software for Open Networking in the Cloud) and Meta’s FBOSS, with Cisco’s IOS XR to follow. SONiC decouples the network operating system from the underlying hardware, running across hundreds of switch models and ASICs from multiple vendors while supporting features such as BGP, RDMA, QoS, and Ethernet/IP. Its Switch Abstraction Interface (SAI) provides a vendor-independent API for programming forwarding elements, whether ASICs, NPUs, or software switches.
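The value of a vendor-independent abstraction layer like SAI can be illustrated with a toy example. Note this is purely a conceptual sketch in Python, not the real SAI (which is a C header-file API); every class and method name below is hypothetical.

```python
# Toy illustration of a vendor-independent switch-abstraction layer.
# All names here (SwitchBackend, SiliconOneBackend, etc.) are hypothetical;
# the real SAI defines a C API, not Python classes.

from abc import ABC, abstractmethod

class SwitchBackend(ABC):
    """Vendor-neutral contract the network OS programs against."""
    @abstractmethod
    def create_route(self, prefix: str, next_hop: str) -> str: ...

class SiliconOneBackend(SwitchBackend):
    """Hypothetical driver targeting one vendor's ASIC."""
    def create_route(self, prefix, next_hop):
        return f"[asic] program route {prefix} -> {next_hop}"

class SoftwareSwitchBackend(SwitchBackend):
    """Hypothetical software-switch target honoring the same contract."""
    def create_route(self, prefix, next_hop):
        return f"[soft-switch] install route {prefix} -> {next_hop}"

def install_bgp_route(backend: SwitchBackend, prefix: str, next_hop: str) -> str:
    # The NOS-side code is identical no matter which forwarding element
    # sits underneath -- that is the point of the abstraction layer.
    return backend.create_route(prefix, next_hop)

print(install_bgp_route(SiliconOneBackend(), "10.0.0.0/24", "192.0.2.1"))
print(install_bgp_route(SoftwareSwitchBackend(), "10.0.0.0/24", "192.0.2.1"))
```

The calling code never changes when the hardware underneath does, which is what lets SONiC run the same control-plane stack across ASICs from multiple vendors.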

SONiC is emerging as a notable alternative to traditional, less flexible network operating systems. Its modularity, programmability, and cloud-oriented architecture make it an appealing option for enterprises and hyperscalers as cloud networking demands continue to grow.

Chopra emphasized, “Currently, the primary users of this level of capacity are hyperscalers, and the 8000 series is what we sell to them. As larger enterprises and others adopt AI, support for additional operating systems will follow.”

Expanding the Silicon One Portfolio

The 8223 marks the latest upgrade in Cisco’s Silicon One family. In June, Cisco introduced two new programmable Silicon One-based Smart Switches: the C9350 Fixed Access Smart Switches and the C9610 Modular Core, both designed for AI workloads including generative AI, agentic AI, automation, and AR/VR. Earlier this year, Cisco also unveiled a family of data center switches leveraging Silicon One chips integrated with AMD programmable data processing units (DPUs) to offload complex data processing and free up switch resources for AI and other heavy workloads.

Cisco now offers 14 variants of Silicon One ASICs, spanning applications from leaf/top-of-rack campus switching to high-throughput AI-backbone networks. The portfolio underpins a broad range of core switches and routers, including the Nexus Series 8000 for telco and hyperscaler environments, the Catalyst 9500X/9600X enterprise campus switches, and the 8100 series for branch and edge deployments.

Source: NETWORK WORLD

Cite this article:

Priyadharshini S (2025), Cisco Supercharges Silicon One Chip to Boost AI Data Center Connectivity, AnaTechMaz, pp. 169
