Redefining Data Infrastructure: Optical Circuit Switches Could Transform AI Data Centers

The surge in demand for large-scale AI training is straining today’s cloud infrastructure, pushing electrical packet switches (EPS) toward their performance and power limits. As GPUs scale into massive clusters to support ever-growing large language models, the need for faster, more efficient data transport is becoming critical. Optical Circuit Switches (OCS) are emerging as a powerful alternative, offering high bandwidth over long distances with far lower energy consumption.

Unlike EPS, even those integrated with co-packaged optics, OCS relies on all-optical connections to link GPUs through switched ports and optical transceivers. This enables GPU clusters to operate as a unified, high-performance computing fabric while delivering significant efficiency gains.

Applied Ventures recently co-led a Series A funding round for Salience Labs, a startup pioneering OCS solutions based on Semiconductor Optical Amplifier (SOA) technology. Their Photonic Integrated Circuits (PICs) are available in two configurations: a high-radix switch designed for HPC workloads and a lower-radix version optimized for AI data centers. This flexibility allows hyperscalers, GPU makers, and even financial trading firms to balance cost, performance, and scalability.

The urgency of these innovations is underscored by energy trends. The U.S. Energy Information Administration projects that data centers will consume 6.6% of U.S. electricity by 2028, more than double their share in 2024. Networking equipment, including switches, transceivers, and interconnects, represents a growing portion of this footprint.

To address this, companies are rethinking chip and system design:

  • Google’s TPU aims for a 10× cost-efficiency advantage over GPUs by tailoring silicon to specific AI tasks.
  • Lumentum projects that without optical efficiency improvements, training GPT-5 could require 122 MW, nearly six times more than GPT-4. Energy-efficient optical interfaces combined with OCS could cut that by 79%, aligning power use with GPT-4 levels.
  • Arista Networks estimates energy-efficient optical modules could save up to 20W per 1,600Gbps module.
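The Lumentum projection above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below uses only the numbers quoted in the article (122 MW, a 79% reduction, and "nearly six times" GPT-4's power), so it is illustrative, not vendor-verified:

```python
# Back-of-the-envelope check of the power figures cited above.
# Inputs are the article's quoted projections, not measured data.

GPT5_TRAINING_MW = 122.0   # projected GPT-5 training power without optical gains
OCS_SAVINGS = 0.79         # projected cut from efficient optical interfaces + OCS
GPT4_RATIO = 6.0           # "nearly six times more than GPT-4"

gpt4_level_mw = GPT5_TRAINING_MW / GPT4_RATIO        # implied GPT-4-era level
with_ocs_mw = GPT5_TRAINING_MW * (1 - OCS_SAVINGS)   # GPT-5 with OCS savings

print(f"Implied GPT-4-era training power: ~{gpt4_level_mw:.1f} MW")
print(f"GPT-5 training power with OCS:    ~{with_ocs_mw:.1f} MW")
```

The two results land in the same ballpark (roughly 20 MW versus 26 MW), which is consistent with the article's claim that OCS-based savings would bring GPT-5's power draw back in line with GPT-4 levels.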

By combining scalability with low-latency, long-reach connectivity, OCS technology could reshape how tens or hundreds of GPUs interconnect, enabling them to act as one massive supercomputer while containing the energy surge.

Conclusion:

Optical Circuit Switches are more than an incremental upgrade: they represent a fundamental shift toward sustainable high-performance computing. With high bandwidth, low latency, and substantial energy savings, OCS is poised to anchor next-generation AI data centers, allowing performance to scale without an unsustainable power cost.

(This article has been adapted and modified from content on Applied Materials.)
