Artificial Intelligence is driving unprecedented demand for compute, but the real bottleneck is no longer just GPUs; it is the power and bandwidth required to move data. Over the past five years, compute performance (FLOPS) has grown orders of magnitude faster than interconnect bandwidth, creating an “I/O wall” that threatens AI scalability. This keynote explores how interconnect technology must evolve: from copper limitations to optical highways, from traditional pluggables to co-packaged optics, and from 112G to 448G SerDes and beyond. Drawing on recent demonstrations, industry trends, and Ciena’s innovations, we’ll discuss how to build energy-efficient fabrics that scale AI sustainably, turning today’s data centers into the AI factories of the future.