Over the last three decades, extreme-scale high-performance computing (HPC) systems have evolved from highly specialized hardware running custom software environments to platforms composed almost entirely of commodity components. While some aspects of large-scale HPC systems continue to be enhanced to meet performance and scalability demands, it is the interconnect technology that has most distinguished HPC systems from other platforms. The emergence of cloud computing, hyperscalers, and the demands of AI/ML workloads have led to the deployment of massive data centers containing systems much larger than the fastest HPC machines. Until recently, these systems were easily differentiated from HPC machines by their use of commodity Ethernet networks. However, these worlds are now converging in several important ways. This presentation will describe how interconnect hardware and software for HPC systems have been affected by this convergence and offer a perspective on the future challenges that will need to be addressed.