Session Overview
Running AI workloads on-premises requires GPU-based computing and high-speed networking for clustering, demands that traditional data center infrastructure cannot easily meet. Enterprises therefore need to transition to a new infrastructure environment. This session will present strategies for building AI infrastructure that starts small and scales to large deployments, drawing on recent global case studies. It will also explore how enterprises can replace InfiniBand with the more familiar Ethernet for GPU clustering.
Key Topics:
Infrastructure requirements for AI workloads
Diverse GPU computing needs and scalability
GPU server clustering strategies
Performance comparisons and case studies of Ethernet-based clustering
Scaling AI Infrastructure: From Getting Started to Cluster Architecture
About Choi Soo-young | Executive Director, Cisco
Choi Soo-young began his career at Cisco as a systems engineer, supporting enterprise customers with networking and collaboration solutions. He later specialized in computing and data center infrastructure sales and now leads the company's Cloud & AI Infrastructure division.