Legacy HPC architectures were designed around a single, large compute cluster managed by a single job scheduler, with all data stored locally and connected by a dedicated high-performance network. Each data set had a 1:1 relationship with the compute environment it was attached to, effectively creating data silos with such significant data gravity that moving data to additional compute, application, and user environments was difficult.
Today, research institutes and enterprises want to use data sets across multiple applications and workloads. Now that a single data set serves many applications, environments, and locations, new architectural strategies are emerging.
Join this webinar on Thursday, Nov. 2 at 10 a.m. PT / 1 p.m. ET for a discussion with Gary Grider, Leader of the HPC Division at Los Alamos National Laboratory; Senior Analyst Mark Nossokoff of Hyperion Research; and David Flynn, CEO of Hammerspace.
Participants will learn:
- How to unify data created in different clusters, locations, and clouds into a single namespace, enabling automated data placement locally to applications and compute for processing and AI.
- How to unify home directories, scratch, analytics, training, and distribution into a single, high-performance file system.
- How the latest advancements in parallel file systems are paving the way for extreme performance in a standards-based deployment.
- How to leverage automated data orchestration to overcome long-held assumptions about data gravity, transparently staging data to the correct compute resource when needed to create local proximity to AI models and enable seamless collaboration across data silos and locations.