
Converge HPC, HPDA, and AI Data Architectures in Different Data Centers and Clouds with a Single, High-Performance, Parallel Global File System

Legacy HPC architectures were designed for a single, large compute cluster, managed by a single job scheduler, with all data stored locally, connected by a dedicated high-performance network. Data had a 1:1 relationship with the compute environment it was attached to, effectively creating data silos with such significant data gravity that it was difficult to move data to additional compute, application, and user environments.

Today, research institutes and enterprises want to use data sets for multiple applications and in different workloads. Now that a single data set is being used with many applications, environments, and locations, new architectural strategies are emerging.

Join this webinar on Thursday, Nov. 2 at 10 a.m. PT / 1 p.m. ET for a discussion with Gary Grider, Leader of the HPC Division at Los Alamos National Laboratory; Senior Analyst Mark Nossokoff of Hyperion Research; and David Flynn, CEO of Hammerspace.

Participants will learn:

  • How to unify data created in different clusters, locations, and clouds into a single namespace, enabling automated data placement local to the applications and compute that need it for processing and AI
  • How to unify home directories, scratch, analytics, training, and distribution into a single, high-performance file system
  • How the latest advancements in parallel file systems are paving the way for extreme performance in a standards-based deployment
  • How to leverage automated data orchestration to overcome previously held notions about data gravity, transparently staging data to the correct compute resource when needed to create local proximity to AI models and enabling seamless collaboration across data silos and locations
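The orchestration idea in the last point can be made concrete with a small sketch. This is a hypothetical illustration only, not Hammerspace's actual API: the `FileEntry` type, the `stage_for_job` function, and the site names are all invented. The key property it shows is that the logical path in the global namespace never changes; only the set of physical copies does, so applications at any site open the same file.

```python
from dataclasses import dataclass, field

@dataclass
class FileEntry:
    """One logical file in a global namespace, possibly with many physical copies."""
    path: str
    locations: set = field(default_factory=set)  # sites currently holding a copy

def stage_for_job(entry: FileEntry, job_site: str) -> str:
    """Ensure a copy of the file exists at the site where a job will run.

    The namespace (the logical path) is unchanged; only the set of
    physical locations grows, so users everywhere see the same file.
    """
    if job_site not in entry.locations:
        # A real orchestrator would trigger a background transfer here;
        # this sketch just records the new replica.
        entry.locations.add(job_site)
    return entry.path  # same logical path, now servable locally

# A file created on-prem is staged next to a cloud compute job.
entry = FileEntry("/project/train/shard-001", {"on-prem-a"})
path = stage_for_job(entry, "cloud-west")
```

Because staging happens behind the stable logical path, the "data gravity" problem becomes a placement decision rather than an application change.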

Hammerspace Solution: No Compromises

Hammerspace software is designed from the ground up to scale up and out to accommodate even extreme workload requirements, enabling customers to parallelize performance across any vendor storage, network, or cloud resource they prefer. 

At the core is its high-performance Parallel Global File System, which can span on-premises and cloud instances to provide a cross-platform global namespace to all users everywhere. It is built on a scale-out architecture that can saturate any network, storage type, or interconnect.

With Hammerspace, all users in all locations see the same file system metadata, even for environments spanning multiple on-prem and cloud-based storage and compute environments. And with metadata-driven objective-based policies, data management is automated behind the scenes across silos and locations without interrupting users or applications. Most importantly, it can now keep up with your high-performance workloads.
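To illustrate what "metadata-driven objective-based policies" can mean in general terms, here is a minimal sketch. Everything in it is an assumption for illustration (the rule fields, tier names, and evaluation function are invented, not Hammerspace's policy language): declarative objectives are matched against per-file metadata to decide where data should live, with no application involvement.

```python
# Hypothetical sketch of metadata-driven, objective-based placement.
# Field names, tier names, and rules are invented for illustration.

def placement_targets(metadata: dict, objectives: list) -> set:
    """Evaluate declarative objectives against a file's metadata and
    return the set of storage targets the file should be placed on."""
    targets = set()
    for obj in objectives:
        if obj["match"](metadata):  # predicate over file metadata
            targets.update(obj["place_on"])
    return targets

# Example objectives: keep hot data on fast storage, age cold data out.
objectives = [
    {"match": lambda m: m.get("hot", False), "place_on": ["nvme-tier"]},
    {"match": lambda m: m.get("age_days", 0) > 90, "place_on": ["cloud-archive"]},
]

hot_file = {"hot": True, "age_days": 3}
cold_file = {"hot": False, "age_days": 200}
```

Because placement is derived from metadata rather than hard-coded paths, policies like these can run continuously in the background while users keep working against the same namespace.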

Hammerspace is designed to help research organizations and other HPC environments create, process, and manage even extreme volumes of HPC data, and to enable collaboration across a decentralized environment.

Hammerspace is a software-defined solution that can run on bare metal, in VMs, and in the cloud. Hammerspace clusters can scale out to deliver extreme performance within the data center and can span as many as 16 locations concurrently, including on-prem sites and multiple cloud instances.