High-Performance Computing

Overview

HPC organizations in government, research, and commercial sectors face increasing challenges in extending high-performance workflows beyond a single data center to support distributed use cases.

Whether the need is to bridge multiple data centers, to connect on-premises data centers with one or more cloud resources, or to manage multi-vendor storage silos within a single data center, each of these use cases increases latency for users and complexity for IT staff. Add the requirement to tie together remote data sources and teams, and the problem gets worse still.

High-performance workloads need high-performance infrastructure. But the data management point solutions typically used to bridge these silos were never designed to scale out to HPC performance levels. Instead of bridging the gaps, they become a bottleneck between them.

This complexity adds friction to user workflows and stretches IT resources and budgets in multiple areas:

01. High-Performance Data Access

Data gets trapped in expensive storage silos because of the complexity and disruption involved in migrating datasets to lower-cost storage. This adds administrative overhead and lowers storage utilization efficiency on the most expensive resources in the data center.

02. Copy Sprawl

The proliferation of copies of the same datasets across locations and storage silos adds tremendous waste and expense. Infrastructure costs climb, and keeping track of the different copies places a heavy burden on IT.

03. Enabling Seamless Collaboration

High-volume datasets must be accessible to users and applications that are not local to the original data source. Increasingly, the tools best suited for processing and analyzing raw data are spread across workstations, data centers, and the cloud. When copies or manual processes are needed to bridge these gaps, costs and complexity rise dramatically.

Hammerspace Solution: No Compromises

Hammerspace software is designed from the ground up to scale up and out to accommodate even extreme workload requirements, enabling customers to parallelize performance across whatever vendor storage, network, or cloud resources they prefer.

At its core is a high-performance Parallel Global File System that can span on-premises and cloud instances to provide a cross-platform global namespace to all users everywhere. It is built on a scale-out architecture that can saturate any network, storage type, or interconnect.

With Hammerspace, all users in all locations see the same file system metadata, even for environments spanning multiple on-prem and cloud-based storage and compute environments. And with metadata-driven objective-based policies, data management is automated behind the scenes across silos and locations without interrupting users or applications. Most importantly, it can now keep up with your high-performance workloads.
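
As a rough illustration, such a policy can be expressed declaratively against the global namespace. The sketch below is hypothetical: the hs command, objective names, and expression syntax are illustrative stand-ins rather than verified Hammerspace CLI syntax, but they show the shape of a metadata-driven placement objective:

    # Hypothetical sketch -- the command, objective names, and expression
    # syntax below are illustrative, not verified Hammerspace CLI syntax.

    # Keep actively modified project data on a fast NVMe tier.
    hs objective add 'place-on("nvme-tier") when modified within 30 days' /mnt/global/projects

    # Let files untouched for 90 days drain to low-cost object storage.
    hs objective add 'place-on("s3-archive") when not accessed for 90 days' /mnt/global/projects

Because objectives travel with the file metadata, placement is enforced continuously in the background; applications keep using the same paths while files move between tiers and locations.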

Hammerspace is designed to help research organizations and other HPC environments create, process, and manage even extreme volumes of HPC data, and to enable collaboration across a decentralized environment.

Hammerspace is a software-defined solution that can run on bare metal, in VMs, and in the cloud. Hammerspace clusters can scale out to deliver extreme performance within the data center and can span as many as 16 locations concurrently, including on-premises sites and multiple cloud instances.

The Hammerspace Difference

High-Performance Data Creation

Multi-protocol file access to global data resources, with no compromises on performance.

Efficient Data Collaboration

Users everywhere get secure, performant access to the same data, wherever it is. No more shuffling copies of entire datasets between locations.

End-to-End Data Management

Global metadata-driven automation enables file-granular data orchestration and global data services across all storage types and locations, transparently to users.

Hammerspace Solutions For

Computing

Hammerspace is built on our high-performance, parallel, global file system. This ensures high-performance data capture from large compute clusters and parallel access for tens, hundreds, or thousands of users.

Processing

Hammerspace makes it fast and efficient to capture, process, analyze, visualize, and share data in a single environment. It can bridge multiple silos, multiple data centers, and even multiple cloud regions and vendors, simplifying work with decentralized resources. Applications, users, and data services at any location work directly with all resources as though they were local.

Archiving

An NFS-mounted archive file system provides perhaps the most familiar environment for interacting with archived data in an HPC workflow. Mounted file systems appear as local directories to users and applications and are accessible via standard Linux commands such as cd, mkdir, and chmod. This approach is extremely convenient and requires very little workflow modification to integrate Hammerspace as the archive file system.
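
As a minimal sketch, assuming a Hammerspace share exported at hammerspace.example.com:/archive (the hostname and export path here are illustrative), integration is an ordinary NFS mount followed by standard commands:

    # Mount the archive share over NFS (hostname and export path are
    # illustrative; the NFS version depends on the environment).
    sudo mkdir -p /mnt/archive
    sudo mount -t nfs -o vers=4.2 hammerspace.example.com:/archive /mnt/archive

    # Archived data now behaves like any local directory.
    cd /mnt/archive
    mkdir completed-runs
    chmod 750 completed-runs
    cp /scratch/run042/results.dat completed-runs/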