Manage data, not infrastructure.
It’s the only way to scale.

Hammerspace separates the control plane (metadata) from the data plane (data) so that data can be managed independently of the infrastructure. Because metadata is managed separately from the data itself, file data can appear virtually anywhere without being copied. Meanwhile, applications access data directly from storage, so there is no impact on performance.
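
As a rough sketch of the idea (not Hammerspace code), the example below separates the two planes: a metadata service answers "where does this file live?" while the actual bytes are read directly from storage. All class and function names here are illustrative assumptions.

```python
# Illustrative sketch: the control plane (metadata) resolves a path to storage
# locations; the data plane I/O then goes straight to storage, with no proxy
# sitting in the read/write path.

from dataclasses import dataclass

@dataclass
class DataInstance:
    storage_system: str   # e.g. an NFS export, SMB share, or S3 bucket
    location: str         # where the bytes actually live

class MetadataService:
    """Control plane: knows about every file, holds no file data."""
    def __init__(self):
        self.catalog: dict[str, list[DataInstance]] = {}

    def resolve(self, path: str) -> list[DataInstance]:
        # Answer "where is this file?" without touching the data itself.
        return self.catalog.get(path, [])

def read_from_storage(instance: DataInstance) -> bytes:
    # Placeholder for a direct NFS/SMB/S3 read against instance.location.
    return b""

def read_file(metadata: MetadataService, path: str) -> bytes:
    instances = metadata.resolve(path)   # control-plane lookup
    nearest = instances[0]               # instance selection policy omitted
    return read_from_storage(nearest)    # data-plane I/O goes directly to storage
```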

Data virtualization is key to overcoming the challenge of storage silos, allowing data to be accessed anywhere across the hybrid multi-cloud, on any storage.

The Hammerspace Technology

To deliver Data-as-a-Service, Hammerspace ties together data virtualization, cloud data management, and metadata services into a single platform. This unified approach makes file data cloud-native, overcoming the challenges of a storage-centric approach to managing data across the hybrid multi-cloud.

Universal Global Namespace: Presents data at all sites as if it were local, without copying, using an active-active file system namespace that runs on any storage system or cloud.

Scale-Out Data Services: The data workhorse that moves live data instances, supporting NFS, SMB, and S3 storage protocols, WAN optimization, and global deduplication.

Metadata-Driven Orchestration: A metadata management service, driven by Hammerscript, that continuously updates metadata across the infrastructure and supports user-defined metadata.

Autonomic Data Management: Continuously optimizes data and infrastructure, making intelligent decisions to drive automation.

Universal Global Namespace

The universal global namespace is a single source of truth for data that stretches across your entire infrastructure. It virtualizes data to present a unified view to application workloads across mixed storage resources, making it fast and easy to access data across sites. Data is transferred on demand when needed, or by policy if desired. Because each site runs a locally managed namespace, data and metadata performance is maintained without compromise even as data is made available across distance.
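
The sketch below illustrates the on-demand and policy-driven movement described above. The policy fields (`keep_local_after_access`, `pin_to_sites`) are hypothetical stand-ins, not Hammerspace's actual policy language.

```python
# Illustrative sketch: a file is visible in the namespace at every site, but
# bytes only move to a site on first access or when a placement policy asks.

from dataclasses import dataclass, field

@dataclass
class Placement:
    sites_with_data: set[str] = field(default_factory=set)

@dataclass
class PlacementPolicy:
    keep_local_after_access: bool = True                   # pull a copy to the accessing site
    pin_to_sites: set[str] = field(default_factory=set)    # e.g. {"on-prem", "aws-us-east-1"}

def open_file(path: str, site: str, placement: Placement, policy: PlacementPolicy) -> str:
    if site in placement.sites_with_data:
        return f"read {path} from local storage at {site}"
    # Data is visible here via the namespace but not yet present: fetch on demand.
    if policy.keep_local_after_access or site in policy.pin_to_sites:
        placement.sites_with_data.add(site)
        return f"fetched {path} to {site} on demand, then read locally"
    return f"read {path} remotely from {next(iter(placement.sites_with_data))}"
```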

Because metadata is managed separately from data, file data can be made to appear anywhere without copying it. Across sites, data is replicated asynchronously with multi-site collision detection, so no data is ever lost.
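
For intuition, one common way to detect collisions between sites is a per-file version vector, sketched below. This is a generic technique shown for illustration, not a description of Hammerspace internals.

```python
# Illustrative sketch: compare per-site update counters to decide whether a
# remote change can be applied cleanly or two sites wrote concurrently.

def compare(local: dict[str, int], remote: dict[str, int]) -> str:
    """Compare two version vectors of the form {site: update_counter}."""
    sites = set(local) | set(remote)
    local_ahead = any(local.get(s, 0) > remote.get(s, 0) for s in sites)
    remote_ahead = any(remote.get(s, 0) > local.get(s, 0) for s in sites)
    if local_ahead and remote_ahead:
        return "collision"      # concurrent updates on two sites: keep both, flag for resolution
    if remote_ahead:
        return "apply-remote"   # remote change is strictly newer
    return "keep-local"

# Both sites updated the file since they last synced, so a collision is detected.
print(compare({"nyc": 3, "lon": 1}, {"nyc": 2, "lon": 2}))  # "collision"
```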

Scale-Out Data Services

Hammerspace scale-out data services (DSX) perform non-disruptive, live data mobility between storage resources on-premises and in the cloud. DSX instances scale out as needed to support application workloads and multi-cloud scenarios.

DSX Data Services support NFS, SMB, and S3, using advanced data-layout techniques to move data seamlessly through the namespace, even while it is being actively read and written. WAN optimization keeps network traffic efficient through automatic global deduplication and compression, and data is encrypted end to end using military-grade, government-approved algorithms.
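
The sketch below shows the general shape of WAN optimization by deduplication and compression: chunk the data, skip chunks the destination already holds, and compress what must travel. Chunk size and hash choice here are arbitrary assumptions, and the encryption step is noted only in a comment.

```python
# Illustrative sketch: hash-based deduplication plus compression before transfer.

import hashlib
import zlib

CHUNK_SIZE = 1 << 20  # 1 MiB fixed-size chunks, chosen for simplicity

def chunks(data: bytes):
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]

def replicate(data: bytes, seen_at_destination: set[str]) -> list[bytes]:
    """Return only the payloads that actually need to cross the WAN."""
    to_send = []
    for chunk in chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen_at_destination:
            continue                             # deduplicated: destination already has it
        seen_at_destination.add(digest)
        to_send.append(zlib.compress(chunk))     # compress before transfer
    return to_send                               # in practice, also encrypted in transit
```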

Metadata-Driven Orchestration

Metadata management through a service layer delivers global control of data, providing user-defined, programmable tags and keywords that work with any file system, NAS, or object store. Hammerspace extends metadata beyond standard file system attributes (POSIX, NFS, and SMB) to include performance and access telemetry (IOPS, throughput, latency) as well as user-defined and analytics-harvested metadata.
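
A simple way to picture this is a single metadata record that carries standard attributes alongside telemetry and user-defined tags, as in the sketch below. The field names are assumptions for illustration only.

```python
# Illustrative sketch: one record combining POSIX-style attributes, access
# telemetry, and user-defined metadata.

from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    # Standard file system attributes
    path: str
    size_bytes: int
    owner: str
    # Performance and access telemetry
    read_iops: float = 0.0
    throughput_mbps: float = 0.0
    latency_ms: float = 0.0
    # User-defined and analytics-harvested metadata
    keywords: set[str] = field(default_factory=set)        # free-form tags
    labels: dict[str, str] = field(default_factory=dict)   # pre-declared attributes

record = FileMetadata("/projects/renders/shot42.exr", 12_582_912, "vfx",
                      latency_ms=1.8, keywords={"render", "final"},
                      labels={"project": "atlas", "content": "tiger"})
```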

User-defined, extensible metadata allows rapid prototyping of metadata (keywords and tags) as well as pre-declared entries (labels and attributes). Pre-declared entries can define metadata taxonomies: for example, an image labeled "tiger" is automatically treated as part of the "animal" family when users search for content in image files.
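
The taxonomy idea can be sketched as a label hierarchy in which a search for "animal" also matches files labeled "tiger". The taxonomy and file labels below are hypothetical examples.

```python
# Illustrative sketch: expand each label to its ancestors before matching a query.

TAXONOMY = {"tiger": "feline", "feline": "animal", "oak": "tree", "tree": "plant"}

def expand(label: str) -> set[str]:
    """Return a label plus all of its ancestors in the taxonomy."""
    labels = {label}
    while label in TAXONOMY:
        label = TAXONOMY[label]
        labels.add(label)
    return labels

files = {
    "safari_001.jpg": {"tiger", "grassland"},
    "forest_007.jpg": {"oak"},
}

def search(term: str) -> list[str]:
    # A file matches if any of its labels expands to include the search term.
    return [name for name, labels in files.items()
            if any(term in expand(label) for label in labels)]

print(search("animal"))  # ['safari_001.jpg']: the tiger is part of the animal family
```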

Through the universal global namespace, users can view, filter, and search metadata in place while navigating the namespace. Rather than relying on filenames to identify data, users can find the data they need quickly, accurately, and efficiently through user-defined metadata.

Autonomic Data Management

Global access to billions of files across the hybrid cloud demands a unique approach to data management. A machine learning engine runs a continuous market economy simulation between real data and available infrastructure resources. The model treats storage services as landlords with resources to lease, and data files as tenants who spend limited currency to meet specific needs.
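
As a toy illustration of that market-economy idea (with made-up names and numbers), each storage "landlord" quotes a price and a performance level, and each file "tenant" spends its limited budget on the best placement it can afford.

```python
# Illustrative sketch: files bid for storage according to budget and need.

from dataclasses import dataclass

@dataclass
class Landlord:            # a storage volume or cloud tier
    name: str
    price: float           # cost per GB per month
    performance: float     # relative performance score

@dataclass
class Tenant:              # a file (or file set) with requirements
    path: str
    size_gb: float
    budget: float          # currency available per month
    needs_performance: float

def place(tenant: Tenant, market: list[Landlord]) -> str:
    affordable = [l for l in market
                  if l.price * tenant.size_gb <= tenant.budget
                  and l.performance >= tenant.needs_performance]
    if not affordable:
        # Relax the performance requirement rather than fail outright.
        affordable = [l for l in market if l.price * tenant.size_gb <= tenant.budget]
    best = max(affordable, key=lambda l: l.performance / l.price)
    return best.name

market = [Landlord("nvme-onprem", 0.20, 10.0),
          Landlord("cloud-standard", 0.02, 3.0),
          Landlord("cloud-archive", 0.004, 0.5)]

print(place(Tenant("/renders/active.exr", 50, 12.0, needs_performance=8.0), market))   # nvme-onprem
print(place(Tenant("/archive/2019.tar", 500, 3.0, needs_performance=0.1), market))     # cloud-archive
```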

Hammerspace continuously collects performance telemetry, stored as metadata, for every file that workloads access. This monitoring provides a rich understanding of how the infrastructure is performing, so Hammerspace can automatically correct for issues before they occur. Real-time data-placement decisions are fully automated by machine learning, balancing performance and cost across the hybrid cloud.
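
A minimal sketch of telemetry-driven automation, with arbitrary thresholds and window size assumed for illustration: when latency samples for a volume trend toward a service-level target, data movement is triggered before applications are affected.

```python
# Illustrative sketch: detect a rising latency trend and act before the SLO is breached.

from statistics import mean

LATENCY_SLO_MS = 5.0

def needs_rebalance(recent_latency_ms: list[float], window: int = 10) -> bool:
    """True when the moving average approaches the SLO and the trend is rising."""
    window_samples = recent_latency_ms[-window:]
    rising = window_samples[-1] > window_samples[0]
    return rising and mean(window_samples) > 0.8 * LATENCY_SLO_MS

samples = [3.5, 3.8, 4.0, 4.2, 4.3, 4.5, 4.6, 4.7, 4.8, 4.9]
if needs_rebalance(samples):
    print("move hot files to a faster volume before the SLO is breached")
```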

See Hammerspace in Action Today