
Parallel NFS Resource Center


Parallel NFS

Why Parallel NFS Is More Relevant Now than Ever

The high-performance requirements of AI are now mainstream; data orchestration across silos, sites, and clouds is an absolute requirement; and everything is moving to software-defined infrastructure running on commodity hardware, in existing third-party data centers, and on cloud storage.

Throughout all this disruption, Linux is ubiquitous, providing a sophisticated, standards-based, open source NFS client built into every modern Linux kernel. And NFS is the standard file system interface in compute-intensive environments.

NFSv4.2, using Parallel NFS with Flex Files, addresses these needs: it provides file access that bridges silos, sites, and clouds; it delivers parallel file system performance without installing third-party clients or management tools; and it avoids rewriting applications to use object storage.

A Standards-based Parallel File System Architecture Using NFS

About Parallel NFS and the Flex Files Layout Type

Parallel NFS (pNFS) was introduced as an optional feature in NFSv4.1 in 2010 and enhanced in later RFCs. pNFS defines an architecture for NFS where metadata and data paths are separate, and clients gain the ability to talk directly to storage, in parallel, once granted access by a metadata server.

To make Parallel NFS more open and add new capabilities for shared storage, Hammerspace engineered the Flex Files layout type in 2018, as described in IETF RFC 8435. In Flex Files layouts, NFSv3 or later is used as the storage protocol, and also as the control protocol. This means that a high-performance parallel NAS system may now be created from a pNFS metadata server plus any combination of NFS storage servers.
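As a concrete sketch of "no special client software," a recent Linux client can exercise this stack with nothing but the in-kernel NFS client. The server name and export path below are placeholders, not part of any specific deployment:

```shell
# Mount an NFSv4.2 export; pNFS and the Flex Files layout are negotiated
# automatically when the server offers them -- no third-party client needed.
sudo mount -t nfs -o vers=4.2 metadata-server:/export /mnt/data

# Confirm the kernel loaded the Flex Files layout driver.
lsmod | grep nfs_layout_flexfiles

# Per-mount pNFS activity: LAYOUTGET operation counts appear in mountstats
# once the client begins doing I/O directly against the storage nodes.
grep LAYOUTGET /proc/self/mountstats
```

Because mounting requires root and a live pNFS-capable server, this is a config fragment to adapt rather than a script to run verbatim.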

Additional Enhancements and Fixes in NFSv4.2 Include:

  • Elimination of excess protocol chatter through compound operations (versus serialized round trips), plus caching and delegations with client-side timestamp generation that avoid trips to the server. These enhancements eliminate 80% of NFSv3’s GETATTR traffic.
  • Multiple parallel network connections between client and server, including optional support for RDMA to avoid TCP stack performance limitations.
  • Ability to write to multiple storage nodes synchronously, using striping or mirroring, to build highly reliable, highly available systems from unreliable storage nodes, and to distribute even a single file’s access across multiple back-end NFSv3 storage nodes.
  • Ability to move data while it is being accessed, without interruption.
  • File-granular gathering and reporting of access and performance telemetry.
  • Ability to serve SMB over NFS, mapping Active Directory principals and ACLs over the NFS protocol.
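Several of the capabilities above surface as ordinary mount options on a stock Linux client. A hedged sketch (host names and paths are placeholders; nconnect requires kernel 5.3 or later, and RDMA requires a capable NIC and server):

```shell
# Open multiple parallel TCP connections between client and server (here, 8).
sudo mount -t nfs -o vers=4.2,nconnect=8 server:/export /mnt/data

# Or bypass TCP stack limits entirely with the RDMA transport, over
# Ethernet (RoCE) or InfiniBand; 20049 is the standard NFS/RDMA port.
sudo mount -t nfs -o vers=4.2,proto=rdma,port=20049 server:/export /mnt/rdma
```

These are client-side knobs only; the server must also support the negotiated transport.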

Standards Based

Built into the Linux kernel. No special software required.


Parallel by Nature

Provides for multiple parallel network connections between client and server. Write to multiple storage nodes synchronously.

Fast & Efficient

Eliminates excess protocol chatter to reduce network traffic. Files can be moved while they are being accessed without interruption.

Flexible Networking Options

TCP and RDMA transports are supported for data over Ethernet or InfiniBand.

Learn More About Our Technology

In the News

pNFS Performance and New Possibilities

View Now >

Video

Standards-based Parallel Global File System

View Now >

Video

The Need for NFS-SSD, the Ethernet Direct-Attached SSD

View Now >

Report

Overcoming Performance Bottlenecks

View Report >

Tech Note

NFSv4.2 Technical Note Document

View Document >