The Global Data Challenge
A Global Data Environment (GDE) gives users and applications the experience of ‘local’ file access to data that may be stored across widely distributed, and otherwise incompatible, storage types and locations, including decentralized cloud resources, while enabling IT administrators to automate global control of data services across them all.
The fact is, whether intentionally or not, most organizations with unstructured data already have global data requirements to some degree. Increasingly, enterprises manually manage many of the attributes of a global data environment today, often with multiple point solutions and considerable effort, cost, and complexity for IT staff.
These increasingly decentralized data environments have many causes. Workers must now access their files from remote locations. Most data centers have more than one class of storage, and silos are created whenever new storage types with different performance or cost profiles are added. And backups or archives may be pushed to the cloud, to object stores, or to other cool/cold tiers at a DR site.
The Problem
The issue is that, despite innovation in many other areas of the storage industry, key file system metadata remains trapped in each storage vendor’s proprietary platform. This file system metadata is what users and applications actually see when they access their files. In other words, it is an abstraction layer that organizes the raw bits on disks or other storage media into the file and folder structures that people can understand and use. Without the file system, data is just formless ones and zeros on storage devices.
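To make this concrete, the short Python sketch below uses the standard os.stat call to show the split: everything a user recognizes as ‘the file’ (its name, size, timestamps, permissions) comes from file system metadata, while the bytes themselves sit separately on the device.

```python
import os
import stat
import time

# What users "see" as a file is really file system metadata:
# a name in a directory hierarchy plus attributes kept by the
# file system, separate from the raw bytes on the device.
path = "example.txt"
with open(path, "w") as f:
    f.write("raw bits on a storage device")

info = os.stat(path)
print("name:    ", path)                        # from directory metadata
print("size:    ", info.st_size, "bytes")       # from inode metadata
print("modified:", time.ctime(info.st_mtime))   # from inode metadata
print("mode:    ", stat.filemode(info.st_mode)) # permissions metadata
```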
The problem is that if a device fills up, or files need to move to another storage type or location, both the data (the file essence) and its file system metadata must be copied and written into another file system on another device. To access the new copy, users have to mount the new file system and find the second copy of the file’s metadata and essence there.
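A minimal illustration of that problem, assuming the destination sits on a different file system (the /tmp path here is only an example): copying a file produces a second, entirely independent file system object that users must locate at a new path.

```python
import os
import shutil

# Copying a file to another file system copies both the data and
# its metadata into a brand-new file system object. The result is
# a second, independent copy found only at the new path.
src = "example.txt"
dst = "/tmp/example.txt"  # assumed to be a different mount/device

shutil.copy2(src, dst)    # copies data plus timestamps/permissions

s, d = os.stat(src), os.stat(dst)
print("same device?", s.st_dev == d.st_dev)  # False across file systems
print("same inode? ", s.st_ino == d.st_ino)  # False: distinct metadata
# Users and applications now have to point at the new path (dst)
# to reach the copy; nothing links the two objects together.
```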
File system fragmentation at the storage layer today is much like the 1990s, when file systems were trapped in the operating system layer of PCs and we had to copy a file (and its metadata) onto a floppy disk to share it with someone.

Here’s the rub: the traditional paradigm of file systems trapped in vendor storage platforms is inconvenient within the silos of a single data center, but the migration to the cloud has dramatically compounded the problem. Enterprises with large volumes of unstructured data typically cannot move all of their files to the cloud: local file access via standard NFS or SMB protocols is still a requirement for many classes of data, particularly large files and compute-intensive workloads. So in the cloud era, silos have gone global as well.
This reality has led to still more point solutions: cloud gateways, supplemental data management tools with vendor-locked symbolic links, and other complex, often manual ways to orchestrate copies back and forth between on-premises infrastructure and the cloud, across ever more vendor silos and ever greater distances.
There Had To Be a Better Way
Hammerspace has spent years of development doing the heavy lifting needed to reimagine, from first principles, how file systems should work in a decentralized, multi-vendor environment. The starting premise: if data is accessed and stored globally across a myriad of vendor storage choices, shouldn’t those data and storage resources be managed globally as well?
Hammerspace accomplishes this by elevating the file system out of the infrastructure layer to provide users with seamless global file access via standard network protocols, across any storage type from any vendor and across multiple locations, including one or more cloud providers and regions. A ‘global’ solution that only works within a single vendor’s storage ecosystem is just a silo by another name, so cross-platform compatibility is essential.
When a user or application needs to access data, Hammerspace presents unified file access via standard network shares from a high-performance Parallel Global File System that spans all storage types and locations, including cloud. It is as though all users everywhere were accessing their files on a local NAS, even when those files reside on remote storage somewhere else.
This is not shuffling file copies across storage types and locations, which confuses users and creates headaches for IT admins. This is enabling users to access the same files globally, via a universal metadata control plane that intelligently bridges the underlying physical storage from any vendor, across multiple locations, including cloud vendors and regions.
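The sketch below is a deliberately simplified, hypothetical model of that idea, not Hammerspace’s actual implementation: the user-visible path is a stable metadata entry, and the physical location is just an attribute of that entry, which can change underneath it. All names and locations are invented for illustration.

```python
class MetadataControlPlane:
    """Toy model: a stable logical namespace over movable physical data."""

    def __init__(self):
        # logical path -> current physical location of the file essence
        self._placement = {}

    def create(self, logical_path, physical_location):
        self._placement[logical_path] = physical_location

    def move(self, logical_path, new_location):
        # Data placement changes; the user-visible path does not.
        self._placement[logical_path] = new_location

    def resolve(self, logical_path):
        return self._placement[logical_path]

plane = MetadataControlPlane()
plane.create("/projects/renders/shot42.exr", "nas-a:/vol1/shot42.exr")
plane.move("/projects/renders/shot42.exr", "s3://archive-bucket/shot42.exr")

# Users keep opening the same logical path; the control plane resolves
# it to wherever the data currently lives.
print(plane.resolve("/projects/renders/shot42.exr"))
```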

Unprecedented Flexibility
Bridging the asynchronous distance gap between locations, and between on-premises and cloud, with a high-performance Parallel Global File System enables customers to rapidly ramp resources up or down anywhere without interrupting users or their applications. And they can do so without the penalty of re-tooling fixed on-premises infrastructure, or of managing access and data orchestration with point solutions across incompatible silos. All users can collaborate across distributed resources as though everyone were in the same office next to a single data center, even as background data placement actions happen on live systems.
Reduce Complexity And Costs
This capability also dramatically reduces complexity for IT administration in multi-siloed environments, since file-granular data orchestration and data services in Hammerspace become back-end functions that are completely transparent to users and applications.
In Hammerspace, users and IT admins can establish objective-based policies, expressed as declarative business rules, to ensure global, file-granular control over data management and file protection services. Without Hammerspace, such tasks are typically managed manually with numerous point solutions in today’s siloed environments.
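As a hedged illustration of what ‘objective-based, declarative’ means in practice, the Python sketch below models policies as declared outcomes plus the files they apply to. The field names and policy shapes are invented for the example; they are not Hammerspace’s actual policy language.

```python
# Hypothetical declarative policies: the administrator states outcomes
# ("what"), and the system is left to work out the actions ("how").
policies = [
    {
        "name": "protect-finance",
        "applies_to": lambda f: f["share"] == "/finance",
        "objective": {"copies": 2, "one_copy_offsite": True},
    },
    {
        "name": "tier-cold-data",
        "applies_to": lambda f: f["days_since_access"] > 90,
        "objective": {"tier": "object-archive"},
    },
]

def objectives_for(file_record):
    """Collect every declared objective that applies to a file."""
    return [p["objective"] for p in policies if p["applies_to"](file_record)]

print(objectives_for({"share": "/finance", "days_since_access": 200}))
# -> both the protection and tiering objectives apply to this file
```

The value of the declarative form is that rules compose per file: the administrator never scripts the individual copy or move operations that satisfy them.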
When data needs to be physically moved, for whatever reason, Hammerspace uses the metadata to transparently stage only the subset of files that is needed between storage resources. From the user’s perspective, the file is still accessible at the same share, in the same file system, with no change, since users are interacting with the shared Parallel Global File System across all resources and locations.
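A simplified, hypothetical sketch of that staging behavior: metadata drives the selection of which files move, and the logical path each file is accessed at never changes. All names and locations here are invented for illustration.

```python
# logical path -> (metadata, current physical location)
catalog = {
    "/renders/shot42.exr": ({"recently_used": True},  "nas-a:/vol1/shot42.exr"),
    "/renders/shot43.exr": ({"recently_used": False}, "nas-a:/vol1/shot43.exr"),
}

def stage(predicate, destination):
    """Relocate only the files whose metadata matches `predicate`."""
    for path, (meta, _) in catalog.items():
        if predicate(meta):
            # (moving the file essence itself would happen here)
            catalog[path] = (meta, destination + path)

# Stage only the recently used subset to a fast scratch tier.
stage(lambda m: m["recently_used"], "nvme-scratch:")

for path, (_, location) in catalog.items():
    # The user-visible path never changes; only the placement does.
    print(path, "->", location)
```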
This capability eliminates redundant copies, manual replication, fragmented data protection strategies, and other symptoms of data and storage sprawl. All data services are built into the Hammerspace software to automate processes for IT administrators, which also reduces the number of software applications, manual processes, and point solutions required to manage a multi-silo data environment.

Building Your Own Global Data Environment
Hammerspace is a software solution that empowers customers to create their own Global Data Environment by leveraging any combination of their existing storage resources with new and/or cloud-based storage. In this way, organizations can solve today’s challenges of distributed data and remote workers. The software-defined solution may be deployed on commodity bare-metal servers, in any virtual environment, or in the cloud, and it supports virtually all storage types from any vendor, including most public and private cloud solutions.
To keep up with the reality of decentralization, a new global paradigm was needed that effectively bridges the gaps between on-premises silos and the decentralized cloud. Such a solution required new technology and a revolutionary approach: lifting the file system out of the infrastructure layer to enable the next wave of decentralization of business in a global economy. It is a revolution as important as when NAS vendors lifted the file system out of the operating system in the 1990s.
By Floyd Christofferson, VP Product Marketing