
Hammerspace 2023 – Committed Progress Towards the MovieLabs 2030 Vision

With Global Data Access and Orchestration That Bridges Any Storage and Cloud, Hammerspace Enables Creatives to Begin Benefiting Today     

In 2019, when MovieLabs released the 10 Principles of the 2030 Vision for media creation, the member studios (Paramount Pictures, Sony Pictures Entertainment, Universal Studios, Walt Disney Pictures and Television, and Warner Bros. Entertainment) put forth a bold challenge that asked technologists to align future product development with a set of principles to streamline creative workflows across increasingly distributed and often siloed infrastructure choices. 

While the 10 Principles are clearly aspirational targets for the future, the needs they address are very real problems today. Studios need budget-efficient, dynamic environments to meet the demands of creative production, which bogs down when workflows are fragmented across vendor silos or split between one or more cloud providers and locations. Productions also increasingly need the ability to activate remote creative talent rapidly, without the delays and logistical headaches typically required to move data and hardware around.

The dominant themes of the 2030 Principles include multiple complementary ways to keep creatives focused on their workflows, rather than forcing them to keep wasting time and resources bridging the gaps between silos, locations, and technologies while maintaining the security of their content.

Step Into the Future, Starting Today

Hammerspace software demonstrates a strategic commitment to the MovieLabs 2030 Principles, providing customers today with global file access and automated, non-disruptive data orchestration between storage types from any vendor at the edge, in data centers, and in the cloud. As a vendor-neutral solution, Hammerspace enables studios to begin realizing some of the advantages outlined in the 2030 Principles immediately, within their existing infrastructures.

One such example is on display in the Hammerspace booth at NAB 2023, where we leverage several of the 2030 Principles to enable cloud-based finishing workflows in partnership with AWS, Autodesk Flame, HP Anyware, and Track-IT. In the NAB demonstration, Hammerspace enables Flame artists to work in real time on projects that span multiple AWS cloud instances and regions, providing online access to both local and remote users, and showing how customers can already take advantage of several of the 10 Principles as the industry works toward the 2030 vision.

Diagram of the NAB Demo showing remote Flame artists collaborating via HP Anyware to live instances in AWS East & West, tied together in a global data environment with Hammerspace.

Anything, Anywhere, All the Time 

The key innovation that enables Hammerspace to address many elements of the 2030 Vision is its development of a vendor-neutral global file system which elevates file access above the infrastructure layer. This global reach enables it to seamlessly span any storage type and location, and provides the foundation for its powerful global data orchestration system. 

By assimilating file system metadata from data-in-place on existing storage, Hammerspace creates a high-performance Parallel Global File System that bridges existing and/or new storage silos, sites, and clouds in a cross-platform global namespace. It then puts the unified namespace to work through automated data orchestration, which moves data to the appropriate location or storage type based on business objectives. Such objective-based policies may be triggered by custom metadata to automate data placement for workflows, or be driven by performance, cost, redundancy, or locality. This data orchestration is transparent to users as a background operation. 
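To make the idea of objective-based placement concrete, here is a minimal, hypothetical sketch in Python. It is not Hammerspace's actual policy engine or syntax; the objectives, tier names, and `FileRecord` fields are all illustrative assumptions. It shows the general pattern described above: file metadata is evaluated against ordered business objectives, and the first match decides where the data should live.

```python
# Hypothetical sketch of objective-based data placement (NOT Hammerspace's
# actual policy syntax): each file's metadata is evaluated against ordered
# objectives, and the first matching objective decides its placement target.

from dataclasses import dataclass, field

@dataclass
class FileRecord:
    path: str
    size_gb: float
    metadata: dict = field(default_factory=dict)

# Ordered (predicate, target) pairs: first match wins. Targets are made-up
# tier names standing in for performance, cost, and locality objectives.
OBJECTIVES = [
    (lambda f: f.metadata.get("project_status") == "active", "nvme-tier"),
    (lambda f: f.metadata.get("workflow") == "finishing",    "aws-us-east"),
    (lambda f: f.size_gb > 100,                              "object-archive"),
]
DEFAULT_TARGET = "capacity-tier"

def place(file: FileRecord) -> str:
    """Return the storage target implied by the first matching objective."""
    for predicate, target in OBJECTIVES:
        if predicate(file):
            return target
    return DEFAULT_TARGET

clip = FileRecord("shots/sc01/plate.exr", 2.5, {"workflow": "finishing"})
print(place(clip))  # -> aws-us-east
```

The key point of the pattern is that placement is driven by metadata and business intent rather than by where a file happens to sit, which is what lets the orchestration run transparently in the background.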

Hammerspace elevates the file system above infrastructure silos to enable data-centric workflows that can bridge existing and/or new storage silos, sites, and clouds from any vendor in a cross-platform global namespace.

With Hammerspace, users and applications located anywhere geographically access the same global file system for media elements that may reside on any on-prem or cloud-based storage anywhere else. This is not shuffling file copies from one repository to another: all users everywhere see the same files on their desktop via this universal metadata layer, regardless of which storage type or location the files are in today or move to in the future.

The elevation of the file system out of the infrastructure layer effectively brings applications directly to the media, which is called out in Principle 2. In this way, a fully cloud-based workflow (Principle 1) is achieved without needing to shuffle data copies around or interrupt users to provision new projects or workflows with different requirements. 

Putting the Principles into Action 

In the NAB demo, Hammerspace illustrates both of these first two Principles in action, along with several other Principles that follow by default.

The Hammerspace Parallel Global File System is accessible to authorized users from anywhere and is universal across all storage types and locations. This enables Hammerspace customers to globally reference and track media elements at a file-granular level (Principle 8). It also includes the ability to leverage custom metadata on the fly, which can be applied automatically and which persists regardless of which storage the data moves to over time. Global, actionable metadata fulfills the spirit of the "universal linking system" called for in Principle 8. Such custom metadata bridges all silos and can complement dedicated MAM systems and workflow schedulers, making workflow automation granular and global, regardless of which storage type or location files need to be on today or move to in the future.
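The persistence property described above can be sketched in a few lines of Python. This is a conceptual illustration only, not Hammerspace's metadata API: the `gfid-0042` identifier, the `tag`/`lookup` helpers, and the in-memory store are all invented for the example. The point is that tags keyed to a stable global file identity, rather than to a storage path, survive when orchestration moves the data between silos.

```python
# Hypothetical illustration of storage-independent custom metadata: tags are
# keyed to a stable global file identity, not to any storage path, so they
# remain intact when orchestration moves the data between silos or clouds.

metadata_store: dict = {}  # global_file_id -> custom metadata tags

def tag(global_file_id: str, **tags) -> None:
    """Attach or update custom metadata on a file's global identity."""
    metadata_store.setdefault(global_file_id, {}).update(tags)

def lookup(global_file_id: str) -> dict:
    """Read back all custom metadata for a file, wherever it lives."""
    return metadata_store.get(global_file_id, {})

# A file is tagged once...
tag("gfid-0042", scene="sc01", approved=True)

# ...then orchestration moves it from on-prem NAS to cloud object storage.
current_location = {"gfid-0042": "s3://bucket/shots/sc01/plate.exr"}

# The tags are unchanged, because they never referenced the old path.
print(lookup("gfid-0042"))  # -> {'scene': 'sc01', 'approved': True}
```

Because workflow automation keys off these identity-bound tags, a policy written against `approved=True` keeps working no matter how many times the underlying bytes relocate.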

This data-centric approach enables workflows to benefit from real-time iteration and feedback (Principle 10) and can use standard file protocols to dynamically provide access to subsets of data anywhere via open interfaces (Principle 9). No proprietary client software, agents, hooks, symbolic links, or other tricks are required. Users and applications simply see their files in the shared file system on their existing computers and applications, as though everything everywhere were on a giant global NAS platform.

Finally, all of these metadata-driven capabilities can be used to automate data propagation from this shared multi-platform resource (Principle 3) as a "publish" function. By elevating the file system above the proprietary infrastructure layer into user/application space, Hammerspace enables global workflows to reference a single source of truth rather than forked copies locked away in silos. Such functionality can be orchestrated by Hammerspace, but is also complementary with other workflow managers, such as Autodesk ShotGrid, and other MAM solutions that have direct access to the global file system.

In this way, Hammerspace enables customers to adapt rapidly and dynamically to evolving workflow requirements for repurposing or distributing any media asset via this universal metadata layer referencing files that may be anywhere.  

In a world where studios and productions are pushed to achieve ever-greater productivity on tight budgets and even tighter timelines, the 10 Principles have become a north star the industry can rally behind to begin creating order from technology chaos in ways that benefit productions large and small.

Visit the Hammerspace booth at NAB, or contact us for a personal demo to see how you can begin applying these principles to your workflows and shift your productions toward the future, today.

About the Author

Floyd is Vice President of Product Marketing for Hammerspace. He has been involved with data management and storage for more than 25 years, focused on the methods and technologies needed to manage extreme volumes of data to keep up with the needs of modern, distributed storage resources and workflows.