
Is 2024 the Year of the Enterprise LLM?

By Eric Bassier, Senior Director of Solution Marketing

2023 was a big year for AI. ChatGPT became the fastest-growing consumer app in history (no, it did not write this blog for me), and other large language models such as Meta’s Llama 2 and Google’s PaLM 2 are now broadly available to the developer community. These are part of a broader set of generative AI models that take text (and other forms of input) and generate text, images, audio, or video, depending on what they were designed to do.

Emerging Trends: Multi-modal AI and Enterprise LLMs

One of the next big phases of development will be the rise and broader availability of multi-modal generative AI models, meaning models that can receive text, audio, imagery, and even video as inputs, and then generate content and interact with the human world in new ways. Intuitively, this means these models will be accessing and producing large amounts of unstructured data: text files, image files, video files, objects stored in the cloud, and more.

Another big trend will be the rise of enterprise LLMs. An example from last year was BloombergGPT, a large language model that Bloomberg trained specifically on its own financial data. And if you are watching commercials during NFL games or the leadup to this week’s college football championship, you are seeing this trend in real time: company after company advertising AI assistants, AI bots to help you plan travel, AI to increase developer productivity, and much more.

Enterprise AI: Investment and Implementation Roadmap

Most enterprises are making big investments in AI. Breaking this down, one possible roadmap goes like this:

  • Find an existing open-source model that fits your needs. This is just the starting point.
  • Train that model on your own data to make it much more valuable to your business (sketched in code below).
  • Deploy the model in a way that your employees can interact with it day to day.

Of course I am oversimplifying the process, but one thing is certain: AI models must be trained on large quantities of data to be accurate. The more data that is available to them, the better their results will be from the outset. But if you are like most enterprises, your data is “trapped” in various data silos across your infrastructure, which is a barrier to a successful AI strategy.
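To make the middle step concrete, here is a minimal fine-tuning sketch using the open-source Hugging Face transformers and datasets libraries. The base model name, file paths, and hyperparameters are illustrative placeholders, not a specific recommendation:

    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )
    from datasets import load_dataset

    # Step 1: start from an existing open-source model (placeholder name).
    base_model = "meta-llama/Llama-2-7b-hf"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Step 2: fine-tune on your own data -- here, plain-text documents
    # gathered from wherever they live in your infrastructure.
    dataset = load_dataset("text", data_files={"train": "company_docs/*.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="enterprise-llm", num_train_epochs=1),
        train_dataset=train_set,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

    # Step 3: save the tuned weights so they can be served to employees
    # behind an internal chat or API endpoint.
    trainer.save_model("enterprise-llm")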

Overcoming Data Silos: Hammerspace’s Unified AI Data Solutions

AI models limited to data from a single storage silo will be at a major disadvantage compared with those that can access a wide range of data sets stored across multiple storage platforms and, likely, multiple geographic locations. The AI data pipeline needs to draw on data created and stored in edge devices, data centers, and the cloud.

That’s where we come in. Hammerspace integrates existing data sets, cloud instances, and any new infrastructure into a unified global data environment that scales as AI workloads evolve. Hammerspace unifies siloed storage types as well as multiple geographic locations, which enables systems to place, present, and preserve data for access by AI and ML models wherever the data is and wherever the models run. This gives your AI model access to more data (for better training results) and can improve resource utilization for any xPU-intensive workload.

Hammerspace’s Revolutionary Architecture for Hyperscale LLM Training

In November, Hammerspace introduced a reference architecture for LLM training at hyperscale, based on work we are doing with some of our clients. This architecture is the only solution in the world that enables AI technologists to design a unified data architecture that delivers the performance of a supercomputing-class parallel file system coupled with the ease of access that applications and researchers get from standard NFS.

By leveraging training data wherever it might be stored, Hammerspace streamlines AI workloads by minimizing the need to copy and move files into a new, consolidated repository. This approach reduces overhead, as well as the risk of introducing errors and inaccuracies into LLMs. At the application level, data is accessed through a standard NFS file interface, giving applications direct access to files in the format they are typically designed for.
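From the training job’s point of view, that standard file access keeps the code simple. The short sketch below shows a PyTorch dataset reading documents straight from an NFS mount; the mount point /mnt/hammerspace and the file layout are hypothetical, and the point is simply that ordinary POSIX file I/O is all the application needs:

    from pathlib import Path
    from torch.utils.data import DataLoader, Dataset

    class NfsTextDataset(Dataset):
        """Reads training documents from an NFS-mounted unified namespace."""

        def __init__(self, root: str):
            # The files may physically live on different storage systems or
            # sites, but the namespace presents them under one mount point.
            self.files = sorted(Path(root).rglob("*.txt"))

        def __len__(self):
            return len(self.files)

        def __getitem__(self, idx):
            # Plain open()/read() -- no copying into a separate repository.
            return self.files[idx].read_text()

    loader = DataLoader(NfsTextDataset("/mnt/hammerspace/train"), batch_size=8)
    for batch in loader:
        ...  # tokenize the batch and feed it to the model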

So if your organization has an initiative to build and train your own LLM, reach out and let’s see how we can help.

About the Author

Eric is the Senior Director, Solution Marketing and Sales Enablement, at Hammerspace. He is an innovative product leader with extensive experience launching and evangelizing products, driving go-to-market strategies, and creating compelling content to drive customer engagement and growth.