2020 Storage Predictions: New ‘production’ AI workloads will drive the enterprise away from the public cloud, and TCO emerges as a crucial factor in HPC

December 2019 by Panasas Senior Software Architect Curtis Anderson

Panasas® released its industry predictions for 2020.

Please attribute all quotes in this statement to Panasas Senior Software Architect Curtis Anderson:

Prediction #1: The importance of Total Cost of Ownership (TCO): HPC storage solutions must deliver value far beyond the initial purchase price.

“As the requirements for HPC storage systems become more diverse with the addition of new workloads such as Artificial Intelligence (AI), there is an increasing need to look at the overall impact on the organisation: the ongoing cost of operations, user productivity, and the time to quality outcomes. In addition to evaluating the price/performance ratio, buyers will need to pay close attention to a range of purchasing considerations that go beyond the initial investment. Those include the cost of unplanned downtime in terms of application user productivity, the cost of complexity and the headcount required to manage it, and the need for responsive support for mission-critical infrastructure such as storage.”

Prediction #2: As enterprises’ AI projects graduate from “exploratory” to “production”, they will leave the public cloud for less costly on-premises solutions, funding a boom in HPC infrastructure build-out, but the requirements for that infrastructure will have changed based on their cloud experience.

“Public clouds are great for learning and experimentation, but not for high-utilisation production operations. Public clouds will, however, have a large influence on the next generation of on-premises infrastructure that is built. The need for the lowest time-to-solution, quickly taking action based upon the insights that AI can give you, drives AI to push the underlying hardware (e.g. GPUs and storage) as hard as it can go. But the simple truth is that the cost of a dedicated resource in a public cloud is higher than the cost of owning that resource. Another simple truth is that the value of AI lies in the computer deriving actionable information from mountains of data. Add in the fact that AI has an insatiable need for growth of training data, and that public clouds have never-ending charges for data storage, and the costs climb. Put those simple facts together and it’s clear that production AI will be less costly if it is performed on-premises. The industry has become used to the extreme flexibility and simplicity of management that public clouds provide, and it will want to retain those characteristics in on-premises solutions at the lower cost they offer.”

