Smart software defined data centers demand optimized storage

When IT professionals talk about software defined data centers, the emphasis is typically on virtual servers — how they’re created, provisioned and maintained. But a truly smart software defined data center should be optimized as comprehensively as possible, across all resources. And among those, one of the most critical is storage.

So how does the software defined data center pitch usually go?

“Fluid allocation of resources in real time to fulfill changing business workloads.”

In the case of storage, making that fluid allocation happen means taking into account many factors. A short list would include:

  • Virtualizing all storage
  • Tracking the costs of storage utilization over time
  • Determining the speed of reading and writing data to and from different storage tiers
  • Reducing wasted storage as completely as possible
  • Continually assessing how storage is used by different services and adapting in parallel
  • Performing capacity management and planning tasks
  • And of course, reducing total operational costs and management complexity as much as possible

That’s quite a set of goals. And in software defined storage (SDS), those goals must mostly be accomplished automatically if the data center is to stay agile in scaling to unexpected requirements. For any cloud model (public, private or hybrid), these are major factors already, and they’re becoming more important by the day.

As data volumes continue to scale up dramatically (particularly with the advent of big data solutions and strategies that depend on having huge data volumes to work with), software defined environments (SDE) require the smartest, most efficient and most cost-effective utilization of storage.

So the simple reality is that all SDE-engaged organizations need to be thinking now about how best to optimize storage. Fortunately, SDE service providers and solution providers are familiar with that reality and are responding in kind with the necessary capabilities.

Keeping storage costs low and data accessibility high

An essential and basic example, of course, is virtualizing storage, so that it’s not tied to any particular hardware, but can instead be logically pooled and then doled out to any service that needs it in real time. Then, when the service has finished using it, that storage can be returned to the general pool.
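
To make that pooling model concrete, here is a minimal Python sketch of the allocate-and-return cycle. The class and method names are illustrative assumptions, not any product’s API:

```python
# Minimal sketch of logical storage pooling (illustrative only; the class
# and method names are hypothetical, not any vendor's interface).

class StoragePool:
    """A logical pool of capacity carved out of virtualized backends."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}

    def allocate(self, service: str, size_gb: int) -> None:
        """Hand capacity to a service in real time, if the pool can cover it."""
        if size_gb > self.free_gb():
            raise RuntimeError(f"pool exhausted: only {self.free_gb()} GB free")
        self.allocations[service] = self.allocations.get(service, 0) + size_gb

    def release(self, service: str) -> None:
        """Return a service's capacity to the general pool when it finishes."""
        self.allocations.pop(service, None)

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.allocations.values())


pool = StoragePool(capacity_gb=10_000)
pool.allocate("analytics-batch", 2_500)   # fluid allocation in real time...
pool.release("analytics-batch")           # ...and return to the general pool
```

The point is simply that capacity becomes a fungible logical quantity: services borrow from and return to one pool, rather than owning particular hardware.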

Policy-based file management is also important because it allows the cloud to distribute files based on their business priority, geographical need and many other traits. This way, files are not only accessible more quickly to an organization’s entire team, but also stored at lower cost, because each file can be placed on an appropriate storage tier (not every file demands a solid-state drive).
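
As a rough illustration of how such a policy might be expressed, the toy Python rules below route files to tiers by business priority and access age. The traits, thresholds and tier names are all hypothetical:

```python
# Toy policy-based placement rules (hypothetical, not a product's policy
# engine): route each file to a storage tier based on business traits.

from dataclasses import dataclass

@dataclass
class FileInfo:
    name: str
    priority: str          # e.g. "high", "normal", "archive"
    days_since_access: int

def choose_tier(f: FileInfo) -> str:
    """Return the tier a file should land on under these toy rules."""
    if f.priority == "high" and f.days_since_access < 7:
        return "flash"     # hot, business-critical data earns SSD
    if f.days_since_access < 90:
        return "disk"      # warm data sits on cheaper spinning disk
    return "tape"          # cold data moves to the cheapest tier

print(choose_tier(FileInfo("q3-forecast.xlsx", "high", 2)))      # flash
print(choose_tier(FileInfo("2019-backup.tar", "archive", 400)))  # tape
```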

I mentioned earlier that data volumes are escalating faster than ever. Software defined storage is an attractive answer to that problem: organizations can pick a trusted SDS provider, then leverage its storage capabilities, trusting those capabilities to scale as needed. Files can then be accessed anywhere, anytime, from any device smart enough to run a standard browser.

Experience to support your path to software defined storage

IBM’s heritage in innovations like virtualization and partitioning, the foundations of cloud and software defined environments, is well known. IBM Software Defined Storage solutions answer fundamental shifts we are seeing with our clients:

  1. The increasing challenges organizations face with big data analytics and with new application workloads such as mobile and social
  2. The increasing adoption of our advanced virtualization and automation technologies, born in high-performance computing, to meet these new challenges

IBM Software Defined Storage solutions are helping organizations across industries tackle these next generation applications, as well as traditional data-driven applications, to achieve dramatically faster performance and cost efficiency.

IBM is exceptionally well positioned to help SDS-focused organizations put a check in every one of those boxes. We’ve been a world leader in software defined storage, and nobody can match IBM’s proven expertise and experience in solving the complexities of storage of all kinds. One proven capability is IBM Elastic Storage.

Delivered in software and used on standard hardware, IBM Elastic Storage provides global shared access to data with extreme scalability, flash-accelerated performance and automatic policy-based storage tiering from flash through disk to tape. It provides simplified data management and integrated information lifecycle tools capable of managing petabytes of data and billions of files, helping arrest the growing cost of managing ever-growing amounts of data. It also integrates with OpenStack Cinder, the block storage project, providing virtual storage services to the cloud regardless of the client’s choice of underlying physical storage hardware.
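
From the consumer’s side, that Cinder integration looks like any other volume request. The sketch below uses the OpenStack SDK; the cloud name and the volume type mapped to the SDS backend are placeholder assumptions:

```python
# Sketch of requesting a Cinder volume with the OpenStack SDK. Assumes a
# configured cloud named "mycloud" and a Cinder volume type ("elastic")
# that an SDS backend driver serves; both names are placeholders.

import openstack

conn = openstack.connect(cloud="mycloud")

# Cinder hands back a virtual block device; the physical hardware behind
# the volume type stays invisible to the consumer.
volume = conn.block_storage.create_volume(
    name="app-data",
    size=100,                # GB
    volume_type="elastic",   # placeholder type mapped to the SDS backend
)
print(volume.id, volume.status)
```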

In addition, the newly enhanced global file system can also present its capacity as an OpenStack Swift object store. Clients will be able to install and configure OpenStack Swift with the new software and use it, for example, to archive large objects such as medical images or video.
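
As a sketch of that archiving use case, the following snippet uses python-swiftclient to store a medical image as an object. The endpoint, credentials, container and file name are placeholders, and the cluster is assumed to expose Swift’s standard API:

```python
# Archiving a large object through the Swift interface with
# python-swiftclient. All endpoints, credentials and names below are
# placeholder assumptions for illustration.

from swiftclient.client import Connection

conn = Connection(
    authurl="https://swift.example.com/auth/v1.0",  # placeholder endpoint
    user="tenant:archiver",
    key="secret",
)

conn.put_container("medical-images")
with open("scan-0001.dcm", "rb") as f:
    # Store the image as an object; Swift handles placement and replication.
    conn.put_object("medical-images", "scan-0001.dcm", contents=f)
```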

IBM Software Defined Storage also advances storage virtualization for traditional applications. With SAN Volume Controller (SVC), data can be shared across more than 250 different IBM and non-IBM systems in an easy-to-use pool of storage with central data management capabilities.

What’s your SDS plan? Share your thoughts with us on Twitter @IBMSDE.

Dr. Krishna Nathan is Vice President of Object and File Storage for IBM Systems and Technology Group. In this role he is responsible for the strategy and execution of IBM’s object and file storage activities.
