Optimizing SSDs with Software Defined Flash, Part 1

Author: Rahul Advani

With the rise of big data applications like in-memory analytics and database processing, where performance is a key consideration, enterprise Solid-State Drive (SSD) use is growing rapidly. IDC forecasts the enterprise SSD segment to be a $5.6 billion market by 2015.¹ In many cases, SSDs are used as the highest tier of a multi-tier storage system, but there is also a trend toward all-SSD storage arrays as price-performance metrics, including dollar per IOP ($/IOP) and dollar per workload ($/workload), make them an attractive option.
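
To see why those metrics favor flash for I/O-bound workloads, here is a minimal Python sketch of the $/IOP comparison; all prices and IOPS figures are hypothetical, chosen only to illustrate the calculation:

    # Comparing drives on cost per unit of random I/O performance ($/IOP)
    # rather than cost per gigabyte. All figures are hypothetical.

    def dollars_per_iop(price_usd: float, random_iops: float) -> float:
        """Cost per random I/O operation per second."""
        return price_usd / random_iops

    # Hypothetical 15K RPM enterprise HDD: ~$300, ~200 random IOPS.
    hdd = dollars_per_iop(300, 200)

    # Hypothetical enterprise SSD: ~$1,200, ~75,000 random IOPS.
    ssd = dollars_per_iop(1200, 75_000)

    print(f"HDD: ${hdd:.3f}/IOP  SSD: ${ssd:.3f}/IOP")

On a per-gigabyte basis the HDD still wins, but on a per-IOP basis the SSD in this sketch is roughly two orders of magnitude cheaper, which is the comparison that matters for I/O-bound workloads.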

Flash-based SSDs are not only growing as a percentage of all enterprise storage; they are also almost always the critical storage component for delivering a superior end-user experience through caching or tiering of storage. The one constant constraint on further use of NAND-based SSDs is cost, so it makes sense that the SSD industry is focused on technology re-use as a means to deliver cost-effective solutions that meet customers’ needs and increase adoption.

Take the Serial Attached SCSI (SAS) market as an example: there are three distinct SSD usage models, commonly measured in Random Fills Per Day (RFPD) over five years, that is, how many times the entire drive is filled every day for five years. Read-intensive workloads run at 1–3 RFPD, mixed workloads at 5–10 RFPD, and write-intensive workloads at 20+ RFPD.
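
To make these endurance classes concrete, here is a minimal Python sketch (with a hypothetical 800 GB SAS drive; the tier figures are illustrative) of how an RFPD rating translates into total bytes written over a five-year service life:

    # Translating Random Fills Per Day (RFPD) over a 5-year service life
    # into total bytes written, the figure the drive's NAND endurance
    # budget must cover. Capacity and tier ratings are illustrative.

    def lifetime_writes_tb(capacity_tb: float, rfpd: float, years: int = 5) -> float:
        """Total writes implied by filling the drive rfpd times per day."""
        return capacity_tb * rfpd * 365 * years

    CAPACITY_TB = 0.8  # hypothetical 800 GB SAS SSD

    for tier, rfpd in [("read-intensive", 3), ("mixed", 10), ("write-intensive", 25)]:
        tb = lifetime_writes_tb(CAPACITY_TB, rfpd)
        print(f"{tier:16s} {rfpd:>3} RFPD -> {tb:,.0f} TB written over 5 years")

Even at the low end, a 3 RFPD rating on this drive implies over 4,000 TB of writes, which is why the usage model drives the NAND and over-provisioning choices so strongly.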

Adding to the complexity, different customer bases, such as enterprise and hyperscale data centers, have different requirements for application optimization and for the scale at which SSDs are deployed in their infrastructure. These differences typically show up in the number of years of service required, performance, power, and sensitivity to corner cases in validation.

The dilemma for SSD makers is how to meet these disparate needs while still offering cost-effective solutions to end users.

Software Defined Storage

In enterprise applications, software defined storage has many different definitions and interpretations, from virtualized pools of storage to storage as a service. Here I will stick to the application of software and firmware in flash-based SSDs to cost-effectively address varied applications, from cold storage to high-performance SSDs and caching. There are a few key reasons why the industry prefers this approach:

  1. As the risk and cost associated with controller development have risen, the concept of using software to generate optimizations is not only becoming popular, it is a necessity. A controller development typically costs several tens of millions of dollars for the silicon alone and often requires several revisions of the silicon, which adds to the cost and the risk of errors.
  2. The personnel skill sets required for high-speed design and for protocol-specific optimizations (SAS or NVMe) are not easy to find. Thus, software-defined flash, using firmware that has traditionally been deployed to address bugs found in the silicon, is increasingly being used to optimize solutions for different usage models in the industry (see the sketch after this list).
  3. Product validation costs can also be substantial and validation cycles long for enterprise SSDs, so to keep time to market short, solutions likewise re-use silicon and firmware as extensively as feasible.
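
To illustrate the re-use idea, here is a minimal, hypothetical Python sketch of one controller design shipped as three products that differ only in firmware configuration. The profile fields (over-provisioning, endurance target, write-cache policy) are assumptions for illustration, not any vendor's actual firmware interface:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FirmwareProfile:
        """One firmware tuning of the same underlying controller silicon."""
        name: str
        over_provisioning_pct: float  # spare NAND reserved for wear leveling
        rfpd_target: float            # endurance level the profile is tuned for
        write_cache_enabled: bool

    # Same silicon, three usage models: only the firmware configuration differs.
    PROFILES = [
        FirmwareProfile("read-intensive",   7.0,  3, True),
        FirmwareProfile("mixed-workload",  28.0, 10, True),
        FirmwareProfile("write-intensive", 50.0, 25, False),
    ]

    def usable_capacity_tb(raw_tb: float, profile: FirmwareProfile) -> float:
        """More over-provisioning lowers user capacity but raises endurance."""
        return raw_tb / (1 + profile.over_provisioning_pct / 100)

    for p in PROFILES:
        print(f"{p.name:16s} usable: {usable_capacity_tb(1.0, p):.2f} TB "
              f"(tuned for {p.rfpd_target:g} RFPD)")

The design choice this sketch captures is that the expensive, risky artifact (the controller silicon) is built once, while the cheap, revisable artifact (firmware) absorbs the per-market differences.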

My next post will get into the specifics of how a well-planned, flexible silicon architecture is the most cost-effective way to leverage software defined flash to design SSDs for a wide range of application requirements.

This post was also published on Converge! Network Digest.

Source: 1. IDC, Worldwide Solid State Drive 2014–2018 Forecast and Analysis: The Need for Speed Grows, Doc #248727, June 2014.
