Switchtec PFX Gen3 PCIe fanout switches provide the industry’s highest-density, lowest-power PCIe switching for data center, communications, workstation, and video production applications. With simple hardware configuration, Read more »
PCI Express (PCIe) is a widely deployed bus interconnect interface that is commonly used in server platforms. Increasingly, it is also used as Read more »
8-, 16- AND 32-CHANNEL PCI EXPRESS® FLASH CONTROLLERS
The Microsemi Flashtec second-generation NVMe controller family enables the world’s leading enterprises and data centers to realize everything from the highest-performing SSDs to the highest-capacity Read more »
Update – 26 Nov 2015 – Well, things can move very fast in the Linux world when they want to! Since I wrote this article, an improved, but still pre-production, version of the polling code for the block layer and NVMe driver has made it into the Linux kernel and will go mainline in 4.4. There is a really nice overview of how it works here, and Jens’ patch-set comments and some of his testing results can be found here. It is worth stressing that the results we present should only improve as the polling mode evolves. Stay tuned for updated performance results in due course!
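For readers who want to experiment once 4.4 lands, below is a minimal sketch of what exercising the new polling path might look like from user space. It assumes the patches expose a per-queue io_poll sysfs attribute (as described in the patch-set comments) and uses a placeholder device name; treat it as an illustration, not a tuning recommendation.

```c
/*
 * Minimal sketch (not production code): enable block-layer polling for an
 * NVMe namespace and issue a synchronous O_DIRECT read. The io_poll sysfs
 * attribute and the device name are assumptions based on the pre-production
 * 4.4 patches described above.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Assumed per-queue knob added by the 4.4 polling patches. */
    int sysfd = open("/sys/block/nvme0n1/queue/io_poll", O_WRONLY);
    if (sysfd >= 0) {
        if (write(sysfd, "1", 1) != 1)
            perror("enable io_poll");
        close(sysfd);
    }

    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open nvme0n1");
        return 1;
    }

    /* O_DIRECT needs an aligned buffer; 4 KiB covers typical sector sizes. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0)
        return 1;

    /* With polling enabled, the kernel spins on the completion queue for
     * this synchronous read instead of sleeping on an interrupt. */
    ssize_t n = pread(fd, buf, 4096, 0);
    printf("read %zd bytes\n", n);

    free(buf);
    close(fd);
    return 0;
}
```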
Introduction
I love SSDs! They have transformed the data center by providing high-performance, low latency access to storage. Low latency is transforming the data center stack. I will be digging into latency in my next few blog posts, starting with Driver latency here.
Last week I had the opportunity to attend and present at SNIA’s Storage Developer Conference (SDC). This great technical conference is organized by storage developers for storage developers. SNIA asked me to provide a commendation for the conference, and my reply was:
Flash Memory Summit just wrapped up for 2015, and it was an #awesome one for PMC. The PMC team pulled out all the stops to cement our position as the enabler for performance storage solutions. Here is my list of top five #awesome moments for PMC at FMS 2015. Read more »
“May you live in interesting times.” If you haven’t noticed, IT is already there, with cloud, mobile, big data, in-memory, NoSQL, Remote Direct Memory Access (RDMA), Shingled Magnetic Recording (SMR) drives, Non-Volatile Memory (NVM), and the list goes on. Our landscape is undergoing massive disruption as new technologies and techniques enable customers to do more with less. This disruption is a threat to existing businesses, and an opportunity for the fast, the focused, and the bold.
In mid-May I headed to Beijing to attend Memblaze’s launch of their new Solid State Drives (SSDs) based on the PMC Flashtec™ controller. I was very happy to make the trip because we have worked closely with Memblaze over the past 18 months as they transitioned their SSDs from the FPGA-based PBlaze3 to the PMC Flashtec-based PBlaze4. There has been a lot of interest within China around these Memblaze SSDs and NVM Express™ (NVMe™), and I wanted to go there and see for myself.
March 17-19, 2015 marked the first-ever OpenPOWER Summit, which was held at the San Jose Convention Center. This was an opportunity for the 110+ members of OpenPOWER to get together and showcase the progress made to date in establishing an open CPU/server framework around the Power8 processor and sub-systems. At the same time, this was a chance for non-OpenPOWER companies to learn more about what members are trying to achieve and determine how best to work with this ecosystem going forward.
Recall from my last blog that I have a mantra – Let’s stop thinking about NVM as fast storage and start thinking about it as (slow) memory! Well it seems I am not alone in that thinking, as this premise was very well represented at the NVM Workshop organized by two of the research groups at UCSD.
PMC had the pleasure of being a Platinum Sponsor at this year’s event, and I always consider it to be the technical counterpoint to Flash Memory Summit. The event is a lot smaller than FMS but more technically oriented, with a nice mix of industrial and academic speakers. I have been attending for three years now and always find it a great place to catch up on people’s research and meet graduate students (who might be persuaded to come work for PMC).
We may not think about them often, but technical standards impact most aspects of our daily lives. Gasoline refined in California will work in a car engine built in Japan. Your phone charger plugs into an outlet in your office just as easily as into an outlet in a hotel room. And your bank card works in ATMs in Texas, Ohio, Florida, or wherever.
The storage industry has its own set of technical standards, and PMC is actively involved in the initiatives and communities that lead to the development of standardized feature-sets. Additionally, we regularly participate in test events to prove out our latest silicon, boards, and firmware to ensure proper protocol operation in open environments.
Our leadership and input have led to cutting-edge PMC innovations, better interoperability with other vendors, and happy customers.
On January 20, 2015, I had the pleasure of attending the SNIA NVM Summit in San Jose, California. This was a great event, and kudos to SNIA and Intel for putting it together. The event was very well attended by a who’s who of the NVM world. There were a lot of great panels and presentations covering three main themes:
1. NVDIMMs. NVDIMMs have been around for some time but have had issues with motherboard compatibility and OS support. It is clear that, while issues still remain, support is improving. I will be doing a more in-depth blog post on NVDIMMs soon, where I will delve into them in more detail and compare them to current alternatives, such as our Flashtec NVRAM card.
In my last blog post I introduced a project called Donard that implements peer-2-peer (p2p) data transfers between NVM Express devices and PCIe GPU cards. We showed how the p2p transfers could improve performance and offload the CPU, which can save power when compared to classical non-p2p transfers.
In this article we extend Donard to add RDMA-capable Network Interface Cards (NICs) into the mix of PCIe devices that can talk in a p2p fashion. This is very important as it brings the third part of what I call the PCIe trifecta to the Donard program: the three elements of the trifecta are storage, compute, and networking.
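To make the contrast concrete, here is a minimal sketch of the classical, non-p2p path that Donard is designed to avoid: data is read from the NVMe drive into a host bounce buffer and then copied across PCIe to the GPU with the CUDA runtime. The device path and transfer size are placeholders, and the p2p path itself (which depends on Donard-specific kernel support) is deliberately not shown.

```c
/* Minimal sketch of the classical (non-p2p) transfer path that Donard
 * replaces: NVMe -> host bounce buffer -> GPU. Device path and transfer
 * size are placeholders for illustration only. */
#define _GNU_SOURCE
#include <cuda_runtime.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define XFER_SIZE (4 * 1024 * 1024)   /* 4 MiB, arbitrary */

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *host_buf;
    if (posix_memalign(&host_buf, 4096, XFER_SIZE) != 0) return 1;

    void *gpu_buf;
    if (cudaMalloc(&gpu_buf, XFER_SIZE) != cudaSuccess) return 1;

    /* Step 1: NVMe -> host memory (consumes CPU cycles and DRAM bandwidth). */
    if (pread(fd, host_buf, XFER_SIZE, 0) != XFER_SIZE) { perror("pread"); return 1; }

    /* Step 2: host memory -> GPU over PCIe (a second pass over the data).
     * A p2p transfer would move the data NVMe -> GPU directly and skip
     * the bounce buffer entirely. */
    if (cudaMemcpy(gpu_buf, host_buf, XFER_SIZE, cudaMemcpyHostToDevice) != cudaSuccess)
        return 1;

    cudaFree(gpu_buf);
    free(host_buf);
    close(fd);
    return 0;
}
```

Every byte in this sketch crosses the PCIe fabric twice and touches host DRAM once; that double movement is exactly the overhead the p2p transfers remove.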
In my last post, I talked about the increasing use of enterprise Solid-State Drives (SSDs) and the many different requirements they must be tuned for based on data center application needs. The dilemma for SSD makers is how to meet these disparate needs while still offering affordable solutions to end users. Cost-effectively supporting requirements that span cold storage to high-performance SSDs for database applications calls for a well-planned, flexible silicon architecture that allows for software-defined solutions. These solutions need to support software optimizations based around, to name a few, the items below (a hypothetical configuration sketch follows the list):
Different densities and levels of NAND over-provisioning
Different types of NAND (SLC/MLC/TLC) at different nodes
Different power envelopes
Different amounts of DRAM
Support for both Toggle and ONFI, in order to maintain flexibility of NAND use
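To illustrate what a software-defined approach can look like, here is a purely hypothetical sketch of how those knobs might be gathered into one build-time configuration structure. The names and numbers are invented for illustration and do not correspond to any Flashtec interface.

```c
/* Hypothetical illustration only: a single configuration structure
 * capturing the knobs listed above. These names do not correspond to
 * any actual Flashtec interface. */
#include <stdint.h>

enum nand_cell_type { NAND_SLC, NAND_MLC, NAND_TLC };
enum nand_interface { NAND_TOGGLE, NAND_ONFI };

struct ssd_build_config {
    uint32_t raw_capacity_gb;        /* total NAND on the board            */
    uint32_t exported_capacity_gb;   /* capacity visible to the host       */
    uint8_t  overprovision_pct;      /* derived from raw vs. exported      */
    enum nand_cell_type cell_type;   /* SLC / MLC / TLC                    */
    uint16_t nand_node_nm;           /* process node of the NAND           */
    enum nand_interface nand_if;     /* Toggle or ONFI                     */
    uint32_t dram_mb;                /* DRAM fitted for metadata / caching */
    uint32_t power_budget_mw;        /* board-level power envelope         */
};

/* Example: a cold-storage build and a database-oriented build could differ
 * only in this structure while sharing the same silicon and firmware. */
static const struct ssd_build_config cold_storage_build = {
    .raw_capacity_gb = 4096, .exported_capacity_gb = 3840,
    .overprovision_pct = 7,  .cell_type = NAND_TLC,
    .nand_node_nm = 15,      .nand_if = NAND_TOGGLE,
    .dram_mb = 1024,         .power_budget_mw = 9000,
};
```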
With the rise of big data applications like in-memory analytics and database processing, where performance is a key consideration, enterprise Solid-State Drive (SSD) use is growing rapidly. IDC forecasts the enterprise SSD segment to be a $5.6 billion market by 2015¹. In many cases, SSDs are used as the highest level of a multi-tier storage system, but there is also a trend towards all-SSD storage arrays as price-performance metrics, including dollar per IOP ($/IOP) and dollar per workload ($/workload), make them an attractive option.
Last month I attended Flash Memory Summit (FMS) 2014 in Santa Clara, California. FMS is probably the biggest conference and exposition of NVM technology. It combines technical tracks with a huge exposition and is a great place to catch up and hobnob with like-minded experts in NVM.
PMC was very well represented at FMS. We presented eight technical papers, gave a keynote speech and launched our Flashtec NVRAM product. I gave a talk entitled “Accelerating Data Centers Using NVMe and CUDA” which is based on a PMC CTO project codenamed Donard. In this blog post I want to dig a little deeper into the paper I presented and some of the implications of this for acceleration in data center (DC) environments.
NVM Express (NVMe) is the scalable host controller interface designed for PCI Express®(PCIe®)-based solid state drives and defines the host driver interface. PMC has contributed to the NVMe specification since its inception and continues to work with industry leaders to create a robust NVMe driver ecosystem.
PMC helped drive the initial development of the first NVMe Open Source Windows driver with key partners in 2011. The first major release of this driver was completed in Q2 2012. PMC continues to chair this working group, which has since accomplished four major releases of the Windows driver. The next release, version 1.4, is scheduled for Q4 2014 with the major focus on stability and ensuring certification with the Windows Hardware Certification Kit (HCK), which will enable this driver to be digitally signed by WHQL. The release package may be downloaded from https://www.openfabrics.org/index.php/developer-tools/nvme-windows-development.html Read more »
In my last post, I talked about how we can control the parameters of Low-Density Parity-Check (LDPC) error correction codes in order to manage the latency associated with reads from a Solid-State Drive (SSD). However, we only looked at the iterations associated with a single decode of the LDPC codeword. In this post, we will take a look at what happens when that initial decode fails and how soft-information can be used to recover the data on the SSD.
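As a rough illustration of that flow, the sketch below shows the general shape of a hard-then-soft read path: try a single read with a hard-decision decode first, and only on failure pay for extra re-reads at shifted thresholds to build soft information for a soft-decision decode. The helper functions are hypothetical stand-ins, not a controller API, and real controllers interleave these steps with many other optimizations.

```c
/* Illustrative control flow only: hypothetical helper names, not a real
 * controller API. A hard-decision decode is tried first; on failure the
 * page is re-read at shifted thresholds to build soft information (LLRs)
 * for a soft-decision LDPC decode. Each extra re-read adds NAND read time
 * plus transfer time, which is where the latency penalty comes from. */
#include <stdbool.h>
#include <stdint.h>

bool read_page_hard(uint64_t page, uint8_t *bits);           /* one NAND read  */
bool ldpc_decode_hard(const uint8_t *bits, uint8_t *out);     /* fast path      */
void read_page_soft(uint64_t page, int retry, int8_t *llr);   /* extra re-read  */
bool ldpc_decode_soft(const int8_t *llr, uint8_t *out);       /* slower path    */

bool read_with_recovery(uint64_t page, uint8_t *out, int max_retries)
{
    uint8_t bits[4096];
    int8_t  llr[4096 * 8];

    /* Fast path: single read plus hard-decision decode. */
    if (read_page_hard(page, bits) && ldpc_decode_hard(bits, out))
        return true;

    /* Slow path: accumulate soft information with additional reads at
     * shifted voltage thresholds, then attempt a soft-decision decode. */
    for (int retry = 0; retry < max_retries; retry++) {
        read_page_soft(page, retry, llr);
        if (ldpc_decode_soft(llr, out))
            return true;
    }
    return false;   /* fall back to RAID-style recovery across dies, etc. */
}
```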
In my last post I talked about the transition to Low-Density Parity-Check (LDPC) Error Correction Codes (ECCs) in enterprise SSD controllers. I hinted that this transition has some interesting implications for the latency of next-generation SSD controllers and I wanted to expand on that topic in this post.
The latency associated with LDPC ECC in SSDs comes from three main sources (a rough latency model follows the list):
1. The LDPC encoding process.
2. The LDPC decoding associated with the first read of the data on the NAND flash.
3. The LDPC decoding associated with subsequent reads of the data on the NAND flash. Read more »
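To see why the third source matters so much, here is a back-of-envelope model (my own illustration, with placeholder numbers rather than measured figures): average read latency is the fast-path cost plus the slow-path penalty weighted by the probability that the first hard-decision decode fails.

```c
/* Back-of-envelope model of average LDPC read latency. The numbers are
 * placeholders for illustration, not measured Flashtec figures. */
#include <stdio.h>

int main(void)
{
    double t_fast_us  = 90.0;    /* one NAND read + hard-decision decode      */
    double t_retry_us = 400.0;   /* extra re-reads + soft-decision decode     */
    double p_fail     = 0.001;   /* chance the first hard decode fails; grows
                                    with NAND wear and retention              */

    double t_avg = t_fast_us + p_fail * t_retry_us;

    /* Tail latency is dominated by the slow path even when p_fail is tiny,
     * which is why controlling decode iterations and retries matters. */
    printf("average read latency:     %.1f us\n", t_avg);
    printf("slow-path (tail) latency: %.1f us\n", t_fast_us + t_retry_us);
    return 0;
}
```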
I’m sure many of you reading this blog are aware there is a transition occurring in terms of the type of Error Correction Codes (ECCs) being used inside SSD controller chips. Traditionally, Bose-Chaudhuri-Hocquenghem (BCH) codes were used, and they were more than adequate for large-geometry NAND flash. However, the demand for cheaper and denser NAND flash means that BCH is no longer adequate and, in the search for alternatives, most of us are settling on Low-Density Parity-Check (LDPC) codes.
In this post, I want to talk a little about what this transition means and some implications it has for something we at PMC term Software Defined Flash. For more background on what an LDPC code is, check out Kent Smith’s great post.