Data centers have the formidable task of improving operating efficiency and maximizing their IT investments in hardware infrastructure in the face of evolving and varied application requirements. Last year alone, data centers worldwide spent well over $140 billion on server and storage infrastructure, yet still do not operate at peak efficiency. Read more »
Flexible Ethernet, or FlexE, originated to solve challenges and inefficiencies in today’s packet and transport networks by providing flexibility and higher capacity, while also enabling the optimal usage of new technologies such as highly flexible coherent optical transmission. Read more »
According to IDC’s recently released predictions, by 2022, 50% of servers will encrypt data at rest and in motion. Data security has become one of the highest priorities for data centers and cloud computing environments as they seek to safeguard customer information, classified company documentation and communications, Read more »
From enterprise servers using internal data storage to data centers using external data storage, the Microsemi Adaptec HBA 1100 family provides a robust, stable, and scalable solution that can handle the toughest system workloads and configurations. Read more »
Hardly a week goes by without a new headline disclosing the exposure of sensitive personal or consumer financial data by the institutions that are trusted to protect it. It’s a global narrative fueled by a talent pool of hackers and other data breach opportunists whose skills Read more »
The Microsemi Adaptec SmartHBA 2100-24i host bus adapter offers maximum connectivity and performance for data centers and servers demanding high bandwidth and I/O connectivity, low power consumption, and integrated RAID support. Read more »
Today marks the release of the Microchip Microsemi Adaptec SmartRAID 3162-8i RAID adapters. We are purposefully calling these “Microchip Microsemi Adaptec” because the name shows the lineage of the products. Read more »
Microsemi, recently acquired by Microchip Technology Inc., is announcing two new additions to its line of Adaptec Smart Storage 12Gbps SAS/SATA adapters. The SmartRAID 3162-8i now includes Read more »
Data security has become one of the highest priorities for data centers and cloud computing environments as they seek to safeguard customer information, classified company documentation and communications, financial records, Read more »
New Family Addresses High Bandwidth Storage Needs to Unlock Full Performance Capabilities of PCIe Gen 4-Capable Systems
Microsemi has announced its new SXP 24G family of devices, the industry’s first 24G SAS (SAS-4) expanders for server and networked storage. Read more »
At Flash Memory Summit (FMS), be sure to attend the keynote presentation by Microsemi’s VP of Marketing/Applications Engineering, Andrew Dieckmann: “Empowering the Gen-4 Storage Transition.” The keynote is Wednesday, August 8th, 2:40 – 3:10 pm. Read more »
Microsemi has announced its new SmartROC 3200 and SmartIOC 2200 16 nanometer (nm) storage controllers, as well as associated firmware and software development tools. Read more »
The next generation of Serial Attached SCSI, 24G SAS (SAS-4), is on track for a commercial launch in 2018. This revision of the standard includes Read more »
The newly announced Flashtec™ NVMe 3016 Gen 4 PCIe controller is now sampling to early adopter customers. As the industry’s first enterprise controller of its kind, the NVMe 3016 addresses market demand Read more »
Switchtec™ Gen 4 PCIe switches enable customers to build next-generation high-performance, low-latency interconnect solutions in high-growth markets including machine learning, data center servers and storage equipment. Read more »
Enabling the onboard cache on a RAID adapter card significantly enhances performance – especially in RAID 5 and RAID 6 scenarios – by supporting both read caching and write-back caching of data. Read more »
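To see why write-back caching matters so much for parity RAID, consider a back-of-the-envelope illustration (my own numbers, not from the post): an uncached small write to RAID 5 pays the classic read-modify-write penalty, while a cache that coalesces a full stripe amortizes the parity update away.

```latex
% Uncached partial-stripe write to RAID 5: read old data, read old
% parity, write new data, write new parity.
\text{I/Os per small write (no cache)} = 2\ \text{reads} + 2\ \text{writes} = 4

% Write-back cache coalescing a full stripe of n data strips:
% parity is computed in controller memory, so only writes hit disk.
\text{I/Os per strip (full-stripe write)} = \frac{n+1}{n} \approx 1
```

That factor of roughly four on small writes is the gap the onboard cache is closing.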
In part 1 of this series, published yesterday, I talked about how safeguarding network infrastructure and storage systems is more critical than ever. In this article, I’m going to talk about solutions.
Update – 26 Nov 2015 – Well, things can move very fast in the Linux world when they want to! Since I wrote this article, an improved, but still pre-production, version of the polling code for the block layer and NVMe driver has made it into the Linux kernel and will go mainline in 4.4. There is a really nice overview of how it works here, and Jens’ patch-set comments and some of his testing results can be found here. It is worth stressing that the results we present should only improve as the polling mode evolves. Stay tuned for updated performance results in due course!
Introduction
I love SSDs! They have transformed the data center by providing high-performance, low-latency access to storage. Low latency is transforming the data center stack. I will be digging into latency in my next few blog posts, starting with driver latency here.
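To make the discussion concrete, here is a minimal latency probe for a single synchronous direct read. This is a sketch only; the device path and block size are my own assumptions. With the block-layer polling described in the update above enabled (echo 1 > /sys/block/nvme0n1/queue/io_poll on a 4.4-era kernel), this same syscall path busy-polls for completion instead of sleeping on an interrupt, which is exactly where the latency savings appear.

```c
/* nvme_lat.c: time one 4 KiB synchronous direct read (illustrative
 * sketch; /dev/nvme0n1 is an assumed device path).
 * Build: gcc -O2 -o nvme_lat nvme_lat.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const size_t blk = 4096;          /* one 4 KiB logical block */
    void *buf;
    struct timespec t0, t1;

    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires an aligned buffer */
    if (posix_memalign(&buf, blk, blk)) { close(fd); return 1; }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (pread(fd, buf, blk, 0) != (ssize_t)blk) { perror("pread"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("read latency: %ld ns\n",
           (long)(t1.tv_sec - t0.tv_sec) * 1000000000L +
           (t1.tv_nsec - t0.tv_nsec));

    free(buf);
    close(fd);
    return 0;
}
```

Run it twice, once with io_poll set to 0 and once set to 1, and the delta is the interrupt overhead the polling work removes.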
Last week I had the opportunity to attend and present at SNIA’s Storage Developer Conference (SDC). This great technical conference is organized by storage developers for storage developers. SNIA asked me for a commendation for this conference and my reply was:
Flash Memory Summit just wrapped up for 2015 and it was an #awesome one for PMC. The PMC team pulled out all the stops to cement our position as the enabler for performance storage solutions. Here is my list of top five #awesome moments for PMC at FMS 2015. Read more »
Drive vendors continue to innovate, and one interesting new concept is the helium hard disk drive, announced by HGST, a Western Digital Company. The important thing to note about helium drive technology is that the drives are filled with helium, allowing denser and more efficient drive designs. Helium is much lighter than air, so it imposes less drag on spinning disks, allowing disk platters to be stacked closer together. For enterprise applications, this provides more capacity without adding a more complex implementation via Shingled Magnetic Recording (SMR): the extra capacity drops into an enterprise ecosystem without any extra work or driver support. Additionally, this advancement will lead to even more capacity with the upcoming integration of SMR.
In previous blog posts I have discussed Project Donard, which implements PCIe peer-to-peer transfers between NVM Express (NVMe) SSDs and GPUs, as well as NVMe SSDs and Remote Direct Memory Access (RDMA) NICs. I am super-excited to announce that at Flash Memory Summit 2015 (FMS) we have been working with Mellanox, a pioneer of RDMA, to take this work to the next level! This blog post will dig a little deeper into what we are demoing at FMS, August 11-13, and how NVM Express + RDMA = AWESOME!
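As a rough way to see why peer-to-peer transfers pay off, count the traffic for moving B bytes from an NVMe SSD to a GPU. This is a back-of-the-envelope illustration of the general idea, not measured numbers from the demo:

```latex
% Conventional bounce-buffer path: the SSD DMAs into host DRAM, then
% the GPU (or RDMA NIC) DMAs the same bytes back out.
\text{bounce path: } \underbrace{B}_{\text{SSD}\to\text{DRAM}} + \underbrace{B}_{\text{DRAM}\to\text{GPU}} = 2B \text{ through the root complex}

% Peer-to-peer path: the SSD DMAs straight into the GPU's PCIe BAR.
\text{p2p path: } B \text{ on the PCIe fabric, zero host DRAM bandwidth consumed}
```

Halving the PCIe traffic and freeing host memory bandwidth entirely is what makes the combination so compelling at scale.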
A key tenet of cloud computing is that the infrastructure is easy to deploy and manage. Automating the management lowers costs and complexity when new infrastructure is deployed or changes need to take place. Historically, the industry has deployed armies of administrators across the deployment lifecycle to make that happen, but that’s no longer practical or even necessary.
Many of the customers I talk to have a fixation on port numbers. I find it a bit unusual because we tend not to have the same sort of fixation on anything else we do in life. The benefits of more ports seem obvious—my fellow blogger Dave Berry writes about how performance and capacity help data centers do more with less—but when it comes to RAID controllers or HBAs, we definitely have a fixation.
For example: I have 2 x SSD – hmm, you don’t have a 2-port controller so I’ll look at 4-port instead. I’m not interested in looking at 16-port controllers because I’m fixated on the number of drives I currently have. That’s a shame because in fact it’s the 16-port controller that you need, whether you currently realize it or not.
Our Adaptec team in Germany has put together a vendor lab where vendors, such as hard drive and SSD manufacturers, can bring their gear and test against our products. While we have validation testing going on all the time in other PMC centers, having the ability for a vendor to sit and play with the combination of our gear and theirs is getting people pretty excited.
March 17-19, 2015 marked the first ever OpenPOWER Summit, which was held at the San Jose Convention Center. This was an opportunity for the 110+ members of OpenPOWER to get together and showcase the progress to date in establishing an open CPU/server framework around the Power8 processor and sub-systems. At the same time, this was a chance for non-OpenPOWER companies to learn more about what members are trying to achieve and determine how best to work with this ecosystem going forward.
We may not think about them often, but technical standards impact most aspects of our daily lives. Gasoline refined in California will work in a car engine built in Japan. Your phone charger plugs into an outlet in your office just as easily as into an outlet in a hotel room. And your bank card works in ATMs in Texas, Ohio, Florida, or wherever.
The storage industry has its own set of technical standards, and PMC is actively involved in the initiatives and communities that lead to the development of standardized feature-sets. Additionally, we regularly participate in test events to prove out our latest silicon, boards, and firmware to ensure proper protocol operation in open environments.
Our leadership and input have led to cutting-edge PMC innovations, better interoperability with other vendors, and happy customers.
Guest Author: Per Brashers, Founder, Yttibrium LLC, a consultancy focused on efficient storage and computing
Cold Storage Demystified
Most of the Internet moguls have been asking for a longer service life. This may be an MTBF going from 3,000,000 to 4,000,000 hours, or a warranty of 7 years. Either way, there are some tricky bits to deal with when talking about an archive-class hard drive. The point of this article is to understand the top factors that are important to a ColdStorage system.
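For a sense of what those MTBF figures mean in practice, here is the standard conversion to an annualized failure rate (my arithmetic, assuming drives powered on year-round):

```latex
\mathrm{AFR} \approx \frac{8760\ \text{hours/year}}{\mathrm{MTBF}}
% 3,000,000-hour MTBF:
\mathrm{AFR} \approx \frac{8760}{3{,}000{,}000} \approx 0.29\%
% 4,000,000-hour MTBF:
\mathrm{AFR} \approx \frac{8760}{4{,}000{,}000} \approx 0.22\%
```

At archive scale, that difference of less than a tenth of a point per year still translates into meaningfully fewer drive replacements across a fleet of hundreds of thousands of spindles.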
If you just want to jump to the conclusions, there is a section at the end for that, as well as the asks for both providers and consumers.
While still at Facebook, I coined the term ColdStorage. I also regret coining the term ColdFlash, but more on that later…
The traction being gained by the open source movement is evident here. Just as individuals are embracing open source as a chance to participate in a revolution, so are leading companies like Google, Rackspace and even tech companies from China. The promise of cross-company innovation and the ability to influence the direction that the movement will take are opportunities that are too valuable to pass up.
iSCSI is a great technology. It gives you the ability to create SANs very cheaply and easily, without having to become a guru in Fibre Channel or put yourself into deep debt buying all the Fibre equipment. By using easily available networking equipment, you can add storage to existing boxes, even if you want to go crazy and do shared access, clustering or other high-end features.
A lot of vendors provide basically free iSCSI targets (there’s even one in Windows Server these days), and almost every OS has a free software initiator to connect to those targets. Yes, we can bang on about whether software or hardware initiators are better, but software initiators are free and work so well that most iSCSI hardware initiator vendors have stopped bothering.
PMC recently joined Canonical’s Ubuntu OpenStack Interoperability Lab (OIL), an integration lab where Canonical tests and validates software with multiple versions of Ubuntu OpenStack on different hardware configurations.
When you’re in the business of high-density, high-performance I/O connectivity like PMC is, OIL is an important place to be. Canonical’s Ubuntu operating system and Ubuntu OpenStack are the most popular operating platforms for cloud and scale-out computing. Ubuntu is the hyperscale OS natively powering scale-out workloads on a new wave of low-cost, ultra-dense server hardware based on x86, ARM, and OpenPower processors.
I would probably need a data center to help me keep track of how many data centers I’ve visited across the globe. But no matter where they’re located, or what market they serve, they all have a common mission: to do more with less at the highest performance.
Business and consumer demand for fast, reliable and secure access to data and content is skyrocketing, forcing data centers to add more and more storage capacity and maintain the high performance that their customers expect.
Big Block Bypass Mode Improves QoS for Scale-out Storage
After commuting in Silicon Valley for a few decades now, I can really appreciate all of the techniques that the IT industry has invented to improve Quality of Service (QoS) for data center infrastructure. After all, in the data center, like on our roadways, ever-increasing scale and density are the mother of all invention!
As an early hybrid Toyota Prius adopter, I was able to use the carpool lane, laughing at everyone else while flying by and getting to work three times faster thanks to my privileged driving and the improved overall QoS I was experiencing. Rules change with technology advancements; now I watch the new electric cars cruise by in my former spot, and it’s painfully clear that big trucks cause the bottlenecks, so maybe the solution is to let them have the carpool lane!
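In I/O terms, the “give the big trucks their own lane” idea is what big block bypass does: transfers above a size threshold skip the controller cache and stream straight to the drives, leaving the cache free for the small random I/O that actually benefits from it. Here is a simplified sketch of such a routing policy (my own illustration with an assumed threshold, not the actual firmware logic):

```c
/* Simplified big-block bypass routing (illustrative only; the real
 * controller policy and threshold are not disclosed in this post). */
#include <stdbool.h>
#include <stddef.h>

#define BYPASS_THRESHOLD_BYTES (256 * 1024)  /* assumed cutoff: 256 KiB */

enum io_path { IO_PATH_CACHED, IO_PATH_DIRECT };

/* Route each request: large or sequential transfers stream efficiently
 * on their own, and caching them would only evict hot small blocks. */
static enum io_path route_io(size_t io_bytes, bool is_sequential)
{
    if (io_bytes >= BYPASS_THRESHOLD_BYTES || is_sequential)
        return IO_PATH_DIRECT;   /* bypass: straight to the drives    */
    return IO_PATH_CACHED;       /* small random I/O absorbs in cache */
}
```

The payoff is QoS: small-block latency stays predictable even while big sequential scans hammer the same volume.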
In 2011, Andy Bechtolsheim highlighted the need to think about TCO with a famous economics quote, “Over the long term, absent of other barriers, economics always win.” Andy, of course, is a Silicon Valley legend, and talking economics provided insight into his total approach to innovation. The analogy clearly applies in the data center for storage as the explosion of Big Data arrives.
Data centers deploying storage at scale for workloads like Hadoop Big Data Analytics or OpenStack Infrastructure as a Service (IaaS) need extreme capacity and performance/$ that a common 8-port HBA or adapter just can’t deliver. To get there, you need to redefine how a rack can be architected.