Demartek Comments on Storage Networking World Spring 2011
13 April 2011
By Dennis Martin, Demartek President
The Spring 2011 Storage Networking World (SNW) conference held last week in Santa Clara, California, was a great conference for us. Yes, I did say us: Demartek’s fourth employee, Jeff Giedt, attended SNW with me. He sat in on some of the meetings, attended several sessions, and had a chance to meet many people.
Deep Dive on SSDs Session
I had the great pleasure of giving three sessions at this SNW conference (slides from these sessions are posted here). The most well-attended session was my “Deep Dive on SSDs”, which was held bright and early in the first time slot on Monday morning. At least half of the attendees in this session indicated that this was their first SNW conference. We focused on NAND flash, discussing its physics, SLC and MLC, and considerations such as power and cooling. We discussed various form factors for SSD technology, including the disk drive form factor, PCIe cards, and external accelerators. We then went into the two basic forms of data placement on SSD technology, caching and primary storage, and I discussed the advantages of each. For primary storage, I included the need for automated storage tiering. I then showed some performance statistics from tests that we ran in our own lab.
There was strong interest in SSDs and many great questions from the audience, primarily administrators or managers of administrators. Most people understand the performance benefits of SSDs, but several expressed concerns about the life expectancy of NAND flash. My response is that as an end-user, you can look for two things from enterprise SSD vendors: the warranty period and some sort of terabytes (TB) written per day metric. Enterprise SSDs should have the same warranties as enterprise spinning disk drives. Be careful of SSD vendors who place a lot of caveats and conditions on either of these two items. Predicting NAND flash life expectancy is of great concern, and I expect to write a report on this topic soon.
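The TB-written-per-day metric translates directly into a lifetime estimate, which is one reason it is worth asking vendors for. A minimal sketch of the arithmetic; all figures below are illustrative assumptions, not any vendor's published specifications:

```python
# Rough SSD lifetime estimate from two numbers: total rated endurance
# (terabytes written, often called "TBW") and your measured write rate.
# All figures below are hypothetical, for illustration only.

def lifetime_years(endurance_tbw: float, tb_written_per_day: float) -> float:
    """Estimated drive lifetime in years at a constant daily write rate."""
    return endurance_tbw / (tb_written_per_day * 365)

# Example: a hypothetical enterprise SSD rated for 7,300 TB written,
# in a workload that writes 4 TB per day.
years = lifetime_years(7300, 4.0)
print(f"Estimated lifetime: {years:.1f} years")  # 7300 / (4 * 365) = 5.0 years
```

The same arithmetic run in reverse tells you what write rate a vendor's warranty period implicitly assumes, which is a useful cross-check against the caveats mentioned above.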
I invite you to check our SSD zone at /SSD.html to see some of our public test results, presentations and articles relating to SSD technology.
Unified Storage Networking and FCoE Session
My second session was on Unified Storage Networking, also held on Monday morning. We discussed Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE). Although there were a fair number of questions from the audience, the interest level wasn’t as high as it was in the SSD session. Many in the audience seemed to be trying to understand the technology rather than eager to embrace it, and for good reason. As I’ve said in the past, unified storage networking, 10GbE and FCoE are a “slow burn” and should be considered for long-term data center planning, not something everybody will implement immediately.
As part of this presentation, I addressed the organizational and political issues within IT shops regarding unified storage networking, and for some, this is a much larger issue than the technology. Several in the audience expressed concern about storage administrators and network administrators working together, given their significantly different operating styles and agendas. In my mind, this is the biggest issue facing unified storage networking and FCoE. My recommendation to administrators in each camp: learn at least a little about the other area; those who do will have the best job prospects over the long term.
We have several public FCoE test results from our lab in our FCoE zone at www.demartek.com/FCoE showing various configurations, performance, etc.
I/O Virtualization — The Next Virtualization Frontier Session
My third session, held on Tuesday morning, was about I/O virtualization. This includes technologies such as Single-Root IOV (SR-IOV) and Multi-Root IOV (MR-IOV). There seemed to be a fair amount of interest in this topic, and it appeared to me that many in the audience understood the general principle. I mentioned that many of the adapter vendors were already supporting some forms of SR-IOV. One person asked if I could mention which adapters supported SR-IOV. I responded by asking the audience to shout out the names of their favorite NIC, FC HBA and RAID controller vendors, and I would indicate which of them have something in this space. Every name mentioned from the audience either has at least one adapter now that supports SR-IOV or has announced plans to support it. I also mentioned that we are working with some of the start-up companies building the PCIe bus extension units that can take advantage of I/O virtualization and allow sharing of adapters. Now if we can just get the hypervisor vendors to get on board with this…
Demartek Storage Interface Comparison
I mentioned a resource on our web site that seemed to generate a fair amount of interest among audience members across all the sessions I gave: the Demartek Storage Interface Comparison. We recently added connector types such as SFP, SFP+, mini-SFP, QSFP, etc. to this page. It is a reference page designed to put a lot of useful information about storage networking technologies all on one page.
Audience Storage Configurations
I asked the audience members of all my sessions a few questions about their environments and received a wide variety of answers. Several said that their block storage volume sizes are 2TB each. I heard “millions” and one person said “billions” when asked about the number of files per volume that they store on their file servers. When I asked how many virtual machines (guests) they typically run per physical server, I heard many in the eight-to-fifteen range, some said in the twenties, and one said that they run fifty guests per physical server. When I asked how many IOPS per virtual machine they used, very few had measured this or had an answer.
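The IOPS-per-VM question matters because per-guest demand multiplies quickly at the consolidation ratios the audience reported. A back-of-the-envelope sketch, using an assumed per-VM figure (the 50 IOPS/VM number is hypothetical; measuring your own is the point):

```python
# Aggregate IOPS demand on one physical server's storage, given a guest
# count and an average IOPS per virtual machine. The per-VM figure is an
# assumption for illustration, not a measured value.

def server_iops(guests: int, iops_per_vm: float) -> float:
    """Total storage IOPS demand from all guests on one physical server."""
    return guests * iops_per_vm

# Consolidation ratios the audience reported, at an assumed 50 IOPS/VM:
for guests in (10, 25, 50):
    print(f"{guests} guests x 50 IOPS/VM = {server_iops(guests, 50):,.0f} IOPS")
```

Even at modest per-VM rates, the fifty-guest server ends up demanding thousands of IOPS from its storage, which is exactly the kind of workload where the SSD discussion above becomes relevant.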
Vendor Products at SNW Spring 2011
In addition to the sessions I gave, I spent a fair amount of time with several of the companies who attended SNW. The two biggest topics at SNW Spring 2011 were SSDs and Cloud Computing. Here’s a rundown of some of the technologies that I saw at this event. I also met with some start-up companies who have some very interesting technology but aren’t quite ready to unveil things just yet, so stay tuned.
Among the SSD vendors, I spent time with Avere Systems, Dataram, Fusion-IO, Marvell, Micron, Nimbus Data, Pliant Technologies, Texas Memory Systems, Viking Modular, Virident, and others, discussing and examining their SSD solutions. We have already had some of these companies’ products in our lab and expect to test others.
Avere Systems announced two new models of their FXT series scale-out NAS accelerator appliances that have faster processors and increased DRAM capacity. Just last month they announced global namespace functionality across their entire FXT product family. Avere scale-out NAS appliances provide automated dynamic storage tiering for NAS workloads by providing DRAM, SSD, high-speed and lower-speed disk drive tiers within an appliance that sits in front of other NAS storage solutions.
Dataram, who built the first commercial SSD in 1976, has updated their very nice XcelaSAN product by increasing the cache to 256GB, which caches both reads and writes. This product deploys as a separate accelerator appliance in the Fibre Channel SAN that accelerates all I/O to the SAN storage. The XcelaSAN provides high availability by mirroring to a peer unit for data protection. Dataram also recently added SNMP MIB support and advanced performance reporting statistics. Deployment is fairly easy and the benefits are immediate.
Fusion-IO had their big video streaming demonstration in the expo hall once again, but this time they streamed 4500 videos at the same time. They used their ioDrive Octal that has 5.12TB of flash storage mounted on a single PCIe card in one server, connected via InfiniBand to twenty servers in order to keep the servers busy and to render all 4500 separate video streams. Separately, Steve Wozniak, Chief Scientist at Fusion-IO and Apple Co-Founder, gave a great keynote address, discussing innovators within companies and his interest in teaching children, especially 5th graders. He spent several minutes answering questions from the audience on a variety of topics.
Marvell showed their DragonFly Virtual Storage Accelerator (VSA) PCIe adapter. It fits in a server and performs read and write caching by using the onboard NVRAM and a backend connection to your choice of SSD via SAS cables connected to the card. It has its own HyperScale caching technology that sequentializes a high volume of random writes on NVRAM and manages asynchronous write-backs from the SSD level 2 buffer to the back-end storage. It also performs a number of advanced functions such as synchronous HA mirroring and works with multiple storage protocols, including all DAS, NAS and SAN technologies. General availability is expected by late 2011 or early 2012.
Micron was showing their P300 6Gb/s SATA interface SSDs and a new PCIe solid-state storage solution known as the P320h. Their PCIe card has not yet been released, but they had one in their display case. The sticker showed 512GB of total NAND flash; the P320h looks to be an interesting product that will compete with the other PCIe SSDs in the market.
Nimbus Data was showing their S-class storage system with up to 250TB of enterprise flash storage inside. This is a storage system that has no spinning hard drives but is an all-flash solution. This storage system supports iSCSI, NFS and CIFS. Its Halo operating system includes advanced features such as de-duplication, replication, snapshots, thin provisioning and high availability.
Pliant Technologies makes some very nice 3.5-inch and 2.5-inch 3Gb/s SAS SSDs, and we’ve been using them in our lab for a while. Pliant offers a 5-year warranty and stable performance. We’re expecting 6Gb/s SAS SSDs from them. Stay tuned…
Texas Memory Systems (TMS) has been focusing on bandwidth-intensive applications with their SSD solutions. There is a broad set of applications that could take advantage of higher bandwidth if it were available, and I personally expect the number of these applications to increase over time. These include database analytic systems doing large database scans, such as business intelligence applications. HPC applications can also use all available storage bandwidth, and some of these treat SSD storage as an extension of main memory. Last month, TMS released the RamSan-640, which delivers 8GB/s of bandwidth in a 4U MIL-spec chassis and has removable Flash Brick modules that can be carried individually in less-than-ideal transportation conditions.
Viking Modular was showing their SATADIMM enterprise SSDs that fit into a DDR3 DIMM slot in a server. The 3Gb/s SLC versions support up to 200GB on one DIMM and the 3Gb/s eMLC versions support up to 400GB on one DIMM. The 6Gb/s SLC versions support up to 400GB and the 6Gb/s eMLC versions support up to 480GB of capacity. Each SSD DIMM is powered by the DIMM slot and has a SATA interface plug on top. Viking Modular’s idea is that there are typically open DDR3 DIMM slots in many servers and that these are a great place to add SSD capacity in unused but existing space. Because SATA and SAS use the same connectors, these SSD modules could also be used with SAS storage systems.
Virident was showing their tachIOn PCIe SSD, which has individually replaceable NAND flash modules and claims very high sustained performance and long life expectancy. Their PCIe SSD supports up to 800GB of capacity, and they say that it can achieve 24 years of life at 5TB/day of writes. Unique to Virident's implementation is their on-board, hardware-supported, flash-aware RAID across the replaceable flash modules. Virident seems to be addressing one of the main concerns about enterprise SSD life expectancy with these replaceable NAND flash modules. We are used to replacing individual disk drives, especially when they are part of a large enterprise disk array, so why not apply the same principle to NAND flash modules?
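Virident's endurance claim is easy to sanity-check using only the numbers quoted above (800GB of capacity, 5TB/day of writes, 24 years of life); the values below are derived from those figures, not from any additional vendor data:

```python
# Sanity-check of Virident's quoted figures: 800 GB capacity, 5 TB/day
# of writes, 24 years of life. Everything here is derived from those
# three numbers; no other assumptions are introduced.

capacity_tb = 0.8          # 800 GB expressed in TB
writes_per_day_tb = 5.0    # 5 TB written per day
years = 24                 # claimed lifetime

# How many times the full capacity is overwritten each day.
drive_writes_per_day = writes_per_day_tb / capacity_tb

# Total data written over the claimed lifetime.
total_writes_tb = writes_per_day_tb * 365 * years

print(f"Drive writes per day: {drive_writes_per_day:.2f}")    # 6.25
print(f"Implied total endurance: {total_writes_tb:,.0f} TB")  # 43,800 TB (~43.8 PB)
```

That works out to more than six full drive writes per day for over two decades, which shows why the replaceable-module design is a sensible hedge even if the headline number proves optimistic.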
I did meet with one cloud vendor, Scality, who makes a software solution for public or private clouds. Their solution is interesting in that it is designed to be distributed across many independent components, so that no individual failure causes failure of the whole system; it is self-healing; and it uses standard hardware components that can be purchased from multiple vendors. The design is such that capacity or performance can be increased in small increments, and components can be replaced or upgraded without requiring any data migration. They store objects using a stateless key-value store technique that allows for effectively unlimited capacity. Their pricing model is a simple 5 cents per gigabyte per month. I like their ability to replace or upgrade components as needed without data migration.
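A flat per-gigabyte price makes cost projection trivial. A quick sketch; the 100TB capacity figure is a hypothetical example, only the 5-cents-per-GB rate comes from Scality:

```python
# Monthly cost under Scality's flat pricing of $0.05 per GB per month.
# The 100 TB capacity below is a hypothetical example for illustration.

PRICE_PER_GB_MONTH = 0.05  # 5 cents per gigabyte per month

def monthly_cost(capacity_gb: float) -> float:
    """Monthly storage cost in dollars for the given capacity."""
    return capacity_gb * PRICE_PER_GB_MONTH

tb = 100  # hypothetical deployment size
print(f"{tb} TB -> ${monthly_cost(tb * 1000):,.0f}/month")  # $5,000/month
```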