Speeding up writes for ScaleIO with DRAM

In my previous post “Automatic storage tiering with ScaleIO” I described how I used Windows Storage Spaces to add an SSD write cache and automatic tiering to ScaleIO. But sometimes there are no SSDs available. In that case it is possible to use software that turns DRAM into a read and/or write cache – Romex PrimoCache (homepage) and SuperSpeed SuperCache (homepage). Adding even a small amount of DRAM write cache turns random IO into more sequential IO, which noticeably increases the performance of a spinning disk.

Introducing volatile DRAM as a write IO destination requires careful planning, as it increases the risk of losing data. Since ScaleIO writes data into two fault domains, it is important to minimize the chance of simultaneous failures in multiple fault domains. Things to consider: dual power supplies, battery backup, different blade enclosures, different racks and even different server rooms.

In my test I used PrimoCache – 1GB of DRAM as a write-only cache with a 5-second deferred write. Deferred write is the key option here – it allows data to reside in memory for 5 seconds before it is flushed to disk. The deferred write time is configurable from 1 second up to infinity.

PrimoCache Write Cache

 

With a DRAM write cache in front of a spinning disk, random IO performance increases significantly, as IO is captured in DRAM and then flushed to disk as sequential IO. The screenshot below shows how PrimoCache flushes writes to disk every 5 seconds. The Device Details page in ScaleIO shows that the average write latency is about half of what it is for the other two tiered, SSD-based ScaleIO SDS nodes. An additional option is to put a DRAM write cache with deferred write in front of an SSD-based solution as well, to speed up write IO and reduce wear on the SSDs.

Disk transfer with PrimoCache
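
If you want to watch this flushing behaviour without opening perfmon, the same PhysicalDisk counters can be sampled from PowerShell. This is only a minimal sketch – the one-second interval, the 60 samples and the instance wildcard are my assumptions, adjust them to your disk layout:

    # Sample disk write throughput once per second for a minute.
    # With a 5 second deferred write the writes should show up as
    # bursts roughly every 5 seconds instead of a constant trickle.
    Get-Counter -Counter "\PhysicalDisk(*)\Disk Write Bytes/sec" `
                -SampleInterval 1 -MaxSamples 60 |
        ForEach-Object {
            $_.CounterSamples | Select-Object `
                @{n='Time';e={$_.Timestamp.ToString('HH:mm:ss')}},
                InstanceName,
                @{n='WriteMBps';e={[math]::Round($_.CookedValue / 1MB, 1)}}
        } | Format-Table -AutoSize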

 

Since ScaleIO is software only, it allows many different configurations to be combined into a single cluster. I have mixed different hardware vendors, hardware generations and operating systems together in a single ScaleIO cluster. I recommend ScaleIO to everyone who is interested in hyper-converged solutions.

Related posts

Automatic storage tiering with ScaleIO
PrimoCache – Disk caching software for Windows


Automatic storage tiering with ScaleIO

Since EMC ScaleIO doesn’t natively have automatic storage tiering, I decided to try a solution combining Windows Server 2012 R2 Storage Spaces tiering and ScaleIO.

I have 3 servers:

  • each with a bunch of local disks – two of the servers also have SSD disks, one has only 10k SAS disks
  • all running VMware ESXi 5.5
  • all disks have a VMFS datastore on them

On each of the servers I installed a Windows Server 2012 R2 VM to be used as a ScaleIO SDS server. On the servers with SSD disks I created a Storage Spaces pool with one 200GB SSD and two 1TB HDDs. In that pool I initially created one virtual disk – a 100GB SSD tier and a 1TB HDD tier with a 1GB Write-Back Cache – which I use for the ScaleIO SDS.
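
For reference, roughly the same pool and tiered virtual disk can be built with the Storage Spaces PowerShell cmdlets. This is only a sketch under my assumptions – the pool, tier and disk names are made up, the sizes simply mirror the ones above, and I use a simple (non-mirrored) layout because ScaleIO already protects the data across nodes:

    # Pool all local disks that are available for pooling
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "ScaleIOPool" `
                    -StorageSubSystemFriendlyName "Storage Spaces*" `
                    -PhysicalDisks $disks

    # Define the SSD and HDD tiers inside the pool
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "ScaleIOPool" `
                               -FriendlyName "SSDTier" -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "ScaleIOPool" `
                               -FriendlyName "HDDTier" -MediaType HDD

    # One tiered virtual disk: 100GB SSD tier + 1TB HDD tier with a
    # 1GB write-back cache; ScaleIO handles redundancy across nodes
    New-VirtualDisk -StoragePoolFriendlyName "ScaleIOPool" `
                    -FriendlyName "ScaleIO-SDS" `
                    -ResiliencySettingName Simple `
                    -StorageTiers $ssdTier, $hddTier `
                    -StorageTierSizes 100GB, 1TB `
                    -WriteCacheSize 1GB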

Storage Pool

 

With this setup most writes will always land on the SSD, and hot blocks will eventually be tiered to SSD, giving the solution much better overall performance. I included a screenshot from perfmon to show how the SSD (disk 1) is serving all the IO for the ScaleIO disk E (disk 4). A 100% SSD hit rate means that my current working set is smaller than my SSD tier. When checking Device Details in ScaleIO, both of these servers have read and write latency below 1 ms.

Storage Pool Performance
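
The same latency and throughput figures perfmon shows can also be read with Get-Counter – a small sketch, with the instance wildcard left open since the disk numbers (1 for the SSD and 4 for the ScaleIO disk E here) will differ on other hosts:

    # Per-disk transfer rate and average latency.
    # "Avg. Disk sec/..." values are in seconds, so 0.001 equals 1 ms.
    $counters = "\PhysicalDisk(*)\Disk Transfers/sec",
                "\PhysicalDisk(*)\Avg. Disk sec/Read",
                "\PhysicalDisk(*)\Avg. Disk sec/Write"

    (Get-Counter -Counter $counters).CounterSamples |
        Sort-Object InstanceName |
        Select-Object InstanceName, Path,
                      @{n='Value';e={[math]::Round($_.CookedValue, 4)}} |
        Format-Table -AutoSize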

 

As you may have noticed, one of my servers did not have any SSD disks. I will soon write about how I increased the performance of that server.

Related ScaleIO posts

Using a file as a device in ScaleIO

Speeding up writes for ScaleIO with DRAM