Automatic storage tiering with ScaleIO

EMC ScaleIO doesn't natively have automatic storage tiering, so I decided to try a solution combining Windows Server 2012 R2 Storage Spaces tiering with ScaleIO.

I have 3 servers:

  • each with a bunch of local disks – two of the servers also have SSDs, one has only 10k SAS disks
  • all running VMware ESXi 5.5
  • all disks have a VMFS datastore on them

On each of the servers I installed a Windows Server 2012 R2 VM to be used as a ScaleIO SDS server. On the servers with SSDs I created a Storage Spaces pool from one 200GB SSD and two 1TB HDDs. In that pool I initially created one virtual disk – 100GB of SSD tier and 1TB of HDD tier with a 1GB write-back cache – which I use for ScaleIO SDS.
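For reference, a tiered virtual disk like the one above can be created with the Storage Spaces PowerShell cmdlets roughly as follows (the pool and disk names here are illustrative, not the ones I used). Note that inside a VM the media type of the virtual disks may be reported as Unspecified, in which case you have to tag the disks manually with Set-PhysicalDisk before creating the tiers:

```powershell
# Pool all poolable local disks
New-StoragePool -FriendlyName "TierPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# If the media type shows as Unspecified (common with virtual disks),
# tag the disks manually, e.g.:
#   Get-PhysicalDisk -FriendlyName "SSDDisk" | Set-PhysicalDisk -MediaType SSD

# Define the SSD and HDD tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "TierPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create the tiered virtual disk: 100GB SSD + 1TB HDD, 1GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "TierPool" -FriendlyName "ScaleIO-SDS" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 1TB `
    -WriteCacheSize 1GB -ResiliencySettingName Simple
```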

Storage Pool


With this setup most writes will land on the SSD first, and hot blocks will eventually be tiered up to the SSD tier, giving the solution much better overall performance. I included a screenshot from perfmon to show how the SSD disk (disk 1) is serving all the IO for the ScaleIO disk E (disk 4). A 100% SSD hit rate means that my current working set is smaller than my SSD tier. When checking Device Details in ScaleIO, both of these servers show read and write latencies below 1 ms.

Storage Pool Performance
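Worth knowing: tiering in Storage Spaces is not continuous. A built-in scheduled task ("Storage Tiers Optimization", under Microsoft\Windows\Storage Tiers Management) moves hot slabs to the SSD tier once a day. If you want to promote hot data sooner, you can run the optimization on demand (E: is my ScaleIO data volume here):

```powershell
# Run tier optimization on the tiered volume
defrag.exe E: /G /H

# Or trigger the built-in scheduled task
schtasks /Run /TN "\Microsoft\Windows\Storage Tiers Management\Storage Tiers Optimization"
```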


As you may have noticed, one of my servers did not have any SSDs. I will soon write about how I increased the performance of that server.

Related ScaleIO posts

Using a file as a device in ScaleIO

Speeding up writes for ScaleIO with DRAM

