PrimoCache – Disk caching software for Windows

PrimoCache is caching software for Windows servers and workstations that can use DRAM and/or an SSD to speed up both read and write operations. PrimoCache has two levels of cache – level-1 in DRAM and level-2 on SSD. A 90-day trial version can be downloaded from the Romex Software website.

Main Features (source: Romex Software website)

  • Supports physical memory, solid-state drives and flash drives as cache storage.
  • Implements a two-level caching architecture.
  • Supports persistent level-2 caching.
  • Supports Write-Through and Write-Deferred caching modes.
  • Supports TRIM command.
  • Supports OS Invisible Memory.
  • Implements an intelligent and self-tuning cache replacement algorithm.
  • Supports caching strategies: Read/Write Caching, Read-Only Caching and Write-Only Caching.
  • Supports performance statistics and monitor.
  • Supports caching for multiple volumes.
  • Supports caching for volumes with proprietary file systems.
  • Supports caching for volumes on basic and dynamic disks.
  • Supports plug and play.
  • Supports command-line interface.

More info: http://www.romexsoftware.com/en-us/primo-cache/index.html

Testing and results

For testing I selected a virtual machine running some management software with an integrated MySQL database server. For some reason MySQL was causing heavy read IO – 2,000 to 3,000 IOPS. I created a 4GB read-only cache. After 30 minutes the cache was warmed up and MySQL was no longer stressing the storage array. The cache hit rate is almost 100%, with 1.76GB of cache still free.

PrimoCache application

From the VMware performance charts it is visible that read IO dropped almost to zero.

Read IO before and after enabling cache

Read IOPS also disappeared.

Read IOPS before and after enabling cache

Another use case

A write-deferred cache will turn small random IOs into large sequential IOs. This caching option could potentially speed up non-SSD disks, which are not that great at small random writes.

Licensing

PrimoCache is licensed per PC/server. A non-commercial license starts at $29.95; a business license starts at $119. The license includes lifetime free updates and technical support.

Other similar solutions

SuperSpeed SuperCache Express – http://www.superspeed.com/servers/supercache.php
HGST ServerCache – http://www.hgst.com/software/HGST-server-cache

Conclusion

Typically in a virtualized environment one would look for a solution at the hypervisor layer. But that usually means licensing several hosts and probably equipping them with SSD disks, all of which adds up in cost. This cost might not make sense if you only have one or two machines that would benefit from the solution. Here’s where PrimoCache or other similar solutions might help.


VMware Site Recovery Manager: Error – Failed to recover datastore … – updated quickfix

During a recent VMware SRM (version 5.5) planned failover I received the following error during step 8: Error – Failed to recover datastore ‘<datastorename>’. VMFS volume residing on recovered devices ‘”<device_wwn>”‘ cannot be found. The datastore was from an EMC VNX, replicated with EMC RecoverPoint.

Update: I have since found a simpler way:

  1. Detach the LUN from all recovery hosts (Configuration – > Storage Adapters -> select HBA -> Select LUN -> right click -> Detach)
  2. Rerun the recovery plan.

I have not investigated this error properly, so I don’t know what causes it, but a quick fix for it is:

1) Rescan all hosts in the cluster.
2) Select one host – go to Configuration -> Storage -> Add Storage.
3) Select Disk/LUN -> your LUN should appear on the list -> select it -> Next.
4) You should be presented with mounting options -> select “Assign a new signature”.
5) After finishing the wizard your datastore should appear in the datastore list.
6) Rescan storage on all hosts in the cluster.
7) When the LUN has “recovered” on all hosts, rerun the recovery plan.
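For reference, the rescan and resignature steps above can also be done from the ESXi shell with esxcli. This is a sketch only – the volume label is a placeholder, and you should verify the syntax against your vSphere version before relying on it:

```shell
# Rescan all storage adapters on the host
esxcli storage core adapter rescan --all

# List VMFS volumes that were detected as unresolved snapshots/replicas
esxcli storage vmfs snapshot list

# Resignature the replicated volume so it can be mounted as a new datastore
esxcli storage vmfs snapshot resignature --volume-label=<datastorename>
```

Repeat the rescan on the remaining hosts in the cluster, then rerun the recovery plan as in the GUI procedure.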


Storage reclamation – part 4 – Zero fill and array level reclamation

One way to reclaim storage space is to overwrite the dead space with zeros and then get rid of them.

Writing Zeros

One thing to keep in mind is that when you write zeros to a thin disk/LUN, it will grow to its full provisioned size.
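This growth is easy to demonstrate with an ordinary sparse file, which behaves like a thin LUN: it reports a large apparent size but consumes almost no real blocks until zeros are actually written into it. A small sketch (all paths are temporary):

```shell
# Sketch: writing zeros allocates real blocks, even though the data is "empty".
f=$(mktemp)
truncate -s 10M "$f"                  # sparse file: 10M apparent size, ~0 blocks used
before=$(du -k "$f" | cut -f1)        # blocks actually allocated, in KB
dd if=/dev/zero of="$f" bs=1M count=10 conv=notrunc 2>/dev/null
sync                                  # flush so the allocation is visible
after=$(du -k "$f" | cut -f1)
echo "used before=${before}KB after=${after}KB"
rm -f "$f"
```

The `du` figure jumps from near zero to the full file size once the zeros are written – exactly what happens to a thin-provisioned LUN.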

Windows

SDelete – a free Sysinternals tool from Microsoft that can be used to write zeros to free disk space.
Command: sdelete -z E:\

Raxco PerfectDisk – a commercial tool to intelligently overwrite dead space with zeros.

Condusiv V-locity – a commercial tool to overwrite dead space with zeros.

Linux

dd – a command-line utility for Unix and Unix-like operating systems that can be used to copy disks and files.
Command: dd if=/dev/zero of=/home/zerofile.0 bs=1M
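Putting the full Linux sequence together: write a zero file until the free space you want to reclaim is consumed, flush, then delete the file. A minimal sketch – TARGET defaults to a scratch directory and COUNT caps the demo at a few megabytes; in real use, point TARGET at the mount point to reclaim and drop the count= so dd runs until the filesystem is full:

```shell
#!/bin/sh
# Zero-fill then delete: the freed blocks are left zeroed, ready to be
# reclaimed by the array or by Storage vMotion.
TARGET=${TARGET:-$(mktemp -d)}   # placeholder: use your real mount point
COUNT=${COUNT:-8}                # demo cap in 1M blocks; omit count= in real use
dd if=/dev/zero of="$TARGET/zerofile.0" bs=1M count="$COUNT" 2>/dev/null || true
sync                             # make sure the zeros reach the disk
rm -f "$TARGET/zerofile.0"       # delete the file, leaving zeroed free space
echo "zeroed and removed $TARGET/zerofile.0"
```

Be aware that while the zero file exists, the filesystem is (nearly) full, so run this during a quiet period.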

Getting rid of zeros at the VMware level

Zeros can be removed using Storage vMotion. When performing the Storage vMotion, thin provisioning must be selected and the source and destination datastores must have different block sizes. I have used the following combinations:
VMFS5 -> select thin -> VMFS3 (8MB block) -> VMFS5
VMFS5 -> select thin -> NFS -> VMFS5

Reclaiming zeros at the array level

Different arrays support zero-space reclamation in different ways. Check your vendor’s documentation for how exactly to accomplish this.

EMC VMAX

A command can be executed to reclaim zero space from a LUN.
Solutions Enabler command example: symconfigure -cmd "start free on tdev <TDEVID> start_cyl=0 end_cyl=last_cyl type=reclaim;" commit -sid <SID>

Hitachi HUSVM

Zero space can be reclaimed from a LUN by running “Reclaim Zero Pages” in Hitachi Command Suite.

EMC VNX

Performing a LUN migration will reclaim zero space from the LUN. In addition, compressing and then uncompressing the LUN will also discard the zeros.

Other arrays

Any array that supports inline compression and/or deduplication will probably reclaim any zero space during write operations.

Other posts in this series:

Storage reclamation – part 1 – VMWare vSphere

Storage reclamation – part 2 – Windows

Storage reclamation – part 3 – Linux

Red Hat Summit 2014 presentations

I went through some Red Hat Summit 2014 presentations and found a few interesting things. The presentations are available on the Red Hat website – https://www.redhat.com/summit/2014/presentations/.

Linux Containers in RHEL 7 – Key Takeaways (Link to presentation)

  • Application isolation mechanism for Light-weight multi-tenancy
  • Application centric packaging w/ Docker image-based containers
  • Linux Containers Productization
    • Key kernel enablers – full support in RHEL 7 GA
    • Docker 1.0 – shipped with RHEL 7 GA
  • Linux Container Certification
  • Red Hat and Docker partnership to build enterprise grade Docker
    containers

RHEL roadmap (Link to presentation)

Theoretical Limits on X86_64

  • Logical CPUs – maximum 5120
  • Memory – maximum 64TB

RHEL 7 will support XFS, ext4, ext3, ext2, NFS, and GFS2

  • Maximum supported filesystem sizes increase
    • XFS 100TB -> 500TB
    • ext4 16TB -> 50TB
  • btrfs is a technology preview feature in RHEL 7

Red Hat Enterprise Linux 7 has XFS as the new default file
system

  • XFS will be the default for boot, root and user data partitions on all
    supported architectures
  • Included without additional charge as part of RHEL 7 subscription

RHEL 7 Storage Enhancements

  • New protocols and driver support
    • Shipping NVMe driver for standard PCI-e SSD’s
    • Support for 16Gb/s FC and 12Gb/s SAS-3
    • Linux-IO SCSI Target (LIO)
    • User-specified action on SCSI events, e.g. LUN create/delete, thin provisioning threshold reached, parameter change.
  • LVM
    • RAID, thin provisioning and snapshot enhancements
    • Tiered storage, using LVM/DM cache, in technology preview
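The LVM/DM cache preview mentioned above pairs a fast SSD logical volume with a slow origin volume. A hedged sketch of the RHEL 7-era commands (the VG name vg0, LV names, and device paths are placeholders; this requires real block devices and root):

```shell
# Create a cache data LV and a cache metadata LV on the SSD
lvcreate -L 10G  -n cache0      vg0 /dev/sdc
lvcreate -L 100M -n cache0_meta vg0 /dev/sdc

# Combine them into a cache pool
lvconvert --type cache-pool --poolmetadata vg0/cache0_meta vg0/cache0

# Attach the cache pool to the slow origin LV on the HDD
lvconvert --type cache --cachepool vg0/cache0 vg0/data
```

After the last step, reads and writes to vg0/data are transparently cached on the SSD; see the lvmcache man page for the supported cache modes.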

Red Hat Enterprise Virtualization Hypervisor roadmap (Link to presentation)

Performance: Windows Guest Improvements

  • Make Windows guests think they are running on Hyper-V

Scalability: Large Guests

  • Host: 160 cores; 4 TiB RAM
  • Virtual machine CPU limit: 160 vCPUs
  • RHEL 6: 4000 GiB guest RAM
  • RHEL 7: 4 TiB guest RAM