Exploring how DataCore SANsymphony-V leverages DRAM and Flash differently


I want to differentiate between two very important types of high-speed storage device within a DataCore SANsymphony-V node. The first is DRAM, or more generally, memory. The second is a solid-state disk or Flash device (referred to as SSD/Flash from this point forward). At first glance these devices may appear to have overlapping purposes. While both serve to increase overall system performance, they go about it in very different ways, each providing benefits rooted in its design and fabrication.

DRAM vs. SSD/Flash

In general, cache is any storage medium that is used to reduce the amount of time required to move data in and out of a system. The movement of data blocks between application servers and the storage system is referred to as input/output (I/O). With this definition, DRAM and SSD/Flash could both fulfill this requirement. So what are the differences then? Simply put:

  1. DRAM is orders of magnitude faster than SSD/Flash.
  2. DRAM is volatile and SSD/Flash is non-volatile (DRAM requires power to retain the information stored in it; SSD/Flash does not).
  3. DRAM doesn’t have the wear or write-amplification issues of SSD/Flash, so its lifespan is much greater.
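To put the first difference in perspective, here is a quick back-of-the-envelope comparison in Python. The latency figures are illustrative assumptions (typical orders of magnitude, not measurements from any particular device):

```python
# Illustrative (assumed) access latencies, in nanoseconds.
# Real values vary widely by device and workload.
DRAM_NS = 100            # DRAM access: ~100 nanoseconds
SSD_NS = 100_000         # SSD/Flash access: ~100 microseconds
HDD_NS = 5_000_000       # spinning-disk seek + rotation: ~5 milliseconds

print(f"SSD/Flash is ~{SSD_NS // DRAM_NS}x slower than DRAM")
print(f"Spinning disk is ~{HDD_NS // SSD_NS}x slower than SSD/Flash")
```

Even with generous assumptions for Flash, DRAM sits roughly three orders of magnitude closer to the CPU, which is why the two belong in different roles.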

Because of these significant differences, it is important to assign each technology the tasks it was built for.

DataCore High Speed Cache (HSC)

SANsymphony-V leverages main system board memory (RAM) as HSC in order to provide performance critical caching and I/O optimization functions. The RAM cache is extremely low latency, with a response time measured in nanoseconds, and provides four significant functions: write buffering, write coalescing, read pre-fetch, and read cache.

Write Buffering: The purpose of write buffering is to reduce the time it takes to receive an I/O from the application server and acknowledge that it has been received. This type of buffering is also known as a Speed Matching Buffer. It eliminates the delay the application server would experience if it were writing directly to the back-end storage.
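The idea behind a speed-matching buffer can be sketched in a few lines of Python. This is a hypothetical toy model (not DataCore's implementation): the write is acknowledged as soon as it lands in RAM, and a separate destage step flushes it to the slow back end later.

```python
from collections import deque

class WriteBuffer:
    """Toy speed-matching buffer: acknowledge writes immediately,
    destage to slow back-end storage later (illustrative sketch only)."""

    def __init__(self):
        self.pending = deque()   # dirty blocks awaiting destage
        self.backend = {}        # stands in for the slow back-end device

    def write(self, lba, data):
        # Fast path: buffer the block in RAM and acknowledge right away,
        # instead of waiting on the back-end device to complete the write.
        self.pending.append((lba, data))
        return "ACK"

    def destage(self):
        # Background path: flush buffered writes to the back end.
        while self.pending:
            lba, data = self.pending.popleft()
            self.backend[lba] = data

buf = WriteBuffer()
buf.write(10, b"a")   # acknowledged at RAM speed
buf.write(11, b"b")
buf.destage()         # back end catches up asynchronously
```

The application server only ever waits on the fast path; the slow device is paid for off the critical path.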

Write Coalescing: Under certain conditions, coalescing will occur whereby write I/Os are optimized into larger blocks before destaging to the back-end storage device. Destaging with larger blocks is a more efficient transfer method than with smaller blocks.
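Coalescing can be illustrated with a short sketch that merges writes to contiguous logical blocks into larger runs before destaging. The function and its parameters are assumptions for illustration, not part of the product:

```python
def coalesce(writes, max_run=8):
    """Merge writes to contiguous logical blocks into larger runs
    before destaging (illustrative sketch; max_run is an assumed limit).

    `writes` is a list of (lba, data) pairs, each `data` one block.
    Returns a list of (start_lba, joined_data) runs.
    """
    runs = []
    for lba, data in sorted(writes):
        # Extend the current run if this block is contiguous with it
        # and the run has not yet hit its size limit.
        if runs and lba == runs[-1][0] + len(runs[-1][1]) and len(runs[-1][1]) < max_run:
            runs[-1][1].append(data)
        else:
            runs.append([lba, [data]])
    return [(start, b"".join(blocks)) for start, blocks in runs]

# Three single-block writes to LBAs 3, 4, 5 become one three-block I/O.
coalesce([(5, b"e"), (3, b"c"), (4, b"d"), (9, b"x")])
```

Four small I/Os collapse into two larger transfers, which is exactly the efficiency gain described above.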

Read Pre-Fetching: During a read request from the back-end storage system, SANsymphony-V may also request adjacent logical blocks, thereby increasing the chances of near-future read cache hits against the related data set.

Read Caching: Either during a read pre-fetch or a write buffering operation, data will be staged in cache in order to reduce the amount of time required to retrieve the desired data blocks again in the future. By caching as much data as possible in memory (v9.x supports up to 1TB per node – a true mega-cache), the chances of a cache hit (finding the requested data block in cache) are greatly increased and therefore the overall application performance is significantly improved.
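The interplay of read caching and pre-fetching can be modeled with a small sketch. This is a hypothetical toy cache, not DataCore's implementation: the capacity, the pre-fetch depth, and the LRU eviction policy are all assumptions chosen for illustration.

```python
from collections import OrderedDict

class ReadCache:
    """Toy read cache with pre-fetch: on a miss, stage the requested
    block plus a few adjacent blocks from the (slow) back end, evicting
    least-recently-used blocks when full. Illustrative sketch only."""

    def __init__(self, backend, capacity=64, prefetch=4):
        self.backend = backend        # dict of lba -> data (stands in for disk)
        self.capacity = capacity
        self.prefetch = prefetch
        self.cache = OrderedDict()    # insertion order doubles as LRU order
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:
            self.hits += 1
            self.cache.move_to_end(lba)     # refresh LRU position
            return self.cache[lba]
        self.misses += 1
        # Miss: stage the requested block and its neighbours (pre-fetch).
        for adj in range(lba, lba + self.prefetch):
            if adj in self.backend and adj not in self.cache:
                self.cache[adj] = self.backend[adj]
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used block
        return self.cache[lba]

backend = {i: f"blk{i}".encode() for i in range(100)}
cache = ReadCache(backend)
cache.read(10)   # miss: blocks 10-13 are staged from the back end
cache.read(11)   # hit: served from RAM thanks to the pre-fetch
```

The second read never touches the back-end device, which is the cache-hit behavior the paragraph above describes.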

SSD/Flash Disk Devices

In the world of SANsymphony-V, SSDs exist as persistent disk devices within a disk pool, similar to any other disk device but with far better performance characteristics. SSD has very low latency, with a response time measured in microseconds. This allows SANsymphony-V to leverage the SSD device for acceleration AND storage capacity. This is notable since many storage platforms available today allocate the SSD device as a global cache device, thus preventing it from contributing to the total usable storage capacity.

SSD serves to enhance the write-coalescing/destaging and read pre-fetching operations performed by DataCore’s high-speed cache functions by providing a much faster downstream device, orders of magnitude faster than spinning disk. Traditional spinning disks have a response time measured in milliseconds.

In this scenario, where SSD is participating within a disk pool that has auto-tiering enabled, the recommended amount of SSD really depends on how much high-intensity (or "hot") data you have. Some say 10% of total storage capacity; others say replace it all with SSD. Although you could certainly replace all your storage with SSD, most environments do not require it, because only a very small percentage of live production data is accessed with any notable frequency.

I recommend estimating the total amount of hot data on your platform, adding 20% for growth, and using that value as the starting amount of SSD in the pool. Once the SSD is in the pool alongside your traditional spinning disk, you can use the heat maps feature to see exactly how much hot data you have. SANsymphony-V will move all the hot data to the faster SSD. This is a perpetual process, but if you find that tier 1 is completely filled with hot data while hot data remains on your lower tiers, you may need more SSD in the pool.
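The sizing rule of thumb above reduces to simple arithmetic. In this sketch the 10% hot-data fraction is only the commonly quoted starting figure; the real value is whatever your own measurement (e.g. the heat maps) tells you:

```python
def ssd_sizing(total_tb, hot_fraction=0.10, growth=0.20):
    """Starting SSD capacity for an auto-tiered pool, per the rule of
    thumb in the text: estimated hot data plus 20% headroom for growth.
    hot_fraction is an estimate you measure per environment; 10% is
    only a commonly quoted default, not a DataCore-specified value."""
    hot_tb = total_tb * hot_fraction
    return hot_tb * (1 + growth)

# e.g. a 100 TB pool with ~10% hot data -> start with roughly 12 TB of SSD
print(ssd_sizing(100))
```

If the heat maps later show tier 1 overflowing, you raise `hot_fraction` and re-run the sizing.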

You can see in the heat map shown above that the 10% value works very well. Tier 1 is not completely filled with hot-spots and there are no hot-spots in tier 2. This shows a good balance between SSD and spinning disk.

Conclusion: Importance of High-Speed Cache

Besides accelerating disk I/O within the system, HSC increases system-wide reliability by reducing stress on back-end disks (whether spinning disk or SSD), which decreases hardware failures. It also eliminates the need for additional drive spindles or large amounts of expensive SSD to achieve a given level of storage system performance, shrinks the platform footprint, and lowers power requirements. Together, these benefits result in lower operating costs alongside higher application performance and reliability.

For more information about SANsymphony-V, please visit DataCore Software.