Isaac Newton, SPC-1, and The Real World

Definitions

The SPC-1 is an industry-recognized storage performance benchmark developed by the Storage Performance Council to objectively and consistently measure storage system performance under real-world, high-intensity workloads (principally OLTP database workloads).

Introduction

Over the last four years, the storage industry has transformed at an amazing rate. It seemed that almost every other week another software-defined storage startup emerged. On the surface this looks great, right? Lots of competition, lots of choice. However, with all of this also comes plenty of confusion and disappointment. What is actually new in all of these developments? Are there truly pioneers out there taking us into new and uncharted territory? Let’s go exploring!

Isaac Newton and the SPC-1

Wait, what? How in the world does Isaac Newton relate to the SPC-1? As you may know, Newton was one of the founders of calculus and discovered the laws of motion and universal gravitation. He is unquestionably one of the most notable scientists in history; without him, we wouldn’t have much of the modern world we enjoy today. While some appreciate what he accomplished technically, most people do not go around citing the intricacies of Newtonian mechanics. We do, however, appreciate the results of his discoveries: cars, planes, space shuttles, satellites sent to other planets, and many other amazing things. So while his principles underpin the modern world, their details are generally reserved for academia. Such is the case with the SPC-1.

This article has one simple objective: to draw a parallel between what the SPC-1 demonstrates and its implications in the real world. As with Newtonian mechanics, most of us do not walk around citing SPC-1 results. Yet, just as with Newton, the results have real-world implications, specifically for the information technology world; a world to which we are all deeply connected in one way or another.

What Does The SPC-1 Show Us and… “So What?”

The SPC-1 measures all-out performance and price-performance for a given storage configuration. While not showcased, latency analysis is also included in the full disclosure report for each benchmark run. The importance of latency will become apparent later in this article. But in the end, who doesn’t want performance, right?

One question that usually jumps out after reviewing SPC-1 results is, “So what?” Well, as it turns out, that is precisely what I am trying to answer here. On the surface there is basic vendor performance comparison: the higher the I/Os per second, the better the all-out performance; the lower the $/IOPS, the more cost-efficient the system. But what happens when a vendor achieves top performance numbers and top price-performance numbers on the same benchmark run? Now that would be interesting.

Generally speaking, you will not find the same vendor system in the top 10 for both categories simultaneously, mainly because the two categories fall at opposite ends of the spectrum. Typically, the higher the IOPS produced, the more expensive the system; conversely, the lower the $/IOPS, the lower the total overall performance.
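To make the two rankings concrete, here is a toy sketch in Python. The systems and numbers are entirely hypothetical (they are not actual SPC-1 entries); the point is simply that sorting by raw IOPS and sorting by $/IOPS usually surface different leaders, and a system near the top of both lists is the rare case discussed next.

```python
# Hypothetical systems: (name, SPC-1-style IOPS, total price in USD).
# Illustrative numbers only -- not actual SPC-1 results.
systems = [
    ("BigIronArray",    2_000_000, 10_000_000),
    ("MidrangeSAN",       400_000,  1_200_000),
    ("SoftwareDefined",   450_000,     40_000),
    ("EntryNAS",           50_000,     20_000),
]

by_iops = sorted(systems, key=lambda s: s[1], reverse=True)   # raw performance
by_price_perf = sorted(systems, key=lambda s: s[2] / s[1])    # $/IOPS, lower is better

print("Ranked by IOPS:  ", [s[0] for s in by_iops])
print("Ranked by $/IOPS:", [s[0] for s in by_price_perf])

# A system near the top of both lists is the rare case the article describes:
# fast *and* cost-efficient at the same time.
in_both = set(s[0] for s in by_iops[:2]) & set(s[0] for s in by_price_perf[:2])
print("In both top-2 lists:", in_both or "none")
```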

So, hypothetically speaking, what would it mean if a vendor were to construct a single storage system that landed in both categories? First, it would mean that the system is both really fast and really efficient (one could argue that it is really fast because it is really efficient). Second, it would raise certain questions about how storage systems are constructed. In other words, it would be like having a Bugatti Veyron with a top speed of 268 mph for the price of a Toyota Camry. It wouldn’t just be interesting; it would change the entire industry.

If your next response is, “But I don’t need millions of IOPS”, you would be missing the point completely. Fine, you don’t need millions of IOPS, but you get them anyway. What you need to realize is that you then need far fewer systems to achieve your infrastructure goals. In other words, why buy 10 of something when 2 will do the job?

What I am driving toward is this: imagine how much more performance you could get for every dollar spent; imagine how much more application and storage consolidation you could achieve while simultaneously reducing the number of systems; imagine how much you could save on operational expenses with less hardware; imagine running hundreds of enterprise virtual machines with true data and service high availability in an N+1 configuration while simultaneously serving enterprise storage services to the rest of the network. Oh, the possibilities.

Below are examples of one type of convergence you can achieve with such a system. The server models shown are for illustration purposes; they could be Lenovo, Dell, Cisco, or any multi-core x86-based system available in the market today. While traditional SAN, converged, and hyper-converged models are also easily achievable and have been available for many years, the model shown below represents a hybrid-converged model: it provides the highest level of internal application consolidation while simultaneously presenting enterprise storage services externally to the rest of the infrastructure. Without DataCore SANsymphony-V, this level of workload consolidation wouldn’t be possible.

[Figure: Hybrid-converged configuration with Microsoft Hyper-V]

[Figure: Hybrid-converged configuration with VMware vSphere]

So, Does This System Actually Exist?

As it turns out, this isn’t theoretical; it is very real, and it has been for many years. DataCore’s SANsymphony-V software is what makes it possible. DataCore’s approach to performance begins and ends with software, the complete opposite of vendors who try to solve the performance problem by throwing more expensive hardware at it. And this is precisely why, for the first time (from what I can tell), a vendor (specifically DataCore) landed in both top-10 categories (performance and price-performance) simultaneously with the same test system.

And What About This Matter of Latency?

There still tends to be a lot of talk about IOPS. As I have been saying for years, an IOPS figure is meaningless unless you also know the test conditions behind it: % read, % write, % random, % sequential, and block size. Even with this information, it only becomes useful when comparing systems tested under the same set of conditions. In the marketing world, this is never the case: every storage vendor touts some sort of performance achievement, but the numbers are incomparable across systems because the test conditions differ. This is why the SPC-1 is so significant; it applies a consistent set of test conditions to all systems, making objective comparison possible.
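As a minimal illustration of that point (my own sketch, not any vendor’s tooling), a benchmark result only makes sense bundled with its test conditions, and two results are only comparable when those conditions match exactly:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkloadProfile:
    """The test conditions without which an IOPS number is meaningless."""
    read_pct: int        # % read
    write_pct: int       # % write
    random_pct: int      # % random
    sequential_pct: int  # % sequential
    block_size_kb: int   # block size in KB

def comparable(a: WorkloadProfile, b: WorkloadProfile) -> bool:
    # Two IOPS results are comparable only under identical conditions --
    # exactly what a standardized benchmark such as the SPC-1 enforces.
    return a == b

vendor_a = WorkloadProfile(100, 0, 0, 100, 256)  # large sequential reads
vendor_b = WorkloadProfile(40, 60, 100, 0, 8)    # small random mixed load
print(comparable(vendor_a, vendor_b))  # False: the marketing numbers don't compare
```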

One thing that is not talked about enough, however, is latency, and specifically latency across the entire workload range. Latency is what ultimately defines application performance and the user experience.

In general, when comparing systems, IOPS are inversely related to latency (response time): the higher the IOPS, the lower the latency tends to be, and vice versa. Note that this is not always the case, because some systems deliver decent IOPS but terrible latency (primarily due to large queue depths and/or queuing issues).
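The relationship between the two follows from Little’s Law (concurrency = throughput × response time). The sketch below, with made-up numbers, shows how a system can post respectable IOPS while hiding poor latency behind a deep queue:

```python
def iops(outstanding_ios: int, latency_s: float) -> float:
    # Little's Law: concurrency = throughput * response time,
    # so throughput = concurrency / response time.
    return outstanding_ios / latency_s

# Low-latency system: modest queue, 100 microseconds per I/O.
print(f"{iops(32, 100e-6):,.0f} IOPS at 0.1 ms")   # 320,000 IOPS

# High-latency system hiding behind a deep queue: 10 ms per I/O.
print(f"{iops(4096, 10e-3):,.0f} IOPS at 10 ms")   # 409,600 IOPS

# Similar IOPS on paper, yet every individual I/O takes 100x longer.
```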

DataCore SANsymphony-V not only set the world-record price-performance number and landed in both top-10 categories with the same test system, it also set a new world record for the lowest latency ever recorded on the SPC-1… sub-100 microseconds! The most impressive part, which you could miss if you are not paying attention, is that this world-record latency was achieved at 100% workload. That is simply staggering! Granted, you may not run at an all-out 100% workload intensity, but that just means your latency will be that much lower under normal conditions. The analogy here is the same Bugatti Veyron mentioned earlier running at top speed while towing 10 tractor-trailers behind it.

Below is a throughput-latency curve comparing DataCore SANsymphony-V to the previous fastest response time on the SPC-1 benchmark (the fastest I could find in the top 10, at least). Notice how flat the latency curve is for DataCore; this is indicative of how efficient DataCore’s engine is. Not only did DataCore SANsymphony-V post latency numbers more than 7x better (at 100% workload) than Hitachi, it also drove an additional 900,000 SPC-1 I/Os per second. And it achieved this result at 1/13th the cost of the previous record holder!

[Figure: Throughput-latency curve, DataCore SANsymphony-V vs. the previous record holder]

How was this accomplished? Simply put, it is baked into the foundation of how DataCore moves I/O through the system: in a non-interrupt, real-time, parallel fashion. In other words, DataCore doesn’t just “not get in the way”; it actually removes the barriers that normally exist.

Conclusion

Hopefully by now you can see the answer to the “so what” question. These SPC-1 results go well beyond a storage discussion; they directly impact the way applications are delivered. You can now achieve what was once impossible. Is it virtual desktops you are after? Imagine running 10x more with less hardware without sacrificing performance. Is it mailboxes? Imagine running 20x more with less hardware without sacrificing performance. Is it database performance? Imagine running on the fastest storage system on the planet (not my words, the SPC-1’s findings) with the lowest latency, and doing it at a cost that is untouchable by other solutions, hardware and software-defined alike. So while the SPC-1 is rooted in storage performance, its effect on the rest of the ecosystem is beyond interesting… it is revolutionary!

References

Storage Performance Council Website
SPC-1 Top Ten List
DataCore Parallel IO Website

DataCore Parallel I/O Redefines Enterprise Storage Economics With New SPC-1 World Record

DataCore Software Corporation has submitted the SPC-1 Result™ listed below.
The Executive Summary and Full Disclosure Report (FDR) are posted in the Benchmark Results section of the SPC website.
 
The documents may be accessed by using the URL listed below:
SPC-1 Results – “Top Ten” by Price-Performance

DataCore SANsymphony-V 10.0:
  SPC-1 Submission Identifier ..... A00164
  SPC-1 IOPS™ ..................... 459,290.87
  SPC-1 Price-Performance™ ........ $0.08/SPC-1 IOPS™
  Total ASU Capacity .............. 2,924.873 GB
  Data Protection Level ........... Protected 1 (mirroring)
  Total Price ..................... $38,400.29
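The listed price-performance figure follows directly from the other two numbers in the summary; a quick sanity check:

```python
total_price = 38_400.29   # USD, from the submission above
spc1_iops = 459_290.87    # SPC-1 IOPS, from the submission above

# Price-performance is simply total price divided by SPC-1 IOPS.
print(f"${total_price / spc1_iops:.4f} per SPC-1 IOPS")  # ~$0.0836, listed as $0.08
```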
 
[Figure: SPC-1 price-performance curve]

Congratulations to DataCore on the company’s first SPC-1 Result as a returning member and on establishing a new #1 entry in the “Top Ten” SPC-1 Price-Performance rankings.

FEATURE DEMO: SANsymphony-V Sequential Storage Feature (a.k.a. Random Write Accelerator)

In this three-minute demo, I demonstrate the significant impact the Sequential Storage feature has on overall storage performance. Random write workloads are especially hard on storage systems, particularly those built on a RAID-5 configuration (one of the most common RAID types).

The demo consists of the following configuration:

Hardware:
Dell PowerEdge R720 Server
6x 300GB 15k SAS drives in RAID-5 connected via an H710P PERC RAID controller
Single RAID volume in a single SSV disk pool
8x 10GB Virtual Disks created from the disk pool (presented over loopback)

Software:
Microsoft Windows Server 2012 R2
DataCore SANsymphony-V 10 PSP2
IOmeter v1.1

IOmeter Test Parameters
8 Workers
16 outstanding I/Os per target
Pseudo-random data pattern
8KB block size, 100% write / 100% random workload
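For readers who want to approximate this profile without IOmeter, here is a rough single-worker Python sketch. The file name, size, and duration are placeholders; it writes to a plain buffered file rather than a raw device, so the OS page cache will flatter the numbers, and it does not replicate IOmeter’s 8 workers or 16 outstanding I/Os:

```python
import os
import random
import time

BLOCK = 8 * 1024            # 8 KB block size, as in the IOmeter profile
TARGET_SIZE = 1 * 1024**3   # 1 GB scratch file (placeholder size)
DURATION = 10               # seconds to run (placeholder)
PATH = "scratch.bin"        # plain file standing in for a virtual disk

buf = os.urandom(BLOCK)     # pseudo-random data pattern

# Pre-allocate the target so writes land inside an existing extent.
with open(PATH, "wb") as f:
    f.truncate(TARGET_SIZE)

blocks = TARGET_SIZE // BLOCK
ops = 0
deadline = time.time() + DURATION
with open(PATH, "r+b", buffering=0) as f:
    while time.time() < deadline:
        f.seek(random.randrange(blocks) * BLOCK)  # 100% random offsets
        f.write(buf)                              # 100% writes
        ops += 1

print(f"{ops / DURATION:,.0f} write IOPS (single worker, OS-buffered)")
```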

Watch the demo below (make sure you click the HD button in the upper right-hand corner).

What is happening here?

Simply put, SANsymphony-V takes all inbound writes and writes them in a “sequential” pattern. This sequential pattern is highly favorable to disk subsystems because it requires far less actuator (disk-arm) movement to perform the write. SANsymphony-V is quite literally “just writing the data”. In other words, the seek time normally incurred when writing data to disk in a random pattern is altogether eliminated, resulting in a massive performance boost for applications with any measurable amount of random write activity.

DataCore Introduces a New Breakthrough Random Write Accelerator for Update Intensive Databases, ERP, OLTP and RAID-5 Workloads

Introduction
It’s here! This week DataCore Software released an exciting new breakthrough feature extending the arsenal of enterprise features already present within SANsymphony-V. This new feature enhances the performance of random write workloads, which are among the most costly operations that can be performed against a storage system. The new Random Write Accelerator in effect takes highly random workloads and sequentializes them to achieve greater performance. It has shown up to 30 times faster performance for random-write-heavy workloads such as frequently updated databases, ERP, and OLTP systems. Even greater gains have been realized on RAID-5 protected datasets, which spread data and reconstruction information across multiple locations on different disk drives. The new feature is available now and included within SANsymphony™-V10 PSP1.

Internal testing with the Random Write Accelerator feature and 100% random write workloads yielded significant performance improvements for spinning disks (>30x improvement) and even noteworthy improvements for SSDs (>3x improvement) under these conditions. The specific performance numbers will be covered later in this article.

The actual performance benefits will vary greatly depending on the percentage of random writes that make up the application’s I/O profile and the types of storage devices participating within the storage pool. Additionally, the feature is enabled on a per-virtual-disk basis, allowing you to be very selective about when to apply the optimization.

Basis For Development
As applications drive storage system I/O, DataCore’s high-speed caching engine improves virtual disk read performance. The cache also improves write performance, but its flexibility is limited due to the need to destage data to persistent storage. In many environments the need to synchronize write I/O with back-end storage becomes the limiting factor to the performance that can be realized at the application level; hence the purpose of this development.

With some types of storage devices, there are significant performance limitations associated with non-sequential writes compared with sequential writes. These limitations occur due to:

  • Physical head movement across the surface of the rotating disk
  • RAID-5 reads to recalculate the parity data
  • Write amplification in SSDs

DataCore SANsymphony-V software presents an abstraction to the application — a virtual SCSI disk. The way that SANsymphony-V stores the data associated with these virtual disks is an implementation detail hidden from the application. Data may be placed invisibly across storage devices in different tiers to take advantage of their distinct price/performance/capacity characteristics. The data may also be mirrored between devices in separate locations to safeguard against equipment and site failures. The SANsymphony-V software can use different ways to store application data to mitigate the aforementioned limitations, while not changing the abstraction presented to the applications.

Function Details
The Random Write Accelerator changes the way SANsymphony-V stores data written to the virtual disks by:

  • Storing all writes sequentially
  • Coalescing writes to reduce the number of I/Os to back-end storage
  • Indexing the sequential structure to identify the latest data for any given logical block address
  • Directing reads to the latest data for a block using this index
  • Compacting data by copying it and removing blocks that have been rewritten (a code sketch of this pattern follows the list)
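Here is a minimal, hypothetical sketch of the general log-structured pattern these bullets describe. It is my own illustration of the technique, not DataCore’s implementation, and it omits write coalescing for brevity:

```python
class LogStructuredDisk:
    """Toy log-structured block store: every write is appended sequentially,
    and an index maps each logical block address (LBA) to its newest copy."""

    def __init__(self):
        self.log = []    # append-only sequence of (lba, data) records
        self.index = {}  # lba -> position of the latest record in the log

    def write(self, lba: int, data: bytes) -> None:
        # Store all writes sequentially at the tail of the log.
        self.log.append((lba, data))
        self.index[lba] = len(self.log) - 1  # the latest data wins

    def read(self, lba: int) -> bytes:
        # Direct reads to the latest data for the block using the index.
        return self.log[self.index[lba]][1]

    def compact(self) -> None:
        # Copy only the live records, dropping blocks that were rewritten.
        live = [(lba, self.log[pos][1]) for lba, pos in self.index.items()]
        self.log = live
        self.index = {lba: i for i, (lba, _) in enumerate(live)}

disk = LogStructuredDisk()
disk.write(7, b"old")
disk.write(7, b"new")           # rewrite: the old record becomes garbage
assert disk.read(7) == b"new"   # reads always see the newest copy
disk.compact()                  # reclaim the stale record
print(len(disk.log))            # 1
```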

Performance Details
Now for the part everyone has been waiting for: the performance numbers. There are three main states to consider from a performance perspective:

  • Base – the underlying level of performance that can be achieved with a 100% random write workload, without the Random Write Accelerator enabled.
  • Maximum – the performance that can be achieved with a 100% random write workload, with the Random Write Accelerator enabled but without compaction active.
  • Sustained – the performance that can be sustained with a 100% random write workload, with the Random Write Accelerator enabled and with compaction active.

The greatest performance is achieved during the Maximum state. When the virtual disk is idle, a background level of compaction will occur to prepare the system to absorb another burst of random write activity. That is, the background compaction will prepare the virtual disks to deliver performance associated with the Maximum state.

The following performance has been observed using IOmeter running a 100% write, 100% random workload with a 4K block size and 64 outstanding I/Os:


Configuration                                    Base IOPS   Maximum IOPS   Sustained IOPS
Linear 20 GB volume, SATA WDC 1 TB drive               327         19,500           11,000
Linear 20 GB volume, SSD 840 EVO 250 GB Pool        10,000         62,000           36,000
Mirrored 100 GB volume, PERC H-800 RAID-5 Pool         860         67,000           40,000

* DataCore cache enabled for before and after scenarios. IOmeter test: 100% write, 100% random workload with a 4K block size and 64 I/Os outstanding.
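The headline improvement factors quoted below fall straight out of this table; a quick check in Python:

```python
results = {  # configuration: (base IOPS, sustained IOPS), from the table above
    "SATA WDC 1 TB":     (327, 11_000),
    "SSD 840 EVO":       (10_000, 36_000),
    "PERC H-800 RAID-5": (860, 40_000),
}
for name, (base, sustained) in results.items():
    print(f"{name}: {sustained / base:.1f}x sustained improvement")
# SATA: ~33.6x (>30x), SSD: 3.6x (>3x), RAID-5: ~46.5x (>45x)
```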

Interesting Observations
The above results highlight three key observations:

  • Significant acceleration (>30x improvement) of low-cost SATA disks under random write load is possible. In fact, in this particular test with DataCore, the resulting sustained performance of 11,000 IOPS actually exceeded that of a conventional solid-state disk, which ran at 10,000 IOPS.
  • The solid-state disk also improved, going from 10,000 IOPS to 36,000 IOPS (>3x improvement).
  • Write-intensive RAID-5 workloads displayed the greatest improvement, from 860 IOPS to 40,000 IOPS (>45x improvement).

Conclusion
DataCore’s Random Write Accelerator capability addresses a limitation every storage system experiences to some extent. Random writes not only severely impact application performance on mechanical systems such as magnetic disks, they can also drastically reduce the performance and shorten the lifespan of SSD/flash-based devices because of the write amplification produced by the write I/O pattern (see this publication for more detail). Check out this new feature, along with many others, in the now-available SANsymphony™-V10 PSP1 release.

DataCore’s Answer for Multi-Tenancy Storage and Quality of Service: Storage Domains

DataCore’s next release, SANsymphony-V10 PSP1, introduces many new and exciting features, such as Sequential Storage, which was covered in my last post. This post focuses on the new capabilities for storage segregation, isolation, and tracking of storage resource utilization. Managed service providers and large IT organizations that manage internal private clouds are seeking more productive ways to cost-effectively centralize resources while providing their consumers (clients or departments) with what appear to be dedicated resources, assigned to meet specific service level agreement (SLA) objectives.

A fundamental shift to a Quality of Service (QoS) model has already occurred, especially within the co-location, managed service, and cloud service provider community. Storage is no longer simply deployed behind the primary offering (i.e., cloud offering, hosting environment, etc.), but rather provided as a service to clients directly from the provider’s central SAN. The same shift is also well underway within internal IT organizations that want to efficiently manage shared infrastructure resources via a private cloud model. They need a simple way to segregate, track, and regulate their resources, including storage.

[Figure: DataCore QoS, top-level view]

What Is Multi-Tenancy?
Multi-tenancy simply means being able to service multiple independent consumers from a common centralized platform. The platform could be a virtual machine platform, a mail hosting platform, a web hosting platform, or, in this case, a storage platform.

While not formally part of the definition, multi-tenancy carries implications such as isolation. Isolation should reasonably include at least end-to-end communication isolation, so that consumers are not aware of one another (physically or logically). In terms of the storage services provided by DataCore SANsymphony-V today, the isolation scheme takes the form of:

  • Dedicated links between the provider’s distribution switches and the client’s cabinet (called cross-connects in MSP-speak) – handled by internal IT networking team or by the MSP
  • Dedicated layer-2 segmentation (VLANs/Zones) for storage traffic across the switched network – handled by internal IT networking team or by the MSP
  • Dedicated logical volumes only accessible by the client – handled by DataCore SANsymphony-V
  • Dedicated storage pools formed from dedicated physical storage devices that are not shared in any way with other clients (not generally mandated unless client requires it due to specific regulatory compliance) – handled by DataCore SANsymphony-V

DataCore SANsymphony-V10 PSP1 – Extending Multi-tenancy Support
Together with the isolation schemes listed above, DataCore introduces new bandwidth isolation and resource tracking capabilities to ensure that no one client can impact the others and that resource consumption can be reported accurately. These capabilities specifically include:

  • Host Groups are implemented to segregate Hosts (e.g., those in use by the Finance Department) and establish their own storage domains
  • Storage Domains define a subset of resources
  • Storage Policies are implemented within Storage Domains in order to regulate the Quality-of-Service (QoS) levels which define the bandwidth storage domains can consume (IOPS and/or data transfer rate)
  • Chargeback implements resource utilization tracking which provides detailed reporting that can be used for billing purposes

Host groups form storage domains which contain the hosts and the storage resources where the policies are applied.

[Figure: DataCore storage domains]

Hosts and Host Groups are defined within DataCore SANsymphony-V as they always have been, except that storage policies governing Quality of Service can now be applied to each Host Group, as shown below:

[Screenshot: DataCore QoS storage policy settings]

The QoS storage policy settings take effect immediately for the designated host group. These settings prevent any one client or host group from consuming all available bandwidth, protecting all clients from potentially erratic storage behavior during periods of high utilization.
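Conceptually, this kind of cap behaves like a per-host-group rate limiter. Below is a hypothetical token-bucket sketch of the idea (my own illustration; DataCore has not published its mechanism at this level of detail):

```python
import time

class IopsLimiter:
    """Toy token bucket capping a host group's I/O rate (IOPS)."""

    def __init__(self, max_iops: float):
        self.rate = max_iops           # tokens replenished per second
        self.tokens = max_iops         # current budget, starts full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1           # spend one token per admitted I/O
            return True
        return False                   # over budget: delay or queue this I/O

finance = IopsLimiter(max_iops=5000)   # hypothetical policy for a host group
admitted = sum(finance.allow() for _ in range(10_000))
print(f"{admitted} of 10,000 burst I/Os admitted immediately")
```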

The screenshots below show the storage activity through the system before and after DataCore QoS storage policies have been applied:

[Screenshot: storage activity before QoS policies applied]

[Screenshot: storage activity after QoS policies applied]

Chargeback can also be enabled from the QoS Settings screen allowing administrators to track and measure I/O statistics per consumer or host group. The following performance counters for the host group are added to the performance recording session:

  • Total Bytes Written/sec
  • Total Bytes Read/sec
  • Total Reads/sec
  • Total Writes/sec
  • Total Bytes Provisioned

These metrics allow individual consumers to be billed for the resources they utilize, whether bandwidth- or storage-consumption-related. The screenshot below shows what a typical chargeback report looks like.
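As a toy illustration of how such counters might translate into a bill (the rates and usage values below are entirely hypothetical, and this is not DataCore’s billing logic):

```python
# Per-host-group usage over a billing period, mirroring the recorded
# counters above (values are made up for illustration).
usage = {
    "total_bytes_written":     40 * 1024**4,   # 40 TiB written
    "total_bytes_read":       120 * 1024**4,   # 120 TiB read
    "total_bytes_provisioned": 10 * 1024**4,   # 10 TiB provisioned
}

# Hypothetical billing rates.
RATE_PER_TIB_TRANSFERRED = 2.00    # USD per TiB moved
RATE_PER_TIB_PROVISIONED = 25.00   # USD per TiB-month provisioned

transferred_tib = (usage["total_bytes_written"] + usage["total_bytes_read"]) / 1024**4
provisioned_tib = usage["total_bytes_provisioned"] / 1024**4
bill = (transferred_tib * RATE_PER_TIB_TRANSFERRED
        + provisioned_tib * RATE_PER_TIB_PROVISIONED)
print(f"Monthly charge: ${bill:,.2f}")  # $570.00 with these numbers
```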

[Screenshot: typical chargeback report]

The report data can also be exported to several different formats for ease of viewing or for importing into other back-office systems.

[Screenshot: chargeback export formats]

Conclusion
Managed service providers and private clouds continue to drive a new model of storage service delivery. DataCore is ahead of the curve and well positioned to keep expanding the breadth and depth of these capabilities. Stay tuned for more coverage of the new features arriving with the release of SANsymphony-V10 PSP1 in November.

DataCore’s Answer to Random Write Workloads: Sequential Storage

Introduction
DataCore Software has developed another exciting new feature extending the arsenal of enterprise features already present within SANsymphony-V. This new feature enhances the performance of random write workloads, which are among the most costly operations that can be performed against a storage system. The new Sequential Storage feature will be available in SANsymphony™-V10 PSP1, scheduled for release within the next 30 days.

Internal testing with the Sequential Storage feature and 100% random write workloads yielded significant performance improvements for spinning disks (>30x improvement) and even noteworthy improvements for SSDs (>3x improvement) under these conditions. The specific performance numbers will be covered later in this article.

The actual performance benefits will vary greatly depending on the percentage of random writes that make up the application’s I/O profile and the types of storage devices participating within the storage pool. Additionally, the feature is enabled on a per-virtual-disk basis, allowing you to be very selective about when to apply the optimization.

Basis For Development
As applications drive storage system I/O, DataCore’s high-speed caching engine improves virtual disk read performance. The cache also improves write performance, but its flexibility is limited due to the need to destage data to persistent storage. In many environments the need to synchronize write I/O with back-end storage becomes the limiting factor to the performance that can be realized at the application level; hence the purpose of this development.

With certain types of storage devices, there are significant performance limitations associated with non-sequential writes compared with sequential writes. These limitations occur due to:

  • Physical head movement across the surface of the rotating disk
  • RAID-5 reads to calculate parity data
  • Write amplification inherent to Flash and SSD devices

DataCore SANsymphony-V software presents an abstraction to the application — a virtual SCSI disk. The way that SANsymphony-V stores the data associated with these virtual disks is an implementation detail hidden from the application. Data may be placed invisibly across storage devices in different tiers to take advantage of their distinct price/performance/capacity characteristics. The data may also be mirrored between devices in separate locations to safeguard against equipment and site failures. The SANsymphony-V software can use different ways to store application data to mitigate the aforementioned limitations, while not changing the abstraction presented to the applications.

Functional Details
Sequential Storage changes the way SANsymphony-V stores data written to the virtual disks by:

  • Storing all writes sequentially
  • Coalescing writes to reduce the number of I/Os to back-end storage
  • Indexing the sequential structure to identify the latest data for any given logical block address
  • Directing reads to the latest data for a block using this index
  • Compacting data by copying it and removing blocks that have been rewritten

Performance Details
Now for the part everyone has been waiting for: the performance numbers. There are three main states to consider from a performance perspective:

  • Base – the underlying level of performance that can be achieved with a 100% random write workload, without Sequential Storage enabled.
  • Maximum – the performance that can be achieved with a 100% random write workload, with Sequential Storage enabled but without compaction active.
  • Sustained – the performance that can be sustained with a 100% random write workload, with Sequential Storage enabled and with compaction active.

The greatest performance is achieved during the Maximum state. When the virtual disk is idle, a background level of compaction will occur to prepare the system to absorb another burst of random write activity. That is, the background compaction will prepare the virtual disks to deliver performance associated with the Maximum state.

The following performance has been observed using IOmeter running a 100% write, 100% random workload with a 4K block size and 64 outstanding I/Os:

Configuration                                    Base IOPS   Maximum IOPS   Sustained IOPS
Linear 20 GB volume, SATA WDC 1 TB drive               327         19,500           11,000
Linear 20 GB volume, SSD 840 EVO 250 GB Pool        10,000         62,000           36,000
Mirrored 100 GB volume, PERC H-800 RAID-5 Pool         860         67,000           40,000

Interesting Observations
The above results highlight three key observations:

  • Significant acceleration (>30x improvement) of low-cost SATA disks under random write load is possible. In fact, in this particular test with DataCore, the resulting sustained performance of 11,000 IOPS actually exceeded that of a conventional solid-state disk, which ran at 10,000 IOPS.
  • The solid-state disk also improved, going from 10,000 IOPS to 36,000 IOPS (>3x improvement).
  • Write-intensive RAID-5 workloads displayed the greatest improvement, from 860 IOPS to 40,000 IOPS (>45x improvement).

Conclusion
DataCore’s Sequential Storage capability addresses a limitation every storage system experiences to some extent. Random writes not only severely impact application performance on mechanical systems such as magnetic disks, they can also drastically reduce the performance and shorten the lifespan of SSD/flash-based devices because of the write amplification produced by the write I/O pattern (see this publication for more detail). You can expect this feature, along with many others, in SANsymphony™-V10 PSP1, due out in November 2014.

VMworld 2014 Wrap-Up and Key Takeaways

VMworld 2014 has come and gone. It was a great show, with massive attendance exceeding 22,000 people from 85 countries around the globe. This year the theme was “No Limits”, which was very appropriate, since the common message across the board was about leveraging software to maximize hardware investments. I couldn’t agree more. VMworld 2014 confirmed that the industry appears to be ready for broad adoption of the software-defined storage architecture that DataCore introduced over 16 years ago and continues to innovate upon. DataCore, having released its 10th-generation software-defined storage offering earlier this year, is in the industry’s front seat, leading the charge with its any-storage, any-server, any-hypervisor product offering; a statement aligning perfectly with VMworld’s theme this year: No Limits, or in other words, unleashed and unbound.

Not surprisingly, Virtual SANs monopolized the conversation this year. But the message was fragmented, with many feature limitations coupled with an inability to integrate and co-exist with other storage and server components in the stack. This is what you would expect given the infancy of the Virtual SAN concept. But this is where DataCore takes the lead yet again. As with traditional central SANs, the heart of DataCore’s Virtual SAN is SANsymphony-V. This means that whether you are running traditional central SANs, Virtual SANs, or both simultaneously, DataCore offers the same enterprise-grade feature set and a single common management interface across the entire architecture. This is what you would expect from a 10th-generation product release.

As a brief overview, DataCore™ Virtual SAN introduces the next evolution in software-defined storage, whereby SANsymphony™-V is used to create high-performance and highly available shared storage pools using the disks and flash storage in your application servers. It addresses the requirement for fast, reliable access to storage across a cluster of servers without the need for a separate external SAN infrastructure.

A DataCore Virtual SAN comprises two or more physical x86-64 servers with local storage, running SANsymphony-V. It can leverage any combination of flash and magnetic disks (flash is not required) to provide persistent storage services as close to the application as possible, without having to go out over the wire (network or fabric). Virtual disks provisioned from the virtual SAN can also be shared across the cluster to support dynamic migration and failover of applications between hosts.

DataCore’s Virtual SAN addresses the challenges that exist today within many IT organizations such as poor application performance (particularly within virtualized environments), single points of failure, low storage efficiency and utilization, and high infrastructure costs.

DataCore’s Virtual SAN opens up many new possibilities within the infrastructure. Below are some of the most common use cases:

    • Latency-sensitive Applications – Speed up application response and improve end-user experience by leveraging high-speed flash as persistent storage closest to the applications and caching reads and writes from even faster server DRAM.
    • Compact Server Clusters at Remote Sites and Branch Offices – Put the internal storage capacity of your application servers to work as a shared resource while protecting your data against outages.
    • Virtual Desktop (VDI) Deployments – Run more virtual desktops on each hypervisor host and scale them out across more servers without the complexity or expense of an external SAN.
    • Highly-available Applications – When you are running applications that cannot suffer downtime, you need synchronous mirroring. Synchronous mirroring provides real-time synchronized copies of all data across multiple hosts and/or regional sites, ensuring the highest levels of data and application availability (a conceptual sketch follows this list).
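Conceptually, synchronous mirroring acknowledges a write to the application only after every replica has persisted it; that is what keeps the copies in lockstep. A minimal, illustrative sketch of that contract (not DataCore’s code):

```python
class Replica:
    """Stand-in for one node's durable local storage."""

    def __init__(self, name: str):
        self.name = name
        self.blocks = {}

    def persist(self, lba: int, data: bytes) -> bool:
        self.blocks[lba] = data   # stand-in for a durable local write
        return True

def mirrored_write(replicas, lba: int, data: bytes) -> bool:
    # Acknowledge the write only once *all* replicas have persisted it;
    # otherwise it must be failed and retried. This is what guarantees
    # identical, real-time copies on every node.
    return all(r.persist(lba, data) for r in replicas)

nodes = [Replica("site-A"), Replica("site-B")]
assert mirrored_write(nodes, lba=42, data=b"payload")
assert nodes[0].blocks[42] == nodes[1].blocks[42]
print("write acknowledged after both copies persisted")
```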

Request a Free Virtual SAN: Virtual SAN

As the industry heads full-speed down this road, you can expect very exciting advancements to develop. I know that it will be an exciting time for DataCore’s customers and partners as DataCore continues, as it always has, to invent new ways of raising the bar in the software-defined storage arena.