

Published on March 10th, 2014 | by Brian Suhr


PernixData FVP review


Review Rating

Rated on: Installation, Performance, Documentation, Interface, Scalability, Flexibility

Summary: PernixData was built from great DNA and its performance will not disappoint.

Overall: 4.6 – Kicks Butt


It’s no secret that the ongoing support and monitoring of a data center environment takes a lot of an admin’s time. When it comes to virtual environments, storage is the most common performance issue identified. The VMware admin typically has little control over the SAN, and adding performance to a large shared storage array can be expensive, which leaves an opportunity for vendors to help solve this problem.

Enter the server-side caching market. Some of you may be asking: what does server-side caching mean? To help with storage performance, server-side caching moves commonly accessed storage bits closer to the compute. This means there is a local caching point in the VMware hosts, which could be SSDs or server memory. The VMs or applications requesting the data can then read those bits from the local cache rather than traveling all the way to the shared storage. This typically lowers latency response times and frees up the storage array to handle requests that cannot be served out of the cache.
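Conceptually, a server-side read cache is simple. Here is a minimal sketch in Python, purely to illustrate the idea; this is not PernixData's implementation, and the class and names are made up:

    # Illustrative sketch of server-side read caching (not PernixData code).
    # Reads are served from local flash on a hit; on a miss the block comes
    # from the shared array and is copied into the cache on the way back.
    class ServerSideReadCache:
        def __init__(self, san):
            self.san = san     # the slow shared array, modeled as a dict
            self.cache = {}    # local flash cache: block -> data

        def read(self, block):
            if block in self.cache:      # cache hit: local flash, low latency
                return self.cache[block]
            data = self.san[block]       # cache miss: full trip to the SAN
            self.cache[block] = data     # warm the cache for future reads
            return data

    san = {7: "block-7-data"}            # pretend array contents
    cache = ServerSideReadCache(san)
    cache.read(7)   # miss: fetched from the SAN, then cached
    cache.read(7)   # hit: served from local flash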

This brings us to PernixData, which offers FVP, a server-side caching product. FVP is a software-based solution that introduces a seamless layer of flash resources. The solution is resilient to power loss; reboots or power failures do not result in data loss. FVP works by deploying a PernixData management server on a Windows server. The management server can be a virtual machine or a physical server, as long as it can communicate with your vCenter server. Then, on each host in a cluster that you want accelerated, FVP extends the kernel by installing a kernel extension module. PernixData utilizes public APIs and the pluggable storage architecture while extending the kernel code, which is what allows it to intercept storage traffic and either accelerate it or pass it through.

Once FVP is deployed on each host, it can be enabled per datastore for any datastore you want accelerated. It uses the vMotion network for communication; an upcoming update will add the ability to select an alternate network. In addition to enabling per datastore, you can also enable acceleration per VM. This is done within the vSphere client today using the PernixData plugin. You can easily add or remove acceleration of virtual machines without interrupting your workloads. In the near future PernixData is expected to offer a vSphere web client plugin to support the direction in which VMware is driving its management platform.

The PernixData FVP architecture follows your vSphere cluster layout. As you accelerate vSphere cluster nodes, they join the FVP cluster and work together to provide acceleration services. This allows for a larger caching layer that can be used by all hosts in the cluster and provides additional redundancy. The redundancy feature is very important because PernixData is capable of accelerating writes as well as reads. Heck, reads are pretty easy to speed up; lots of people are doing that in some fashion today. But no one else is doing writes, because it's a hard problem to solve. The clustered nature of FVP is important, as it supports VMware HA and DRS while also leveraging the flash resources across the FVP cluster. To read more about these remote flash benefits I recommend the following posts from Frank Denneman: FVP remote cache access and Hit rate metric.
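To sketch what that clustered behavior adds, the read path gains one more stop: local flash first, then a peer host's flash across the network, and only then the SAN. This matters after a vMotion, when a VM's warm cache still lives on its previous host. (Illustrative Python only; the names and structures are hypothetical, not FVP's code.)

    # Illustrative clustered read path (not PernixData code).
    def clustered_read(block, local_flash, peer_flashes, san):
        if block in local_flash:            # fastest: local flash hit
            return local_flash[block]
        for peer in peer_flashes:           # next: a peer host's flash, reached
            if block in peer:               # over the network
                local_flash[block] = peer[block]   # re-warm the local cache
                return local_flash[block]
        return san[block]                   # last resort: the shared array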

Well, PernixData has solved that hard problem and can provide the same speed increase for your writes from the FVP layer. To protect the writes, you choose the level of protection: whether writes will be written to one or two additional hosts. This means that should there be a failure on the host running the VM, one or two other hosts also contain the write data and can commit it to the shared storage.

There are two write caching modes supported by FVP. The first is write through, which only acknowledges writes once they have been committed to both the server-side flash and the underlying SAN. In this mode write latency itself is not reduced, but overall performance should still improve because reads are accelerated by FVP and offloaded from the array. The other mode is write back, where writes are acknowledged as soon as they have been committed to the server-side flash. The writes are then de-staged to the SAN asynchronously over time.
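To make the two acknowledgment policies concrete, here is a rough Python sketch, including the peer replication described above (illustrative only; the function names and mechanics are assumptions, not PernixData's actual code):

    # Illustrative write-path sketch (not PernixData code).
    def write_through(block, data, flash, san):
        flash[block] = data    # copy to local flash (speeds up later reads)
        san[block] = data      # ...and commit to the SAN before acknowledging
        return "ack"           # ack waits on the (slow) SAN write

    def write_back(block, data, flash, peer_flashes, destage_queue):
        flash[block] = data                  # write to local flash
        for peer in peer_flashes:            # replicate to 1 or 2 peer hosts so
            peer[block] = data               # a host failure cannot lose the write
        destage_queue.append((block, data))  # SAN commit happens asynchronously
        return "ack"                         # ack at flash speed, not SAN speed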

[Image: PernixData FVP architecture diagram]

 

The Good and the Bad

Pros:
+ Easy to install and manage
You can install PernixData in 30 to 90 minutes depending on the size of the install. The vCenter plugin is very easy to use, and in 15 minutes you should be a master.

+ Read/write caching
FVP can be set up to accelerate both reads and writes. This is a differentiating feature for PernixData; by also caching writes, the performance of additional workloads can benefit.

Cons:
– Block only
The current 1.x version only supports block storage. In the future it would be nice for them to also accelerate NFS storage.
– Pricing
At $7,500 list per host, this feels a little high. Between the license cost and flash devices per host, for say 8+ hosts you are approaching the cost of some hybrid flash appliances.

 

Ratings Explanation

The following descriptions explain what our ratings were based on. These are values that matter to the review team when evaluating products and their ongoing usage. For each item we describe what affected the rating positively or negatively.

Installation: We gave a 4.5 star rating for the installation process because, while the product was very easy to install, there were some manual tasks. The installer for the FVP management server is very simple and takes just a few minutes. The time-consuming part is installing the VIBs for the host extensions, which must be done via the command line or Update Manager. While it does add time, it's not a difficult process, so we only made a small point deduction. Another positive for this rating was solid documentation for the install process, which is rare for startups.
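For reference, installing a host extension VIB from the ESXi command line typically looks something like the line below; the bundle path and file name here are made up, so use the offline bundle referenced in the FVP install guide:

    esxcli software vib install -d /vmfs/volumes/datastore1/pernixdata-fvp-bundle.zip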

Performance: We thought the PernixData FVP product performed amazingly well. The reason for the 4.5 is that there is still probably a little room for improvement. What could push the rating to 5 stars would be for PernixData, as it learns from customer implementations, to share sizing data with customers. This would help in selecting the right type of flash and how much capacity to purchase for the workloads being accelerated.

Documentation: The rating of 4.5 stars was awarded based on the following criteria. The product currently offers a good set of documentation covering installation and ongoing use. It is a very easy product to use, and I was pleasantly surprised to see documentation this good from a startup.

Interface: The interface received a rating of 4.0 stars. We thought it was easy to use and the information was clearly presented. There is some room to improve here, and there also seemed to be a slight discrepancy between the values reported in the plugin and what we saw from ESXTOP.

Scalability: The PernixData solution scales within each vSphere cluster. As you accelerate VMs, FVP scales with the vSphere cluster to form a caching layer, allowing all hosts to benefit from the larger cache. FVP also allows the use of multiple flash devices per host.

Flexibility: The FVP product supports the VMware HCL for servers, network cards and storage adapters, which removes the hurdle of a small supported-hardware list. PernixData also allows you to mix the types of flash in your FVP cluster; you can use both SATA and PCIe flash at the same time.

 

PernixData Licensing Cost

PernixData licenses the product by VMware host. Socket-based licensing is the most common way of licensing virtual environments, but host-based licensing simplifies things even further. This gives customers the flexibility to license hosts without worrying about VM density. Currently PernixData offers two licensing options, an SMB Edition and an Enterprise Edition; a quick cost comparison follows the list below.

  • The Enterprise Edition of PernixData FVP is sold on a per host basis, with no limit on the number of hosts or VMs deployed. There is also no limit on the number of processors or flash devices supported per host. List price for the Enterprise Edition of FVP is $7,500 (USD) per host.
  • The SMB Edition of PernixData FVP includes licenses for up to 4 hosts, with a maximum of 2 processors and 1 flash device per host. Up to 100 VMs can be supported across these hosts. List price for the SMB Edition is $9,999 (USD).
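To put those list prices in perspective: a full 4-host SMB deployment works out to roughly $2,500 per host ($9,999 ÷ 4), while the same 4 hosts on the Enterprise Edition would list at $30,000 ($7,500 × 4). If you fit within the SMB limits on hosts, processors, flash devices and VMs, it is the far cheaper option.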

 

Management Console

The management console for PernixData FVP is a vCenter plug-in and is clean and easy to understand. A sample image is shown below; there are four main tabs that offer multiple views into the information within the plug-in.

Flash Clusters: This tab is how you create and interact with the PernixData clusters. From this view you see the list of Flash Clusters, whether you have one or many. There is a simple graphical map of the flash cluster that illustrates the hosts and the flash devices inside each one, along with counts of the VMs that are and are not being cached by the cluster. And lastly, there are some high-level stats for the clusters that show the real-time performance of the flash cluster.

Usage: This section allows for a more detailed view, either per VM or per flash device. Both of these views are very helpful in understanding how the pieces are performing, or in a troubleshooting scenario. In the VM-based view you will see how much local flash storage the VM is consuming, plus the network flash usage if you are using write redundancy. It's also easy to see which flash devices are being used by each VM. The flash device view presents similar details in the opposite manner: it shows which VMs are consuming the selected device and how much capacity they are using. This section also provides insight into the remote access behavior of network flash.

Performance: This section initially shows the top consumers of the flash cluster in a block-style layout of VMs. Clicking on any VM takes you to a more detailed view showing values for IOPS, latency, throughput, hit rate and eviction rate. These stats are available per VM or per flash device so that admins can find the details they need for reporting or troubleshooting.

Advanced: The final section is the advanced settings tab, which is used for viewing the setup. This tab also contains the FVP licensing and the ability to export FVP cluster logs for support if needed.

The following image gallery has samples from each of the sections described above. I give PernixData credit: they have not created anything overly flashy here. The plug-in is a good foundation that works for a 1.x product, and they seem to have covered all the main details I would expect from a user's point of view.

  

PernixData Hardware Requirements

The good thing is that the fine folks at PernixData are matching VMware's supported hardware list. In general this means that if you purchased something supported by vSphere, you should be in the clear for FVP as well. By following VMware's lead on hardware support, there should not be many customer questions about what to use. The one thing I would stress is to pay close attention to the flash devices you purchase for FVP use; almost any device will likely work, but you will see differences in performance and longevity. This is important to follow: I would not just pick up some SSD drives from Best Buy for this.

PernixData has also started the PernixDrive initiative, in which they test and approve flash devices. You can find more information here: https://www.pernixdata.com/partners/pernixdrive/

I have pasted a sample of the different areas of hardware support from the PernixData documentation below. This shows the major areas, with some examples of what you will be considering.

 

[Image: PernixData FVP – HCL]

 

Read Test Performance

I wanted to start out with a workload that was all reads, as this is an easy problem to solve and would showcase what PernixData can do for the back-end storage I was using. I was interested in how much read performance I could squeeze out of this test. To execute the test I used I/O Analyzer from VMware Labs. The workload used for this test is the 4K 100% Read profile that is included with the tool.

The following chart shows how the IOPS built over the test period, reaching near the 40,000 IOPS mark towards the end. As PernixData was better able to cache the data, the hit rate went up and so did the IOPS. The test ran on a single host with a single Core i5 processor, and the shared storage was a pair of SATA drives. This really shows how much PernixData can assist with reads.

[Image: PernixData read performance – IOPS]

 

The next chart shows the throughput during the testing period. Much like the IOPS, the throughput increased dramatically once the flash hit rate went up in the later part of the test.

[Image: PernixData read performance – throughput]

 

The next chart shows the latency for our test period. During the test PernixData accelerated our reads, and our latency benefited as a result. You can see that the read latency stayed very close to 1 ms or less for most of the test.

[Image: PernixData read performance – latency]

 

This chart shows PernixData's flash hit rate percentage. During the first two-thirds of the 10-minute test run the hit rate was less than 10%, but it still yielded around 3,000 IOPS. In the later part of the test the hit rate rose to 100%, and that is when the IOPS soared to near 40,000 on our test setup.

[Image: PernixData read performance – hit rate]
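The relationship between hit rate and what the VM actually sees is a weighted blend of flash and SAN latency, which is why the IOPS take off once the cache warms. A quick back-of-the-envelope sketch in Python; the latency figures are assumed for illustration, not measured values from this test:

    # Effective latency as a blend of flash and SAN latency (assumed numbers).
    def effective_latency_ms(hit_rate, flash_ms=0.5, san_ms=15.0):
        # hit_rate is the fraction of reads served from local flash (0.0-1.0)
        return hit_rate * flash_ms + (1.0 - hit_rate) * san_ms

    print(effective_latency_ms(0.10))  # ~13.6 ms: a cold cache barely helps
    print(effective_latency_ms(1.00))  # 0.5 ms: at 100% hit rate it is all flash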

 

The last image is a sample I took with ESXTOP at the height of the test. I watched the performance manually and snapped the image when the IOPS (CMDS/s) reached their highest value. This was one of the peaks during the test; I always like to compare ESXTOP values to what the tools present to you.

[Image: ESXTOP during the read test]

 

SQL Test Performance

For the second test I wanted to try a workload that was not all reads, as that would expose the weakness of the back-end storage I was using. It would also demonstrate PernixData's strength in accelerating writes as well. To execute the test I again used I/O Analyzer from VMware Labs. The workload used for this test is the SQL Server 16K.icf profile that is included with the tool. You can grab the full PDF of the PernixData SQL test if you want further details.

The image below represents the view from the PernixData management console after the SQL test completed. The console shows the IOPS acceleration that PernixData was able to provide during the selected period. The results show a large improvement in performance.

[Image: PernixData SQL test – IOPS view]

 

The following chart shows the IOPS over the test period. The test started out with very good performance, but soon the writes began to affect the overall performance of the test. This was a limitation of the back-end disks, not PernixData.

[Image: PernixData SQL test – IOPS chart]

 

The next chart shows the throughput values over the test period.

[Image: PernixData SQL test – throughput chart]

 

This chart does the best job of showing the benefits of PernixData in this SQL test. It shows the datastore latency during the testing period, and it makes clear that PernixData was able to keep the write latency low throughout the test, while the read latency kept slowly climbing.

[Image: PernixData SQL test – latency chart]

 

The following image was taken from the PernixData plugin in vCenter, and it shows that the effective latency was kept very low even though the datastore latency from the actual disks was over 700 ms during the test. PernixData was able to hide the majority of this bad performance in our tests.

[Image: PernixData SQL test – effective vs. datastore latency]

The last image is a sample I took with ESXTOP at the height of the test. I watched the performance manually and snapped the image when the IOPS (CMDS/s) reached their highest value. This was one of the peaks, and it is a higher value than the average shown in the earlier sample from I/O Analyzer. It is pretty great for PernixData to turn out these numbers from a small SSD and a pair of SATA disks: total IOPS peaked around 5,600, with writes at almost 2,000.

[Image: ESXTOP during the SQL test]

 

Max Read IOPS

For a final test I wanted to see the maximum performance I could get out of this test configuration. This was accomplished with an all-reads workload and a small working set that fits right into the cache, showcasing what PernixData is capable of on my wimpy test box. To execute the test I once again used I/O Analyzer from VMware Labs. The workload used for this test is the Max Read IO profile that is included with the tool. You can grab the full PDF of the PernixData test if you want further details.

The image below represents the view from the PernixData management console after the Max IO test completed. The console shows the read acceleration that PernixData was able to provide during the selected period. This again shows some impressive results, nearing 50,000 IOPS.

[Image: PernixData max read test – IOPS view]

 

The following chart shows how the IOPS started high and built over the test period, even reaching near the 50,000 IOPS mark towards the end. That is incredible given my test host had only a single 60 GB consumer-grade SATA3 flash drive in it.

[Image: PernixData max read test – IOPS chart]

 

The next chart shows the throughput during the testing period. Much like the IOPS, it performed well and increased at the end of the test.

[Image: PernixData max read test – throughput chart]

 

The last chart shows the latency for our test period. During the test PernixData accelerated our reads to an incredible level, and the read latency stayed flat at around 1 ms.

[Image: PernixData max read test – latency chart]

 

The next image shows the hit rate for our testing period from the PernixData plugin. With this Max IO test the hit rate was immediately 100% and stayed there through the test window for this small read test.

[Image: PernixData hit rate – max read test]

 

The last image is a sample I took with ESXTOP at the height of the test. I watched the performance manually and snapped the image when the IOPS (CMDS/s) reached their highest value. This was one of the peaks and is a little higher than the value shown in the plugin. For a few brief periods the IOPS peaked at 50,000; for the rest of the period it operated in the 35,000 to 45,000 range.

[Image: ESXTOP during the max read test]

 

Final Thoughts

Overall we have been really impressed with what PernixData was able to create and release in the first version of the FVP product. This speaks to the level of talent and engineering the company has put together. For me personally, they have probably the best-performing product in this maturing space, and also the most complete offering, which includes read/write caching, documentation, hardware support and product support.

I have spoken with several people in the organization, from sales engineers to the CTO, and all were very helpful in answering questions and explaining the technology. It is good to see a young company support the community in an effort to educate. We look forward to what PernixData might do in future versions of the product.

 

Disclosure:

I am a member of the Pernix Pro program which entitles me to an NFR license. This license was used in the testing of the product for this review.

 



About the Author

Brian is a VMware Certified Design Expert (VCDX) and a Solutions Architect for a VMware partner. He is active in the VMware community and helps lead the Chicago VMUG group. He specializes in VDI and cloud project designs, and was awarded VMware vExpert status from 2011 to 2015.



6 Responses to PernixData FVP review

  1. Pingback: PernixData FVP review at Data Center Zombie | VirtualizeTips

  2. Pingback: Brian Suhr – PernixData FVP Review | PernixData Blog

  3. saj says:

    time for an update, NFS is being introduced with other features

  4. Syed says:

Hi Brian,

    Hope you are well.

With all the new updates and changes in the FVP version, along with the release of PernixData Architect, can we look forward to your review soon?

    Regards,
    Syed.

    • Brian Suhr says:

      Hello,

Given that I moved to the vendor side late last year, I would not expect any updates at this time. If someone else wanted to write the content and it was of the same quality, I would consider posting it.

