

Published on January 28th, 2014 | by Brian Suhr


Infinio Accelerator product review


Review Rating: 4 out of 5 (Innovative)

Summary: We were pleasantly surprised by the power that these little Accelerator VMs pack.


It’s no secret that the ongoing support and monitoring of a data center environment consumes a lot of an admin’s time. In virtual environments, storage is the most commonly identified performance issue. Since the VMware admin typically has little control over the SAN, and adding performance to a large shared storage array can be expensive, there is an opportunity for vendors to help solve this problem.

Enter the server-side caching market. Some of you may be asking: what does server-side caching mean? It is a method of improving storage performance by moving commonly accessed data closer to the compute. Each VMware host gets a local caching point, which could be SSD disks or server memory. The VM or application requesting the data can then read it from the local cache rather than traveling all the way to the shared storage. This typically lowers latency and frees up the array to handle requests that cannot be served from the cache.
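
To make the idea concrete, here is a minimal sketch of a read-through LRU cache in Python. It is purely illustrative and is not Infinio's implementation; the block addresses, capacity, and backend dictionary are all invented for the example.

    from collections import OrderedDict

    class ReadCache:
        """Toy LRU read cache sitting between a VM and shared storage."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.cache = OrderedDict()  # block address -> data
            self.hits = 0
            self.misses = 0

        def read(self, address, backend):
            if address in self.cache:
                self.cache.move_to_end(address)   # refresh LRU position
                self.hits += 1
                return self.cache[address]
            self.misses += 1
            data = backend[address]               # slow trip to shared storage
            self.cache[address] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict least recently used
            return data

    backend = {n: f"block-{n}" for n in range(100)}   # stand-in for the array
    cache = ReadCache(capacity=10)
    for addr in [1, 2, 3, 1, 2, 3, 1]:                # a small hot working set
        cache.read(addr, backend)
    print(cache.hits, cache.misses)                   # 4 hits, 3 misses

Every hit is a request the array never sees, which is exactly the effect the Requests Offloaded metric measures later in this review.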

This brings us to Infinio, which offers the Accelerator, a server-side caching product. Infinio is a completely software-based solution and requires no additional hardware. The Accelerator works by deploying a management VM within your vCenter Server; each host in a cluster that you want accelerated then gets an Accelerator VM deployed on it. These Accelerator VMs are small appliances that consume 1 vCPU and 8GB of memory, so the requirements are modest. They typically run on excess capacity already available on most hosts in today’s environments, or capacity can easily be added to accommodate them.

Once deployed, the Accelerator VM on each host steps into the storage path for any NFS datastore you enable caching on. It does this by inserting itself into the virtual switch and using one of your VMkernel ports. Acceleration is enabled on a per-datastore basis, so you can turn it on for NFS datastores individually. The whole process takes a single click in the management console and can be done without any interruption of service for the datastore. Removing acceleration from a datastore is just as simple: one click, with no interruption of your workloads.

The Infinio Accelerator architecture follows your vSphere cluster layout. As you accelerate nodes in a cluster, the Infinio Accelerators federate and create a global cache for the cluster, providing a larger caching layer that all hosts can use. For example, if a VM on host 1 reads data that is already cached on the accelerator on host 3, it will grab the data from that Infinio appliance rather than going back to the storage array. So as you accelerate more nodes in your cluster, the benefits of the global cache and its deduplication increase.

[Image: Infinio cache architecture]
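
Here is a rough sketch of the federation idea, again illustrative rather than Infinio's actual design: all hosts share one content-addressed store, so a block read on one host can be served to another, and identical blocks at different addresses are stored only once.

    import hashlib

    class ClusterCache:
        """Toy cluster-wide cache shared by all accelerated hosts."""
        def __init__(self):
            self.index = {}    # datastore block address -> content digest
            self.blocks = {}   # content digest -> block data

        def read(self, address, backend):
            digest = self.index.get(address)
            if digest is not None:
                return self.blocks[digest]        # served from the global cache
            data = backend[address]               # miss: one trip to shared storage
            digest = hashlib.sha1(data).hexdigest()
            self.index[address] = digest
            self.blocks.setdefault(digest, data)  # dedup: identical blocks stored once
            return data

    # Two addresses holding identical content, e.g. cloned guest OS blocks.
    backend = {"ds1/blk7": b"guest OS block", "ds1/blk9": b"guest OS block"}
    cache = ClusterCache()
    cache.read("ds1/blk7", backend)   # host 1 warms the cache
    cache.read("ds1/blk7", backend)   # host 3 reads the same block: no storage trip
    cache.read("ds1/blk9", backend)   # same content, new address: stored only once
    print(len(cache.blocks))          # 1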

 

The Good and the Bad

Pros:

+ Easy to install and manage.
You can install Infinio in less than 30 minutes, and it requires no ongoing maintenance.

+ Cost-effective licensing.
At $499 per socket, it creates a “why not?” solution.

+ Caching at the cluster level, not per VM.
The solution creates a global cache on a per-cluster basis, allowing one VM to benefit from bits already cached by another VM’s reads.

+ No additional hardware required.
The solution does not require you to purchase SSDs or PCIe flash.

Cons:
- Requires VSS switches.
The current 1.0 version only supports VSS switches; it does not yet support VDS or third-party virtual switches.
- Read caching only.
It can only accelerate reads today; writes are passed through.
- NFS only.
The current 1.0 version supports only NFS storage. We would like to see support for block storage as well. (See the updated notes at the end of the review.)

 

Ratings Explanation

The following descriptions explain what our rating was based on. These are values the review team considers important when evaluating products and their ongoing usage. For each item we describe what affected the rating, positively or negatively.

Installation: We gave a 4-star rating for the installation process because, while the product was very easy to install, the documentation was quite limited. The installer attempts to do most of the work for you and asks minimal questions, but, for example, it tries to select the proper vMotion port to piggyback on. During our test installs, depending on how your virtual switches are configured, it could select the wrong one. I would suggest that Infinio update the install process to offer the preferred choice but allow you to select another adapter from a drop-down list in case a different choice is the right one. A detailed blog post that explains the Infinio install process can be read here.

Performance: We thought that the Infinio solution performed well, very well in fact, given the limited resources it uses from your hosts. The reason for the 3.5 rating is that we think there is room for improvement and for the product to mature over time. As Infinio grows its install base and learns from real-world installs, there are things to explore, such as how much value there would be in increasing the memory of each accelerator node beyond 8GB. Another possible option is enabling the use of local SSDs in hosts for additional caching capacity.

Other: The rating of 3 stars was awarded based on the following criteria. The product currently offers little documentation around installation and ongoing use; it is a very easy-to-use product, but that does not remove the need for documentation. In addition, only NFS is supported in the current offering, and a large percentage of installs use block storage, which prevents many shops from benefiting from the solution.

Interface: The interface received a rating of 5 stars. We thought it was very easy to use, and the information was clearly presented for users to consume. A few suggestions I could offer: the ability to export the statistics to a few file types would be a great feature, and being able to view or export the log from the management VM would also be a nice improvement; currently you must grab it via SSH.

Scalability: The Infinio solution scales within each vSphere cluster. As you accelerate hosts, the Infinio nodes within the cluster federate to form a single caching layer, allowing all hosts to benefit from the larger cache.

 

Licensing Cost

Infinio licenses the product by CPU socket in your VMware hosts. Socket-based licensing is common in virtual environments and gives customers the flexibility to license hosts without worrying about VM density. The retail cost at the time of this review was $499 per socket. Licensing is sold directly through the Infinio website, making it easy to purchase, and I suspect this direct purchase model also helps Infinio keep pricing affordable for its customers.
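
The per-socket math is simple. For a hypothetical eight-host cluster with two sockets per host (numbers chosen purely for illustration):

    price_per_socket = 499                 # USD, retail at the time of this review
    hosts, sockets_per_host = 8, 2         # hypothetical cluster
    total = price_per_socket * hosts * sockets_per_host
    print(f"${total:,} to license the cluster")   # $7,984 to license the cluster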

 

Management Console

The management console for Infinio Accelerator is clean and easy to understand. A sample of the console is shown below; it has four main sections.

Storage Boost: This section looks at the busiest workload periods and the greatest boost achieved, and expresses the result in terms of disks. The value attempts to represent the performance gained from Infinio in terms of 15K RPM disk drives. In my example it tells us that the performance bump we saw was equivalent to having an extra 10 disks.

Requests Offloaded: This section shows the number of requests, or I/Os, that Infinio was able to keep from going to your shared storage. The values are presented in two different graphs plus a percentage of average offloaded I/Os.

Effective Cache Size: This section represents the effective size of the cache. It adds up the physical memory of each Accelerator VM in the cluster (8GB x number of hosts) and also calculates the deduplication benefit in terms of GB of memory. Together these give you an idea of the usable cache size both at the present time and over a 14-day peak. (A worked example appears after the console image below.)

Datastores: The final section shows which datastores are being accelerated and which are not. For the cached ones you can see the performance through different charts, which I explain further below.

[Image: Infinio management console view]
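
As a worked example of the Effective Cache Size arithmetic: the raw figure is simply 8GB times the number of hosts, and the deduplication benefit scales it up. The six-host cluster and the 2.5x dedup factor below are assumptions for illustration, not measured values.

    hosts = 6                          # assumed cluster size
    accelerator_ram_gb = 8             # each Accelerator VM contributes 8GB
    raw_cache_gb = hosts * accelerator_ram_gb
    dedup_factor = 2.5                 # assumed: each cached GB stands in for 2.5GB of reads
    effective_gb = raw_cache_gb * dedup_factor
    print(raw_cache_gb, effective_gb)  # 48 120.0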

 

The image below shows the performance view for a single datastore using the current 1 Hour view. In the blue section you get a high-level view of the datastore, showing the performance values as a current summary. If you click on the datastore, it expands and presents three values, each with a graph to its right: Response Time Improvement, Requests Offloaded, and Bandwidth Saved. These are the important values for judging the impact of caching on each datastore.

[Image: Infinio 1 Hour stats]
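
For readers who want to see how such metrics can be derived, here is one plausible way to compute the three values from raw hit/miss counters and latencies. The counters and latencies are invented for the example, and Infinio may calculate its numbers differently.

    hits, misses = 90_000, 10_000                 # invented counters
    cache_latency_ms, array_latency_ms = 0.3, 6.0 # invented latencies
    avg_read_kb = 4

    requests_offloaded = hits / (hits + misses)
    blended_ms = (hits * cache_latency_ms + misses * array_latency_ms) / (hits + misses)
    response_time_improvement = array_latency_ms / blended_ms
    bandwidth_saved_mb = hits * avg_read_kb / 1024
    print(f"{requests_offloaded:.0%} offloaded, "
          f"{response_time_improvement:.1f}x faster, "
          f"{bandwidth_saved_mb:.0f}MB saved")    # 90% offloaded, 6.9x faster, 352MB saved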

 

The image below shows the same metrics switched to the 24 Hour view. This represents the caching effect over a longer period, allowing you to see the performance impact over time and spot peaks in demand in your environment.

[Image: Infinio 24 Hour stats]

 

Read Test Performance

I wanted to start with a workload that was all reads, as this would showcase what Infinio can do in front of the backend storage I was using; I was interested in how much read performance I could squeeze out of this test. To execute the test I used I/O Analyzer from VMware Labs. The workload used was the 4K, 100% read, 100% random profile included with the tool. You can grab the full PDF from the Infinio read test if you want further details.

The image below represents the view from the Infinio management console after the all-read test completed. The console shows the read acceleration that Infinio was able to provide during the selected period. The results show a 2x improvement in response time along with high values for requests offloaded and bandwidth saved.

[Image: Infinio read test console results]

 

The image below shows the ESXTOP data captured by I/O Analyzer during the read test. You can see that we were able to squeeze more than 10,000 IOPs out of a pair of SATA drives, which is pretty impressive given the limited scope of our test case.

[Image: Infinio read test per VM view]
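
To put that number in perspective, here is a quick back-of-the-envelope check. The 80 IOPs per drive figure is a rough rule of thumb for SATA drives on random reads, assumed for illustration rather than measured:

    sata_iops_per_drive = 80           # rough rule of thumb, assumed
    raw_iops = 2 * sata_iops_per_drive # what the two SATA drives can do alone
    observed_iops = 10_000             # average seen in the test
    served_from_cache = 1 - raw_iops / observed_iops
    print(f"~{served_from_cache:.1%} of reads never reached the disks")   # ~98.4%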

 

The following chart shows how the IOPs built over the test period. The test even reached the 12,000 IOPs mark towards the end, bringing the test average north of 10,000 IOPs.

[Image: Infinio read test IOPs]

 

The next chart shows the throughput during the testing period. Much like the IOPs, the throughput steadily increased through the test.

[Image: Infinio read test throughput]

 

The last chart shows the latency for our test period. During the test Infinio accelerated our reads, and as a result our latency was positively affected. You can see that the read latency stayed very close to 1ms, while write performance steadily degraded for most of the test.

[Image: Infinio read test latency]

 

The last image is a sample that I took with ESXTOP at the height of the test. I watched the performance manually and snapped the image when the IOPs (CMDS/s) reached their highest value. This was one of the peaks and is higher than the average shown in the earlier sample from I/O Analyzer.

[Image: Infinio read test ESXTOP view]

 

SQL Test Performance

Next I wanted to try a workload that was not all reads, as this would expose the weakness of the backend storage I was using; but I was interested in how much Infinio might help the overall test by improving the read performance. To execute the test I again used I/O Analyzer from VMware Labs. The workload used was the SQL Server 16K.icf profile included with the tool. You can grab the full PDF from the Infinio SQL test if you want further details.

The image below represents the view from the Infinio management console after the SQL test completed. The console shows the read acceleration that Infinio was able to provide during the selected period. The results show a 13x improvement in response time along with high values for requests offloaded and bandwidth saved.

[Image: Infinio SQL test console results]

 

The image below shows the ESXTOP data captured by I/O Analyzer during the SQL test. We can see that we were able to squeeze 364 IOPs out of a pair of SATA drives. I would expect that if we put a properly architected storage array behind Infinio, the results would be very positive.

[Image: Infinio SQL test per VM view]

 

The following chart shows the IOPs over the test period. The test started out with very good performance, but soon the writes began to affect the overall results; this was a limitation of the backend disks, not the Infinio Accelerator.

[Image: Infinio SQL test IOPs]

 

The next chart shows the throughput values over the test period. Similar to the I/O values, throughput started to suffer due to write performance from the lack of disks.

[Image: Infinio SQL test throughput]

 

This chart does the best job of showing the benefits of Infinio on this SQL test. It shows the datastore latency during the testing period, and it clearly shows that Infinio was able to keep the read latency low throughout, while the write latency kept climbing rapidly through the test.

[Image: Infinio SQL test latency]

 

The last image is a sample that I took with ESXTOP at the height of the test. I watched the performance manually and snapped the image when the IOPs (CMDS/s) reached their highest value. This was one of the peaks and is higher than the average shown in the earlier sample from I/O Analyzer.

[Image: SQL test ESXTOP stats]

 

Management VM Stats

The performance impact on each host was something I was interested in. I collected some performance graphs from vCenter to show the Accelerator VM’s resource usage during the test windows. The peaks shown in each of the graphs below correspond to the windows in which the tests were run.

CPU Impact: During the height of the big tests, the CPU usage of the Accelerator VM was between 30% and 53%. In a low steady state, with just a few VMs running and little to no load, CPU usage averaged around 4%. Overall, for the performance these Accelerators offer, that is not a huge CPU impact.

[Image: Accelerator VM CPU usage]

 

Memory Impact: The memory usage of the Accelerator VM is fairly steady at just under 8GB. With the VM sized for 8GB, this kind of memory usage is to be expected.

[Image: Accelerator VM memory usage]

 

Network Impact: Similar to CPU, network usage spiked during the larger tests. In a low-usage steady state the network was almost idle. At its highest point the Accelerator VM was transferring 51MB/s, which is about 40% of a 1GbE link.

[Image: Accelerator VM network usage]
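
The 40% figure is easy to verify; a 1GbE link carries roughly 125MB/s of payload before protocol overhead, so the small discrepancy below is just rounding:

    peak_mb_per_s = 51                 # highest transfer rate observed
    gbe_mb_per_s = 1000 / 8            # 1GbE ~ 125MB/s, ignoring overhead
    print(f"{peak_mb_per_s / gbe_mb_per_s:.0%} of a 1GbE link")   # 41% of a 1GbE link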

 

The Wrap Up

To wrap up this review in a few words: Infinio has a pretty solid product offering. For a new type of technology and a first GA release, things are pretty impressive so far. Granted, there are things we think they can improve on, and we have mentioned them, but I am looking forward to the future of Infinio Accelerator.

Got any feedback from your own usage of Infinio? Drop us a note in the comments and share it with the community.

 

Updates

Infinio 2.0:

 

  • Support has been extended from NFS to include SAN environments such as Fibre Channel and iSCSI.
  • Application-level reporting that starts with a weekly datastore view and enables drill down to a minute-by-minute, per-application view.
  • Usability enhancements, including a redesigned GUI; advanced datastore graphs that provide additional insight into performance; more granular control over acceleration; more resilient integration between vCenter and Infinio Accelerator 2.0; and enhanced large-enterprise support for multiple DNS servers.

 

 



About the Author

Brian is a VMware Certified Design Expert (VCDX) and a Solutions Architect for a VMware partner, specializing in VDI and cloud project designs. He is active in the VMware community and helps lead the Chicago VMUG group. He was awarded VMware vExpert status from 2011 to 2015.


