Published on September 30th, 2014 | by Brian Suhr
Tintri VMstore storage review
Summary: One of the most innovative storage products ever!
It Just Works!
Let's face it: storage is a hot topic in virtualization, and it causes or solves a lot of issues for customers. That's why it's important to find a solution that can provide the services you require and exceed the performance you are looking for. Admins are increasingly asked to do more with fewer resources, so as storage matures it is also becoming easier to manage. Tintri has staked its reputation on solving these issues. They have created a storage array so simple my kids could manage it, but that does not mean it lacks features. It also performs like an all-star, so don't worry that it does not have hundreds of knobs to turn. The team at Tintri is incredibly smart and has created a product to meet the most demanding workloads run in virtualized environments, and you can count on the VMstore to make the right decisions rather than you having to make reactive tweaks.
I have been working with Tintri storage for more than three years now; I was lucky enough to have access to a pre-GA test array. I worked with the beta product and watched them start with a single offering and grow to the three models they offer today: a larger and a smaller version alongside the original array that started everything for Tintri. I will cover the models in more detail later in this review. Besides increasing the number of models, they have expanded their features to include replication, multi-hypervisor support, VMware View integration, and global management. These, along with other features and their VM-aware approach, position them as a leader in the storage marketplace.
What is Hybrid Flash Storage?
A simple definition of hybrid flash storage is an array that combines flash storage and spinning hard disks. These two types of drives are not used in a classic three-tier architecture; hybrid flash vendors combine both types of storage into a single tier or pool and typically use the flash to accelerate read and write traffic. This allows them to offer performance that approaches an all-flash array at a lower price point.
The VMstore line is a growing set of different-sized storage arrays. I like to think of them as storage appliances, simply because they are highly available, fixed-capacity providers of storage. A VMstore is a 3U or 4U chassis built on commodity technology. The real magic is in the storage software that Tintri has created, which I will go deeper into next. The storage arrays are all dual-controller devices that are Active/Standby for storage connectivity. Connectivity to the array is accomplished over 10GbE connections using NFS as the storage protocol. Each controller has a pair of 10GbE storage connections and a pair of 1GbE management connections. This allows for a highly available design by providing redundant controllers and network connectivity.
VMstores are hybrid storage in the sense that the drives installed in each array are 50% flash drives and 50% hard disk drives (HDDs). This allows Tintri to provide a flash-like experience while enjoying the capacity of HDDs. By using a hybrid approach, Tintri lets customers take advantage of the benefits of both disk types while keeping an affordable price. The promise of Tintri is that 99% of reads and writes will be served from the flash layer, and my experience shows that they do an excellent job of delivering on this.
The current Tintri OS version is 3.0, and that is what this review is based on. In this version Tintri supports both VMware vSphere and KVM-based hypervisors. Both types of hypervisors use NFS for connections to a VMstore. The nice thing is that to the VMstore both hypervisors simply present VMs; when looking in the management console, you see VMs from both platforms side by side. There is nothing you need to do differently when managing your VMs. All environments can share the total pool of capacity and performance, with no need to separate resources or management. In the future Tintri will also support Hyper-V, and while the protocol will be SMB v3, those VMs will still be managed alongside the others. This keeps things very simple for admins and does not force them to learn anything different on the Tintri side.
Since Tintri storage is VM-aware, it has the unique ability to understand VMs and vDisks along with low-level storage details (IO, throughput, latency, etc.) in one place. Other vendors typically have only one or the other. This allows Tintri to make intelligent decisions on how to control and provide performance for the VMs it serves.
To make this all possible, Tintri uses lanes for each vDisk and VM located on a VMstore. This allows IO tracking down to the individual vDisk on every VM. This level of visibility, along with the marriage of storage details, is how Tintri is able to work its magic. They can understand, at this level, historical data about what resources of which type each vDisk has been consuming, along with what its current requests are. These swim lanes are also critical to how they report performance details on these resources to admins through the management interface.
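To make the per-vDisk accounting idea concrete, here is a minimal sketch of what tracking IO in lanes keyed by (VM, vDisk) might look like. This is not Tintri's code; the class and field names are invented for illustration only.

```python
from collections import defaultdict

class IOStats:
    """Toy per-vDisk "lane" accounting: every IO is charged to its
    (vm, vdisk) pair, so stats can be rolled up per VM or inspected
    per individual virtual disk."""

    def __init__(self):
        self.lanes = defaultdict(lambda: {"ios": 0, "latency_ms": 0.0})

    def record(self, vm, vdisk, latency_ms):
        lane = self.lanes[(vm, vdisk)]
        lane["ios"] += 1
        lane["latency_ms"] += latency_ms

    def vm_summary(self, vm):
        # Roll all of one VM's lanes up into a single summary.
        ios = sum(l["ios"] for (v, _), l in self.lanes.items() if v == vm)
        lat = sum(l["latency_ms"] for (v, _), l in self.lanes.items() if v == vm)
        return {"ios": ios, "avg_latency_ms": lat / ios if ios else 0.0}

stats = IOStats()
stats.record("sql01", "data.vmdk", 2.0)
stats.record("sql01", "log.vmdk", 1.0)
stats.record("sql01", "data.vmdk", 4.0)
```

The same records answer both questions the article describes: how is the VM doing overall, and which specific vDisk is generating the load.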
Another important part of the Tintri architecture is its ability to manage performance isolation. Think of it as QoS for storage, but it's far more intelligent and does not need to be tuned by the customer; the VMstore does all the management in this area. Performance isolation is how Tintri tracks who uses what resources and prevents a big, hungry VM from hurting the performance of others while still providing it as much performance as possible. This is where they use metrics like IOPS, throughput, and latency, along with history, to make these types of decisions. The overall mantra is to punish the new guy: if a new VM, or one with an unusual spike in demand, shows up, it will be throttled, allowing the greater VM population to perform normally.
Tintri VMstores do an incredible job of managing diverse workloads. Sure, you can toss on a thousand virtual desktops or hundreds of servers and it will not even break a sweat. But they really won me over with the fact that I can easily run hundreds of desktops, servers, and other demanding workloads together without issue. This is accomplished by their use of flash and performance isolation. I will talk a bit more about this later in the SQL test.
The Tintri OS is built with a flash-first idea in its architecture, which is how they are able to perform like an all-flash array. The flash layer is used for 100% of all writes sent to a VMstore, with a goal of 99% of reads. The flash layer uses inline dedupe and compression to allow a larger amount of data to be served out of flash. Tintri uses a working-set analysis to understand VMs, which also helps provide fairness of performance. The working-set analysis also helps with the classification of hot and cold blocks, with hot blocks residing in the flash layer and cold blocks being written to or read from the HDD layer.
In Tintri OS a block must be proven cold before it will be evicted from flash. This is the opposite of most arrays, which must prove a block hot before it is promoted to flash; that is usually too slow for the VM to benefit from the performance. By considering every block of data hot by default, Tintri is able to serve so many reads from the flash layer. Only as blocks are proven cold are they written down to the HDDs. The flush of cold data does not happen all at once in large chunks; Tintri pushes out the oldest blocks first and also pre-stages some cold data before it is evicted to prevent any bottlenecks. With a flash-first approach, many blocks are never classified as cold because they are over-written before aging out, never reaching the HDDs.
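The flash-first flow above can be sketched in a few lines. This is a toy model under my own assumptions (a simple oldest-first eviction standing in for Tintri's real working-set analysis), not their implementation: every write lands in flash, a re-write refreshes a block's age, and only the oldest blocks get destaged to HDD when flash fills, so a frequently over-written block may never touch the spinning disks.

```python
from collections import OrderedDict

class FlashFirstStore:
    """Toy flash-first layout: writes always land in flash; when flash
    is full, the oldest (coldest) blocks are destaged to HDD."""

    def __init__(self, flash_blocks):
        self.flash = OrderedDict()      # block -> data, oldest first
        self.hdd = {}
        self.capacity = flash_blocks

    def write(self, block, data):
        # All writes go to flash; re-writing a block refreshes its age,
        # so hot blocks keep moving to the "young" end of the queue.
        self.flash.pop(block, None)
        self.flash[block] = data
        while len(self.flash) > self.capacity:
            cold, value = self.flash.popitem(last=False)  # oldest first
            self.hdd[cold] = value                        # destage to HDD

    def read(self, block):
        # Served from flash when possible (the common case).
        return self.flash.get(block, self.hdd.get(block))

store = FlashFirstStore(flash_blocks=3)
for i in range(5):
    store.write(f"blk{i}", i)   # blk0 and blk1 age out to HDD
store.write("blk2", 99)         # re-write keeps blk2 hot, in flash
```

Real arrays weigh access history far more carefully than this queue does, but the shape is the same: blocks start hot and must earn their eviction.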
To wrap up this part, Tintri provides an NFS plugin for vSphere that allows it to take advantage of the VAAI NFS primitives. This lets Tintri use its zero-space-pointer snapshot technology to create near-instantaneous clones of VMs for vSphere. With this approach, each VM created on a VMstore is very space efficient and only consumes additional space when unique data is written to it. This is all accomplished without any performance impact.
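The pointer-based cloning idea can be illustrated with a small copy-on-write sketch. The class and method names are hypothetical, not Tintri's API: a clone starts as nothing more than references to the parent's blocks, so it costs near-zero space and time, and only a write of unique data allocates anything new.

```python
class VMImage:
    """Toy pointer-based image: blocks maps an index to a shared data
    object, so a clone is just a copy of pointers (copy-on-write)."""

    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block index -> data object

    def clone(self):
        # Near-instant: duplicate the pointers, not the data.
        return VMImage(self.blocks)

    def write(self, index, data):
        # Only writing unique data consumes new space in this image.
        self.blocks[index] = data

    def unique_blocks(self, parent):
        # Which blocks no longer point at the parent's copy?
        return {i for i, d in self.blocks.items()
                if parent.blocks.get(i) is not d}

golden = VMImage({i: bytes([i]) for i in range(4)})
clone = golden.clone()       # consumes no new block storage
clone.write(2, b"patched")   # first unique write allocates one block
```

Until the write, every block in the clone is literally the same object the golden image holds, which is why a hundred clones of a template can sit in the space of one.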
All Tintri VMstores offer a dual-controller architecture. Each controller on the storage array offers dual 10GbE connections for NFS traffic and dual 1GbE management connectivity. These redundant connections allow a link failure to occur without requiring a controller failover, and the redundant storage controllers allow for a complete controller failover without impacting performance.
Tintri has an active/standby storage controller design, allowing a VMstore to provide maximum performance from the single active controller. Tintri keeps the standby controller warm to make failover quick. This also allows them to offer full performance even during a controller failure event.
Tintri currently produces three different VMstore storage arrays. This review is based on the T540, the middle offering in their lineup. The T620 offers the same capacity as the T540 but with less flash, so it is built to support fewer VMs.
- T620 VMstore – 13.5TB of usable capacity and up to 500 VMs
- T540 VMstore – 13.5TB of usable capacity and up to 1000 VMs
- T650 VMstore – 33.5TB of usable capacity and up to 2000 VMs
The Good and the Bad
The following descriptions cover the items our rating was based on. These are important values to the review team in the evaluation of products and their ongoing usage. For each item, we describe what affected the rating in a positive or negative fashion.
Installation: We gave a 4.5-star rating for the installation process. A Tintri VMstore is very easy to install; there are only a couple of network connections to make. Once powered up, the VMstore only requires you to configure the network settings and connect it to one or more vCenter servers so that it can gather VM data. The last part is to mount the VMstore as an NFS datastore on any vSphere clusters. This process is simple and only takes a short amount of time. One thing that could be improved would be for Tintri to build out and publish a set of common configurations or reference architectures, focused on NFS configurations or different uplink designs. This type of detail would answer common customer questions.
Performance: I have found that Tintri VMstores perform very well, which earned them a 4.5 rating. The intelligence they have built into their OS to manage VMs and performance is incredible. For this review a few different workloads were tested, and all performed without issue. I have seen numerous customer implementations running nearly every kind of workload without issue. Since there is no dedupe on the capacity layer of the storage, Tintri is probably not the best place to run large, slow workloads; they will eat up capacity and take up valuable space in the cache.
Other: The rating of 4.25 stars was awarded based on the following criteria. This section is heavily based on the upgrade process for Tintri OS, which is very simple and non-disruptive, just as customers expect. Something that negatively affected the rating is that Tintri has not yet released an SRA adapter for VMware Site Recovery Manager (SRM), which would allow SRM to control the storage replication and perform an automated failover. Tintri is expected to release their SRA later in 2014, which would affect this rating positively.
Interface: The interface received a rating of 4.25 stars, based on the quality of the Tintri web interface. The VMstore management interface is a simple-looking, web-based management portal. The design is well laid out and easy to understand. There were some nice features, like the ability to see end-to-end storage latency. It was easy to find out how the storage was performing and to gain insight into how each VM was performing as well.
Scalability: The rating of 4 stars was based on the following factors. There is no real limit for Tintri, since they scale by adding appliances or arrays. Each array is a separate set of resources that does not affect the others, which is attractive for a building-block style architecture. Today there are three sizes of VMstores to allow for different VM densities and capacity configurations. I believe there are still other models Tintri should offer in the future, such as an extra-small and an extra-large version. Also, as Tintri matures the Global Center features, they will likely add some impressive capabilities that customers can realize at scale.
Documentation: The documentation received a rating of 4.25 stars. There was a full set of documents covering install, upgrade, and administration, as well as hardware-related guides. I think Tintri can improve in this area by testing and publishing reference architectures for the most common and popular solutions; there are a limited number of these today.
Tintri offers three models of VMstores as of the time of this review. The VMstores are sized for 500, 1000 and 2000 VMs and offer a related amount of capacity. The testing for this review was performed on a T540 model. I was not able to get permission to publish a list price for the tested configuration yet, but hope to be able to soon.
Tintri Global Center is licensed separately on a per VMstore basis, for those customers looking for the ability to manage multiple VMstores from one interface. The replication feature is also a separately licensed feature.
The management console for the Tintri VMstore is a web-based portal built into each storage array; it is clean and easy to understand. The management interface supports both local accounts and Active Directory authentication. Along with the different authentication methods, there are three built-in roles to control the level of access for different admins.
The main page offers a high-level overview of the performance and capacity your VMstore is providing. The information is presented in clear tables and gas-gauge-style dials. You can click on any of these items to drill down for deeper insight. On the right side of the page you will see the top consumers of performance and capacity, shown as VMs, not LUNs.
If we drill down a bit deeper into the VMstore performance view shown below, you can see via performance charts how the storage is performing now, or scroll back to older data. If you hover or click on the chart, it will provide details for that point in time, as illustrated in the image below. This view can show details about IOPS, throughput, latency, reserves, flash percentage, and space.
In the image below we are looking at the latency view of the performance chart. This is something I have only seen Tintri do so far: they show you end-to-end latency. Since they talk to vCenter, they gather latency for the host, the network, and their own storage. The chart shows the different values as colors, which helps a VMware admin quickly track down the layer that is causing or suffering from the latency.
Next up is a very powerful view that only Tintri can provide in this clear fashion. The Virtual Machine view provides a wealth of data about each VM located on the VMstore. You can see values like IOPS and the same latency breakdown, but now on a per-VM basis. Imagine what you could do with this type of information at your fingertips, all with a single mouse click.
By clicking on a VM in the table you will see the same type of performance charts shown earlier, but these are on a per-VM basis.
When in the VM view, you can click on the Virtual Disk tab in the heading and get insights into each VMDK per virtual machine. So not only can you see performance and capacity data on a per-VM basis, but in this view you can drill down one step further to a per-virtual-disk view. Now you can see if a specific VMDK on a SQL VM is getting hammered; imagine what you could do with this.
Last up in the management console walkthrough is the Hardware view. This shows you health details about the drives in the storage array. The controller view at the bottom shows you the network connections and if they are up or down and which is the active storage controller. I’ve seen hardware views that might be a bit fancier, but this is clean and gives you the details that you require.
Tintri Global Center
With each VMstore having management built in, there was a need for centralized management of multiple storage arrays. This was not available in the early days; Global Center was released in early 2014. It brings much the same clean, easy-to-understand look that each VMstore has, but with a view across multiple arrays.
Upon logging in you will see a summary screen like the one shown below. This view shows performance and capacity summaries for all VMstores being managed by the Global Center instance. For this review, the test setup was managing three VMstores.
If you click on any of the tiles, a chart will open to show more detailed data. In the sample below I clicked on the IOPS tile; below the tiles you can see a performance chart, which you can adjust by changing the number of days and the timeframe. Clicking on any of the other tiles will show their related data.
At the top of the management web page there are choices for Virtual Machines or VMstores. If you click on the VMstores option, you will see summary performance data about each storage array. This is a good place to start; if you need a deeper look into the performance, you may need to go to the management page on the affected VMstore.
The Virtual Machine view will show you a similar high-level view of VM performance. This is helpful to see how your greater Tintri environment is performing, but I think that if you want full VM performance data you need to look directly at the VMstore page for complete details.
There is another summary view of the configuration that is entirely based on tiles. A sample is shown below; it presents various summarized data that can be handy at a glance.
Global Center is a relatively new offering from Tintri, and I think it's off to a decent start. I have seen the roadmap for Global Center, and I know Tintri will be working hard to add more functionality to the product.
To aid in the availability of information, Tintri now also offers a vSphere web client plug-in. This allows VMstore data to be presented to users of the web client. This is nice, and I know many customers will like it, though I personally prefer the VMstore management page.
The sample image below is taken from the datastore view of one of the VMstores. There was more data than I could fit on my screen for the image. The view shows performance and capacity details.
Another big benefit of the web client plug-in is the ability to perform some VM-related management tasks. By right-clicking on a VM that is on a Tintri VMstore, I can easily clone, protect (replicate), or snapshot it. I do think this part is very helpful and will save time otherwise spent bouncing between management interfaces.
Tintri OS upgrades
I have been lucky enough to upgrade several VMstores across different OS versions over the past few years, and the process has definitely gotten easier. For several versions now you have been able to perform the upgrade from the management web page. To get the process started, you download the OS upgrade from the Tintri support site and upload it to the storage array through the web page.
Once uploaded, the package is staged and you can do the install when you're ready. The process can be done at any time without any interruption and has no effect on performance. The upgrade is performed on a single controller at a time and handles the failover of the controllers during the process. The upgrade takes maybe 10-15 minutes on average.
If you are interested in a more detailed explanation of the upgrade process I have written a how to upgrade Tintri OS blog post on a different site you can review.
VMware View Planner Performance
I have used Tintri in many VDI designs and customer implementations, so there was little concern over how it would perform. For the test I needed something more than a bunch of desktop VMs that I could boot up and let idle, so I used View Planner from VMware. This is a load generator that simulates heavier desktop workload patterns. The tool runs scripts on every desktop that simulate a high-end desktop user running applications.
For the test I created two linked-clone pools with 350 desktops each, for a total of 700 test desktops. This I knew would not break the Tintri, but it should make it sweat a bit. My testing was limited to 8 nodes with two different memory configurations, which easily allowed 700 desktops to run without contention for CPU or memory.
For the purpose of this test I was just using VMware View to create my 700 desktops as linked clones. Once created, View would no longer be in control of them. The provisioning of the VMs was fast with Tintri and their support of VCAI clones can increase the speed even further.
The image below shows the performance charts during the boot-up of 700 desktop VMs. The latency was always around 1ms or below, which is very impressive. The IOPS bounced between 15K and 25K during the boot period, and the flash hit ratio was at 98-100% during this phase of the test.
The actual workload portion of the View Planner test performed just as well as the provisioning numbers. The performance sample shown in the image below was taken during the workload-generation portion and represents 700 VDI users creating load on the storage. The desktops performed extremely well, and the dashboard shows a high flash hit ratio and low latency.
The tests were performed on a T540 array which is rated at up to 65,000 IOPS and up to 1000 VMs. At no point in testing of 700 desktops did the array show any signs of trouble. It easily handled the boot storm and almost laughed at the 700 users running the desktop workload. This falls in line with real customer implementations that I have worked on in the past.
Tintri VMstore arrays offer native storage replication that can be enabled with licensing, and replication can be configured on a per-VM basis. This per-VM manageability, aside from the performance, is what makes Tintri shine. From the start they focused on caring about VMs rather than making you look at a bunch of knobs to turn.
Within the Tintri management interface it is very easy to enable replication for a VM: navigate to the Virtual Machine view, find the VM, right-click on it, and choose the Protect option.
Part of the replication setup for a VM is setting the schedule. The following image shows the single-step window used to configure the replication settings for the VM. I have chosen hourly replication that starts at 5 minutes past each hour. I have selected replication to happen every day; you could change this if you had different requirements. You also choose how long you want to keep the snapshots around for the local and remote copies.
The next step is to choose which type of snapshot you want to take for the VM: crash-consistent or VM-consistent. Then check the box to replicate the VM and choose which remote VMstore to replicate it to.
Once replication is set up for the VM, you can go back to the VM list and find the VM; you will now see the amount of data changed each day and the current replication state.
Last up, I want to take a look at where you set up replication targets in the VMstore's settings. There is an option to throttle replication on a per-target basis, with different limits for business hours vs. non-business hours. This is a good feature, as most customers do not want to let replication traffic go crazy.
The VMstore compares the data to be replicated between the source and target VMstores and sends only the unique data. This does an excellent job of reducing the amount of data that needs to be replicated between sites.
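One common way to implement this kind of deduplicated replication is to compare per-block fingerprints and ship only the blocks the target lacks. The sketch below illustrates that general technique; it is an assumption on my part that something along these lines happens, not a description of Tintri's actual wire protocol, and all names are invented.

```python
import hashlib

def fingerprints(blocks):
    """Content hash per block index, so equal data compares equal."""
    return {i: hashlib.sha256(d).hexdigest() for i, d in blocks.items()}

def blocks_to_send(source, target):
    """Return only the blocks whose content differs on the target."""
    src, dst = fingerprints(source), fingerprints(target)
    return {i: source[i] for i, h in src.items() if dst.get(i) != h}

source = {0: b"base", 1: b"changed", 2: b"same"}
target = {0: b"base", 1: b"old", 2: b"same"}
delta = blocks_to_send(source, target)   # only block 1 crosses the wire
```

Since unchanged blocks hash identically on both sides, a VM that changes 1% of its data per day generates roughly 1% of its size in replication traffic, which is why the throttles above rarely need to be aggressive.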
SQL Test performance
For the second test I wanted to try a workload that was not VDI related, because Tintri is not a good solution for VDI only. For this test I used a SQL workload along with a mixture of desktop and server VMs, which would also demonstrate Tintri's strength by showcasing its performance isolation feature. To execute the test I used I/O Analyzer from VMware Labs. The workload used for this test is the SQL Server 16K.icf profile that is included with the tool.
For this test I used 4 Cisco UCS blades in a vSphere cluster and placed one I/O Analyzer worker VM on each host; they all executed the same test. During the test I also used my virtual desktop on the same VMstore and was not able to notice any difference in the performance of my VM. I also randomly tested different server VMs on the same VMstore.
The image below represents the view from the VMstore management console during the SQL test. The console is showing the IOPS, throughput, latency and flash hit at a point during the test.
The following chart shows the IOPS over the test period for one of the worker VMs. Each of the worker VMs averaged around 6000 IOPS during the test, and the write performance was very steady at around 2000 IOPS.
The next chart is showing the throughput values over the test period for a single worker VM. Each of the worker VMs had very similar details.
The last chart shows the VM latency during the testing period. Tintri kept the worker VMs' read latency below 5ms while the entire VMstore averaged about 2ms during the test. This was the fairness engine allowing the other VMs to keep their performance while the greedy worker VMs still got a large chunk of the VMstore's performance.
The following image shows the Guest level stats from the IO Analyzer report. It shows all performance details for each worker VM.
Having worked at a VAR that has partnered with Tintri since the start, I have been impressed with the products. Tintri has never let any of my customers down and once they use a VMstore they seem to fall in love with it. I think that has something to do with the simplicity and depth of data they provide to admins.
I have spoken with several people in the organization from Sales Engineers to Software Engineers and all were very helpful in answering questions and explaining the technology.
To perform testing I was provided access to a local VMstore and a remote lab that allowed for testing replication and centralized management.