11 December 2014 · Symon Perriman · 13 min read
We recently held two exclusive webinars, "What's coming in the next version of Hyper-V?", hosted by Microsoft Senior Technical Evangelist Symon Perriman. If you missed the webinars, the on-demand version is now available.
During the webinars, Symon gave an overview of the new capabilities coming in the next version of Hyper-V for Windows Server. The new features will enhance management of virtualized servers, storage, networks, and workloads. He also covered upgrading the fabric and virtual machines, Linux support, quality of service, backup, and dynamically adding new resources.
Several questions were raised during the Q&A sessions. We thought we would share them with you, in case you have the same questions about the new capabilities coming in the next version of Hyper-V.
Q1: With respect to mixed mode clusters, what is the minimum version of the operating system that you need to have installed on the Hyper-V hosts?
Mixed-mode clustering is a new technology we'll be introducing in the next version of Hyper-V. In that version we'll support running Windows Server 2012 R2 alongside vNext in the same cluster, and in the version after that it will be vNext alongside vNext + 1. In other words, we'll allow you to run both the current version and the next version of Hyper-V side by side while you upgrade.
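As a rough illustration, here is how checking and committing a mixed-mode cluster's version might look in PowerShell, assuming the cluster cmdlets as they appear in later previews (Update-ClusterFunctionalLevel is the commit step, and it is one-way):

```powershell
# See which functional level the mixed-mode cluster is currently running at
Get-Cluster | Select-Object Name, ClusterFunctionalLevel

# Once every node runs the new OS, commit the cluster to the new level;
# this ends mixed mode and cannot be rolled back
Update-ClusterFunctionalLevel
```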
Q2: Has the performance of storage spaces improved compared to 2012 R2?
Yes, it has. Performance is something we are constantly evaluating. When we first released the product we worked with our hardware vendors as much as possible, but in many cases we were not yet getting real-world feedback about how customers use these technologies in different scenarios. We continue to make enhancements based on what our customers are doing and on the latest storage hardware. So you should see performance improvements in this release; however, we have not actually finished the product yet, so it's too early for us to give a specific number.
Q3: For storage spaces without shared storage, does the storage footprint on each cluster node need to be identical?
We do replication at the volume level, so we require the volumes attached to each node to be the same. We don't care about the underlying disk – it could be a 10 or 20 terabyte disk – but whatever volume you pick within that disk must be consistent across all nodes that will be supported for this direct-attached cluster.
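As a quick sanity check, you can compare the candidate volume across nodes before configuring replication. A minimal sketch, assuming two nodes named Node1 and Node2 and a D: volume (the names are hypothetical):

```powershell
# Verify that the volume to be replicated is identical on every node
Invoke-Command -ComputerName Node1, Node2 -ScriptBlock {
    Get-Volume -DriveLetter D | Select-Object DriveLetter, Size, FileSystem
} | Format-Table PSComputerName, DriveLetter, Size, FileSystem
```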
Q4: With respect to network virtualization, is there room for hardware vendors to come in and integrate new software-based solutions into your network control layer?
Absolutely. We have so many customers today who have already invested with these great networking vendors, and it's not only about networking equipment; it's also about the things that ensure network safety and performance – ingress and egress monitoring, port mirroring, port forwarding – so we want to make sure we still integrate with all of our partners. We provide an extensible platform, both in the new network controller in Windows Server and in the advanced management in System Center, where our network hardware partners can plug in, so that we can use the network controller role and still deploy their load balancers or their firewalls. Likewise with Virtual Machine Manager: if they plug in to that, we can create these distributed services and manage the networks, regardless of whether it is a Microsoft technology or a third-party technology.
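On the Hyper-V side, partner extensions already plug into the extensible virtual switch, and you can see what is installed on a given switch with something like the following (the switch name is hypothetical):

```powershell
# List the partner extensions plugged into an extensible virtual switch
Get-VMSwitchExtension -VMSwitchName "External Switch" |
    Select-Object Name, Vendor, Version, Enabled, Running
```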
Q5: Hyper-V now prevents manual editing of virtual machine configuration files. For people who were hand-editing them before, have you looked at their use cases and given them new APIs through PowerShell or VMM to accomplish what they were trying to do?
What we have done is keep full support for editing that through all of our different APIs. This means you can use WMI, you can use PowerShell, you can use Hyper-V Manager or SCVMM – any of those options – to make adjustments to a VM or a VM's configuration file. What we have prevented is somebody just opening something like Notepad and making the change there. The reason we had to lock this down is that when people made those changes and their virtual machine failed over to a different node or migrated, many times those changes were not recognized, so their virtual machine would not work and they actually lost high availability. Instead of having a script that edits the text file, your script should call WMI, PowerShell, or one of those other interfaces to make the same change.
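For example, a setting that might once have been hand-edited into the configuration file can be changed through the Hyper-V PowerShell module instead (the VM name and values here are hypothetical):

```powershell
# Adjust memory and automatic start behavior through the supported API
# rather than editing the configuration file by hand
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB
Set-VM -Name "Web01" -AutomaticStartAction Start -AutomaticStopAction Save
```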
Q6: Hyper-V hosts and Virtual Machine Manager make queries to WinRM. Have there been any performance improvements there?
The big investment has been in the different ways to connect to virtual machines. Hyper-V Manager now uses WinRM behind the scenes, which means it is faster, performs better, and gives you different options to connect. Performance tends to be one of the things we get to last in our development cycle, so we are still working out a lot of the details and I cannot give you any specific numbers yet about how much faster it will be. But it is definitely one of our key areas of investment, and you will notice that the new Hyper-V Manager is much quicker to turn on VMs, connect to them, start them, and so on.
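Because that management traffic flows over WinRM, scripts can reuse the same channel. A minimal sketch, assuming a host named HV01 with remoting enabled:

```powershell
# Reuse one WinRM-backed CIM session for multiple Hyper-V queries
$session = New-CimSession -ComputerName "HV01"
Get-VM -CimSession $session | Select-Object Name, State, Uptime
Remove-CimSession $session
```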
Q7: Is there any native replication method used for site to site replication?
We have two options today, and we are going to add a new one in the next version. The first option today is to work with an existing partner. We have storage partners – the major storage vendors – who already provide orchestration of replication, and we have software partners who provide software-based replication, replicating the files at the VM level. The second option today, between multiple sites, is the Hyper-V Replica technology, where we replicate the virtual hard disks of VMs between the different sites. It has similar capabilities to the partner solutions – you can do test replications, you can fail over, you can fail back – but the limitation of that in-box solution is that it is asynchronous only, which potentially means data loss.

In our next version we are introducing – free for all our customers – a synchronous version of replication, where we can copy any type of volume from one datacenter to another. So today we have a couple of different options, but this is an area where we will continue to invest for multisite recovery scenarios. Now, if you don't have a second datacenter, another option to consider is a technology known as Azure Site Recovery, announced over the summer, which allows you to fail over your virtual machines to one of Microsoft's datacenters. You set up replication and then fail over from your primary datacenter to a datacenter run by Microsoft Azure. So if you don't have that second backup site, you can take advantage of Microsoft's sites for disaster recovery, which are located throughout the world. If you want to learn more, visit: http://azure.microsoft.com/en-us/services/site-recovery/
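As an illustration of the in-box option, enabling Hyper-V Replica for a single VM looks roughly like this (the VM name and replica server are hypothetical, and the replica server must already be configured to accept replication):

```powershell
# Enable asynchronous Hyper-V Replica for one VM and start the initial copy
Enable-VMReplication -VMName "App01" -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "App01"
```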
Q8: Will synchronous replication be for the full VHD or for change blocks only?
We are doing changed block tracking. When you set up this replication, once you select the volume you are going to replicate, we do a first initial replication. You can send that over the network if you want, or you can copy the data onto a local disk and physically mail that disk to your secondary site. Once you complete and sync that initial copy, from that point on we just do change tracking, where we sync the deltas between the different sites. We also compress the deltas as we send them across. Our goal is to optimize speed and performance as we support this important disaster recovery scenario.
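The feature did not yet have a final name at the time of the webinar, but as a hedged sketch, a volume-to-volume partnership might be configured along these lines, assuming the Storage Replica cmdlets that appear in later previews (the server, replication group, and volume names are all hypothetical):

```powershell
# Pair a source volume with a destination volume for block-level replication;
# a dedicated log volume on each side captures the tracked changes
New-SRPartnership -SourceComputerName "SiteA-SR" -SourceRGName "rg01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SiteB-SR" -DestinationRGName "rg02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"
```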
Q9: Will the new networking controller role somehow replace the current NVGRE gateways?
Just to add some context on the NVGRE gateway: it is basically a network function virtualization role – one of those physical appliances we have converted into a virtual machine – that passes data between isolated virtual networks. The network controller will be able to leverage that gateway. Think of the network controller as the master brain, and the NVGRE gateway as one of the many functions it can manage. In addition to the NVGRE gateway, the network controller will also be able to manage your software load balancers, your IP address policies, and your failover policies that update those IP addresses. So, to answer the question: the NVGRE role will still exist; it becomes one of the functions managed by the new network controller.
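In the technical preview the network controller is delivered as a server role; as a hedged sketch, standing it up begins with something like:

```powershell
# Check for and install the Network Controller role in the technical preview
Get-WindowsFeature -Name NetworkController
Install-WindowsFeature -Name NetworkController -IncludeManagementTools
```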
Q10: Where in the user interface will we find the new features that are coming?
All of the features, except the ones coming specifically with System Center, will be available in Windows Server. In fact, they are all available today in the technical preview that you can download from the Eval Center: aka.ms/EvalCenter. You can start playing around with these features today. There won't be any difference between the Standard and Datacenter editions as far as these core features around VM management, storage management, and network management go; we provide them in the box with basic Windows Server. The only differentiator you will see between the Standard and Datacenter editions is the virtualization rights around the number of VMs you can run in your environment. As for how we are going to expose these new features – whether through Hyper-V Manager, Failover Cluster Manager, or SCVMM – that is still something we are working on. One of the big areas of investment in this release is building better consistency across these products and reducing the number of interfaces our customers need to visit to turn on and light up these features, whether they are running on a standalone host, in a cluster, or through SCVMM. So we are still working on it, but it is definitely an area of big investment as we try to centralize the management of all these new features.
Q11: Do you see the next version of Hyper-V dropping the cost of a virtual machine?
To set some context for this question: at TechEd Europe, Microsoft announced a new partnership with Dell, producing what is now known as the Cloud Platform System, or CPS. Essentially, CPS is a hyper-converged system built jointly by the Microsoft and Dell engineering teams. You speak to Dell ahead of time and provide configuration information – what the domain will be, what the sizing is, and so on – and they preconfigure those settings for you in the factory before shipping the physical hardware. Once the hardware arrives, it takes a Dell engineer a couple of hours to set up. All of the software and hardware is preconfigured, including redundant storage, redundant network connections, clustering, the whole Windows Server stack, and System Center for management – automation tools, monitoring, virtual machine provisioning, and so on. It includes all the System Center products, and it also includes the Azure Pack, which provides a self-service portal that ties into all of these resources at the back end. This is literally a full cloud in a box that you can purchase and connect into your datacenter.

As far as updating it, one of the things we have worked on is ensuring a smooth upgrade path. As the next version of Windows Server comes out, you will be able to go through the rolling upgrade process: one at a time, you upgrade each of the physical cluster nodes while we move the VMs around, with no downtime to the VMs or their applications. This gives us the ability to upgrade the fabric and roll it to the next release while maintaining service availability for all of the different VMs. Making sure our customers have a world-class experience with CPS is something our engineering teams invested in significantly over 18 months.

Back to the specific question: is it going to improve density? The idea is yes. We are not changing our initial scale limits – still 8,000 VMs per cluster and 1,000 VMs per node; as of right now that is not changing. But the goal is to keep reducing prices and providing better density, so you should get a better cost per VM over time.
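The rolling upgrade itself happens one node at a time. A minimal sketch of draining and restoring a single node (the node name Node1 is hypothetical):

```powershell
# Drain all roles (including VMs) off the node before upgrading its OS
Suspend-ClusterNode -Name "Node1" -Drain

# ...upgrade Windows Server on Node1 and rejoin it to the cluster...

# Bring the node back into rotation and fail roles back immediately
Resume-ClusterNode -Name "Node1" -Failback Immediate
```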
Q12: If I am a small or medium business and I want to build the physical host servers myself, without purchasing CPS, where can I go for guidance on what hardware to use to get the highest network throughput, disk throughput, and so on?
What we have done with the CPS system provided by Dell is publish the reference architecture around what type of hardware is being used. The real special sauce in CPS is how we work with Dell to preconfigure, pre-engineer, and optimize all of the VMs for that particular hardware, taking advantage of things such as Dell's hardware offloads, Dell's memory management, and so on. So building the exact same system you see in CPS is going to be tough for an individual. But we do provide a program known as the Private Cloud Fast Track. We worked with a number of hardware providers – all the big ones, like EMC, NetApp, Dell, etc. – to provide reference architectures describing what they recommend when you build your own cloud system: storage, network, compute, power, management capabilities, and so on. If you search for "Private Cloud Fast Track Program", you will find a number of reference architectures from hardware manufacturers, so you can build your own cloud system if you wish.