While designing a solution for a customer I’m once again looking at domain controller placement. No matter which way we look at this there are pros and cons, and there is always a way of ending up in a chicken-and-egg scenario!
For clarity, this relates only to Windows Server 2012 (here’s the TechNet article outlining guidance for ADDS on Hyper-V: http://technet.microsoft.com/en-us/library/virtual_active_directory_domain_controller_virtualization_hyperv(v=ws.10).aspx)
There are a few options (I’ve listed the main ones):
- Run 2 physical domain controllers (Microsoft’s recommended solution)
- Run at least 2 virtual domain controllers
- Run a domain controller in the parent partition
If we follow the guidance we should probably have at least 4 physical DCs; this covers both local availability and geo-availability. The problem is if you are not a multi-site company. (One idea here would be to use an Azure site-to-site VPN and a pair of Azure VMs.)
Run at least two virtual domain controllers
OK, so this seems easy… however, which boundary do our Hyper-V hosts sit in: workgroup or domain member?
Depending upon your requirements a workgroup may be fine. The main reason I can see not to do this is that shared-nothing live migration only functions between hosts that are members of a domain. You can still set up Hyper-V Replica using certificates; however, you will experience service downtime if you want to fail over to the secondary replica (the same applies to failback).
A solution for this is to make the parent partition a member of the domain hosted by the virtual DCs. If you do this, you just need to ensure that the DCs are set to always start and that every other virtual machine has a start delay, so a DC is available before anything that depends on it. It’s also a good idea to anchor the DCs to a specific host and to stay away from CSVs and from making them highly available. (P.S. also don’t lose your local admin account details – if the VMs don’t come up you will need to log on locally.)
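The start-up ordering above can be sketched with the Hyper-V PowerShell module; the VM name 'DC01' and the five-minute delay are assumptions for illustration:

```shell
# Assumed VM name 'DC01' - adjust for your environment.
# Make the virtual DC start automatically with the host, with no delay.
Set-VM -Name 'DC01' -AutomaticStartAction Start -AutomaticStartDelay 0

# Delay every other VM so the DC is up and answering before they boot.
Get-VM | Where-Object { $_.Name -ne 'DC01' } |
    Set-VM -AutomaticStartAction StartIfRunning -AutomaticStartDelay 300
```

`StartIfRunning` only restarts VMs that were running when the host shut down, which is usually what you want for the non-DC workloads.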
Run a domain controller in the parent partition
I would have said make sure you reserve host memory; however, http://technet.microsoft.com/en-us/library/hh831766.aspx states that Hyper-V 2012 dynamically reserves host memory – though it only states this for the Hyper-V and Failover Clustering roles. I don’t know the exact mechanism by which it achieves this, so it’s still something to be mindful of.
Secondly, when we install the domain controller role the OS disables disk write caching. This (so I have read) can cause up to a 30% degradation in performance. It may be possible to isolate the NTDS database and SYSVOL onto a different physical disk group to avoid this.
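Placing NTDS and SYSVOL on a separate disk can be done at promotion time with the ADDSDeployment module; the domain name and the D: drive paths below are hypothetical placeholders:

```shell
# Install the ADDS role binaries first.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote this server to a DC, keeping the AD database, logs and
# SYSVOL on a separate physical disk group (D:) rather than the
# system volume, so the disabled write cache affects only that disk.
# 'contoso.local' and the D:\ paths are hypothetical.
Install-ADDSDomainController `
    -DomainName 'contoso.local' `
    -DatabasePath 'D:\NTDS' `
    -LogPath 'D:\NTDS' `
    -SysvolPath 'D:\SYSVOL' `
    -InstallDns
```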
Is this a supported configuration? It’s not a recommended one; as with all things your support case will most likely still be dealt with, but you could be asked to put a physical DC in at some point.
If you choose this route I would make sure every host is a DC (global catalogue with AD-integrated DNS) and that no DNS islands can be created – that is, no DC should rely solely on itself for DNS resolution.
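One common way to guard against the DNS island problem is to point each DC’s NIC at a partner DC first and at itself only as a secondary; the interface alias and addresses below are assumptions:

```shell
# Hypothetical values: 'Ethernet' is the NIC alias and 192.168.1.11
# is a partner DC on another host. Resolving against a partner first,
# and the loopback last, prevents a self-referencing DNS island.
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' `
    -ServerAddresses ('192.168.1.11', '127.0.0.1')
```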
I’ve run every one of the configurations mentioned without a problem. The main issue I find is that people are not prepared enough. As part of your ADDS implementation it is highly advisable to test and document recovery steps for all possible scenarios. Remember, ADDS is a complex system underneath; even on two physical servers you need to understand the impact of a server failing and the steps required to recover. (One test people often forget is to turn off all your DCs at once and then try to get back up and running again – http://support.microsoft.com/kb/2001093)
It’s a good idea to read this as well! Best practices for Hyper-V 3.0: http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
Wherever your domain controllers end up, choose what’s right for your scenario; if you spend the right amount of time upfront you should be able to get any of these options working for you in a relatively simple manner.
In my travels I have come across the following article:
It explains that new Windows Server 2012 functionality means we can build a two-node cluster with virtual domain controllers and the cluster will still start. I’m going to set up a rig to test this soon!