Vapourware

Thought of the morning (now afternoon): since we know that transparency is key with shadow marketing, why do so many companies market and sell services that do not exist until after a sale? Surely it's simpler to be honest, explain the model to your customers and advise them of the process. I like bespoke; sure, if you take a standard architecture and tailor it, that's great, but tell me that! (This is how I usually create service propositions.) I understand the need to drive sales, but you need to have the confidence and capability to deliver on your promises.

This isn't to say everything needs to be 100% ready-made; I'm just thinking of situations where I've seen 100% bespoke solutions sold as "cookie cutter" services.

As a customer, the way to avoid this is thorough research and understanding; if time, skills or risk appetite prohibit that, buy some consulting services.

With the right skills and today's technology, you can achieve a lot if you know how.

Enterprise Architecture – TOGAF 9.1 Maturity Assessment Tool

So I've finally got round to writing a new maturity assessment tool, this time for TOGAF 9.1. I've constructed the bare bones of the basic tool (built in Excel) using the TOGAF ACMM (which is CMMI-based). The first phase was to construct the assessment using TOGAF's defined ACMM levels, which cover nine capability areas. The next step is to develop a comprehensive version with more detailed questioning that assesses each area in a granular manner. I'd imagine version 2.0 will contain at least 100 questions; the bulk of the time goes not into defining what good looks like, but into writing five distinct levels for each question, something at which some maturity tools fail. I'll post more as I develop this further.

TOGAF 9.1 Assessment Dashboard
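
To illustrate the scoring model behind the dashboard (a minimal sketch of the approach, not the Excel tool itself): each capability area holds the 0–5 ACMM ratings given to its questions, and area maturity is simply the mean rating. The area names and scores below are placeholders.

```python
# Minimal sketch of an ACMM-style scoring model (illustrative only, not
# the Excel tool): each capability area holds the 0-5 ratings answered
# against its questions; area maturity is the mean rating.
from statistics import mean

# Hypothetical sample responses - the area names and scores are placeholders.
responses = {
    "Architecture Process": [3, 2, 3],
    "Architecture Development": [2, 2, 1],
    "Business Linkage": [4, 3, 3],
    # ... remaining capability areas ...
}

area_maturity = {area: mean(scores) for area, scores in responses.items()}
overall = mean(area_maturity.values())

for area, score in sorted(area_maturity.items()):
    print(f"{area:30s} {score:.1f}")
print(f"{'Overall maturity':30s} {overall:.1f}")
```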

My Path to Advisory Services

The journey from support to advisory

To me everything seemed obvious; surely no one could fail to understand the need for services that were business-management focused. The concept of having penetration points along all key elements of the lifecycle has been obvious to me for years.

I thought I’d share some insight into my career to date so the path to today is a bit clearer – I tried to include some of the key highlights! There’s been far more excitement but that’s for another day.

IT as a business because of games?

Continue reading

Evidence collection and capability maturity – how RaaS can help!

I was conducting a maturity assessment recently and wanted to look into the customer's datacentre hardware/software data. They were unable to provide anything on usage, so my recommendation in this area was fairly obvious: get greater data insight!
People often underestimate the requirement for fairly wide-ranging IT data, even on a day-to-day basis. Take a quick call to the service desk: I want to have confidence they will help, and understanding my device load-out and categories (laptop/desktop/tablet) would immediately give me some of that confidence. Without access to data, everything is reactive and generally takes longer.

So the general rule of thumb is: have near-real-time or real-time IT estate data, ensure it's automated, link it to your CMS and make sure (where appropriate) the data is accessible to the masses.

On that note, this brings me to the second point of this post – from an enterprise perspective I like fixed, agent-based tools and centralised reporting. From a project perspective, however, I like easily accessible project data so as not to introduce risk. Here we can look at various tools, one of which is reporting as a service (RaaS): conducting data collection with a lightweight service and sending it off to a central location, avoiding having to pivot-table the hell out of raw data.
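
As a sketch of that pattern (illustrative only, not any particular product's API), a lightweight collector might gather a small inventory payload locally and post it to a central reporting endpoint; the endpoint URL and payload fields here are hypothetical:

```python
# Minimal sketch of a lightweight RaaS-style collector (illustrative only):
# gather a small inventory payload locally and post it to a central
# reporting endpoint. The URL and payload fields are hypothetical.
import json
import platform
import urllib.request

ENDPOINT = "https://reporting.example.com/api/v1/inventory"  # hypothetical

payload = {
    "hostname": platform.node(),
    "os": platform.platform(),
    "machine": platform.machine(),
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print("Central service responded:", resp.status)
```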

Why do I mention this? Well, Xtravirt have just opened up the beta of their SONAR RaaS platform. In its initial stages it's VMware vSphere focused; it will, however, expand, offering an exciting new way of gaining enterprise and project-based data insight! – http://t.co/QBEYYO6JjC

My Virtual Whiteboard: The differences between Enterprise, Solution and Technical Architecture

I often find that in a number of organisations people are unclear about the differences between the different architecture roles (EA, SA, TA), so I thought I would put together a very simple poster to demonstrate them. I've put together an image and a PDF version. I hope people find this useful; I'm thinking I might do a whole series on IT architecture, from conception to implementation.

EA-SA-TA

Continue reading

$5,000 TCO reduction with a Chromebook?

I wrote this a month ago and remembered I hadn't posted it. It's no work of art, but it should demonstrate how far people will go when trying to use TCO as a hook…

So I'm writing this from a ChromeOS VM. First things first, we're going to have to make a few assumptions to verify this $5k figure. We'll go head to head on hardware to start with. A Samsung Chromebook is $249 when googled. To compare, I'm going to grab a Dell Latitude 5000 with an Intel vPro chip for $729. So we've got more expense but far more bang for our buck, oh, and an Ethernet port with the all-important vPro AMT features.

Now we need some software. Again using list prices, we'll go for SA and Office for our PC device and VDA for our BYOD access on the Chromebook. So SA per year is $50 and VDA is $100.

Continue reading
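
The comparison continues in the full post, but as a rough sketch of the arithmetic so far, here's the hardware plus licensing maths in Python; the three-year term is my assumption, and Office is left out because its list price isn't quoted above:

```python
# Rough sketch of the hardware + licensing arithmetic above (illustrative
# only; the three-year term is an assumption, not from the original post).
YEARS = 3

chromebook_hw = 249   # Samsung Chromebook, list price
latitude_hw = 729     # Dell Latitude 5000 with vPro, list price

sa_per_year = 50      # Software Assurance for the Windows desktop
vda_per_year = 100    # VDA licensing for BYOD access from the Chromebook

pc_total = latitude_hw + sa_per_year * YEARS          # Office excluded here
chromebook_total = chromebook_hw + vda_per_year * YEARS

print(f"PC over {YEARS} years:         ${pc_total}")
print(f"Chromebook over {YEARS} years: ${chromebook_total}")
print(f"Difference:                    ${pc_total - chromebook_total}")
```

Even on these rough numbers, and before adding Office, the gap is a few hundred dollars rather than $5,000, which is rather the point.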

Inside the mind of a solution – Selecting a general purpose document management system

“I started to write this a while ago… but never got round to publishing….” – Dan

A shared folder will do, surely?

Back when I started in IT, a network share was king of the hill when it came to sharing documents and spreadsheets. Moving on from FAT-based systems, we could start granting access to groups (hopefully the correct type, for those who have shared the pain when people haven't used local groups) and setting granular permissions at both the share and file-system level.

Alas, this shared pool of file storage still had many limitations. Some can be mitigated with search and shadow copies, but fundamentally it's one large pool, generally only separated by two-dimensional folder structures (I'm ignoring access-based control and metadata search, but again these are augmentations).

Continue reading

So where do I put those domain controllers again?

While designing a solution for a customer, I'm once again looking at domain controller placement. No matter which way we look at this there are pros and cons, and there's always a way to end up in a chicken-and-egg scenario!

For the sake of clarity, this relates only to Windows Server 2012 (here's the TechNet article outlining AD DS on Hyper-V guidance: http://technet.microsoft.com/en-us/library/virtual_active_directory_domain_controller_virtualization_hyperv(v=ws.10).aspx)

There are a few options (I've listed the main ones):

  • Run two physical domain controllers (Microsoft's recommended solution)
  • Run at least two virtual domain controllers
  • Run a domain controller in the parent partition

Physical DCs

If we follow the guidance, we should probably have at least four physical DCs; this covers both local availability and geo-availability. The problem arises if you are not a multi-site company. (One idea here would be to use an Azure site-to-site VPN and a pair of Azure VMs.)

Run at least two virtual domain controllers

OK, so this seems easy… however, what boundary do our Hyper-V hosts sit in? Workgroup or member server?

Depending upon your requirements, a workgroup would be fine. The main reason I can see not to do this is that shared-nothing live migration only functions on hosts that are members of a domain. You can still set up Hyper-V Replica using certificates; however, you will experience service downtime if you want to fail over to the secondary replica (this also applies to failback).

A solution for this is to make the parent partition a member of the virtualised domain. If you do this, you just need to ensure that the DCs are set to always start and that any other virtual machines have a start delay. It's also a good idea to anchor the DCs to a specific host and to stay away from CSVs and from making them highly available. (P.S. Don't lose your local admin account details: in the event that the VMs don't come up, you will need to log on locally.)
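
The start behaviour can be scripted. As a minimal sketch (the VM names are hypothetical, and this simply drives Hyper-V's Set-VM cmdlet from Python on the host):

```python
# Sketch: ensure DC guests auto-start immediately and other VMs start with
# a delay, by driving Hyper-V's Set-VM cmdlet via subprocess.
# VM names are hypothetical; run this on the Hyper-V host itself.
import subprocess

def set_start_behaviour(vm_name: str, delay_seconds: int) -> None:
    """Set a VM to always start, optionally after a delay."""
    command = (
        f"Set-VM -Name '{vm_name}' "
        f"-AutomaticStartAction Start "
        f"-AutomaticStartDelay {delay_seconds}"
    )
    subprocess.run(["powershell.exe", "-Command", command], check=True)

set_start_behaviour("DC01", 0)        # domain controller: start immediately
set_start_behaviour("APPSRV01", 120)  # other workloads: wait for the DC
```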

Run a domain controller in the parent partition

So I would have said make sure you reserve host memory; however, this TechNet article (http://technet.microsoft.com/en-us/library/hh831766.aspx) states that Hyper-V 2012 dynamically reserves host memory – although it only states this for the Hyper-V and Failover Clustering roles. I don't know the exact mechanism by which it achieves this, so it's still something to be mindful of.

Secondly, when we install the domain controller role, the OS disables disk write caching. This (so I have read) can cause up to a 30% degradation in performance. It may be possible to isolate the NTDS/SYSVOL elements to a different physical disk group to mitigate this.

Is this a supported configuration? It's not a recommended one. As with all things, your support case will most likely still be dealt with; however, you could potentially be asked to put in a physical DC at some point.

If you choose this route, I would make sure every host is a DC (global catalogue and AD-integrated DNS) and that no DNS islands can be created (a DNS island being a DC that points to itself for DNS and so can end up cut off from its replication partners).

Conclusion

I've run every one of the mentioned configurations without a problem. The main issue I find is that people are not prepared enough. As part of your AD DS implementation, it is highly advisable to test and document recovery steps for all possible scenarios. Remember, AD DS is a complex system underneath; even on two physical servers you need to understand the impact of a server failing and the steps required to recover. (One test that people often forget: turn off all your DCs at once and then try to get back up and running again – http://support.microsoft.com/kb/2001093)

It's a good idea to read this as well! Best practices for Hyper-V 3.0: http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx

Wherever your domain controllers end up, choose what's right for your scenario. If you spend the right amount of time upfront, you should be able to get any of these options working for you in a relatively simple manner.

In my travels I have come across the following article:

http://blogs.technet.com/b/wincat/archive/2012/08/29/windows-server-2012-failover-cluster-enhanced-integration-with-active-directory-ad.aspx

This explains that new Windows Server 2012 functionality means we can build a two-node cluster with virtual domain controllers and that the cluster will still start. I'm going to set up a rig to test this soon!