I’m not a licensing expert and I don’t play one on TV but it occurs to me that many organizations are paying twice for their endpoint protection solutions. I have been involved in over two dozen System Center 2012 Configuration Manager deployments and only one of the organizations was even mildly interested in System Center Endpoint Protection. My understanding is that the System Center Endpoint Protection (SCEP) CAL is included in the System Center 2012 Configuration Manager CAL. So at least from a licensing perspective if you already have Configuration Manager, you have SCEP. So why are organizations paying Symantec, McAfee, Trend, or some other endpoint protection vendor in addition to Microsoft? I understand that SCEP may not fit the bill for some organizations and that they may have specific requirements that need to be addressed by their chosen solution but doesn’t it make sense to at least evaluate the SCEP option – especially if you have already paid for it? What are some of the possible reasons that SCEP is flying under the radar of most organizations?
- Microsoft isn’t in the Leaders quadrant of the Gartner Magic Quadrant; they are in the Challengers quadrant.
- There have been very few independent reviews of SCEP (apart from one pseudo-review), since it really isn’t a stand-alone product but part of a suite.
- Microsoft isn’t really pushing the solution since there is no financial upside (the product is already sold, just not deployed).
- Organizations are complacent and don’t have the time or desire to make a change.
What are some of the reasons that an organization might want to try out SCEP?
- Save money! The license is already owned as part of Configuration Manager. Why continue to pay another provider until you’ve at least evaluated it for your particular use cases?
- Minimize infrastructure and administrative overhead. Configuration Manager already has the infrastructure for managing client configurations and moving software and updates to them as part of its software distribution and patch management solutions. This is essentially the same as managing endpoint policies and distributing malware signature files. Why maintain a duplicate infrastructure for third-party endpoint clients and signature files and train administrators on multiple products?
- Unified security posture visibility. When you need to understand your complete desktop security posture, do you want to get one report from your endpoint solution and another from your patch management solution and try to correlate the data to understand your actual security posture? Wouldn’t you rather have a single repository for all of the relevant data and be able to create a unified report? What about integrating endpoint protection policies with the compliance management built into Configuration Manager?
What are you waiting for? Start being SCEPtical. Turn on System Center Endpoint Protection!
I often get asked why I like Hyper-V or why I don’t like VMware. The answer, strangely, isn’t about technology. Anybody who knows me well knows that I’m not a technology bigot, meaning I don’t get fanatical about particular companies or pieces of technology. In my house we have six tablets: a Surface RT, a Surface Pro (soon to be replaced by a Pro 2), three Android tablets, and an iPad. They all get used on a regular basis. There is no favourite, just a preference for one device over the other based on the particular use case in question and the strengths of each device at addressing that use case. I’ve used VMware products for years and I like them. They have met many of the requirements I’ve had for a long time.
So how does this relate to Microsoft vs. VMware? Well, I see a lot of fanaticism over VMware. A large percentage of IT Pros really love it and many are fanatical about it. They are quick to criticize alternatives (like Hyper-V) without having all of the facts. Another issue is that most people see the results of past consumption and mistake it for current market trends. Let me explain that with an example. Currently Android phones outsell iPhones; however, most people see more iPhones in use than Android phones because iPhones have been around longer and have had past sales success. What is being seen is phones that were purchased over the last several years still in use.
Enough digressions. Back to Microsoft and VMware. Historically, VMware has had the edge over Microsoft in the hypervisor market. With Hyper-V 3, most experts would agree that the gap has narrowed enough that for most organizations, the differences are insignificant from a pure technical capabilities perspective. It’s like choosing between a Honda and a Toyota. Both vendors have offerings in every major segment. Most consumers would be equally well served by a Camry or an Accord but preferences still abound. In the virtualization world, there are many other factors to consider such as migration costs, retraining, new licensing, etc. VMware has had very strong technical offerings for a long time and the investments made by many organizations can’t easily be shifted. Of course, historically, there are many examples of a technically superior product being eclipsed (Betamax vs. VHS, Amiga vs. PC, FLAC vs. MP3). It also isn’t about first or early movers in a market. Consider BlackBerry losing 33% market share in 2012 while Android now has nearly 80% market share in the smartphone market. Of course, depending on when you read this the current market share may be very different.
So back to my previous statement “It isn’t about technology”. I’ve shown examples of a superior product losing out as well as examples of an early mover with a dominant market position being eclipsed by a relative newcomer. If not technology, what’s it about then?
Well, I’m an IT Pro. Any IT Pro worth his salt will tell you that the three key elements of a successful IT rollout of any system are People, Process, and Technology. Not necessarily in that order, but all three ingredients are required for success.
As I’ve mentioned previously, VMware has great technology and Microsoft is no slouch either. We can remove people from the equation since both Microsoft and VMware have access to pretty much the same talent pool and really, the people that matter most aren’t the vendor’s staff but the enterprise customers’ datacenter staff. So a talented VMware administrator could easily be a talented Microsoft administrator. Using the same logic, you might conclude that the processes that are used in enterprise datacenters would also be a wash between VMware and Microsoft implementations, and for the most part you’d be right. However, I believe Microsoft has an edge. Here’s why:
Microsoft has a long history of supporting cloud/online services that process billions of transactions a year. Consider Hotmail/Outlook.com, Xbox Live, Office 365, and Azure as a few examples. To run services at that scale, Microsoft has had to develop some fairly robust processes for managing their datacenters. This isn’t new for Microsoft: the ITIL-based Microsoft Operations Framework (MOF), currently at version 4.0, has been around since 2000. VMware doesn’t have an online services history from which to learn the hard lessons of datacenter management, or a history of helping customers manage their datacenters from a process perspective. Microsoft has taken the battlefield-tested processes they’ve used for over a decade and incorporated many of them into one of the newer and lesser-known products in the System Center suite, Service Manager.
Service Manager helps organizations align business processes with technology delivery to create efficiencies in service delivery. The product is tightly integrated with the rest of the System Center suite (especially products like Operations Manager and Configuration Manager) as well as Active Directory. The rich CMDB provided by Service Manager helps to manage the inevitable VM sprawl that accompanies virtualization. It is also a great platform to bolt on a SAM/ITAM solution like the one from Provance (full disclosure: Provance is headquartered a few kilometres from my home and I know many of their staff professionally; we’ve worked on joint projects and I’ve had more than a few drinks with them over the years).
Until VMware has a similar offering, organizations that want to enable IT Service Management (ITSM) best practices will find it much easier with a Microsoft private cloud solution than with a VMware solution.
BTW – Market share numbers for last year show an interesting trend in hypervisor adoption rates:
Source – Wall Street Journal / IDC
Are we in the midst of a BlackBerry-like decline for VMware?
My first blog post on TechRepublic, “How to install Windows Server 2008 R2 with Hyper-V and Windows 7 on the same partition” focussed on booting Windows 7 or Windows Server 2008 from a VHD. I used the same concept to deploy Windows 8 to a series of new laptops for my team and we had some interesting findings:
1 – It’s Easy
We were able to use the same methodology to deploy Windows 8 to a VHD as we used for Windows 7 and Windows Server 2008 R2. A brief overview of the process:
1. Create a VDisk
2. Attach it
3. Partition it
4. Format it
5. Install OS
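The five steps above map onto a diskpart script roughly like this. This is a sketch only: the file path, size, volume label, and drive letter are placeholders I’ve made up for illustration, not values from our deployment, so adjust them for your system.

```
rem boot-vhd.txt – run from an elevated prompt with: diskpart /s boot-vhd.txt
rem All paths, sizes, and letters below are example values.
create vdisk file=C:\vhds\win8.vhd maximum=40960 type=fixed
select vdisk file=C:\vhds\win8.vhd
attach vdisk
create partition primary
format fs=ntfs quick label="Win8"
assign letter=V
```

With the VHD attached and formatted, you can point Windows Setup (or a DISM image-apply) at the new volume to install the OS.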
2 – It’s Fast
We weren’t able to detect any performance issues when we used fixed-size VHDs. Although we didn’t try it, there is also support for the VHDX format, which increases the maximum size to 16TB (VHDs are limited to 2TB) and can be significantly faster for some workloads depending on block and sector size requirements. Using dynamically sized VHDs is not recommended.
3 – It’s Portable
We easily moved the VHDs between machines and had no issues. In some cases we had to make some minor changes with BCDEDIT. It’s not as portable as Windows to Go but it is portable and makes for an easy provisioning experience.
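For the curious, the minor BCDEDIT changes usually amount to repointing the boot entry at the VHD’s new location. The `{default}` identifier and the path below are examples only (use `bcdedit /v` to find the actual GUID of your entry):

```
rem Example only – substitute your boot entry's GUID and your VHD's path.
bcdedit /set {default} device vhd=[C:]\vhds\win8.vhd
bcdedit /set {default} osdevice vhd=[C:]\vhds\win8.vhd
```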
4 – It’s Virtual
Although we haven’t finished our testing yet, we fully expect to be able to use the VHD as a virtual machine in Hyper-V. More details in a future post.
5 – It’s Limiting
There are a few things that you can’t do when booting Windows 8 from VHD. The two that we found most readily are:
- The hibernate functionality of a laptop is not available
- The Windows Experience Index cannot be measured
If there are other issues that you notice, please let me know. I’m now working with Hyper-V 3 in Windows 8 Enterprise and will let you know my findings in a future post.
It has been a while since I’ve posted anything here but I have a very good reason. My blog has been picked up by TechRepublic and most of my posts are going to be hosted there for the foreseeable future. Believe it or not, they actually pay me for this stuff.
I’m going to look into adding the links to my blogroll here but please follow me on TechRepublic if you can.
Here are links to my first four posts on TechRepublic:
- How to install Windows Server 2008 R2 with Hyper-V and Windows 7 on the same partition
- Costs and risks to consider when planning a move to the public cloud
- Don’t overlook these seven problem areas of virtualization
- How to optimize VM memory and processor performance
We keep hearing that it doesn’t matter if your machine is physical or virtual. Your software will still work just fine. That’s true most of the time but there are some exceptions. Monitoring is one of those exceptions. In truth, the monitoring tool will work and will faithfully report what the guest OS sees, but that information can be misleading.
Remember that in a hypervisor-based virtual environment, the guest OS is typically unaware that the hardware has been abstracted and that resource scheduling is taking place to provide shared computing resources based on some preset business rules (some guests may be configured to get more resources than others).
In a scenario like this, a legacy monitoring tool that is targeted at the guest VM may produce false positives concerning resource availability. Typically you might see near 100% CPU consumed. This will be based on the telemetry coming back from the guest VM indicating that it is nearly out of resources, when in reality there may be more resources available, just not committed or allocated to that VM at that point in time.
To get a more realistic and complete view of what is actually happening, the monitoring tool would need to monitor the hypervisor and all of the guests, correlating the telemetry from all of them and providing a more holistic view of the availability of resources.
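To make the false positive concrete, here is a minimal sketch of the correlation. The function name and the numbers are hypothetical, not from any real monitoring product; the point is simply that a guest’s self-reported CPU percentage only means something once it is scaled by the share of physical capacity the hypervisor has actually allocated to that guest.

```python
def effective_host_utilization(guest_cpu_pct: float, vcpu_share: float) -> float:
    """Scale a guest's self-reported CPU percentage by the fraction of
    physical host capacity the hypervisor has committed to that guest.

    Both inputs are hypothetical telemetry values for illustration."""
    return guest_cpu_pct * vcpu_share

# A legacy in-guest monitor reports the VM pegged at 100% CPU...
guest_view = 100.0
# ...but the hypervisor has only committed 25% of host capacity to this VM.
host_view = effective_host_utilization(guest_view, vcpu_share=0.25)
print(host_view)  # 25.0 – from the host's perspective there is plenty of headroom
```

The guest-only view would trigger an alert; the correlated view shows the host still has capacity it could allocate to the VM.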
Products like this are starting to emerge. Make sure that when you plan to migrate a server or application to a virtual environment you also plan for the monitoring requirements.