Deploying Applications with System Center 2012 Configuration Manager


With the General Availability of System Center 2012 Configuration Manager (ConfigMgr 2012) just around the corner, many of my customers have been starting to prepare for the new version.  This is not a simple task as there is no upgrade option.  The only option is migration.  There are some good tools in ConfigMgr 2012 to help migrate objects but in order to take full advantage of some of the new features, you will have to learn some of the new paradigms.  In this post I will focus on one of the most obvious enhancements, the new Application Model.

The Old Way – SCCM 2007

The previous version of SCCM (and SMS for that matter) uses a Package model that has four key concepts:

  1. Package – Container for files – replicated to Distribution Points
  2. Program – command line within a package that runs something such as an exe, msi, bat, etc. typically used to install software
  3. Advertisement – Makes a program and package available to a client.  Advertisements can be assigned (mandatory).
  4. Collection – Target for advertisement

Any logic, such as dependencies or hardware requirements, needs to be manually built into the installation program or incorporated into the collection membership logic.
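As a rough illustration of that manual approach, here is a minimal sketch of the kind of wrapper logic an administrator had to write themselves (the free-space requirement, the installer path, and the wrapper itself are hypothetical examples, not anything ConfigMgr provides):

```python
import shutil
import subprocess

# Hypothetical wrapper around a package's install program.  In the 2007
# Package Model, prerequisite checks like this had to live inside the
# installation script itself -- ConfigMgr only ran the command line and
# recorded its exit code.

MIN_FREE_BYTES = 500 * 1024 * 1024  # assumed 500 MB free-space requirement

def prerequisites_ok(free_bytes: int, min_free: int = MIN_FREE_BYTES) -> bool:
    """Return True when the hand-rolled prerequisite checks pass."""
    return free_bytes >= min_free

def run_install() -> int:
    """Run the (illustrative) installer only if prerequisites pass."""
    free = shutil.disk_usage("/").free
    if not prerequisites_ok(free):
        return 1  # a non-zero exit code tells the advertisement it failed
    return subprocess.call(["setup.exe", "/quiet"])  # path is illustrative
```

The point is that none of this logic was visible to ConfigMgr 2007; it only saw the exit code.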

The Package Model continues to be supported but is very limited compared to the new Application Model provided in Configuration Manager 2012.  If required, packages can be migrated from a 2007 site using the Package Conversion Manager.

The New Way – Configuration Manager 2012 Application Model

The new Application Model is much more flexible than the Package Model as it provides the ability to include much of the dependency logic that had to be created manually in the Package Model.  Some new concepts:

  1. Application – A piece of software that users need access to
  2. Deployment – Distribution of an application.
  3. Required Deployment – A mandatory deployment (much like an assigned package)
  4. Global Conditions – Global Conditions are requirements that can be re-used across multiple applications without having to recreate them.  Examples of global conditions are platform (x86 or x64), OS version, service pack version, language, etc.
  5. Requirement Rules – These are local to the particular application.  They can be used to evaluate prerequisites like disk space, memory, other required applications, etc.
  6. Deployment Types – Deployment types allow the same application to install differently depending on the target device.  For example, a full local installation can be performed on a user’s primary device that is on the corporate LAN while a virtual application would be streamed if they were not on their primary device.
    1. Windows Installer – Creates a deployment type from a Windows Installer file.
    2. Script Installer – runs a script on the client that performs an action – typically installing an application.
    3. Microsoft Application Virtualization – Creates an App-V deployment type based on the associated manifest file
    4. Windows Mobile Cabinet – Creates a deployment type from a Windows Mobile Cabinet (CAB) file. Configuration Manager can retrieve information from the CAB file to automatically populate some boxes of the Create Deployment Type Wizard.
    5. And my favourite – Nokia SIS file – Creates a deployment type from a Nokia Symbian Installation Source (SIS) file.
    6. Dependencies – Deployment types that must be installed before another deployment type is installed.  An application can be configured to automatically install required dependencies if they are not already present.
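To make the evaluation flow above concrete, here is a toy sketch of how a client might pick a deployment type by checking requirement rules; the class, function, and rule names are illustrative only and are not ConfigMgr APIs:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

# Toy model of the Application Model's evaluation flow: each deployment
# type carries requirement rules, and the client picks the first type
# whose rules all pass against the device's current state.

DeviceState = Dict[str, object]
Rule = Callable[[DeviceState], bool]

@dataclass
class DeploymentType:
    name: str
    rules: List[Rule] = field(default_factory=list)

def select_deployment_type(types: List[DeploymentType],
                           device: DeviceState) -> Optional[DeploymentType]:
    """Return the first deployment type whose requirement rules all pass."""
    for dt in types:
        if all(rule(device) for rule in dt.rules):
            return dt
    return None

# Reusable checks stand in for Global Conditions; the per-type rule
# lists stand in for Requirement Rules.
is_primary_device: Rule = lambda d: bool(d.get("is_primary_device", False))
on_corporate_lan: Rule = lambda d: d.get("network") == "corporate"

deployment_types = [
    DeploymentType("Full local install (MSI)", [is_primary_device, on_corporate_lan]),
    DeploymentType("Streamed App-V package", []),  # fallback: no requirements
]
```

With this sketch, a user on their primary device on the corporate LAN gets the full MSI install, while anyone else falls through to the streamed App-V type – mirroring the example in point 6 above.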

    I expect there to be a lot of interest in this new software distribution model.  I’ll keep you posted on new developments.


Be Heard: Tell Microsoft How They’re Doing!


Every fall and spring, a survey goes out to a few hundred thousand IT folk in Canada asking what they think of Microsoft as a company. The information they get from this survey helps them understand what problems and issues you’re facing and how they can do better. The team at Microsoft Canada takes the input they get from this survey very seriously.

Now I don’t know which of you will get the survey and who won’t, but if you do find an email in your inbox from “Microsoft Feedback” with an email address of “feedback@e-mail.microsoft.com” and a subject line of “Help Microsoft Focus on Customers and Partners” between now and April 13th, it’s not a hoax or phishing email. Please open it and take a few minutes to tell them what you think.

This is your chance to get your voice heard: If they’re doing well, feel free to pile on the kudos (they love positive feedback!) and if you see areas they can improve, please point them out so they can make adjustments (they also love constructive criticism!).

The Microsoft team would like to thank you for all your feedback in the past — to those of you who have filled out the survey and sent them emails. Thank you to all who engage with them in so many different ways through events, the blogs, online and in person. You are why they do what they do and they feel lucky to work with such a great community!

One last thing – even if you don’t get the survey, you can always give the team feedback by emailing them directly through the Microsoft Canada IT Pro Feedback email address.

They want to make sure they are serving you in the best possible way. Tell them what you want more of. What should they do less of or stop altogether? How can they help? Do you want more cowbell ? Let them know through the survey or the email alias. They love hearing from you!

Getting Started with Cloud Computing


You’ve likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between “Public Cloud” and “Private Cloud”. I want to address these questions and help you get started. Let’s begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.

Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.

Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Today, Private Cloud implementations are generally found in larger organizations, but they are also viable for small and medium-sized businesses since, when properly implemented, they allow automation of services and a reduction in IT workloads. Having the right management tools, like System Center 2012, to implement and operate a Private Cloud is important in order to be successful.

So – how do you get started? The first step is to determine what makes the most sense to your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure.

For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy and then download and install the Microsoft Private Cloud technologies including Windows Server 2008 R2 Hyper-V and System Center 2012 in your own environment and take it for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering such as the IT Virtualization Boot Camps and more to get you started with these technologies hands on.

Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something called “The Global Relationship Study”, they reach out and contact you to see how they’re doing and what Microsoft could do better. If you get an email from “Microsoft Feedback” with the subject line “Help Microsoft Focus on Customers and Partners” between March 5th and April 13th, please take a little time to tell them what you think.


Picked up by TechRepublic


It has been a while since I’ve posted anything here but I have a very good reason.  My blog has been picked up by TechRepublic and most of my posts are going to be hosted there for the foreseeable future.   Believe it or not, they actually pay me for this stuff.

I’m going to look into adding the links to my blogroll here but please follow me on TechRepublic if you can.

Here are links to my first four posts on TechRepublic:

  1. How to install Windows Server 2008 R2 with Hyper-V and Windows 7 on the same partition
  2. Costs and risks to consider when planning a move to the public cloud
  3. Don’t overlook these seven problem areas of virtualization
  4. How to optimize VM memory and processor performance

 

The Difference between Cloud Computing and Virtualization


Lately I’m getting asked to explain the difference between virtualization and cloud computing.  I like answering this question because it shows that the person asking the question has at least enough knowledge to identify that there are similarities but that they are probably not the same.  Explaining the difference to this type of questioner is not usually a problem.

What bothers me a little more is when so-called IT professionals use the terms VM and cloud interchangeably and then claim that they are pretty much the same thing, or that if it works in one it should work in the other.  It is easy to get into a debate and find specific examples to bolster most claims on either side.  Reality is not quite so simple.  The right answer will usually start with “it depends.”

With the rest of this post I’ll try to explain some of what it depends on and why there aren’t any simple answers. I’ll also give some examples of the beginnings of some more complicated answers without getting too technical.

The question worded a little differently:  Aren’t Virtualization and Cloud Computing the same thing?

Before we begin, let’s get a couple of things straight:

  1. Remember that Cloud Computing is a delivery model and that Virtualization is a technology.  Virtualization as a technology may be used at the back end of a service that is delivered with a Cloud Computing service offering but not necessarily.
  2. Virtualization is only one of the building blocks for cloud computing, and there are many types of virtualization (server, desktop, application, storage, network, etc.), so categorical statements about virtualization and cloud computing are risky.  It really depends on what is being virtualized and how it is made available by the cloud provider.
  3. There are different Cloud styles (fabric based, instance based, etc.), service models (SaaS, PaaS, IaaS) and deployment models (private, public, hybrid, community).  Thus, an answer with any significant depth that is correct when describing a fabric based community PaaS will most likely be incorrect when applied to an instance based private IaaS.

At the risk of oversimplifying, let’s just consider a simple VM running on a bare-metal, hypervisor-based virtualization platform.  Although the hypervisor abstracts the hardware and makes it available to the VM, the VM is still bounded by the physical server itself.  What I mean by this is that although you may be able to move a live VM from one physical server to another, the entire VM (memory and processor resources) must reside on one physical server, and a single virtual LUN is required for storage.

The instance based cloud is very similar (in fact, Amazon’s EC2 uses Xen based VMs at its core).  This one-to-many relationship between physical resources and user containers (call them VMs if you like, but technically they should be referred to as instances) obviously puts limits on the linear scalability and redundancy of this cloud approach.  For many, this scalability limitation is offset by the ease of porting an application to an instance based cloud.

Fabric based clouds achieve higher scalability through the use of a fabric controller that keeps track of all of the computing resources (memory, processor, storage, network, etc.) and allocates them as services to applications.  The physical resources can be distributed among many physical systems.  Again, at the risk of oversimplifying, the fabric controller is like an operating system kernel, and the fabric itself acts similarly to a traditional OS in its relationship to a specific application.  Fabric based clouds have a many-to-many relationship that allows many applications to use resources on many physical systems.  This model results in superior scalability and, theoretically, less downtime.  However, this comes at the cost of application compatibility, as applications must be designed to run in a fabric.
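The contrast between the two models can be sketched with a toy capacity check (the numbers and function names are made up purely for illustration):

```python
from typing import List

# Toy contrast between the two cloud styles described above.  In an
# instance based cloud, an instance must fit entirely within a single
# physical host; a fabric controller can satisfy a request from the
# pooled capacity of many hosts.

def instance_fits(host_free_gb: List[int], vm_gb: int) -> bool:
    """Instance model: some single host must hold the whole instance."""
    return any(free >= vm_gb for free in host_free_gb)

def fabric_fits(host_free_gb: List[int], app_gb: int) -> bool:
    """Fabric model: the controller can spread the app across hosts."""
    return sum(host_free_gb) >= app_gb

hosts = [10, 12, 8]  # free memory (GB) on three physical servers
```

A 16 GB workload fails the instance check here (no single host has 16 GB free) but passes the fabric check (30 GB pooled), which is the scalability difference in a nutshell – provided the application was designed to be spread across the fabric in the first place.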

So yes, in some instances (pun intended) cloud computing is just large scale server virtualization but cloud computing is not necessarily the same as virtualization and there are many examples of cloud computing that are significantly different from traditional virtualization.


Monitoring Virtual Environments


We keep hearing that it doesn’t matter if your machine is physical or virtual.  Your software will still work just fine.  That’s true most of the time, but there are some exceptions, and monitoring is one of them.  In truth, the monitoring tool will still run and report data, but taken on its own that data can be misleading.

Remember that in a hypervisor based virtual environment, the guest OS is typically unaware that the hardware has been abstracted and that resource scheduling is taking place to provide shared computing resources based on some preset business rules (some guests may be configured to get more resources than others).

In a scenario like this, a legacy monitoring tool that is targeted at the guest VM may report false positives concerning resource availability.  Typically you might see near 100% CPU consumed.  This will be based on the telemetry coming back from the guest VM indicating that it is nearly out of resources when, in reality, more resources may be available on the host – they are simply not committed or allocated to that VM at that point in time.

To get a more realistic and complete view of what is actually happening, the monitoring tool would need to monitor the hypervisor and all of the guests, correlating the telemetry from all of them and providing a more holistic view of the availability of resources.
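As a simplified illustration of that correlation (all figures are made up), relating guest-reported CPU to what the hypervisor actually allocated shows how little of the host a “pegged” guest may really be consuming:

```python
# Toy correlation of guest and hypervisor telemetry.  A guest that
# reports 95% CPU may simply be using 95% of a small allocation; scaling
# by the allocation shows the share of the physical host consumed.

def host_utilization(guest_cpu_pct: float,
                     allocated_cores: int,
                     host_cores: int) -> float:
    """Guest-reported CPU usage scaled to the physical host's capacity."""
    return guest_cpu_pct * allocated_cores / host_cores

# A guest pinned near 100% on 2 of 16 host cores consumes only a small
# fraction of the physical box -- the "near 100% CPU" alarm is relative
# to the guest's allocation, not to the host.
busy_share = host_utilization(95.0, allocated_cores=2, host_cores=16)
```

A guest-only tool sees the 95% and raises an alarm; a tool that also watches the hypervisor sees that under 12% of the host is in use and that more resources could be allocated.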

Products like this are starting to emerge.  Make sure that when you plan to migrate a server or application to a virtual environment you also plan for the monitoring requirements.