Configuration Manager is a constantly evolving and improving product. Distribution Points (DPs) in Configuration Manager have advanced quite a bit since SCCM 2007. Configuration Manager 2012 introduced bandwidth scheduling and throttling to the DP role, a feature previously limited to secondary sites. For many organizations, secondary sites are no longer required; the new Distribution Point functionality is sufficient to replace many secondary site use cases.
TechNet does a fantastic job of educating IT Pros on what the new features are and how to configure them. What I’m going to attempt to do in this post is help identify some use case scenarios where they make sense.
Let’s start with a high-level review of the different types of Distribution Points (DPs).
Distribution Point Concepts
Distribution Points (DPs) provide content (applications, software updates, etc.) to clients. Boundary groups (groups of boundaries containing AD site info or IP subnet, IP range, IPv6 prefix) are assigned to DPs to help clients locate preferred DPs. A DP can optionally be configured as a fallback content point so that clients that cannot retrieve content from a preferred content point can access it from the fallback location. For a client to successfully retrieve content, it must be in a boundary associated with a boundary group on a preferred or fallback DP.
Standard Distribution Point
A standard Distribution Point is used to serve content to clients. There is a limit of 250 DPs per site (and secondary site).
Pull Distribution Point
A Pull DP is very similar to a Standard DP except that it gets its content from another DP (known as a source DP). This minimizes the load on the site server since the Pull DP manages its own content transfer in much the same way that a Configuration Manager client would. There is a limit of 2000 Pull DPs per site (and secondary site).
PXE & Multicast Distribution Points
DPs can be configured to respond to PXE requests and send multicast streams as part of OSD scenarios. In order to support these features, WDS must be installed and enabled on the distribution points. Both Standard DPs and Pull DPs support PXE and Multicast.
Cloud Distribution Point
A Cloud DP is an Azure hosted distribution point that can be rapidly scaled up or down to meet changing requirements.
It has many of the advantages of other cloud-based IaaS offerings. Cloud DPs do not support OSD or SUS since they do not support PXE or software update packages. There are other limitations as well. For more information on Cloud DPs, check TechNet.
Use Case Scenarios
| DP Type | Sample Use Case |
| --- | --- |
| Standard DP | Standard DPs make sense anywhere there are large numbers of clients to serve. Although there is no clear line in the sand, it’s fairly easy to make the case for a DP at a location with more than 50 clients. |
| Pull DP | Augment the number of DPs beyond 250 per site (up to 2,250) and/or minimize the content distribution load on the site server(s). |
| PXE & Multicast DP | Support for OSD. Example: migration from Windows XP to Windows 7, 8, 8.1, etc. |
| Cloud DP | Support for elastic operations such as a temporarily large distribution to clients. Example: rollout of a new CRM tool. |
Depending on the complexity of your environment you may need to mix and match DPs to meet your specific requirements. Of course, all of these scenarios can be made more efficient by incorporating BranchCache support on clients. For more information on how to use BranchCache to optimize software distribution while minimizing infrastructure components see my post on CanITPro.
While Windows 8.1 and Windows Server 2012 R2 were released earlier this month, when nobody was looking, System Center 2012 R2 Configuration Manager came out. Did anybody notice? Aside from support for Windows 8.1 and Windows Server 2012 R2, there are quite a few new features. I understand that many organizations typically wait before deploying new versions of products, but what’s in store for those who are ready to install, if only for evaluation purposes? Here are the features that I’m most interested in exploring:
Profiles, Profiles, Profiles
A raft of new profile types can be managed including Remote Connection profiles, VPN profiles, Wi-Fi profiles, and Certificate profiles. This can really simplify the management of some complex settings across devices.
Reassign clients to another site in the hierarchy. This will primarily be useful for large organizations with a CAS.
Many new features and enhancements, including user self-enrollment for Android and iOS using the company portal app. Another neat new feature that I’m excited about is support for personal and corporate-owned devices. This will be useful in lifecycle management and BYOD scenarios where a selective wipe makes more sense than a full wipe when a device is lost. There are also some new compliance settings specifically targeted at mobile devices.
Software Distribution and Application Management
There’s a new Deployment Type for web based applications. This is really just a way to manage links to web based applications but it does help to simplify and centralize all software deployments. There are also some new features that are intended to help manage scenarios that include Windows Store Apps and the company portal.
There are some enhancements to ADRs as well as a new type of maintenance window specifically for Software Updates. I can see this being very useful for organizations that need to manage software updates on a different schedule than normal application deployments.
Check out fellow MVP Kent Agerlund’s TechEd New Zealand presentation for demos of some of the changes. For a full list of the changes and additions in Configuration Manager 2012 R2, check TechNet.
I’m not a licensing expert and I don’t play one on TV but it occurs to me that many organizations are paying twice for their endpoint protection solutions. I have been involved in over two dozen System Center 2012 Configuration Manager deployments and only one of the organizations was even mildly interested in System Center Endpoint Protection. My understanding is that the System Center Endpoint Protection (SCEP) CAL is included in the System Center 2012 Configuration Manager CAL. So at least from a licensing perspective if you already have Configuration Manager, you have SCEP. So why are organizations paying Symantec, McAfee, Trend, or some other endpoint protection vendor in addition to Microsoft? I understand that SCEP may not fit the bill for some organizations and that they may have specific requirements that need to be addressed by their chosen solution but doesn’t it make sense to at least evaluate the SCEP option – especially if you have already paid for it? What are some of the possible reasons that SCEP is flying under the radar of most organizations?
- Microsoft isn’t in the Leaders quadrant of the Gartner Magic Quadrant; they are in the Challengers quadrant.
- There have been very few independent reviews of SCEP (apart from one pseudo-review), since it really isn’t a stand-alone product but part of a suite.
- Microsoft isn’t really pushing the solution since there is no financial upside (the product is already sold, just not deployed).
- Organizations are complacent and don’t have the time or desire to make a change.
What are some of the reasons that an organization might want to try out SCEP?
- Save money! The license is already owned as part of Configuration Manager. Why continue to pay another provider until you’ve at least evaluated it for your particular use cases?
- Minimize infrastructure and administrative overhead. Configuration Manager already has the infrastructure for managing client configurations and moving software and updates to them as part of software distribution and patch management solutions. This is essentially the same as managing endpoint policies and distributing malware signature files. Why maintain a duplicate infrastructure for third-party endpoint clients and signature files and train administrators on multiple products?
- Unified security posture visibility. When you need to understand your complete desktop security posture, do you want to get one report from your endpoint solution and another from your patch management solution and try to correlate the data to understand your actual security posture? Wouldn’t you rather have a single repository for all of the relevant data and be able to create a unified report? What about integrating endpoint protection policies with the compliance management built into Configuration Manager?
What are you waiting for? Start being SCEPtical. Turn on System Center Endpoint Protection!
I get a lot of questions about Microsoft’s mobile device management (MDM) strategy. It can be confusing because to achieve the full spectrum of management functionality, multiple Microsoft products are required:
- Exchange ActiveSync (EAS)
- System Center 2012 Configuration Manager
- Windows Intune
Can you do some MDM with only EAS? Of course. Can you do MDM with only Intune? Absolutely. So how do you explain this multi-product approach to MDM? Although not strictly true, the way I like to look at it is as a series of layers, with each layer adding additional functionality, and Configuration Manager bringing it all together.
Figure – Management layers: Exchange ActiveSync (EAS) | Configuration Manager | Intune
Microsoft calls this approach Unified Device Management (UDM) since it goes beyond simply managing mobile devices. Using the MS approach, all devices, including servers, desktops, laptops, tablets, and mobile phones, can be managed with the same tool set. Some might consider this too confusing and prefer a point solution with fewer moving parts; however, consider the following:
- Many organizations already have Configuration Manager in place
- Many organizations already have Exchange or hosted Exchange in place
- Using an incremental approach allows you to start small with the pieces you already have, without purchasing new software, and tailor the solution to your specific needs while controlling costs
Start with Exchange and Configuration Manager and add Intune when and where it makes sense.
I’m sure that most organizations perform some sort of backup of their System Center 2012 Configuration Manager (CM12) sites, however, how many of them have actually tested their backup?
Probably very few, as it can be very difficult to simulate a failure in production and perform a site recovery. Backups are good. Backups that you know you can actually restore from are better.
This post is intended to give the reader an understanding of the general case backup requirements of CM12, a sample backup strategy, and how to test the backup by simulating a failure and performing a restore of the database portion of the site. I typically walk my clients through this process before handing them the keys to their new CM12 environment.
The instructions provided here are based on System Center 2012 Configuration Manager SP1 and MS SQL Server 2012 SP1.
The Scheduled Backup Task
CM12 has a built-in maintenance task called Backup Site Server. It performs synchronization between the database, the site control file, and other key configuration elements of the Configuration Manager site. Although a restore from a database backup is also supported in CM12, this post will only address using the Configuration Manager scheduled backup task, as most administrators should be familiar with it.
As part of the site configuration, the maintenance task to perform a site backup should be configured to perform a daily backup stored in an easily accessible location. For the purposes of this post, let’s use E:\CM12_Backup.
Figure 1 – Configure Backup Maintenance Task
The success of the backup task can be verified in the following ways:
- Review the timestamp on the SMSBKUP.LOG file that the Backup Site Server maintenance task created in the backup destination folder. Verify that the timestamp has been updated with a time that coincides with the time when the Backup Site Server maintenance task was last scheduled to run. Review the log files for errors.
- In the Component Status node in the Monitoring workspace, review the status messages for SMS_SITE_BACKUP. When site backup is completed successfully, you see message ID 5035, which indicates that the site backup was completed without any errors.
- When the Backup Site Server maintenance task is configured to create an alert if backup fails, you can check the Alerts node in the Monitoring workspace for backup failures.
- In <ConfigMgrInstallationFolder>\Logs, review Smsbkup.log for warnings and errors. When site backup is completed successfully, you see Backup completed with a timestamp and message ID STATMSG: ID=5035.
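The log checks above lend themselves to a quick script. Below is a small batch file sketch that searches smsbkup.log for the completion message; the log path is an assumption based on a default installation location, so adjust it for your environment:

```bat
@echo off
rem Sketch: verify the last site backup by checking smsbkup.log for the
rem completion message. Path assumes a default ConfigMgr installation.
set LOGFILE=C:\Program Files\Microsoft Configuration Manager\Logs\smsbkup.log

findstr /C:"Backup completed" "%LOGFILE%" >nul
if %errorlevel%==0 (
    echo Site backup completed successfully.
) else (
    echo WARNING: no completion message found in %LOGFILE%.
)
```

Scheduling something like this to run shortly after the backup window provides a simple daily sanity check in addition to the manual reviews described above.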
The backup files in E:\CM12_Backup should be moved to archival media as per corporate standards. Multiple copies should be maintained in the event that one copy is corrupted or unavailable, as it is preferable to restore from an older backup than to recreate the entire infrastructure manually if the latest backup is unavailable.
Additional Items to Backup
The Backup Site Server task is intended to back up key elements of the Configuration Manager site that require synchronization or other special attention.
Items that are not backed up by the Backup Site Server maintenance task that should be considered for inclusion in any supplementary backup tasks are listed below:
- Any custom reports and extensions used to create them (models, views, tables, etc.)
- The content library stored in the <drive:>\SCCMContentLib folder
- Package source files
- User State Migration Data
To restore a Configuration Manager Site Server, follow these steps:
- Run the Configuration Manager Setup Wizard from installation media or a shared network folder. For example, you can start the Setup wizard by using the Install option when you insert the Configuration Manager DVD. Or, you can open Setup.exe from a shared network folder to start the Setup wizard.
- On the Getting Started page, select Recover a site, and then click Next.
- Complete the wizard by selecting the options that are appropriate for your site recovery
Once the site has been successfully recovered, the following tasks need to be performed:
- Re-enter the passwords for any accounts that are used by site systems (refer to the Configuration guide for a list of accounts – you have one right?)
- Reinstall any hotfixes or Cumulative Updates applied to the site since the initial build
- Restore the Content Library
- Restore the Package Source Files
- Restore the User State Migration Data
Partial Recovery – Database Only
The site recovery wizard will run the same prerequisite checks that a full install will perform. If a full rebuild of the server OS was not performed and only a database recovery is required, the restore process may fail on the detection of a dedicated SQL instance with the following error:
Dedicated SQL Server instance Failed
The same error may be encountered during a test of the restoration process due to remnants of the site in the registry that may indicate a previous installation.
To remedy this error, delete the following registry keys:
- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Operations Management\Components\SMS_SITE_SQL_BACKUP
I have created a batch file that I use to speed up this process. Here is the source for my DelKeys.bat file:
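As a minimal sketch of such a batch file, the following deletes the key listed above; any additional keys in your environment would each get their own reg delete line:

```bat
@echo off
rem DelKeys.bat (sketch) - remove leftover site registry keys that can
rem block a database-only recovery. Run from an elevated command prompt.
rem /f deletes without prompting for confirmation.
reg delete "HKLM\SOFTWARE\Microsoft\SMS\Operations Management\Components\SMS_SITE_SQL_BACKUP" /f
```

As always with direct registry edits, test this in a lab before using it in production.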
Performing a Database Only DR Test
As the base OS, SQL Server, and Configuration Manager application software can all be rebuilt from generic source media, the most important part of a DR test is to verify that the site configuration and database can be restored, as these items will be specific to your organization. The following method can be used to simulate a failure and restore.
- Create one or more objects (such as collections) and make some configuration changes (such as a boundary).
- Perform a site backup. To quickly start a site backup without modifying the backup schedule, use Service Manager to start the SMS_Site_Backup service.
- Once the backup has completed, delete the objects and configuration changes made in step 1
- Use the command PREINST.EXE /STOPSITE to stop all CM12 services. Browse to “\Program Files\Microsoft Configuration Manager\bin\X64\00000409” to find the executable.
- Use SQL server Management Studio to Detach the site database
Figure 2 – Detach Configuration Manager Database
- Rename the <DB_Name>.MDF and <DB_Name>.LDF files which are found under \Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA
Delete the following three registry keys:
- HKLM\SOFTWARE\Microsoft\SMS\Operations Management\Components\SMS_SITE_SQL_BACKUP
- Perform the Site Recovery task and select restore database from backup.
- Browse to the location of the database backup.
- The site recovery details will be pre-populated.
- Browse to the location of the downloaded prerequisites from the initial installation process.
- The pre-requisite check will run and complete and then the installation will begin.
- Site and installation settings will be pre-populated as will the database information.
- Once the process is started it will appear as the original installation. Refer to the Server Build Guide for more details.
- Once the recovery is complete, verify that the objects and configuration items that you created in Step 1 are recovered.
- Remove the Test Objects and configuration items.
- Additional information about the restore can be found in the c:\ConfigMgrSetup.Log file
Now go test those backups.
Managing 32 bit and 64 bit versions of applications using Global Conditions, Requirement Rules and Deployment Types
In CM07 (System Center Configuration Manager 2007, or SCCM 2007) collections were used to determine any installation prerequisites. If different architecture versions of the same application needed to be distributed, two separate collections would need to be maintained as targets for the advertisements.
CM12 (System Center 2012 Configuration Manager) makes this simpler by including the concept of requirement rules that are evaluated at install time. Creating requirement rules eliminates the need for multiple collections to manage targeting of x86 and x64 versions of the same application.
We can simplify the administrative overhead even further by incorporating global conditions into the requirement rules.
This method is easier than using collections for targeting, as the OS version is detected at install time. Additionally, using global conditions is easier than creating requirement rules for each application deployment type.
Before I walk you through an example, consider the following scenario:
- You are running CM12CU2
- You have ten applications that each use a requirement rule that addresses x64 versions of Windows XP and Windows 7
- You update to CM12SP1, which includes support for Windows 8
- You now have to update all ten applications’ requirement rules to support Windows 8 x64
- If you had used global conditions instead, you would only need to update the one global condition and all of the applications’ requirements would automatically be updated to support Windows 8 x64
Let’s try an example with 7-Zip since it has both x86 and x64 versions. We can create a single application with two deployment types, each using a requirement rule, and each requirement rule uses a global condition that specifies that the deployment type should only run on the specified architecture version of specific operating systems. In the images below, I have already upgraded my system to SP1, so support for Windows 8 is already visible in some of the screenshots.
Overview – Creating Applications with Global Conditions, Requirement Rules and Deployment Types
This overview assumes that you are already familiar with Applications and Deployment types. More details are provided on Global conditions and Requirement rules.
Create the Global Conditions
- Create Two Global Conditions for OS Architecture (one x86 and one x64)
- Navigate to Software Library\Application Management\Global Conditions
- Click Create Global Condition
- Provide a Name such as OS Architecture x64
- Provide a Description (Optional)
- Change the Condition type to Expression
- Click Add Clause
- Set the Category to Device
- Set the Condition to Operating System
- Set the Rule Type to Value
- Set the Operator to One of
- Select all of the x64 OS versions that you want to support with this GC (if you are running CM12 SP1 or later, you can also select Windows 8 versions)
- Click OK
- Validate the Global Condition and then click OK
- Repeat the process for x86 Architectures
Create the Application and Deployment Types
- Create a source folder with both x86 and x64 MSI versions of the application. For my example I used 7-Zip
- Create an Application and select the x64 version of the MSI
- Create a new x86 Deployment Type and select the x86 version of the MSI
- Add the new Global Conditions as requirements for each Deployment Type
If you created the GC before installing SP1, you can simply update the Global condition to support Windows 8 x64 once SP1 is applied.
This same method can be used to support RT (ARM) and x64 versions of Windows 8 applications.
Over the years, I have read many books about Systems Management Server (SMS) and Configuration Manager (CM). Most of them have been focussed on preparing readers for the relevant Microsoft certification exam or on theoretical explanations of how the system works and how to configure it for basic functionality. These types of publications are great for new users looking to get a basic working knowledge of CM.
This book is different. It is written by professionals who have years of in-depth experience with the technology in some extreme use case scenarios. This background gives them some rather distinctive perspectives on how to get more out of a CM infrastructure. Whereas traditional books on the subject focus on the “What” and the “How”, Brian and Greg also include the “Why”. For example, they include decision-making frameworks that help the reader decide what is best for their particular implementation, use case scenarios, and other objectives. In situations where the best practice might prove impractical, they provide rationale for making trade-offs (e.g., budget vs. performance, Application Catalog vs. Software Center).
CM is a complex product. Each of its modules could easily be a product in itself. There are dozens of point solution products aimed specifically at a sub-feature of CM. As such, this book is not targeted at somebody who is new to the technology. There aren’t a lot of pages spent on how to install the software or the basic functionality. Rather, the writers assume a basic understanding of the feature set (easily obtained from one of the dozen or so traditional books available on the subject) and provide a series of in-depth modules intended to help a generalist administrator get more out of a specific feature as the need arises.
The organization of the book into a series of recipe cards lends itself to be easily consumed. You don’t need to read it cover to cover. You can jump right in to the topic that you currently need to learn more about. Each topic is organized into the following sections:
- Getting Ready – What you’ll need before you start
- How to do it – Step-by-step instructions
- How it works – An explanation of the mechanics of the product
- There’s more – Additional information should your requirements be a little more complex
- See also – Additional resources if relevant
This recipe book firmly establishes Greg and Brian as CM Iron Chefs. I will be recommending this book to all of my customers. Buy it directly from Packt Publishing.
Full disclosure: I have met both Brian Mason and Greg Ramsey multiple times. They did not approach me to review this book. I was asked by the publisher to review this book. As a token of their appreciation once my review is completed, I will receive a free eBook from Packt Publishing.