ConfigMgr – Applications with Runtime Dependencies

Recently, Andreas Stenhall posted a fantastic blog post showing some PowerShell code to identify applications that are dependent upon other runtime frameworks such as .NET or Visual C++.  This got me extremely excited, as Visual C++ 2005 is already End of Life and VC++ 2008 will be going EOL early next year.

Being able to quickly and easily identify applications that could be impacted if one or more of these legacy frameworks are removed has, up until now, been primarily a manual effort.

This post is designed to expand upon Andreas' work and extend his solution into SCCM, where a majority of us actually manage our systems.

Extending SCCM Inventory

The first step is to extend your ConfigMgr inventory to collect the proper WMI Class data during Hardware Inventory.

To do so, open up your Configuration Manager Console, browse to Administration > Client Settings > Default Client Settings, and open Properties.

Select Hardware Inventory and hit the Set Classes button.

Click Add, connect to a computer, and search for the following WMI classes:

  • Win32_InstalledWin32Program
  • Win32_InstalledProgramFramework

Hit OK through the remaining windows to save your changes.  During the next inventory cycle, the information should begin to populate in your ConfigMgr database.

Creating a Report

Now that we’ve inventoried the data, we need to do something with it.  Creating a report is a great first step, as we can start searching for applications running EOL or near-EOL frameworks and begin either upgrading those applications or working with the application developers or 3rd-party ISVs to get them updated to use a newer framework.

Now, for those of you who are used to writing SQL queries and reports against inventoried data, I urge caution before leveraging the SQL code below and/or the RDL I’ve attached to this post.  The v_GS_INSTALLED_WIN_32PROGRAM view can get HUGE, which in turn dramatically slows down the SQL, especially in an environment with tens (or hundreds) of thousands of clients.

If you simply want a SQL dump of all the Programs and their associated frameworks, you can use the following SQL code:

SELECT DISTINCT prog.Vendor0, prog.Name0, prog.Version0, pf.FrameworkName0, pf.FrameworkVersion0

FROM v_GS_INSTALLED_PROGRAM_FRAMEWORK pf -- NOTE: verify the framework view name in your site database; it is the view backing Win32_InstalledProgramFramework

LEFT JOIN (SELECT ProgramId0, Vendor0, Version0, Name0 FROM v_GS_INSTALLED_WIN_32PROGRAM) prog ON pf.ProgramId0 = prog.ProgramId0

This query has been optimized so that it executes within about 25 seconds in my environment with ~30K systems.  To further improve this (as I have done with the report), add on a WHERE statement to filter the information down even further by the FrameworkName as such:

SELECT DISTINCT prog.Vendor0, prog.Name0, prog.Version0, pf.FrameworkName0, pf.FrameworkVersion0

FROM v_GS_INSTALLED_PROGRAM_FRAMEWORK pf -- NOTE: verify the framework view name in your site database; it is the view backing Win32_InstalledProgramFramework

LEFT JOIN (SELECT ProgramId0, Vendor0, Version0, Name0 FROM v_GS_INSTALLED_WIN_32PROGRAM) prog ON pf.ProgramId0 = prog.ProgramId0

WHERE pf.FrameworkName0 = @FrameworkName AND pf.FrameworkVersion0 = @FrameworkVersion

ORDER BY prog.Name0, prog.Version0, pf.FrameworkName0, pf.FrameworkVersion0

The above query is what I use within the report. You will notice two SQL Parameters which correspond to the FrameworkName and FrameworkVersion that you want to query against.  This allows you to search for systems based on a specific framework and version (think EOL hunting).
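The EOL-hunting logic behind those two parameters is easy to prototype outside of SQL as well. Here is a small Python sketch with entirely made-up program/framework rows (the app names and the EOL list are illustrative assumptions, not inventory data) showing the same filter idea:

```python
# Hypothetical (Name0, FrameworkName0, FrameworkVersion0) rows, mirroring
# the columns returned by the report query above. Values are made up.
inventory = [
    ("App A", "VC++", "8.0"),   # Visual C++ 2005 runtime (EOL)
    ("App B", "VC++", "9.0"),   # Visual C++ 2008 runtime (EOL soon)
    ("App C", "VC++", "14.0"),  # Visual C++ 2015 runtime
]

# Framework/version pairs you consider end-of-life (illustrative list only).
EOL = {("VC++", "8.0"), ("VC++", "9.0")}

def flag_eol(rows):
    """Return the names of programs that depend on an EOL framework version."""
    return sorted({name for name, fw, ver in rows if (fw, ver) in EOL})

print(flag_eol(inventory))  # ['App A', 'App B']
```

The SQL parameters play the role of the EOL set here: you feed in one framework/version pair at a time and get back the affected applications.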

Download Report –  You’ll need to update the datasource within Report Builder and point it to your own data source.

Wrap Up

Now that you have access to the data, you can begin identifying applications that are dependent upon older versions of software frameworks including Visual C++, Java, etc.  From there, start building collections of systems that have these older applications installed to handle upgrades as well as the (eventual) removal of the legacy runtimes.


SCCM Collection Queries Running Slow? Split ‘Em up!

Like many of you, my SCCM environment contains a rather large number of collections (1000+).  These collections are used for various purposes, from identifying systems with certain software installed to identifying systems by hardware attributes such as make, model, or free disk space.

For each of these collections, we have different ways to populate the membership.  We can use Direct Memberships, Collection Queries, or Collection Include/Exclude rules.  Microsoft has a nice little guide showing How to Create Collections, which gives an explanation of each.  Go ahead and read up, I’ll wait…

Ok, now that you are all caught up on the various Collection Membership Rules, I want to dive into the Query Rule a bit further.  Again, Microsoft has some information on How to Create Query Rules.  If you are unfamiliar with this process, please read up before continuing.


Let’s say your environment has 10,000+ clients and you need to define a collection of systems that have Microsoft Visio Professional 2016 installed.  Let’s lay out the criteria for this collection before we build it.

  • The collection must contain ALL instances of ‘Microsoft Visio Professional 2016’ regardless of architecture (x86 and x64)
  • The collection should be updated once per day and NOT use incremental updates.

If we take the above parameters into account, we should be able to come up with a collection query rule that looks something like this:

select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Client
from SMS_R_System

inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId

inner join SMS_G_System_ADD_REMOVE_PROGRAMS_64 on SMS_G_System_ADD_REMOVE_PROGRAMS_64.ResourceID = SMS_R_System.ResourceId

where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName = 'Microsoft Visio Professional 2016'

or SMS_G_System_ADD_REMOVE_PROGRAMS_64.DisplayName = 'Microsoft Visio Professional 2016'

As you can see from the above query, we are looking at both the 32-bit and 64-bit ADD_REMOVE_PROGRAMS WMI classes.  Once the new collection is created, it will take a few moments for the Collection Evaluator to update the collection membership so we can see how many systems we have.  Each environment will vary in how long it takes to execute this query and how many members the collection has once it has updated.

Analyzing the Results

The Collection Evaluator is the Site System component responsible for executing Collection Membership Queries and ultimately keeping your collections up to date.  Microsoft has an excellent tool that comes with the ConfigMgr Toolkit called CEViewer.exe which can be used to see all of your collections and details about their most recent evaluations.  Microsoft has a nice post on How to use CEViewer.exe.

If we open CEViewer on our Site Server and look at the last evaluation time for our new collection, we can see how much time it took for that evaluation to occur.   In our case here, we see that it took 28.18 seconds to evaluate.


You may be asking what an “acceptable” threshold for collection evaluations is.  Unfortunately, I haven’t seen anything from Microsoft on the subject, so here is my own personal recommendation: if a collection evaluation takes more than 20 seconds, you should look at optimizing the query rules.

Help!  My collection evaluations are taking too long!

There are a couple of really simple tweaks we can make to help reduce our overall collection query evaluation times.  (NOTE: Making changes to existing collections or collection queries will immediately cause that collection to update its membership)

  1. Use SELECT DISTINCT in your Query Rules.
  2. Split up your Query Rules into individual Queries.

Let’s start with the first item.  Using SELECT DISTINCT in all your query rules ensures that when a query rule is evaluated, each potential system is only returned one time.  We can see this behavior using the Monitoring > Queries node in the ConfigMgr console.  Let’s take a look at the difference between these two queries.  First, the “bad” way.

If we copy the query rule from above into a new Query Rule (Monitoring > Queries > New Query Rule) and execute it, we can see from the following screenshot that each Resource ID gets returned multiple times.  In this instance, they were each returned 59 times!


Now, let’s try it using SELECT DISTINCT.

select distinct SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Client
from SMS_R_System

inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId

inner join SMS_G_System_ADD_REMOVE_PROGRAMS_64 on SMS_G_System_ADD_REMOVE_PROGRAMS_64.ResourceID = SMS_R_System.ResourceId

where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName = 'Microsoft Visio Professional 2016'

or SMS_G_System_ADD_REMOVE_PROGRAMS_64.DisplayName = 'Microsoft Visio Professional 2016'

After running this new query, you can see that each Resource ID is only listed once and the total execution time is dramatically lower.
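To see why the join returned each Resource ID 59 times in the first place, and why DISTINCT collapses it, here is a language-agnostic sketch of the same relational logic in Python (toy data, not real inventory):

```python
# One system, joined against an inventory class holding many rows per system.
systems = [{"ResourceID": 1}]
arp_rows = [{"ResourceID": 1, "DisplayName": f"App {i}"} for i in range(59)]

# The inner join pairs the system row with every matching inventory row,
# so the same ResourceID comes back once per matching inventory row.
joined = [s["ResourceID"] for s in systems
          for r in arp_rows if r["ResourceID"] == s["ResourceID"]]
print(len(joined))    # 59 copies of the same ResourceID

# SELECT DISTINCT collapses those duplicates to one row per system.
distinct = set(joined)
print(len(distinct))  # 1
```

The evaluator still pays for producing (and then de-duplicating) the intermediate rows, which is why the next optimization matters too.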


Now, let’s go back to our collection and change it to use SELECT DISTINCT.  There are two ways you can do this.  The first is to edit the WQL directly, as I’ve shown above.  The other (easier) way is to just check this box.  And if you ask me, this should be checked by default!


Divide And Conquer

The second way to speed up your collection evaluations is to split your query rule into multiple query rules.  In our example, we are joining three different WMI classes (SMS_R_System, SMS_G_System_ADD_REMOVE_PROGRAMS, SMS_G_System_ADD_REMOVE_PROGRAMS_64).  Running this query essentially pulls all results from all three classes, checks for matches against DisplayName, and THEN finally pulls them into the collection.  Even with SELECT DISTINCT, we are still pulling ALL DISTINCT results from each class.

To improve performance here, simply split your queries against SMS_G_System_ADD_REMOVE_PROGRAMS and SMS_G_System_ADD_REMOVE_PROGRAMS_64 into their own query rules.  And don’t forget to use SELECT DISTINCT!


select distinct SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name, SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName = "Microsoft Visio Professional 2016"

select distinct SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType, SMS_R_SYSTEM.Name, SMS_R_SYSTEM.SMSUniqueIdentifier, SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS_64 on SMS_G_System_ADD_REMOVE_PROGRAMS_64.ResourceId = SMS_R_System.ResourceId
where SMS_G_System_ADD_REMOVE_PROGRAMS_64.DisplayName = "Microsoft Visio Professional 2016"
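Conceptually, the two split rules behave like a set union: each rule returns its own distinct ResourceIDs, and the collection evaluator merges them, producing the same membership the single OR-join did. A quick Python sketch with made-up ResourceIDs:

```python
# Toy ResourceIDs matching Visio in each ARP class (made-up values).
arp32 = {101, 102, 103}   # hits from SMS_G_System_ADD_REMOVE_PROGRAMS
arp64 = {103, 104}        # hits from SMS_G_System_ADD_REMOVE_PROGRAMS_64

# Each split query rule returns its own distinct members; the collection
# evaluator merges the rules, which is just a set union.
members = arp32 | arp64
print(sorted(members))    # [101, 102, 103, 104]
```

The win is that each rule only scans one inventory class instead of materializing the cross-join of all three.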

Now if we evaluate the collection membership and go back to CEViewer, we can see that the evaluation time has been drastically reduced to well within our artificially defined “threshold”.



To recap, use CEViewer to keep an eye on your Collection Evaluations.  In addition, when creating your collection queries make sure to use SELECT DISTINCT and split out your query rules to improve performance where possible.

IE Enterprise Mode Edge Redirect Overwritten by ConfigMgr Client Settings

Like most companies out there, my company is getting ready to migrate to Windows 10.  As part of our migration, we are using IE Enterprise Mode to handle many legacy web applications to ensure our end users get the best experience possible, and reduce the manual effort to ensure they are using the proper browser (and browser mode).

IE Enterprise Mode includes the ability to automatically redirect sites from Microsoft Edge to Internet Explorer on Windows 10.  This is fantastic for those who choose to use Edge as their default browser (as I do).

This post will not be covering exactly HOW to implement IE Enterprise mode as there is plenty of that documentation out there on the web.  Instead, I’ll be focusing on a recent discovery that my colleagues and I made.


You’ve spent the last several months working with business partners to come up with a customized MS Edge redirect XML file and are ready to implement the file via GPP (or Compliance Settings or any other method you choose to set the registry key).  You choose to set this on a Per-User basis because you want it to follow the user, not the machine (don’t ask why, just assume this is the reason).

The registry key that gets configured with your custom MSEdge xml file is at:



In addition to the above, you are running System Center Configuration Manager build 1602 (v1511 may also be affected – see below).


You expect this new XML file to redirect your custom LOB apps, but it doesn’t.  Upon further inspection within Edge (HINT: Use about:compat in the URL bar of Edge), you ONLY see entries for your site server (specifically the server hosting the Application Catalog site).

The Solution

I messaged David James (@djammer) on the ConfigMgr Product Team to confirm that the ConfigMgr Client Settings to add the Application Catalog URL in Trusted Sites is, indeed, using an MS Edge redirect XML file similar to that from IE Enterprise Mode.


As you can see from that last tweet, the solution (workaround really) is to set the reg key using the same path in the HKEY_LOCAL_MACHINE hive instead of HKEY_CURRENT_USER.  Doing so allows your custom IE Enterprise Mode XML file to load and not be overwritten.

This can be verified in SoftwareCatalogUpdateEndpoint.log by looking for this



I hope this information can help someone else out there.  IE Enterprise Mode and ConfigMgr are awesome products and will be integral to a successful migration.

Using a BitLocker Data Recovery Agent to unlock a BitLocker encrypted drive

This blog post is a follow-up to my first post on BitLocker, MBAM and Data Recovery Agents (DRA).

In this post we’ll cover actually USING the BitLocker DRA to recover/unlock a BitLocker Encrypted drive using the BitLocker DRA Certificate.


Installing a BitLocker DRA Private Certificate

Before you can actually unlock a drive using the DRA certificate, you must install the private (.pfx) certificate file on your system. My recommendation, however, is to install it under a local account (not a domain account) to avoid a potential issue with Credential Roaming.

  1. Log into the system using a local (Administrative) account – again, this is to avoid the Private certificate from roaming with individual users and installing on multiple systems.
  2. Locate the BitLocker DRA (.PFX) private certificate file (obtained from your Certificate Authority) and double-click on it.
  3. Follow the wizard and provide the password for the private key (should be provided by your Certificate Authority also).

  4. Click Next through the rest of the wizard pages.
  5. Delete the .PFX certificate file from the machine.

One other recommendation: keep track of who has this certificate and where it’s installed. When you have to renew the certificate, this will make it much easier to go back through and update the locally installed private certs.


Unlocking a BitLocker Encrypted Drive with a BitLocker Data Recovery Agent

Now that we have the Private (PFX) certificate installed, we can proceed with unlocking BitLocker encrypted drives. Unlocking a BitLocker Encrypted drive starts at the Command Prompt (Elevated) where we can then leverage the manage-bde.exe utility to work with BitLocker Drive Encryption.

  1. At the (elevated) Command Prompt, type manage-bde -protectors -get <drive letter> where <drive letter> is the drive you wish to unlock. You should see an output similar to below (Image credit: TechNet).

  2. Take special note of the Certificate Thumbprint highlighted above. That long string is your certificate ID, which you will use to actually unlock the drive.

NOTE: It IS possible to have more than one certificate listed here if your company uses more than one DRA cert for BitLocker. You may have to try each one until you get one to work.

  3. To unlock the drive, type manage-bde -unlock <Drive Letter>: -Certificate -ct <Certificate Thumbprint>



That’s pretty much all there is to it. Recovering a BitLocker encrypted drive with a BitLocker DRA certificate is pretty simple once it’s all set up. Of course, I would still recommend using MBAM or Active Directory recovery methods as your primary recovery method (they are a lot easier), however this will hopefully give you that ‘warm & fuzzy’ feeling knowing that you can always unlock a BitLocker encrypted drive.

BitLocker, MBAM and Data Recovery Agents (DRA)

I’ve been using the Microsoft BitLocker Administration and Monitoring (MBAM) software from the Microsoft Desktop Optimization Pack (MDOP) for the past couple of years and I love it. It makes enforcement, reporting and key recovery for systems fairly simple once the pre-requisites have been met (i.e. TPM Enabled and Activated). In this post, I’ll be discussing a lesser known method of securing your BitLocker encrypted drives with Data Recovery Agents (DRA).

Data Recovery Agents – What are they?

A Data Recovery Agent, or DRA, is an account typically based on a Smart Card or Certificate which can be used for Encrypting and Decrypting a file or folder (EFS) or an entire drive (BitLocker). In our case we will be discussing a BitLocker DRA.

Why would I use a Data Recovery Agent when I have BitLocker?

Honestly, most people don’t. MBAM already handles key escrow, enforcement, key recovery and reporting for the BitLocker environment and does a very good job of it. However, I’ve seen a few issues during implementation that prompted me to take a closer look at managing our overall BitLocker environment, outside of just what MBAM provides. Here are a couple of scenarios I’ve seen which have caused us issues in recovering drives:

  • MBAM cycles a new key but escrow back to the server fails.
  • Helpdesk or end user manually encrypts drives with BitLocker but MBAM doesn’t get installed.

There may be other instances in which MBAM is unable to escrow keys however the above were the ones I saw most. So, just how do we solve this issue? There are numerous ways we could look at this from a process perspective, but I want to reduce the amount of human error and interaction wherever possible. Some ideas are:

  • Required deployment of MBAM to all systems – This could cause unwanted prompts for compliance on end user systems and doesn’t solve issues where MBAM simply fails to connect back to the server.
  • Store everything in Active Directory – Again, you still need a connection to AD for this, and in a large environment this can significantly increase the size of your AD database (your AD team may not like this).
  • Implement processes for your Help Desk to validate that keys are being escrowed – This becomes cumbersome and unsustainable.

By implementing a BitLocker Data Recovery Agent, you always have an additional recovery method available just in case MBAM either isn’t there or (gasp!) fails to properly escrow a key. I use this as my failsafe, my life preserver, my backup because telling a user, manager or heaven forbid an executive that their data is lost because something “just went wrong” (and they forgot to make a backup or stopped the backup software) is not the most pleasant part of our job.

How do I create a BitLocker Data Recovery Agent

I’ll be the first to admit that I’m not well versed in managing a PKI infrastructure, so instead of diving into the details and (possibly) getting it wrong, I’d suggest you reach out to your PKI team and request a certificate that can be used as a Data Recovery Agent for BitLocker. They should provide you with two certificate files: a public .CER certificate, which will be deployed to all your systems, and a private .PFX certificate, which allows you to decrypt systems that were encrypted and have the DRA installed.

Setting up the BitLocker Data Recovery Agent

To configure and deploy the BitLocker Data Recovery Agent, we will leverage Group Policy. I use the same GPO that I use for configuring MBAM. The following steps will guide you in setting up your BitLocker DRA Certificate and other required/recommended settings for using a BitLocker DRA.

1 Edit the Group Policy Object that will apply to client machines.  
2 Expand Computer Configuration > Policies > Administrative Templates > Windows Components > BitLocker Drive Encryption  
3 Enable the setting Provide the unique identifiers for your organization. For BOTH the BitLocker identification field and Allowed BitLocker identification field use ‘MyCompany‘.

NOTE: The value you enter here is CASE SENSITIVE! Make sure it’s typed EXACTLY the same in both locations.

4 Expand Computer Configuration > Policies > Administrative Templates > Windows Components > BitLocker Drive Encryption > Fixed Data Drives  
5 Enable the policy Choose how BitLocker-protected fixed drives can be recovered and configure it EXACTLY as shown in the screenshot

Perform the same steps for Operating System Drives and Removable Data Drives

6 Expand Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > BitLocker Drive Encryption

Right-Click it and select Add Data Recovery Agent

7 On the Welcome screen of the Add Recovery Agent Wizard, click Next.  
8 Click on the Browse Folders button and select the exported certificate retrieved from your CA. (Make sure this is the .CER file, which contains only the public key, and NOT the .pfx file, which contains the private key.)
9 Click Next and finish the wizard. You should then see that the certificate of the DRA is listed.


This post concludes with you (hopefully) having a BitLocker DRA certificate installed in your environment which should provide you with an additional recovery method for your BitLocker encrypted drives. If you haven’t quite gotten that ‘warm and fuzzy’ feeling yet, stay tuned for my follow-up article on Using a BitLocker Data Recovery Agent to unlock a BitLocker encrypted drive.

WinPE 5.0 x64: Microsoft.SMS.TSEnvironment Unavailable?

Recently I began exploring the use of Prestart Commands in my Configuration Manager 2012 R2 environment.  I’d previously leveraged them in the form of a “WebService Boot ISO”, compliments of Maik Koster.  I figured this would be no big deal, however I found myself running into trouble right out of the gate.

Specifically, the issue I ran into was not being able to load the Microsoft.SMS.TSEnvironment COM object during the WinPE Prestart phase (before you select a Task Sequence).  Now, TechNet provides some lovely documentation telling me that this is for sure possible, and they even provide a nice little code snippet showing that it should work.  The only problem: when I try it, I get this ugly error in PowerShell:

Microsoft.SMS.TSEnvironment Error

Strange error so I start doing some searching and come across this forum posting:  WinPE SysNative Forum Post

Ok, easy enough.  I’ve dealt with this before, so we’ll just load up the cmd shell in WinPE 5.0 x64 and launch the 32-bit PowerShell.exe.

SysNative Path Not Found

Path not found.  What???  How can this be?  It’s always “just been there”.

Ok, enough whining, just call it from SysWOW64 directly.

No WindowsPowerShell Directory!

Or not.  The WindowsPowerShell directory doesn’t even exist!  No 32-bit instance of it is here!  And for that matter, no 32-bit instances of cmd.exe, cscript.exe, etc.  I even confirmed this by mounting the boot image .wim file.

You may now be thinking, “Just use DISM to load the 32-bit PowerShell components from the ADK”.  Yeah, that doesn’t work.


Just for good measure, I loaded up a stock 64-bit MDT 2013 boot image (WinPE 5.0, of course) and got the same result.

Now here is the kicker.  Microsoft.SMS.TSEnvironment IS available during the Prestart phase, BUT (you knew this was coming) you have a very limited window in which this environment is accessible.  If you are just trying to test out some code (before making permanent changes to your boot images), you can add a Prestart command to launch cmd.exe /k (the ‘/k’ keeps the command window open so you can test).


So long as you are executing your code leveraging this Prestart method, you can access Microsoft.SMS.TSEnvironment.

I hope this helps someone out there.  Took me a while to track this “limitation” down while testing out new code.


PowerShell: Identifying Hard-Wired Network Connections

As is often the case with scripting, automation, and tool making, we find ourselves needing to ensure we have a stable (wired) network connection before performing certain tasks on a system.  Whether it be automating a domain join or simply copying a large file, ensuring you have that hard-wired connection can be critical.

After researching the internet and interrogating WMI on a few different test systems, I concluded that leveraging the Win32_NetworkAdapterConfiguration class was going to be my best option.  In one line of code, we can get a collection of all hard-wired network connections on a system.

$NetworkConnections = Get-WmiObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled='TRUE'" | Where-Object { ($_.Description -notlike "*VMware*") -and ($_.Description -notlike "*Wireless*") }

Now, to explain some of the filter choices I used.  The first is obvious: IPEnabled=True dramatically limits the list of interfaces by only showing those with an IP address.

Next, we jump into the Where-Object cmdlet.  (Yes, I know that in PowerShell 3 I could have used the shortened syntax, but I also needed to support systems still on PowerShell v2.)  In the Where-Object cmdlet, we leverage the Description property and strip out any Wireless and VMware network adapters, since these may show up in the list but we know they aren’t wired.

Now, you may be asking yourself, “Why not just filter out the word ‘Virtual’ to cover all platforms?”  Well, because of Hyper-V, that’s why.  You see, Hyper-V actually takes over your network connection when you bind it, so all network traffic on your system (physical and virtual) is routed through the virtual switch.  So if you have Hyper-V enabled, you could have a network connection named something like this:


As you can see, my Hyper-V Virtual Ethernet Adapter has the word “Virtual” in it, so we don’t want to exclude that from our search results.
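The -notlike filtering above is just wildcard matching on the adapter description, so the logic is easy to sanity-check in any language. Here is a Python sketch (adapter names are examples; fnmatchcase is case-sensitive, a simplification of PowerShell’s case-insensitive -notlike) showing why excluding only “VMware” and “Wireless” keeps the Hyper-V-bound physical NIC:

```python
from fnmatch import fnmatchcase

def is_wired(description):
    """Mirror the PowerShell filter: drop VMware and Wireless adapters only."""
    return not any(fnmatchcase(description, pat)
                   for pat in ("*VMware*", "*Wireless*"))

adapters = [
    "Intel(R) Ethernet Connection I219-LM",
    "Intel(R) Dual Band Wireless-AC 8260",
    "VMware Virtual Ethernet Adapter for VMnet8",
    "Hyper-V Virtual Ethernet Adapter",  # a physical NIC bound to a vSwitch
]

# Excluding the word "Virtual" instead would wrongly drop the Hyper-V entry.
print([a for a in adapters if is_wired(a)])
```

Running this keeps the Intel Ethernet NIC and the Hyper-V adapter while dropping the wireless and VMware entries, which is exactly the behavior we want from the Where-Object filter.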