Regrecent comes to PowerShell

About 20 years ago, after I found out that registry keys had last modified timestamps, I wrote a tool in C++ called regrecent which showed keys that had been modified in a given time window. If you can still find it, this tool still works today, although being 32 bit it will only show changes in Wow6432Node on 64 bit systems.

Whilst you might like to use Process Monitor (or regmon before it), or similar, to look for registry changes, that approach requires you to know in advance that you need to monitor the registry. So what do you do if you need to look back at what changed in the registry yesterday, when you weren’t running these great tools, because your system or an application has started to misbehave since then? Hence the need for a tool that can show you the timestamps, although you can actually do this from regedit by exporting a key as a .txt file, which will include the modification time for each key in the output.

The PowerShell script I wrote to replace the venerable regrecent.exe, available here, can be used in a number of different ways:

The simplest form is to show keys changed in the last n seconds/minutes/hours/days/weeks/years by specifying the number followed by the first letter of the unit. For example, the following shows all keys modified in the last two hours:

.\Regrecent.ps1 -key HKLM:\System\CurrentControlSet -last 2h

We can specify the time range with a start date/time and an optional end date/time where the current time is used if no end is specified.

.\Regrecent.ps1 -key HKLM:\Software -start "25/01/17 04:05:00"

If just a date is specified then midnight is assumed and if no date is given then the current date is used.
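To make that defaulting behaviour concrete, here is a minimal sketch of how such parsing could work; this is purely illustrative and is not the script’s actual code (the function name and formats are assumptions):

```powershell
# Illustrative only: how a -start argument might be defaulted (not the script's actual code)
function ConvertTo-StartTime
{
    Param( [string]$start )

    [datetime]$parsed = New-Object DateTime
    $culture = [System.Globalization.CultureInfo]::InvariantCulture

    # A date with no time parses to midnight on that date
    if( [datetime]::TryParseExact( $start , 'dd/MM/yy' , $culture , 'None' , [ref]$parsed ) )
    {
        return $parsed
    }
    # A time with no date is taken to be on the current date
    if( [datetime]::TryParseExact( $start , 'HH:mm:ss' , $culture , 'None' , [ref]$parsed ) )
    {
        return (Get-Date).Date + $parsed.TimeOfDay
    }
    # Otherwise let .NET have a go at the full date/time string
    return [datetime]::Parse( $start )
}

ConvertTo-StartTime '25/01/17'   # midnight on 25 January 2017
ConvertTo-StartTime '04:05:00'   # 04:05:00 on today's date
```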

We can also exclude (and include) a list of keys based on matching a regular expression to filter out (or in) keys that we are not interested in:

.\regrecent.ps1 -key HKLM:\System\CurrentControlSet -last 36h -exclude '\\Enum','\\Linkage'

Notice that we have to escape the backslashes in the key names above because the backslash has special meaning in regular expressions. You don’t have to include the backslashes at all, but then the search strings would match text anywhere in the key name rather than only at the start of a key component (which may of course be what you actually want).
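A quick way to see the difference, independent of the script itself (the key paths below are made up purely to illustrate the matching behaviour):

```powershell
# Made-up key paths purely to illustrate anchored vs unanchored matching
$keys = @(
    'HKLM:\System\CurrentControlSet\Services\Tcpip\Linkage' ,  # "Linkage" is a whole key name
    'HKLM:\Software\SomeVendor\DataLinkage'                    # "Linkage" is only part of a name
)

@( $keys -match 'Linkage' ).Count    # -> 2 : unanchored text matches anywhere in the path
@( $keys -match '\\Linkage' ).Count  # -> 1 : escaped backslash only matches a key starting "Linkage"
```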

To specify a negation on the time range, so show keys not changed in the given time window, use the -notin argument.

If you want to capture the output to a csv file for further manipulation, use the following options:

.\regrecent.ps1 -key HKCU:\Software -last 1d -delimiter ',' -escape | Out-File registry.changes.csv -Encoding ASCII

I hope you find this as useful as I do for troubleshooting.

When even Process Monitor isn’t enough

I was recently tasked to investigate why an App-V 5.1 application was giving a licence error at launch on XenApp 7.8 (on Server 2008R2) when the same application installed locally worked fine. I therefore ran up the trusty Process Monitor (procmon) tool to get traces on the working and non-working systems so I could look for differences. As I knew what the licence file was called, I homed in quickly on this in the traces. In the working trace, you could see the application open the licence file, via a CreateFile operation, and then read from it via ReadFile operations. In the App-V version, however, there were no ReadFile operations for the licence file, yet no CreateFile operation was failing either, so I couldn’t understand why it wasn’t even attempting to read from a file that it appeared perfectly able to access. The same happened when running as an administrator, so it didn’t look like a file permission issue.

Now whilst procmon is a simply awesome tool, such that life without it would be unimaginably difficult, it does unfortunately only tell you about a small subset of the myriad Microsoft API calls. In order to understand more of what a process is doing under the hood, you need an API monitor program that can hook any available API call. To this end I used WinAPIOverride (available here). What I wanted was to find the calls to CreateFile for the licence file and then see what happened after that, again comparing the good and bad cases.

WinAPIOverride can launch a process but it needs to be inside the App-V bubble for the app in order for it to be able to function correctly. We therefore run the following PowerShell to get a PowerShell prompt inside the bubble for our application which is called “Medallion”:

$app = Get-AppvClientPackage | ?{ $_.Name -eq 'Medallion' };
Start-AppvVirtualProcess -AppvClientObject $app powershell.exe

We can then launch WinAPIOverride64.exe in this new PowerShell prompt, tell it what executable to run and then run it:


Note that you may not be able to browse to the executable name so you may have to type it in manually.

Once we tell it to run, it will allow us to specify what APIs we want to get details on by clicking on the “Monitoring Files Library” button before we click “Resume”.


You need to know the module (dll) which contains the API that you want to monitor. In this case it is kernel32.dll which we can glean from the MSDN manual page for the CreateFile API call (see here).


Whilst you can use the search facility to find the specific APIs that you want to monitor and just tick those, I decided initially to monitor everything in kernel32.dll, knowing that it would generate a lot of data but that we could search for what we wanted if necessary.

So I resumed the process, saw the usual error about the licence file being corrupt, stopped the API monitor trace and set about finding the CreateFile API call for the licence file to see what it revealed. What I actually found was that CreateFile was not being called for the licence file at all; searching for the licence file in the trace revealed that it was being opened by a legacy API called OpenFile instead. Looking at the details for this API (here), it says the following:

you cannot use the OpenFile function to open a file with a path length that exceeds 128 characters

Guess how long the full path for our licence file is? 130 characters! So it would seem we’re doomed with this API call, which we could see was failing in API monitor anyway:


I suspect that we don’t see this in procmon as the OpenFile call fails before it gets converted to a CreateFile call and thence hits the procmon filter driver.

The workaround, given we found that the installation wouldn’t work in any folder other than c:\Medallion, so we couldn’t install it to, say, C:\M, was to shorten the package installation root by running the following as an admin:

Set-AppvClientConfiguration -PackageInstallationRoot '%SystemDrive%\A'

This changes the folder where App-V packages are cached from “C:\ProgramData\App-V” to “C:\A”, which saves us 16 characters. The C:\A folder needed to be created and given the same permissions and owner (SYSTEM) as the original folder. I then unloaded and reloaded the App-V package so that it got cached to the \A folder, whereupon it all worked properly.
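For reference, the overall workaround looked something like the following sketch. The permission copying and the use of the package’s Path property are assumptions for illustration, and the App-V client cmdlets must of course be present; run elevated:

```powershell
# Sketch only - assumes the App-V 5.x client cmdlets and an elevated prompt
New-Item -Path 'C:\A' -ItemType Directory | Out-Null

# Give the new root the same owner (SYSTEM) and permissions as the old cache folder
$acl = Get-Acl -Path "$env:ProgramData\App-V"
Set-Acl -Path 'C:\A' -AclObject $acl

# Shorten the package installation root (note the % signs around SystemDrive)
Set-AppvClientConfiguration -PackageInstallationRoot '%SystemDrive%\A'

# Unload and reload the package so it is re-cached under the new, shorter root
$package = Get-AppvClientPackage -Name 'Medallion'
Remove-AppvClientPackage -PackageId $package.PackageId -VersionId $package.VersionId
Add-AppvClientPackage -Path $package.Path | Publish-AppvClientPackage
```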

The Taming of the Print Server

The Problem

A customer recently reported to me that sometimes their users complained (users complaining – now there’s a rarity!) that printing was slow.

The Investigation

When I logged on to the print server I observed that the CPU of this four vCPU virtual machine was almost constantly at 100%. Digging in with taskmgr, I saw that it was mostly being consumed by four explorer processes belonging to four different users who were actually disconnected (in some cases, for many days). Getting one of these users to reconnect, I saw that they had a window open on “Devices and Printers”. With over six hundred printers defined on this print server, my theory was that explorer was spending its time constantly trying to update statuses and the like for all these printers.

The Solution

Log off the users and set idle and disconnected session timeouts so that this didn’t happen again! Well, that’s great, but what if the users are actually logged on, checking print queues and the like, as administrators have a tendency to do?

What we have to remember here is that consumption of CPU isn’t necessarily a bad thing as long as the “right” processes get preferential access to the CPU. So how do we define the “right” processes on this print server? Well, we know it’s probably not the explorer processes hosting the “Devices and Printers” applet, so why don’t we ensure that these yield the CPU to more deserving processes?

Therefore, what we do here is lower the base priorities of the explorer processes. This means that if a process (technically a thread in a process) arrives in the CPU queue with a higher base priority, then it gets on the CPU before the lower priority thread.

You can do this manually with taskmgr by right-clicking on the process and changing its base priority but that’s not the most fun way to spend your day although I may be wrong.

Explorer lower priority

So I wrote a few lines of good old PowerShell to find all processes of a given name for all users and then change their base priorities to that specified on the command line. This must be run elevated since the “-IncludeUserName” parameter requires it. That parameter is used to filter out processes owned by SYSTEM or a service account – not that explorer processes are likely to be thus owned, but it means the script can be used for any process, since we shouldn’t mess with operating system processes as that can cause deadlocks and similarly catastrophic issues. Also, I would strongly recommend that you never use the “RealTime” priority as that could cause severe resource shortages if the process granted it is CPU hungry.

I first implemented this in the All Users Startup folder so it would run at logon for everyone once their explorer process had launched but I felt slightly nervous about this in case explorer got restarted mid-session.

I therefore implemented the script as a scheduled task that ran every ten minutes under an administrative account, whether that user was logged on or not, which looked for all explorer processes and set their base priorities to “Idle”, the lowest priority, so that when any spooler thread required CPU it would get it in preference to the lower priority explorer threads. However, if the explorer threads needed CPU and nothing else did, then they would still get it, despite their low priority, so potentially there are no losers. Users might experience a slightly unresponsive explorer at times of peak load, but that’s a small price to pay for happier users, I hope you’ll agree.

Param(
    [string]$processName ,
    [string]$priority = "Idle"
)

Get-Process -Name $processName -IncludeUserName | ?{ $_.UserName -notlike "*\SYSTEM" -and $_.UserName -notlike "* SERVICE" } | %{ $_.PriorityClass = $priority }

So we just create a scheduled task to run under an administrator account, whether we are logged on or not, passing in a single positional/named parameter of “Explorer”; the base priority defaults to “Idle” if not specified. If you implement this on a non-English system then you may need to change the account names above to match the localised equivalents of “SYSTEM”, “LOCAL SERVICE” and “NETWORK SERVICE”. Job done, as we say in the trade.
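If you’d rather script the task creation too, something along these lines should work on Windows 8/Server 2012 and later (a sketch: the script path, task folder and account are placeholders, and older systems would need schtasks.exe instead):

```powershell
# Sketch - register a repeating task; script path, task folder and account are placeholders
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Set-ProcessPriority.ps1 Explorer'

# Run every ten minutes, starting now, indefinitely
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 10) -RepetitionDuration ([TimeSpan]::MaxValue)

# -User/-Password make it run whether that user is logged on or not
Register-ScheduledTask -TaskName 'Lower Explorer Priority' -TaskPath '\Custom\' `
    -Action $action -Trigger $trigger -User 'DOMAIN\AdminAccount' -Password 'xxxx' -RunLevel Highest
```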

Oh, and I always like to create a folder for my scheduled tasks to keep them separate from the myriad of other, mostly built-in, ones.

task scheduler




The Curious Case of the Slowly Populating Start Menu

The Problem

When a user was logging on to a Citrix XenApp 7.7 session it was taking up to two minutes for Receiver 4.4 to populate their start menu. Given that this was the primary way of delivering applications from that desktop, this was a showstopper.

The Troubleshooting

Frustratingly, the issue was intermittent: occasionally all of the shortcuts would be present as soon as Explorer made the Start Menu available, although more often it was at least twenty seconds before they showed up, and sometimes even longer.

I tried all the usual things including:

  1. Opening the configured StoreFront URL for the store in a browser in the session for the user – no problems
  2. Disabling StoreFront load balancing (with a hosts file entry on XenApp)
  3. Disabling the mandatory profile (rename the WFDontAppendUserNameToProfile and WFProfilePath values in HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services – no need for messing about with GPOs, although watch for Group Policy background refreshes)
  4. Disabling customisations we were making to the Receiver settings
  5. Disabling “Check for Publisher’s certificate revocation” and “Check for server certificate revocation” in Internet Properties
  6. CDF Tracing (resulting in a csv trace with over 138,000 lines!)
  7. Various command line options to selfservice.exe (the handy SysInternals Strings.exe utility revealed some undocumented ones like -fastConnectLogon)
  8. SysInternals Process Monitor (procmon)
  9. Renaming the files for some of the launched Citrix processes, like ceip.exe (Customer Experience Improvement Program) and Win7LookAndFeelStartupApp.exe, that featured in the procmon trace so they wouldn’t run

Process Monitor showed that it was only the AuthManSvr.exe process (one of the many that SelfService.exe and SSOnSvr.exe launch for the user) that communicated with the StoreFront server and yet the IIS logs on the StoreFront server showed that this particular XenApp server (via its IPv4 address in the log) wasn’t generating any requests until a good twenty seconds, or more, after SelfService.exe was launched.

So I then set about finding out how to enable logging for AuthManSvr and SelfService, which didn’t take long with the help of Google – in HKLM\Software\Wow6432Node\Citrix\AuthManager, add a REG_SZ value “LoggingMode” set to “Verbose” and a REG_SZ value “TracingEnabled” set to “True”. SelfService logging was enabled by temporarily adding the required file to the local mandatory profile.
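Scripted, setting the AuthManager values might look like this (run elevated; the Wow6432Node path assumes the 32 bit Receiver on 64 bit Windows, as in the post):

```powershell
# Enable verbose AuthManager logging - both values are REG_SZ strings as per the post
$authManKey = 'HKLM:\Software\Wow6432Node\Citrix\AuthManager'
Set-ItemProperty -Path $authManKey -Name 'LoggingMode'    -Value 'Verbose' -Type String
Set-ItemProperty -Path $authManKey -Name 'TracingEnabled' -Value 'True'    -Type String
```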

Looking first in the SelfService.txt log file, located in %LocalAppData%\Citrix\SelfService, I looked for large gaps in the time stamps and found the following:

10/16:56:45 authman D CtxsJobL: Created AuthManager connection
10/16:56:45 dservice ? CtxsJobL: Do HTTP Request
10/16:56:45 dservice > CtxsJobL: {
10/16:57:05 dservice ? CtxsJobL: }00:00:20.0422895

Where the “10/” is the date, so we have a twenty second gap, which correlates with the “00:00:20.0422895” that is presumably the exact duration of the request.
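Rather than eyeballing the timestamps, a few lines of PowerShell can find the large gaps for you. This is a sketch which assumes the “dd/HH:mm:ss” stamp format shown above (the function name is my own):

```powershell
# Report gaps of $thresholdSeconds or more between consecutive "dd/HH:mm:ss" stamped lines
# (assumes the day-of-month/time format seen in SelfService.txt)
function Find-LogGaps
{
    Param( [string[]]$lines , [int]$thresholdSeconds = 10 )

    $previous = $null
    foreach( $line in $lines )
    {
        if( $line -match '^(\d{2})/(\d{2}):(\d{2}):(\d{2})' )
        {
            $this = New-TimeSpan -Days ([int]$matches[1]) -Hours ([int]$matches[2]) -Minutes ([int]$matches[3]) -Seconds ([int]$matches[4])
            if( $previous -ne $null -and ($this - $previous).TotalSeconds -ge $thresholdSeconds )
            {
                "Gap of $(($this - $previous).TotalSeconds) seconds before: $line"
            }
            $previous = $this
        }
    }
}

# e.g. Find-LogGaps (Get-Content "$env:LOCALAPPDATA\Citrix\SelfService\SelfService.txt")
Find-LogGaps @(
    '10/16:56:45 dservice > CtxsJobL: {' ,
    '10/16:57:05 dservice ? CtxsJobL: }00:00:20.0422895'
)
```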

I then cross referenced this time to the AuthManSvr.txt log file, located in %LocalAppData%\Citrix\AuthManager\Tracing, and found this section:

02/10/16 16:56:45 > T:00002FA0 . . . . . . Trying proxy auto-detect (WPAD)
02/10/16 16:56:45 > T:00002FA0 . . . . . . {
02/10/16 16:56:45 > T:00002FA0 . . . . . . . CWindowsNetworkServices::TryGetAutoProxyForUrl
02/10/16 16:56:45 > T:00002FA0 . . . . . . . {
02/10/16 16:57:03 *WRN* T:00001D90 . . . . . . . WinHttpGetProxyForUrl call failed; last error=12180
02/10/16 16:57:03 < T:00001D90 . . . . . . }
02/10/16 16:57:03 < T:00001D90 . . . . . }
02/10/16 16:57:03 T:00001D90 . . . . . Using manual proxy config.
02/10/16 16:57:03 T:00001D90 . . . . . The manual proxy info settings contains an empty proxy list string.
02/10/16 16:57:03 T:00001D90 . . . . . No proxy info found
02/10/16 16:57:03 < T:00001D90 . . . . }
02/10/16 16:57:05 *WRN* T:00002FA0 . . . . . . . . WinHttpGetProxyForUrl call failed; last error=12180
02/10/16 16:57:05 < T:00002FA0 . . . . . . . }
02/10/16 16:57:05 < T:00002FA0 . . . . . . }

Having troubleshot an issue only last month where the Receiver was stalling whilst making a connection to a published application, which turned out to be caused by “Automatically Detect Settings” in Internet Properties being ticked, I reckoned we were seeing a similar issue, since WPAD, the mechanism used for autodiscovery of proxy settings, is mentioned above.

The Solution

I therefore set about proving this theory by getting the “Automatically Detect Settings” tickbox unticked at logon before AuthManSvr.exe is launched. Unfortunately it’s not straightforward, as there isn’t a GPO setting for it, and if you monitor the registry when you change the setting manually (using procmon with a filter set on “Operation” is “RegSetValue”) you’ll find it changes a single bit in a large REG_BINARY value. As I was using AppSense Environment Manager, I knocked up the following PowerShell in a custom action in a logon node (comments removed for brevity):

[string]$regKey = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections'
[string]$regValue = 'DefaultConnectionSettings'

$current = Get-ItemProperty -Path $regKey -Name $regValue | Select -ExpandProperty $regValue

if( $current -ne $null -and $current.Count -gt 8 )
{
    $old = $current[8]
    $current[8] = $current[8] -band 0xF7 ## clear the "Automatically Detect Settings" bit

    if( $current[8] -ne $old )
    {
        Set-ItemProperty -Path $regKey -Name $regValue -Value $current
    }
}

And that, folks, was how I spent my day today! The joys of being a consultant!

I hope it helps someone else solve this problem as I did find other reports of this issue in my searches on the web for a ready made solution (if only!).

Advanced Procmon Part 1 – Filtering exclusions


For those of us who’ve been around the IT block a few times, we can remember life before procmon, and filemon and regmon, its honourable parents, and it was much, much harder to diagnose some issues (although procmon still can’t tell you everything, unlike, say, an API monitor). Anyway, this article is hopefully going to teach a few of you some extra filtering techniques that I’ve learned over many years of troubleshooting. The problem is frequently that you can’t see the wood for the trees: there are so many events captured that you can’t find the ones you want amongst the many thousands of irrelevant ones.

Once you have configured filters, they can be exported to a file via the File menu for importing at a later date or for quickly configuring another machine. I’d also select “Drop Filtered Events” from the Events menu since this requires less storage and resources, although it does what it says on the tin, so you won’t be able to see any of the dropped events if you later realise that you did actually want them.

Also, I always configure procmon to use a backing file rather than let it use the default of virtual memory, as I believe that is usually less impactful on system resources, particularly when traces have to be run for a long time. This is on the File menu.

Results that can (usually) be safely ignored


You are almost certainly not seeing malicious code attempt a stack smashing hack; what is most likely happening is that the developer of the code that has had this result returned is trying to establish how big a buffer they need to allocate in order to have the required data returned. A developer doesn’t know in advance, for instance, how many registry values there will be in a key that needs enumerating, so they call the enumerate API with a zero length buffer, which the API interprets as a request to return the size of buffer needed to hold all of the data. The developer can then dynamically allocate a buffer of this size (and free it later when finished with the data, otherwise a memory leak will ensue) and call the same API again with this buffer. You will usually see a procmon entry with all the same details very soon after the “buffer overflow” one, with a result of “success”. Many Microsoft APIs function this way.
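The pattern, simulated in PowerShell rather than C for brevity; Invoke-FakeEnumApi is invented purely to mimic the Win32 “call twice” convention, it is not a real API:

```powershell
# Simulation of the Win32 "call twice" convention: a zero-sized buffer makes the
# "API" report the size required, and the second call with that size succeeds.
$script:fakeData = [byte[]](1..16)   # pretend this is the data held by the API

function Invoke-FakeEnumApi
{
    Param( [int]$bufferSize , [ref]$requiredSize )

    if( $bufferSize -lt $script:fakeData.Count )
    {
        $requiredSize.Value = $script:fakeData.Count
        return 'BUFFER OVERFLOW'   # what procmon shows for this class of result
    }
    return 'SUCCESS'
}

[int]$needed = 0
$first  = Invoke-FakeEnumApi -bufferSize 0       -requiredSize ([ref]$needed)  # BUFFER OVERFLOW
$second = Invoke-FakeEnumApi -bufferSize $needed -requiredSize ([ref]$needed)  # SUCCESS
"$first then $second with a $needed byte buffer"
```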

buffer overflow


These will happen when APIs have been used that enumerate entries, such as the keys or values within a specific registry key. It is the API signalling to the developer that they should stop enumerating the item as there is no more data to be had.

In the example below we see that the Receiver process has been enumerating the per user installed products and after the fourth key (indexes start at zero not one) there are no more keys so the RegEnumKey API signals this by returning “no more entries”.

no more entries

Operations that can be ignored


Close operations can generally be ignored, unless you are troubleshooting a handle leak or a failed access where an operation on a file or registry key fails because the handle has been closed. Even then, that’s fairly pointless unless you have source code access for the suspect code, and I find Process Explorer better for diagnosing handle leaks anyway since it will give you a list of the handles. We can therefore add the following to our filters:

procmon filter

There are probably more depending on exactly what it is you are trying to diagnose, but these are the ones I typically start with to reduce the number of entries in a trace and make it easier to find what you are looking for. Remember also that you can right click on any item in a trace and select to exclude it, which I do when I see an item appearing frequently in a trace that I reckon has nothing to do with what I am troubleshooting:

dynamic exclude 2

I tend to do this with writes to log files, although sometimes it can be very useful to tie a specific log file entry to other procmon activity – see here for a utility I wrote to quickly show you the text in a file at a given offset that you get from procmon.

Once you’ve found what you are looking for, or at least say the start of the area where the event of interest occurs, you can exclude all events before this point by right clicking on a specific line and selecting “Exclude Events Before” from the menu. This can make a large procmon trace quicker to search and filter further.

Processes that can be ignored

Unless investigating something that may be caused by your anti-virus product, I exclude all of its processes. The same goes for other file system filter or hook based products that your gut tells you aren’t relevant, such as the AMAgent.exe process, which is part of AppSense’s Application Manager product.

Although the “System” process is ignored by default, I will sometimes unignore it, say if I’m troubleshooting a service start issue at boot, where using the “Enable Boot Logging” option from the “Options” menu is handy (as it also is for logon problems on desktop operating systems like Windows 7).

… and don’t forget that Ctrl-L is the shortcut to the filter dialog – use it and it will save you time if you are frequently tweaking filters as I tend to do.

Part 2 will cover filter includes for specific use cases such as trying to find where a particular application setting is stored in the file system or registry.

Transferring HP Recovery Media to Bootable USB Storage

Now that most desktops and laptops don’t ship with separate recovery media, like they did in the old days, and the cost of buying it afterwards is not insignificant, what happens if your hard drive completely fails, thus taking with it the aforementioned recovery media?

I kind of accidentally had this issue recently on a new laptop I was setting up so wondered if I could get the recovery media transferred from the hard disk to a bootable USB stick and then boot off this USB stick to perform the recovery to what was effectively a brand new hard drive. It was fortunately very easy to get this to work so here’s what you do:

  1. Get a blank USB stick/drive – for the recent HP laptop with Windows 8.1 I purchased, I used a 32GB stick although 16GB may just have worked.
  2. Format as NTFS – the main installation file is over 12GB but the maximum file size on FAT32 partitions is “only” 4GB so this is why FAT32 cannot be used.
  3. I’d taken an image of the laptop as it arrived, so before booting into Windows for the first time, so I mounted that on the system where I was preparing the bootable (not the destination laptop although you could use it). If your original recovery partition is still available you could use that instead.
  4. Copy all of the files/folders from the Recovery partition to the root of the USB stick. These are the folders you should see (note that they are hidden):
hp recovery media
  5. On the USB stick, rename the file “\recovery\WindowsRE\winUCRD.wim” to “winre.wim” (this is the file that bcdedit shows as being the boot device in the \boot\BCD file)
  6. Make the USB stick bootable by running the following, obviously changing the drive letter as appropriate:
bootsect /nt60 e: /mbr

If it’s a Windows 8.x device then it may be configured for SecureBoot in which case you may need to enter the BIOS and disable this temporarily just whilst you are performing the recovery in order to get it to boot from USB. Don’t forget to change it back to the original settings once the restore is complete.

I’ll now keep this bootable around just in case the hard drive should fail or otherwise get hosed in such a way that the HP supplied recovery media will not work. At well under £10 currently for a USB 2.0 32GB USB stick, it’s a small price to pay.

Note that the recovery media is protected by a software mechanism that means that you cannot apply it to a different hardware model so this is not a means to clone illegal, activated, copies of Windows!

Reasons for Reboots – Part 2

In part 1 we covered the relatively well known PendingFileRenameOperations registry value. In this post I will cover the much less well known mechanism that Windows Update uses to update files. I stumbled upon this mechanism by accident a while ago whilst trying to understand why, after applying Windows updates that demanded a reboot, there was nothing listed in PendingFileRenameOperations. It seems to be used mostly for Side by Side assemblies (WinSxS). What I found, which seems to be largely undocumented, was the following:

  1. The “SetupExecute” value in “HKLM\SYSTEM\CurrentControlSet\Control\Session Manager” is populated with “C:\Windows\System32\poqexec.exe /display_progress \SystemRoot\WinSxS\pending.xml” where Poqexec.exe is the “Primitive Operations Queue Executor” according to the file’s (resource) properties.
  2. The pending.xml file contains the information about what files are to be replaced as well as registry entries to create/set/update.
  3. In “%systemroot%\winsxs\Temp\PendingRenames” you will find a number of *.cdf-ms files which are referenced in the pending.xml file and are probably some kind of compiled manifest which presumably also contains the actual binary content for the file updates.
  4. The SetupExecute value is actually processed at shutdown and the launched poqexec.exe process is presumably responsible for displaying the Windows updates messages. This can be captured with good old Process Monitor (procmon), although it is tricky because, to view the trace properly, procmon must be terminated cleanly, which I achieved by having a cmd prompt running as SYSTEM on the console logon screen via the sethc “hijack” method and using the /terminate parameter to procmon.
  5. The Windows Modules Installer service (TrustedInstaller.exe) has its startup type changed from “manual” to “automatic” so that it starts up at the next boot.
  6. By the time the system shuts down, the SetupExecute value’s contents are empty suggesting that poqexec has done all it needs to. Indeed, if you use Process Monitor to monitor the subsequent boot, poqexec does not feature at all.
  7. At the next boot, TrustedInstaller.exe then does more work to apply the updates, which can be seen if you enable boot logging in Process Monitor. This also writes to the CBS.log log file and seems to get its run order from “HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing” and %systemroot%\servicing.
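The state described in the steps above can be spot-checked on a live system with a couple of reads; this is diagnostic only, and the value and file will simply be empty or absent when no servicing operations are pending:

```powershell
# Is a poqexec run queued for the next shutdown? (empty or missing when nothing is pending)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager' -Name SetupExecute -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty SetupExecute

# Does pending.xml currently exist?
Test-Path -Path "$env:SystemRoot\WinSxS\pending.xml"

# Startup type of the Windows Modules Installer service (flipped to Automatic pre-reboot)
(Get-Service -Name TrustedInstaller).StartType
```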

Here is a very small extract from a pending.xml file prior to a reboot after applying some Windows updates:




Logs for the above can be found in “%systemroot%\Logs\CBS\CBS.log” where “CBS” stands for “Component Based Servicing”. Older log files are also found in this folder but are compressed into .cab archives. There is also a log file “%SystemRoot%\WinSxS\poqexec.log” but I have never found anything even vaguely useful in there (yet).

If you try to run poqexec.exe manually from an elevated command prompt, it will fail:


which is because it is a “native” application which needs to be executed in a specific way – see this article from Mark Russinovich for some details on native applications. Native applications are launched using the (undocumented) native API function “RtlCreateUserProcess” found in NTDLL.DLL. I wrote a utility using this API to see if I could manually run poqexec.exe to process the renames and deletes, and potentially avoid a reboot, but unfortunately this does not seem to be possible, even if you can identify the components to be replaced and ensure that they are not in use. If you examine a system after Windows updates are applied and it is shut down (e.g. if it is a VM, mount the virtual disk in another VM), typically the PendingRenames folder is empty and the SetupExecute value is also empty, so some processing has definitely occurred.

Note that if you load the system registry hive of a machine that is not running, by using the Load Hive option in regedit on another Windows machine of at least the same operating system version or higher, then you won’t see a “CurrentControlSet” key. This is because it is created at boot, and deleted at shutdown, from one of the ControlSet* keys in HKLM\System. Look at the “Current” value in HKLM\System\Select to tell you which key will become CurrentControlSet at the next boot, although it is usually ControlSet001. This is also how booting into “Last Known Good Configuration” is implemented.
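On the running system you can check this with a one-liner (reading the live hive, not a loaded one):

```powershell
# Which ControlSet will be mapped to CurrentControlSet at the next boot (usually 1 = ControlSet001)
(Get-ItemProperty -Path 'HKLM:\SYSTEM\Select' -Name Current).Current
```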

If you want to have a play around with poqexec.exe, entirely at your own risk obviously, then I’ve written a simple tool that will allow you to do so. It is available here and requires the 64 bit Visual C++ Redistributable Package for Visual Studio 2013 available here. Run it as follows:

NativeLauncher.exe \??\c:\Windows\System32\poqexec.exe "/display_progress \SystemRoot\WinSxS\pending.xml"

poqexec util

If anyone can shed any more light on any of this, please feel free to share! I did find some “Quite Interesting” information about Windows servicing here whilst researching for this post, such as how to make CBS logs even more verbose.

Part 3 will cover how in use services and device drivers are flagged for removal at the next boot.