First Experiences with XenApp 7.8 & App-V 5.1


I’m currently working on a new XenApp rollout for a customer where we’ve been eagerly awaiting the 7.8 release so we could have a look at the App-V integration, given that it promised to remove the need for separate App-V server infrastructure.

I’m not going to go into details here of how you make App-V applications available natively in XenApp/XenDesktop as that is covered elsewhere such as here. That article also covers troubleshooting and how to enable logging.

How it appears to work

When the Citrix Desktop Service (BrokerAgent) starts on your XenApp server, it communicates with a Delivery Controller and writes the details to the “ApplicationStartDetails” REG_MULTI_SZ value in “HKLM\SOFTWARE\Policies\Citrix\AppLibrary”. Now why it writes to the policies key, when we’re not actually setting anything to do with App-V in policies, I don’t know, but a key is a key (unless it’s a value!). A typical line in this value looks like this:

56c1d895-e3d8-4dcc-a303-b0162a97c87b;\\ourappvserver\appvshare\thisapp\thisapp.appv;de0a5cd1-3264-4418-82dd-4bdf5959a29d;957c71c9-a732-401b-b354-17c493decac8;This App

Where the fields are semicolon delimited thus:

App-V App GUID;Package UNC;App-V Package GUID;App-V Version GUID;Published App Name
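A quick way to eyeball these entries is to split each line of the multi-string value on the semicolons. A minimal sketch, assuming the field order above (the property names in the output object are my own invention):

```powershell
# Read the ApplicationStartDetails REG_MULTI_SZ and split each entry into its fields
$key = 'HKLM:\SOFTWARE\Policies\Citrix\AppLibrary'
$details = Get-ItemProperty -Path $key -Name 'ApplicationStartDetails' |
    Select-Object -ExpandProperty 'ApplicationStartDetails'

foreach( $line in $details )
{
    $fields = $line -split ';'
    [pscustomobject]@{
        AppGuid     = $fields[0]
        PackageUNC  = $fields[1]
        PackageGuid = $fields[2]
        VersionGuid = $fields[3]   ## assumption - see the field list above
        AppName     = $fields[4]
    }
}
```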

The BrokerAgent then downloads all .appv packages that you’ve added to your delivery groups to the “%SystemRoot%\Temp\CitrixAppVPkgCache” folder. This happens regardless of whether App-V has been configured with a Shared Content Store. As it happens at boot, the packages should be locally cached by the time users who might want to run one of the published App-V applications log on, so you’re trading system drive disk space for speed of launch. I’ve yet to see how this impacts the PVS cache in RAM, so we may look at whether we can pre-populate the cache in the PVS master image so that we don’t lose write cache to .appv packages being downloaded when the image is booted into shared mode.

There is a gotcha here though: because Citrix use PowerShell to integrate with App-V, if your PowerShell execution policy does not allow local scripts to be run, such as being set to “Restricted” (the default), then the App-V integration will not work. This shows itself in the above cache folder not populating and apps erroring when launched. To get around this, we set the execution policy to “RemoteSigned” in the base PVS image so we didn’t have to rely on group policy getting applied before the BrokerAgent starts.
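Baking that into the image is a one-liner run elevated; the machine-wide scope shown is an assumption, adjust to suit your estate:

```powershell
# Allow locally created scripts to run whilst still requiring downloaded ones to be signed
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine -Force

# Verify which scope wins (GPO applied policies take precedence)
Get-ExecutionPolicy -List
```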

We’re giving users all of their applications via Receiver generated shortcuts, which is where the next small issue arises: the shortcuts that Receiver (actually SelfService.exe) generates for App-V apps run SelfService.exe, so effectively a new logon session is created, which can be seen by running quser.exe, to host the App-V application. Ultimately, Citrix call their own launcher process, CtxAppVLauncher.exe, which sits in the VDA folder and is installed with the VDA by default. This then uses PowerShell to launch the App-V application from the %AllUsersProfile%\App-V folder (using sparse files so disk space is efficiently managed). You do still need to have the Microsoft App-V client installed though, since that’s what runs the App-V package, as you’d kind of expect.

This second logon all takes time though, so we decided to cut out the middle man, SelfService.exe, and make the shortcut run CtxAppvLauncher.exe directly, which takes the App-V app GUID as its single argument. We do this with a PowerShell script, run at logon (actually via AppSense Environment Manager), that was initially designed to check that pinned Receiver shortcuts were still valid and to update their icons, as these are dynamically created, and named, at each logon (we’re using mandatory profiles). It was extended to find shortcuts for App-V apps, by matching the application name in the shortcut target with the data found in the “ApplicationStartDetails” registry value, and then to change them to run CtxAppVLauncher.exe, instead of SelfService.exe, passing the App-V app GUID found in that registry value.
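That shortcut surgery might look something like the following sketch. This is not the actual script – the launcher path, the shortcut location and the name-matching logic are all assumptions for illustration:

```powershell
# Sketch: repoint Receiver generated shortcuts at CtxAppVLauncher.exe
# The launcher path, shortcut folder and matching logic are assumptions
$launcher = "$env:ProgramFiles\Citrix\Virtual Desktop Agent\CtxAppVLauncher.exe"
$details  = (Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Citrix\AppLibrary').ApplicationStartDetails
$shell    = New-Object -ComObject WScript.Shell

Get-ChildItem "$env:AppData\Microsoft\Windows\Start Menu" -Recurse -Filter '*.lnk' | ForEach-Object {
    $shortcut = $shell.CreateShortcut( $_.FullName )
    ## Only touch shortcuts that currently launch SelfService.exe
    if( $shortcut.TargetPath -match 'SelfService\.exe' )
    {
        ## Match on the published app name - the last semicolon delimited field
        $entry = $details | Where-Object { $shortcut.Arguments -match [regex]::Escape( ($_ -split ';')[-1] ) }
        if( $entry )
        {
            $shortcut.TargetPath = $launcher
            $shortcut.Arguments  = ($entry -split ';')[0]  ## the App-V app GUID
            $shortcut.Save()
        }
    }
}
```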

It does seem slightly strange though that we’ve had to go to these lengths to create locally launched App-V apps although the results are quite impressive in that the apps launch almost instantly due to the caching.

There may be further posts on the App-V integration depending on what else we unearth. Looking at FTAs (File Type Associations) is definitely on the agenda.



The Taming of the Print Server

The Problem

A customer recently reported to me that sometimes their users complained (users complaining – now there’s a rarity!) that printing was slow.

The Investigation

When I logged on to the print server I observed that the CPU of this four vCPU virtual machine was almost constantly at 100% and, digging in with taskmgr, saw that it was mostly being consumed by four explorer processes belonging to four different users who were actually disconnected (in some cases, for many days). Getting one of these users to reconnect, I saw that they had a window open on “Devices and Printers” and, with over six hundred printers defined on this print server, my theory was that it was spending its time constantly trying to update statuses and the like for all of these printers.

The Solution

Log off the users and set idle and disconnected session timeouts so that this didn’t happen again! Well, that’s great, but what if the users are actually logged on, checking print queues and the like, as administrators have a tendency to do?

What we have to remember here is that consumption of CPU isn’t necessarily a bad thing as long as the “right” processes get preferential access to the CPU. So how do we define the “right” processes on this print server? Well, we know it’s probably not the explorer processes hosting the “Devices and Printers” applet so why don’t we ensure that these don’t get CPU so that more deserving processes can have CPU resource?

Therefore, what we do here is lower the base priorities of the explorer processes. This means that if a process (technically a thread in a process) arrives in the CPU queue with a higher base priority, then it gets on the CPU before the lower priority threads.

You can do this manually with taskmgr by right-clicking on the process and changing its base priority but that’s not the most fun way to spend your day although I may be wrong.

Explorer lower priority

So I wrote a few lines of good old PowerShell to find all processes of a given name, for all users, and change their base priorities to that specified on the command line. This must be run elevated since the “-IncludeUserName” parameter requires it. That parameter is used to filter out processes owned by SYSTEM or a service account – not that explorer processes are likely to be so owned – so that the script example can be used for any process, since we shouldn’t mess with operating system processes as that can cause deadlocks and similar catastrophic issues. Also, I would strongly recommend that you never use the “RealTime” priority as that could cause severe resource shortages if the process granted it is CPU hungry.

I first implemented this in the All Users Startup folder so it would run at logon for everyone once their explorer process had launched but I felt slightly nervous about this in case explorer got restarted mid-session.

I, therefore, implemented the script as a scheduled task that ran every ten minutes, under an administrative account whether that user was logged on or not, which looked for all explorer processes and set their base priorities to  “Idle”, which is the lowest priority, so when any spooler thread required CPU resource it would get it in preference to the lower priority explorer threads. However, if these explorer threads needed CPU and nothing else needed it then they would get the CPU, despite their low priority, so potentially there are no losers. Users might experience a slightly unresponsive explorer at times of peak load but that’s a small price to pay to get happier users I hope you’ll agree.

Param
(
    [string]$processName ,
    [string]$priority = "Idle"
)
## Must be run elevated as -IncludeUserName requires it
Get-Process -Name $processName -IncludeUserName | Where-Object { $_.UserName -notlike "*\SYSTEM" -and $_.UserName -notlike "* SERVICE" } | ForEach-Object { $_.PriorityClass = $priority }

So we just create a scheduled task to run under an administrator account, whether we are logged on or not, passing in a single positional/named parameter of “Explorer”, as the base priority defaults to “Idle” if not specified. If you implement this on a non-English system then you may need to change the account names above to match the local equivalents of “SYSTEM”, “LOCAL SERVICE” and “NETWORK SERVICE”. Job done, as we say in the trade.
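Creating the task can itself be scripted on Server 2012 and later; a hedged sketch, where the task name, folder, script path, account and password are all placeholders of my own:

```powershell
# Sketch: run the priority lowering script every 10 minutes, logged on or not
# Task name, path, script location and account are placeholders
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File C:\Scripts\SetPriority.ps1 Explorer'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 10) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName 'Lower Explorer Priority' -TaskPath '\MyTasks' `
    -Action $action -Trigger $trigger -User 'CONTOSO\svc-admin' -Password 'PlaceholderPassword' `
    -RunLevel Highest
```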

Oh, and I always like to create a folder for my scheduled tasks to keep them separate from the myriad of other, mostly built-in, ones.

task scheduler




The Curious Case of the Slowly Populating Start Menu

The Problem

When a user was logging on to a Citrix XenApp 7.7 session it was taking up to two minutes for Receiver 4.4 to populate their start menu. Given that this was the primary way of delivering applications from that desktop, this was a showstopper.

The Troubleshooting

Frustratingly, the issue was intermittent in that occasionally all of the shortcuts would be present as soon as Explorer made the Start Menu available although more often it was at least twenty seconds before they showed up and sometimes even longer.

I tried all the usual things including:

  1. Opening the configured StoreFront URL for the store in a browser in the session for the user – no problems
  2. Disabling StoreFront load balancing (with a hosts file entry on XenApp)
  3. Disabling the mandatory profile (rename WFDontAppendUserNameToProfile and WFProfilePath values in HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services – no need for messing about with GPOs although watch for Group Policy background refreshes)
  4. Disabling customisations we were making to the Receiver settings
  5. Disabling “Check for Publisher’s certificate revocation” and “Check for server certificate revocation” in Internet Properties
  6. CDF Tracing (resulting in a csv trace with over 138,000 lines!)
  7. Various different command line options to selfservice.exe (the handy SysInternals Strings.exe utility revealed some undocumented ones like -fastConnectLogon)
  8. SysInternals Process Monitor (procmon)
  9. Renaming the files for some of the launched Citrix processes, like ceip.exe (Customer Experience Improvement Program) and Win7LookAndFeelStartupApp.exe that featured in the procmon trace, so they wouldn’t run

Process Monitor showed that it was only the AuthManSvr.exe process (one of the many that SelfService.exe and SSOnSvr.exe launch for the user) that communicated with the StoreFront server and yet the IIS logs on the StoreFront server showed that this particular XenApp server (via its IPv4 address in the log) wasn’t generating any requests until a good twenty seconds, or more, after SelfService.exe was launched.

So I then set about finding out how to enable logging for AuthManSvr and SelfService, which didn’t take long with the help of Google – in HKLM\Software\Wow6432Node\Citrix\AuthManager, add a REG_SZ value “LoggingMode” set to “Verbose” and a “TracingEnabled” value set to “True”. SelfService logging requires a configuration file, which was enabled by temporarily adding the required file to the local mandatory profile.

Looking first in the SelfService.txt log file, located in %LocalAppData%\Citrix\SelfService, I looked for large gaps in the time stamps and found the following:

10/16:56:45 authman D CtxsJobL: Created AuthManager connection
10/16:56:45 dservice ? CtxsJobL: Do HTTP Request
10/16:56:45 dservice > CtxsJobL: {
10/16:57:05 dservice ? CtxsJobL: }00:00:20.0422895

Where the “10/” is the date so we have a twenty second gap and this correlates with the “00:00:20.0422895” which is presumably the exact duration of the request.

I then cross referenced this time to the AuthManSvr.txt log file, located in %LocalAppData%\Citrix\AuthManager\Tracing, and found this section:

02/10/16 16:56:45 > T:00002FA0 . . . . . . Trying proxy auto-detect (WPAD)
02/10/16 16:56:45 > T:00002FA0 . . . . . . {
02/10/16 16:56:45 > T:00002FA0 . . . . . . . CWindowsNetworkServices::TryGetAutoProxyForUrl
02/10/16 16:56:45 > T:00002FA0 . . . . . . . {
02/10/16 16:57:03 *WRN* T:00001D90 . . . . . . . WinHttpGetProxyForUrl call failed; last error=12180
02/10/16 16:57:03 < T:00001D90 . . . . . . }
02/10/16 16:57:03 < T:00001D90 . . . . . }
02/10/16 16:57:03 T:00001D90 . . . . . Using manual proxy config.
02/10/16 16:57:03 T:00001D90 . . . . . The manual proxy info settings contains an empty proxy list string.
02/10/16 16:57:03 T:00001D90 . . . . . No proxy info found
02/10/16 16:57:03 < T:00001D90 . . . . }
02/10/16 16:57:05 *WRN* T:00002FA0 . . . . . . . . WinHttpGetProxyForUrl call failed; last error=12180
02/10/16 16:57:05 < T:00002FA0 . . . . . . . }
02/10/16 16:57:05 < T:00002FA0 . . . . . . }

Having troubleshot an issue only last month where the Receiver was stalling whilst making a connection to a published application, which turned out to be caused by “Automatically Detect Settings” in Internet Properties being ticked, I reckoned we were seeing a similar issue since WPAD, the mechanism used for autodiscovery of proxy settings, is mentioned above.

The Solution

I therefore set about proving this theory by getting the “Automatically Detect Settings” tickbox unticked at logon before AuthManSvr.exe is launched. Unfortunately it’s not straightforward, as there isn’t a GPO setting for it, and if you monitor the registry when you change the setting manually (using procmon with a filter set on “Operation” is “RegSetValue”) you’ll find it’s changing a single bit in a large REG_BINARY value. As I was using AppSense Environment Manager, I knocked up the following PowerShell in a custom action in a logon node:

[string]$regKey = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections"
[string]$regValue = "DefaultConnectionSettings"

$current = Get-ItemProperty -Path $regKey -Name $regValue | Select-Object -ExpandProperty $regValue

if( $current -ne $null -and $current.Count -gt 8 )
{
    $old = $current[8]
    ## Clear bit 0x08 of byte 8, which is "Automatically Detect Settings"
    $current[8] = $current[8] -band 0xF7

    if( $current[8] -ne $old )
    {
        Set-ItemProperty -Path $regKey -Name $regValue -Value $current
    }
}

And that, folks, was how I spent my day today! The joys of being a consultant!

I hope it helps someone else solve this problem as I did find other reports of this issue in my searches on the web for a ready made solution (if only!).

Poor man’s Hyper-V cloning from VHD/VHDX

If you use Server 2012 Hyper-V but don’t have the luxury of System Center Virtual Machine Manager to do provisioning of new virtual machines for you then I offer here a PowerShell script, with GUI, that will do it for you from a master VHD/VHDX. I’ll also show you how to integrate it into explorer so you can just right click on a VHD/VHDX file and clone it from there.

I use Hyper-V at home and have SysPrep’d VHDs of all the Operating Systems I typically use but let’s not get into the debate, started by one Mark Russinovich, as to whether we actually need to SysPrep anymore – I’m old school so I do it out of habit more than anything.

So what we need to do to use an existing VHD to create a new VM is:

  1. Copy the VHD/VHDX to a new file or create a new differencing disk from it (aka a linked clone)
  2. Create a new VM with the new VHD/VHDX file as its disk
  3. Connect the VM to a network (optional)
  4. Start the VM (optional)
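The steps above map directly onto the Hyper-V cmdlets. A simplified sketch of what the script does under the bonnet, where the paths, VM name and switch name are examples only:

```powershell
# Sketch of the cloning steps - names and paths are examples only
$source = 'G:\Masters\Server2012R2.vhdx'
$dest   = 'G:\Hyper-V\Hyper-V Hard Drives\My new VM.vhdx'

## 1. Copy the disk, or create a differencing ("linked clone") child of it
Copy-Item -Path $source -Destination $dest
## or: New-VHD -Path $dest -ParentPath $source -Differencing

## 2. Create the VM with the new disk attached
$vm = New-VM -Name 'My new VM' -VHDPath $dest -MemoryStartupBytes 2GB

## 3. Optionally connect it to a virtual switch
Get-VMNetworkAdapter -VM $vm | Connect-VMNetworkAdapter -SwitchName 'External virtual switch'

## 4. Optionally start it
Start-VM -VM $vm
```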

We start this process by right clicking the VHD that we want to copy in explorer:

Right click vhd

and selecting the “Create VM” option – a registry key I’ve created, which we’ll cover later, that runs the PowerShell script with the VHD/VHDX file as one of the arguments.

This then gives the user interface constructed on the fly by PowerShell:

hyper-v vhd cloner

All we have to do at this point, if we are happy with the defaults, is to enter the new virtual machine name and either click the “Create” button, hit Alt C or press enter. Note that the VHD/VHDX destination folder is pre-populated with my personal default so I don’t have to type it in or use the folder browser – this is done via the registry value we’ll see later, and we can set other defaults in there too if we wish. The name of the new VHD/VHDX file will be the machine name that you have entered above, with a “- <number>” suffix if you create more than one VM at a time. If the destination VHD/VHDX file already exists then an error will be shown unless the “-force” option is specified, which will cause the VHD/VHDX to be overwritten, so use it wisely.

Depending on your storage, the file copy for a non-linked clone may take a while but you’ll find a PowerShell window behind the GUI with some rudiments of a progress indicator in there. You will also get a message box upon completion unless you specify the -quiet switch.

cloned ok

Also, if you run it as an administrator but with UAC enabled – what I like to call the “sensible” option when running as an administrator, which I avoid as much as I can anyway – the script will detect this and run itself again, elevated, so you should expect to see a UAC elevation prompt. Without elevation, it will probably fail due to lack of permission.

I’ve tested it just on Server 2012R2 English with PowerShell 4.0 and it certainly works reliably for me but I thought I’d share it lest it helps anyone else save time when creating new VMs from VHD/VHDX files. It won’t work on Server 2008/2008 R2 as they don’t have PowerShell support for Hyper-V (out of the box anyway).

It’s quick to use too, as it has been designed to be used via the keyboard, so you can tab through the fields and hit “enter” to start the creation process at any point. There are also keyboard accelerators – activated by hitting the “Alt” key. Hitting the escape key will quit immediately.

Note that it won’t work with AVHD/AVHDX files or snapshots as the disk cloning is done completely outside of Hyper-V – it just uses the Hyper-V PowerShell cmdlets to check if a VM of the name you entered exists already and if not to create a new VM, set the generation and number of processors and start it if requested.

The script can be downloaded here and is used entirely at your own risk. Any error messages will be displayed in a message box.

To integrate it into explorer, create a new key with the name you want in your right click explorer menu (mine is “Create VM” as shown above) in “HKEY_CLASSES_ROOT\Windows.VhdFile\shell”. In the key you create, create another key called “command” and set the default value in this “command” key to be something like this (obviously substituting your preferred default destination folder for my G: drive folder example or just leave out the -Folder option if you are happy to always enter or browse to a folder):

powershell.exe -command "& 'c:\scripts\Clone VHD.ps1' -Verbose -ShowUI -Folder \"G:\Hyper-V\Hyper-V Hard Drives\" -SourceVHD \"%1\""

So it looks like this in regedit – make sure you get all the single and double quote marks correct otherwise spaces in any argument will stop it working:

vhd association

A .reg file can be downloaded here – just change the path in it to wherever you save the PowerShell script and the argument to the -Folder switch to your default folder for Hyper-V hard drives to be stored.

The script can also be run without the GUI in a PowerShell session, e.g.:

& 'c:\scripts\Clone VHD.ps1' -SourceVHD "c:\folder\source.vhdx" -Name "My new VM" -Switch "External virtual switch" -Start -LinkedClone -Folder "d:\hyper-v\hyper-v hard drives"

where you need to change paths and virtual switches to match your system. Full help is available for it via Get-Help.

If the script won’t run due to the PowerShell execution policy then run “Set-ExecutionPolicy -ExecutionPolicy RemoteSigned” as an administrative user but please understand the implications of this before doing so. If it still won’t set the execution policy because of group policy then as an administrator change the “ExecutionPolicy” value in “HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell” to “RemoteSigned” although this is only a temporary workaround as it will eventually reapply the original value due to group policy refresh.

Advanced Procmon Part 2 – Filtering inclusions

In part 1 I showed some of the exclusions I would typically add to procmon’s filters to help reduce the volume of data you have to sift through. Having been working on a customer site last week where I had to try to find the cause of a disappearing “My Documents” folder, I think that it’s time to finally write part 2 so here goes.

What I had been asked to troubleshoot was why the hidden attribute on the user’s Documents folder, which was being redirected to a folder on the user’s H: drive (a DFS share, not that this is relevant other than it not being a local folder), was being set during a user’s XenApp session, which made it rather difficult for them to work.

The first thing to check was what was doing the folder redirection, which is where the first procmon filters were used. I spend a lot of time trying to trace where a Windows application, or the Operating System itself, stores a particular setting so that I can then automate its setting in some way. When it boils down to it, a setting is almost always going to be stored either in the registry or in a file, which means we can use procmon to filter on the setting of the registry value or the writing of a file.

For example, let’s use procmon to see where Notepad (one of my favourite and most used apps) stores the font if you change it. I set my include filters as shown below – note that the excludes don’t really matter at this point:

procmon notepad filters

I then change the font to “Candara” in the Format->Font menu and click “OK”.

notepad candara font

What we notice is that procmon doesn’t show anything after I click “OK” so have I got the filters wrong or is it saving it somewhere else (a network database for instance)? No, what we’re seeing is the result of a developer who has decided that changed settings will only get written back when the application exits normally rather than as soon as they are changed. It’s more efficient this way but does mean that changed settings won’t get written back if the application exits abnormally like if it crashes or it is terminated via task manager.

Note that the “WriteFile” operation isn’t always listed in the drop down list so if this is the case, select “IRP_MJ_WRITE” instead.

So after I exit notepad, procmon then shows me the following, where I’ve highlighted the line in which we can see the font being set to what I picked previously.

notepad font change

And there we have it – how to find where an application/process stores its settings, if it persists them in a human readable form of course – it might encrypt them or store them as Unicode, although the latter is relatively easy to spot, if not search for, as you’ll see zero byte padding for every character if you look at the data in regedit, as shown below for a value in my mail profile.

unicode string data

What you might have to contend with is an application, like a Microsoft Office application for instance, writing a large number of settings in one go, rather than just the one you changed. What I do here, when the entry is a text entity like a folder name or some arbitrary string data, is to specify a string that is unlikely to be used anywhere else so I can either search for it in procmon or in the registry or file system directly. For instance last week I needed to know where Excel 2013 stored the folder that you specify in the “At startup, open all files in” setting in the Advanced->General options so I specified it as “H:\Elephant” and went on an elephant hunt (ok, search) …

So back to the tale of the hiding of the Documents folder. Using the above technique and because I know that redirected folder settings are written to “HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders” (and potentially “Shell Folders” in the same key but the former takes priority if the value exists in both keys), I set the procmon filters to the following in an admin session on my customer’s test XenApp server:

shell folders filter

At this point I didn’t know what the process was that was setting it so I didn’t filter on a specific process. Note that if multiple users had logged on during the procmon trace then it may have confused matters so you can filter on the session id too within procmon should you deem it necessary. You can right-click on the column headers in procmon, select “Select Columns” and then add columns like session id and user name although be careful if filtering on the latter as it may be a system process doing what you’re investigating rather than one running as that user.

procmon columns

This procmon trace told me that it was a process called EMUser.exe that was setting the “Personal” (Documents/My Documents) registry value which is part of the AppSense Environment Manager product. I checked the AppSense configuration and it was indeed setting the redirection but it wasn’t changing the attributes in any way that I could see.

The next part of the puzzle was to figure out what I needed to filter on to be able to spot the changing of the attributes to hidden. This I did with the help of the good old attrib utility, by creating a test folder and then running attrib to set and unset the hidden attribute on this folder so I could see what procmon reported.

attrib filter

Note my use of “echo %time%” – it can be a good idea to know what time you performed an operation in case there are many entries in a trace – I sometimes wait for the taskbar clock to hit an exact minute before I click something I want to trace the consequences of so I can search for that time in the log (and then exclude lines before that point if necessary by right clicking on that line and selecting “Exclude Events Before”).

So what we learn from the above is that when the hidden attribute is set, it will be the operation “SetBasicInformationFile” with “FileAttributes” in the “Detail” column containing the letter “H” since we see “HSDN” when I set hidden and system and we don’t see “H” when I subsequently remove the hidden attribute.
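The experiment itself only takes a few lines to reproduce whilst procmon is capturing; the test folder path here is just an example:

```powershell
# Reproduce the attribute changes whilst procmon is capturing
New-Item -Path C:\Temp\AttribTest -ItemType Directory | Out-Null
Get-Date -Format 'HH:mm:ss.fff'   ## note the time so it can be found in the trace
attrib +h +s C:\Temp\AttribTest   ## shows as SetBasicInformationFile with "H" in FileAttributes
attrib -h C:\Temp\AttribTest      ## and again, this time without the "H"
```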

Back on the trail of the hidden documents folder, my filters became thus:

hidden attribute setting filter

If “SetBasicInformationFile” is not present in the drop down list then pick “IRP_MJ_SET_INFORMATION” instead.

I then logged on to the XenApp server with my test user account and very quickly noticed that the Documents folder was not visible, so I stopped the trace at this point and found that it was Lync.exe (Communicator) that had set the hidden attribute. Next I reset the filter and filtered just on Lync.exe so I could try to figure out what Lync was doing just before it decided to set the hidden attribute. What I found was that it was querying the “Cache” value in the aforementioned “User Shell Folders” key. Looking at this value, I noticed that it too was set to “H:\Documents”, which didn’t feel right given that it is normally set to the temporary internet files folder in the user’s local profile.

This is where you have to try and second guess developers or at least come up with a theory and then prove or disprove it. My theory was that Lync was querying what the Cache folder was set to, since it needed to use it, but had decided that it was a system folder that a user shouldn’t see so set it to hidden.

So what I did to try and both prove and fix this was to change the AppSense configuration to redirect the cache folder back to the temporary internet files folder in the user’s local profile (so “%USERPROFILE%\AppData\Local\Microsoft\Windows\Temporary Internet Files”). I did this and the documents folder no longer became hidden.
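For anyone not using AppSense Environment Manager, the raw equivalent at logon would be something like the following (note that User Shell Folders values are REG_EXPAND_SZ, hence the explicit type):

```powershell
# Point the Cache shell folder back at the local profile's temporary internet files
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' `
    -Name 'Cache' -Type ExpandString `
    -Value '%USERPROFILE%\AppData\Local\Microsoft\Windows\Temporary Internet Files'
```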

Thank you procmon – it would have been very difficult without you!

Advanced Procmon Part 1 – Filtering exclusions


For those of us who’ve been around the IT block a few times, we can remember life before Procmon, and filemon and regmon its honourable parents, when it was much, much harder to diagnose some issues – although it still can’t tell you everything, unlike, say, an API monitor. Anyway, this article will hopefully teach a few of you some extra filtering techniques that I’ve learned over many years of troubleshooting. The problem is frequently that you can’t see the wood for the trees: so many events are captured that you can’t find the ones you want amongst the many thousands of irrelevant ones.

Once you have configured filters, they can be exported to file via the File menu for subsequent importing at a later date or quickly configuring on another machine. I’d also select “Drop Filtered Events” from the Events menu since this will require less storage and resources although it does what it says on the tin so you won’t be able to see any of these dropped events if you later realise that you did actually want them.

Also, I always configure procmon to use a backing file rather than let it use the default of virtual memory as I believe that is usually less impactful of system resources, particularly when traces have to be run for a long time. This is on the File menu.

Results that can (usually) be safely ignored


Buffer overflow

You are almost certainly not seeing malicious code attempt a stack smashing hack; what is most likely happening is that the developer of the code that has had this result returned is trying to establish how big a buffer he (or she) needs to allocate in their code in order to have the data returned that they require. This is because a developer doesn’t know in advance, for instance, how many registry values there will be in a key that he needs to enumerate, so he’ll call the enumerate API with a zero length buffer which the API will interpret as a request to return the size of buffer needed to hold all of the data. The developer can then dynamically allocate a buffer of this size (and free it later when he’s finished with the data otherwise a memory leak will ensue) and then call the same API again with this buffer. You will usually see a procmon entry with all the same details very soon after the “buffer overflow” one with a result of “success”. Many Microsoft APIs function this way.

buffer overflow


No more entries

These will happen when APIs that enumerate entries have been used, such as those listing the keys or values within a specific registry key. It is the API signalling to the developer that he should stop enumerating the item as there is no more data to be had.

In the example below we see that the Receiver process has been enumerating the per user installed products and after the fourth key (indexes start at zero not one) there are no more keys so the RegEnumKey API signals this by returning “no more entries”.

no more entries

Operations that can be ignored


Unless you are troubleshooting a handle leak, or a failed access where an operation on a file or registry key fails because the handle has been closed, close operations can generally be ignored. Diagnosing handle leaks in procmon is fairly pointless unless you have source code access for the suspect code anyway, and I find Process Explorer better for that since it will give you a list of the handles. We can therefore add the following to our filters:

procmon filter

There are probably more depending on exactly what it is you are trying to diagnose but these are the ones I typically start with to try and reduce the number of entries in a trace to make it easier to find what you are looking for. Remember also that you can right click on any item in a trace and select to exclude it which I do when I see an item appearing frequently in a trace that I reckon is nothing to do with what I am troubleshooting:

dynamic exclude 2

I tend to do this with writes to log files although sometimes it can be very useful to tie in a specific log file entry to other procmon activity – see here for a utility I wrote to quickly show you the text in a file at a given offset that you get from procmon.

Once you’ve found what you are looking for, or at least say the start of the area where the event of interest occurs, you can exclude all events before this point by right clicking on a specific line and selecting “Exclude Events Before” from the menu. This can make a large procmon trace quicker to search and filter further.

Processes that can be ignored

Unless investigating something that may be caused by your anti-virus product, I exclude all of its processes. The same goes for other file system filter or hook based products that your gut tells you aren’t relevant, such as the AMAgent.exe process which is part of AppSense’s Application Manager product.

Although the “System” process is ignored by default, I will sometimes unignore it if say I’m troubleshooting a service start issue at boot (where using the “Enable Boot Logging” option from the “Options” menu is handy and also for logon problems on desktop operating systems like Windows 7).

… and don’t forget that Ctrl-L is the shortcut to the filter dialog – use it and it will save you time if you are frequently tweaking filters as I tend to do.

Part 2 will cover filter includes for specific use cases such as trying to find where a particular application setting is stored in the file system or registry.

Teeing up your output

I was on customer site this week running some good old cmd scripts on a Windows Server that output to standard output, or stdout as we say in the Linux/Unix world, so the results showed in the cmd prompt that I’d run them from. But what I also needed, as well as seeing the results as they happened, was to record the output to a file in case I needed to refer back to any specific piece of it later, given that it was running against nearly 6,000 users.

Enter the Unix/Linux “tee” command that outputs to standard output as well as to a file. This is available through packages such as Cygwin but when on customer site you typically need access to a standalone executable that doesn’t need installing that you can just copy to the required system and run. To that end, here is my implementation of the “tee” utility that I wrote many years ago that will serve the purpose required.

At its simplest, it just takes an output file name that is overwritten with whatever it is piped to it so is invoked thus:

yourscript.cmd your parameters | tee outputfile.log

If you also want to catch errors then you need to do the following which is “stolen” directly from Unix shell scripting:

yourscript.cmd your parameters 2>&1 | tee outputfile.log

This works because “2” is the file number for standard error (stderr) and “1” is the file number for standard output (stdout).

By default it overwrites the output file but specify a -a argument and it will append to an existing file and use -b if you have long line lengths as it reads a line at a time. Run with -? to see a usage message.

You can download the tool here but as ever it comes with absolutely no warranty and you use it entirely at your own risk (not that there’s any risk but I have to say that, don’t I?).

Yes, I know that tee, or tee-object, exists in PowerShell 3.0 and higher but sadly we still can’t guarantee that we have this on customer systems, much as I like PowerShell.
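For completeness, on systems that do have PowerShell 3.0 or later, the equivalent of the second example above would be:

```powershell
# Tee-Object writes to the console and to a file simultaneously
yourscript.cmd your parameters 2>&1 | Tee-Object -FilePath outputfile.log
# add -Append for the equivalent of this tool's -a option
```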