Install at Boot on Citrix XenApp PVS Servers

The Problem

Whilst designing a XenApp 7.x infrastructure for a customer, a requirement surfaced that they wanted to be able to update their in-house developed applications, of which there were three separate ones, written by three separate teams, on a weekly basis. Given that they did not have App-V packaging skills, weekly repackaging would have been cost prohibitive, so we looked at using Citrix AppDisks so that we could hand Server 2012R2 virtual machines over to the customer with AppDisks in write mode, let them install and test their apps and then hand them back to us for sealing and promotion to UAT and thence Production. Unfortunately, we hit a number of issues with the AppDisk technology which meant that we had to seek an alternative way of delivering these applications.

The Solution

Using a computer startup PowerShell script assigned via Group Policy to the OU where the servers earmarked for the customer’s applications resided, I came up with a robust mechanism which would install the applications during boot. The Server 2012R2 VMs with the Citrix VDA installed, which were providing published applications and desktops to the end users, were booting from Provisioning Services with Cache in device RAM and overflow to local disk so we therefore needed to ensure that the installation of the software didn’t use the RAM cache as this could have had a negative effect on performance. It was deemed too high risk to have the software explicitly installed to a drive other than the system drive, namely C:, even if the installers allowed it, particularly as some of the required prerequisite packages were quite old and out of support. We therefore added another virtual hard disk to the VMs which the installation script would find, format and then create symbolic links to folders on the additional drive from the installation folders on the C: drive so that their software could be installed without using C: drive space.

The customer’s previous XenApp 6.x implementation had sprawled to over twenty different PVS vDisks, including the ones the developers used, which had been continuously in private, writeable, mode for many years and so were very different from the rigidly change controlled production images. The new solution therefore needed to keep the number of vDisks to an absolute minimum, so the install-on-boot apps would install on the single base image that was also used for COTS (Commercial Off The Shelf) apps.

Solution Detail

Support for Multiple Applications

Since there were three distinct applications which required different installers to be run, with possibly more applications to come, I tried to keep the complexity low, particularly in terms of scripts, since maintaining multiple scripts could prove problematic for BAU operations. I therefore used AD groups to designate which XenApp servers got which applications, where the application name was part of the group name, so it was simple, via good old regular expressions, to figure out which application a server should be getting by enumerating its AD group memberships via ADSI (so that the Active Directory PowerShell module isn’t required):

$filter = "(&(objectCategory=computer)(objectClass=computer)(cn=$env:COMPUTERNAME))"
$properties = ([adsisearcher]$filter).FindOne().Properties

## We will get computer group memberships to decide what to install
if( $properties )
{
    [string[]]$groups = @( $properties.memberof )
    if( ! $groups -or ! $groups.Count )
    {
        ## Already determined via computer name that we are a custom app server so having no groups means we do not know what to install
        $fatalError = "Failed to get any AD groups via $filter therefore unable to install any custom applications"
        throw $fatalError
    }
    else
    {
        Write-Verbose "Found $($groups.Count) groups via $filter"
        ForEach( $group in $groups )

This was preferred to using different OUs for the different apps as it meant only one copy of the startup script was required and the chances of a server being in the wrong OU were reduced, especially in Production where a single OU was used both for servers providing these custom apps and for those not providing them. The script differentiated between the two types of server via NetBIOS name, as a different naming convention was used for those providing custom apps, so the startup script could exit almost immediately if the NetBIOS name was not that of a custom apps server and thus not delay the boot noticeably.

In order to keep it simple from a Citrix Studio perspective, we used Citrix tags to designate each application so that we could use a single Delivery Group for all of the custom applications and use tag restrictions on Application Groups to direct them to the correct XenApp server. This was in preference to using multiple Delivery Groups. There did not unfortunately appear to be an easy and reliable method of getting tag information on the XenApp server during boot otherwise we would have considered just using tags as opposed to tags and AD groups.

Disk Preparation

The startup script enumerates local disks by way of the Get-Disk cmdlet, filtering on a friendly name of “VMware*”, since that is the hypervisor in use, and where the disk size is 4GB or less so that it does not try to use the non-persistent system drive or the persistent drive used for the PVS overflow disk. These filters are passed to the script as parameters with defaults to make the script more easily repurposed.

$disks = Get-Disk -FriendlyName $diskDriveName | Where-Object { $_.Size -le $agileDriveSize }
if( ! $disks -or $disks.GetType().IsArray )
{
    $fatalError = "Unable to find single drive of type $diskDriveName of size $($agileDriveSize / 1GB)GB or less"
    throw $fatalError
}

If the disk’s partition style is “raw” then it is initialised otherwise all of its partitions are deleted in order to remove all vestiges of any previous application installation.

if( $disks.PartitionStyle -eq 'raw' )
{
    $disks = $disks | Initialize-Disk -PartitionStyle MBR -PassThru
}
else ## partitions exist so delete them
{
    $disks | Get-Partition | Remove-Partition -Confirm:$false
}

The drive is then formatted and a drive letter assigned, although the drive letter is not hard-coded, which removes the chance of clashing with other disks and optical drives.

$formatted = $disks | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel $label -Confirm:$false

if( ! $? -or ! $formatted -or [string]::IsNullOrEmpty( $formatted.DriveLetter ) )
{
    $fatalError = "Failed to format drive with label `"$label`""
    throw $fatalError
}

Finally, permissions are changed on the root of the new drive since non-admin users get write access by default which is not desired (or secure!).
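
Exactly how the permissions were changed isn't shown here, but a minimal sketch along these lines, assuming the $formatted object from the format step above, would do it (this is illustrative, not the exact code used):

## Strip the write access that standard/authenticated users get by default on the root of the new volume
$root = "$($formatted.DriveLetter):\"
$acl = Get-Acl -Path $root
ForEach( $ace in @( $acl.Access | Where-Object { $_.IdentityReference -match 'Users$' -and $_.FileSystemRights -match 'Write|Modify|FullControl' } ) )
{
    $null = $acl.RemoveAccessRule( $ace )
}
Set-Acl -Path $root -AclObject $acl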

As the underlying storage tier had built-in deduplication, we hoped, and believed, that the overhead of, say, ten instances of this additional disk for one application would be nowhere near ten times the size of the data written to a single disk.

If using the XenDesktop Setup Wizard within the PVS console to create one or more VMs with additional disks, ensure that the “UseTemplateCache” registry value is set to 0 on the machine where the PVS MMC snap-in is run, and that it is set before the mmc process is started, otherwise the additional disks in your template VM will not appear in the newly created VMs. See this article for more information.

[Screenshot: the UseTemplateCache registry value]

Symbolic Links

Before the installation script for the custom applications can be run, symbolic links have to be created from the system (C:) drive to the newly created drive in order to preserve the PVS RAM cache. This was achieved by having a simple text file for each application which had one folder per line, where the folder was the name of a location on C: where that app would install software components. For instance, a line in the text file might be “\ProgramData\Chocolatey”, which would result in the following PowerShell code being executed with $dest = “D:\ProgramData\Chocolatey” and $source = “C:\ProgramData\Chocolatey”:

if( ! ( Test-Path -Path $dest -PathType Container -EA SilentlyContinue ) )
{
    $newDir = New-Item -Path $dest -ItemType Directory -Force
    if( ! $newDir )
    {
        $fatalError = "Failed to create folder $dest"
        throw $fatalError
    }
}
## This requires PowerShell 5.0+
$link = New-Item -ItemType SymbolicLink -Path $source -Value $dest
if( ! $link )
{
    $fatalError = "Failed to create link $source to $dest"
    throw $fatalError
}

Where D: is the drive letter that was assigned to the 4GB additional drive.

One slight hiccup we encountered was that although the startup script was running with administrative privilege, it still didn’t have sufficient privilege to create the symbolic links (which, via New-Item, also require PowerShell version 5.x). I therefore added functionality to the script to take a local copy of the running script and elevate it to run under the system account using the SysInternals psexec utility.
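
A minimal sketch of that re-launch, assuming psexec.exe is available on the path (the argument list here is illustrative, not the author’s exact command line):

## Copy the running script locally and re-run it as SYSTEM via psexec
$localCopy = Join-Path -Path $env:TEMP -ChildPath ( Split-Path -Path $PSCommandPath -Leaf )
Copy-Item -Path $PSCommandPath -Destination $localCopy -Force
Start-Process -FilePath 'psexec.exe' -ArgumentList "-accepteula -s powershell.exe -ExecutionPolicy Bypass -File `"$localCopy`"" -Wait -WindowStyle Hidden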

Local Administrator Access

In order for this solution not to be onerous on BAU support, and to avoid introducing delays for the customer’s developers, specific AD groups were added to the local administrators group during boot if the server was a development one, as opposed to UAT or Production. Whether a server was a development one, which the customer had full control over in terms of rebooting since it was booting from a read-only PVS vDisk, was simply gleaned from the string “_DEV_” appearing at an expected position in the AD group name that was also used to determine which application was to be installed on it. On UAT and Production servers, no changes were made to the local administrators group membership.
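
Since the script already avoids the Active Directory module, the group addition can be done via ADSI too; a minimal sketch, assuming $group holds the AD group name determined earlier and that the server has already been identified as a development one:

## Add the customer's AD group to the local Administrators group (development servers only)
( [ADSI]'WinNT://./Administrators,group' ).Add( "WinNT://$env:USERDOMAIN/$group,group" )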

There is a script here which uses the same code to change local group membership.

Application Installation

From our perspective, the installation of the customer’s applications was straightforward as they were responsible for the application installation scripts, so once the additional drive was ready the startup script simply had to call the customer’s script, checking for any exceptions so these could be percolated up into our error handling mechanism. Having determined which of the three applications was to be installed on a particular XenApp server by way of its AD group membership, as previously described, the startup script would simply invoke the “XenApp.Boot.Time.Installer.ps1” script in the folder for that application, or throw a fatal error if the script did not exist, could not be accessed or raised an exception itself.
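
In essence the call looks something like this minimal sketch, where $applicationFolder (an illustrative name) points at the share folder for the application determined earlier:

$installer = Join-Path -Path $applicationFolder -ChildPath 'XenApp.Boot.Time.Installer.ps1'
if( ! ( Test-Path -Path $installer -PathType Leaf ) )
{
    throw "Installer script $installer not found or not accessible"
}
try
{
    & $installer   ## run the customer's installation script
}
catch
{
    throw "Installer script $installer raised an exception: $_"
}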

In the development environment, where the customer had full control over the share containing their install script and Chocolatey packages and also had local admin access to the XenApp servers hosting their applications, the customer could test their own script whenever they wanted by simply rebooting their development servers.

In the UAT and Production environments, the customer had no local administrator access to the servers or write access to the share containing their install script and packages – a change request was required in order for us to copy the files from the development shares into the production ones, albeit via another PowerShell script with simple prompts to reduce the risk of BAU operations making incorrect changes. The Delivery Groups containing these servers had a weekly reboot set to commence at 0200 on a Sunday.

One of the few limitations of this approach is that any software being installed that requires a server reboot cannot be supported since the reboot would cause the changes to be lost, because they are PVS booted in shared mode, and a cycle of infinite reboots would ensue. However, the nature of the applications, particularly as they would not be in use during installation as the server would be starting up, meant this was unlikely to happen. It would be more likely that a change to a required component, such as the .NET Framework, would require a reboot which is covered later.

Multiple Datacentre Considerations

The whole solution was provided out of two datacentres for resiliency, so the solution for these bespoke apps also had to fit within this. To that end, two shares were provided for the UAT and Production environments such that the startup script would examine the IPv4 address of the booting XenApp server and select the installation share local to that datacentre, with a mechanism to fall back to the share in the other datacentre if the local one was not available. Data was replicated between the two shares by the script that BAU operations would use when promoting the customer’s scripts and packages from Development after a change request had been approved. This script had its own sanity checking, such as ensuring that there actually were changes, additions or removals to the files being promoted (it kept a copy of the previous installation files), achieved by checksumming files rather than just relying on comparison of file sizes and datestamps.
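
A minimal sketch of the datacentre-local share selection with fallback; the subnet prefix and share paths here are illustrative, not the real ones:

$ipv4 = ( Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.IPAddress -notmatch '^(127\.|169\.254\.)' } | Select-Object -First 1 ).IPAddress
## prefer the share in the local datacentre, fall back to the other one
[string[]]$shares = if( $ipv4 -match '^10\.1\.' ) { '\\dc1fs\CustomApps$' , '\\dc2fs\CustomApps$' } else { '\\dc2fs\CustomApps$' , '\\dc1fs\CustomApps$' }
$installShare = $shares | Where-Object { Test-Path -Path $_ } | Select-Object -First 1
if( ! $installShare )
{
    throw "Unable to access the install share in either datacentre"
}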

At the Citrix level, there were two machine catalogues containing the bespoke app servers – one for each datacentre although this did not affect the startup script at all other than as mentioned in the previous paragraph.

Testing PVS vDisk Changes

Since the bespoke app servers boot off the same vDisk as normal XenApp servers, in order to reduce complexity, regular updating and patching of the base disk occurs which also needs to be incorporated into the bespoke apps to ensure they continue functioning as expected. In the development environment, the bespoke app servers are not subject to an automated reboot schedule, unlike the normal servers, since the developers are in control of their servers. It was agreed with the customer that there would be a ten working day grace period from when a new vDisk version was put into production before the development bespoke app servers would be forcibly rebooted by BAU operations if any had not been rebooted already. In order to automate this, a “message of the day” (MOTD) mechanism was implemented which launches notepad with a text file when an administrator logs on, where the text file contains the date by which the reboot needs to be performed and the reasons why. The creation of this startup item and the contents of the text file are part of the script used to promote vDisks, so it is all automated, and when a server is rebooted the startup item is removed by the startup script so that erroneous reboot notifications are not shown at admin logon. A scheduled task could have been created to perform the forced reboot after ten days, and removed at boot by the startup script, but the customer did not want this functionality automated.

As mentioned earlier, there are also potentially customer instigated PVS base disk changes, where the developers need to test a change to one of their components which requires a reboot, such as the .NET Framework or database drivers. This is catered for in development by making the development servers boot off a copy of the main vDisk which, most of the time, is identical but can have a new version created if required. Once the new version with the potential changes has been created and promoted to a production version, just for the development servers, the customer can then test their applications and subsequently either raise a change request for the change to be incorporated into the main vDisk, which will eventually be promoted to UAT and Production, or ask for the changes to be reverted, as in they don’t need them, in which case the development servers are rebooted off the previous PVS production version and the newer disk version discarded.

Error Handling and SCOM Integration

Everything is error/double checked, with exception handlers writing to a log file which is the first port of call for troubleshooting. If a fatal error, as in an exception, is detected, the Citrix broker agent service is stopped and disabled by the script so that the affected server cannot be accessed by end users, since it cannot register with a Delivery Controller, in case the application isn’t available at all or is in an inconsistent state.
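
A minimal sketch of that last step, assuming the Citrix Desktop Service (the broker agent) has the service name BrokerAgent:

## Stop and disable the broker agent so the server cannot register and accept user sessions
Stop-Service -Name 'BrokerAgent' -Force -ErrorAction SilentlyContinue
Set-Service -Name 'BrokerAgent' -StartupType Disabled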

A pair of events was implemented such that a successful boot and installation would log an information event and a fatal error would raise an error event, with descriptive error text and a different event id, locally on the server. As SCOM was being used, a custom monitor and thence alert was implemented so that the BAU support team had pro-active notification of any installation failures, as these would need investigating and rectifying (which would generally mean another reboot was required once the issue had been identified and resolved).
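
The event logging amounts to something like this minimal sketch; the source name and event ids are illustrative, not the ones actually used:

$source = 'CustomAppInstall'
if( ! [System.Diagnostics.EventLog]::SourceExists( $source ) )
{
    New-EventLog -LogName Application -Source $source
}
## information event on success
Write-EventLog -LogName Application -Source $source -EntryType Information -EventId 1000 -Message "Custom application installation completed successfully"
## error event on fatal error, picked up by the SCOM monitor
Write-EventLog -LogName Application -Source $source -EntryType Error -EventId 1001 -Message $fatalError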

Finding Zombie Processes

Apologies in advance if this article gets a tad scary – not because of the threat of flesh-eating zombies appearing, but because of some of the tools I’ll be using, although I’ll show you how easy they actually are to use.

So What is a Zombie Process?

There seem to be a few different views out there as to what constitutes a zombie process on Windows operating systems.

There’s a school of thought, discussed here, that they are processes which have open handles to non-existent items, such as processes which have exited. The article links to a handy tool on GitHub which can identify these processes. This led me to write a PowerShell script which opens handles via the OpenProcess() API to a given process and then waits for user input, or a specified amount of time, before closing them. Whilst it is waiting, if the process(es) to which it has the handles open are terminated then the PowerShell process running my script is deemed by the tool to be a Zombie.
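
The core of that deliberate handle-holding looks something like this minimal sketch (the pid is illustrative and the access mask is just enough to demonstrate the effect):

Add-Type -Namespace 'Win32' -Name 'Kernel32' -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr OpenProcess(uint dwDesiredAccess, bool bInheritHandle, uint dwProcessId);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool CloseHandle(IntPtr hObject);
'@
[uint32]$targetPid = 1234                     ## illustrative process id to hold a handle to
[uint32]$PROCESS_QUERY_INFORMATION = 0x0400
$handle = [Win32.Kernel32]::OpenProcess( $PROCESS_QUERY_INFORMATION , $false , $targetPid )
Read-Host -Prompt "Holding a handle to pid $targetPid - kill that process, run the zombie tool, then press enter"
$null = [Win32.Kernel32]::CloseHandle( $handle )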

[Screenshot: self-induced zombie processes reported by the tool]

It’s definitely a useful tool but my interpretation of a Zombie process is one that should be dead and thus gone completely but isn’t so this doesn’t fit the case here in my opinion since my PowerShell process is alive and well, and living in West Yorkshire, but it has handles to what are dead processes, so almost Zombies.

Helge Klein wrote a great post on finding the causes of a lack of session re-use on RDS/Citrix servers – where session ids returned by the quser.exe command are relatively high numbers even though the server has not just been rebooted – which is a subject close to my heart as I see this frequently in my consultancy work and it is what initially led to my Zombie hunt. I’ve had some success with this in finding Zombie sessions, mainly where processes show in task manager with a status of “Suspended”. I haven’t had any joy though in using the SysInternals handle.exe utility to close all open handles and thus free up the session (which is also potentially dangerous). The “suspended” processes are closer to zombies though, in my view of the world, as they should be dead but aren’t, since they are generally present in sessions which have been logged off so all processes in those sessions should have been terminated.

I’ve yet to find a way in PowerShell to find processes that task manager shows as suspended, as that doesn’t seem to be in any property returned by Get-Process or from a WMI/CIM query on the win32_process class. The latter has an “ExecutionState” property which ought to be what I need but it is always empty so appears not to be implemented. However, I have had some success in looking for processes whose session id is for a session that no longer exists, or which have no threads or open handles, as that seems to be a symptom of some of these “Suspended” processes. I thought it would be a case of finding that all threads for a given process were suspended, but how does that relate to processes flagged as suspended in task manager which have no threads at all – unless task manager assumes that any process with no threads must be suspended since it has nothing that could be scheduled to run on a CPU?
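
The thread/handle part of that check is simple enough; a minimal sketch, not the exact code I use:

## Flag processes with no threads or no open handles, a symptom of the "Suspended" processes described above
Get-Process | Where-Object { $_.Threads.Count -eq 0 -or $_.HandleCount -eq 0 } |
    Select-Object -Property Id , Name , SessionId , HandleCount , @{ Name = 'ThreadCount' ; Expression = { $_.Threads.Count } }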

Microsoft have a debugger extension command “!zombies” but I’ve never got that to show any Zombie processes as yet.

My own definition of a Zombie process is one that is not visible in any user mode tools, like task manager or Process Explorer, but still has an EPROCESS block in the kernel address space. These are tricky to find, given their inherent lack of visibility, as you have to look for symptoms of their existence, such as Zombie sessions causing a lack of session reuse.

Finding Zombie Processes

So how do we see these EPROCESS blocks which reside in the kernel address space? Why, with a debugger of course. However, we would normally look at this in kernel dumps which we usually get after a BSoD (although there are some neat ways of converting hypervisor snapshot memory files and saved state files into dump files for VMware and Hyper-V that I’ve used).

We can point a debugger at a running machine but only if it has been started in debug mode which isn’t generally how they should be booted in BAU situations.

Fortunately, SysInternals comes to the rescue, again, in that they have a tool called livekd that in conjunction with Windows debuggers like windbg (a debugger with a GUI) or kd (a command line debugger), can debug a live system. It’s easy to use; we just need to tell it where windbg or kd are (they are in the same folder as each other when installed) and also decide what we are going to do about symbols which are what are required to convert raw memory addresses into meaningful function names. If you don’t do anything about symbols then livekd will prompt with some defaults but I prefer to first set the _NT_SYMBOL_PATH environment variable to “srv*c:\Symbols*http://msdl.microsoft.com/download/symbols” which uses Microsoft’s online symbol server and stores the downloaded symbols in the c:\Symbols folder.
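
Setting the symbol path for the current session before launching livekd is just:

## Use the Microsoft symbol server, caching downloaded symbols in c:\Symbols
$env:_NT_SYMBOL_PATH = 'srv*c:\Symbols*http://msdl.microsoft.com/download/symbols'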

So to start windbg to debug the local system, we run the following, elevated, obviously changing folders depending on where livekd.exe and windbg.exe live; note that I’m using a PowerShell session, not a (legacy) cmd prompt:

& 'C:\Program Files\sysinternals\livekd.exe' -k 'C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\windbg.exe' -w -b

This will then start windbg which should look something like this:

[Screenshot: windbg launched via livekd]

Don’t worry about the error for symbols for livekd.sys as we can safely ignore this.

We’re now ready to view the process entries contained in EPROCESS blocks which we achieve via running the “!process 0 0” debugger extension command where the first zero instructs it to dump all processes rather than the current process or a specific pid (which we would specify in hex, not decimal) and the second is a flags value which tells it here to just dump basic information. We type this in at the bottom where it shows the “0: kd>” prompt.

This should then give something similar to the following although it may take some time to complete if there are a lot of processes (if you’re on a remote session then minimising the windbg window can make it complete faster since it has to do less screen updating).

[Screenshot: !process 0 0 output in windbg via livekd]

To quit the debugger, enter “q”.

So there we have it, all of the processes as the kernel sees them. But hang on, that’s not exactly easy to digest, is it, and what does all the data mean? The Cid is actually the process id, albeit in hex, and thus the ParentCid is the parent process id for the process.

Rather than manually wading through this data, I wrote a quick PowerShell script that uses regular expressions to parse it – the !process output can be copied and pasted into a text file or saved directly to file via the “Write Window Text to File” option on the windbg Edit menu – correlates these processes against currently running processes via the Get-Process cmdlet and then outputs the data as below, where I’ve filtered the grid view where “Running” is false:

[Screenshot: parsed !process output showing SelfService.exe zombie processes]

Ignore the fact that it flags the System process as being a Zombie as it isn’t – I’ll filter that out presently.

So here we can see lots of Citrix SelfService.exe Zombie processes since there are none shown in task manager and yet the processed !process output has over 25 instances in the EPROCESS table. We don’t have any Zombie sessions here though, since the Zombie processes all exist in active or disconnected sessions. I have discovered Zombie sessions too using this technique where there are processes shown, and typically the same executables, for sessions that no longer exist and aren’t shown in task manager, etc.

You will potentially get a few false positives for processes that have exited between when the !process command was executed and when the script was run to process its output. However, I’m working on a script to automate the whole process, (weak) pun intended, by using the command line debugger kd.exe, rather than windbg, to run the !process command and then pipe its output straight into a PowerShell script to process it immediately.
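
For a flavour of the kind of parsing involved, here is a minimal sketch (not the actual script), assuming the usual “PROCESS &lt;address&gt; … Cid: &lt;pid&gt; … Image: &lt;name&gt;” layout and an illustrative saved output file:

$running = @( Get-Process | Select-Object -ExpandProperty Id )
$parsed = switch -Regex -File 'C:\Temp\bangprocess.txt'
{
    '^PROCESS\s+([0-9a-fA-F]+)' { $address = $Matches[1] }
    'Cid:\s+([0-9a-fA-F]+)\s+.*ParentCid:\s+([0-9a-fA-F]+)' { $kernelPid = [Convert]::ToInt32( $Matches[1] , 16 ) ; $parentPid = [Convert]::ToInt32( $Matches[2] , 16 ) }
    '^\s+Image:\s+(.+)$' { [pscustomobject]@{ Address = $address ; ProcessId = $kernelPid ; ParentProcessId = $parentPid ; Image = $Matches[1] ; Running = ( $kernelPid -in $running ) } }
}
$parsed | Out-GridView -Title 'Kernel process list versus user mode'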

The script will be made available in the publicly accessible ControlUp Script Based Actions (SBA) library although all of the available SBAs there can be used standalone without ControlUp software.


Where did that download come from?

I was investigating something completely unrelated recently when I came across the fact that the Zone.Identifier information for downloaded files on Windows 10, which is stored in an NTFS Alternate Data Stream (ADS) on each downloaded file, contains the URL from which the file came. Yes, the whole URL, so it could potentially be very useful and/or very embarrassing. It’s this Zone.Identifier stream that Windows Explorer checks when it puts restrictions on files that it deems could be unsafe because they have come from the internet zone.

Let me illustrate this with an example where I have downloaded a theme from Microsoft using Chrome version 68 on Windows 10 and saved it into C:\Temp. One can then easily examine the ADS on this downloaded file using PowerShell version 3.0 or higher:
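
For example, along these lines (the file name here is illustrative, not the exact theme file I downloaded):

## Read the Zone.Identifier alternate data stream of a downloaded file
Get-Content -Path 'C:\Temp\download.themepack' -Stream 'Zone.Identifier'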

[Screenshot: Zone.Identifier stream contents for a Chrome download]

The ZoneId is 3, which is the “Internet” zone, as can be checked by looking at the “DisplayName” value in “HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3”, and notice that it gives the actual path the file came from, which is actually different to the URL that I clicked. I reckon that could be handy if you forget where a particular file came from, but also potentially embarrassing/incriminating depending on what you download, where clearing your browser history and cache will only delete some of the evidence.

I’ve been aware of the Zone.Identifier ADS for a long time but I only ever remember seeing the zone number in there, not URLs, so I went back to a 2008R2 system, downloaded the same file with IE11 and sure enough there was only the ZoneId line. I then tried IE11 on Windows 10 and it too only had the ZoneId in the ADS file which gave rise to this table for my Windows 10 laptop since the behaviour is browser specific:

Browser              Version     Captures URL in ADS
Internet Explorer    11          No
Edge                 42.17134    Yes
Chrome               68          Yes
Firefox              61          No
Tor                  7.5.6       No

Note, though, that neither Chrome nor Edge puts the URL in the Zone.Identifier ADS when browsing in Incognito or InPrivate mode respectively.

This got me sufficiently interested to write a PowerShell script which finds files with a Zone.Identifier ADS in a given folder, and sub-folders if the -recurse option is specified. The script just outputs the data found so you can pipe it through cmdlets like Export-CSV or Out-GridView – below is an example of piping it through Out-GridView:

[Screenshot: zone information script output in Out-GridView]

The script also has -remove and -scrub options which will either completely remove the Zone.Identifier ADS file or just remove the URLs from it, so keeping the zone information, respectively.
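
For a rough idea of how the enumeration part works, here is a minimal sketch (not the actual script, and the folder is illustrative):

## Find files carrying a Zone.Identifier stream and show its contents
Get-ChildItem -Path 'C:\Temp' -File -Recurse | ForEach-Object {
    if( Get-Item -Path $_.FullName -Stream 'Zone.Identifier' -ErrorAction SilentlyContinue )
    {
        [pscustomobject]@{
            'File' = $_.FullName
            'ZoneInfo' = ( Get-Content -Path $_.FullName -Stream 'Zone.Identifier' ) -join '; '
        }
    }
} | Out-GridView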

The script is available here and you use it entirely at your own risk.

Outlook Draft Email Reminder

How many times have you either sat there wondering why someone hasn’t responded to an email you’ve sent, or been chased by someone asking why you haven’t replied to a certain email, and in both cases the partial response is actually still in your Outlook drafts folder? Of course, you had every intention of sending that email but you got sidetracked and then either Outlook got restarted after exiting or crashing, you logged off and back on, shut down, etc. In both cases, that once open email is then no longer open on your desktop but hidden away in your drafts waiting for you to remember to send it – out of sight, out of mind!

Yes, it has happened to me on more than one occasion so I therefore decided to script a solution to it, or at least something that would politely remind you that you had draft emails that perhaps you might want to finish. I started off writing in VBA but I couldn’t get it to trigger at startup or asynchronously so I switched to PowerShell, which I much prefer anyway.

The script has a number of options but I would suggest that the easiest way to use it is to have it run at logon with the parameters -waitForOutlook and -wait. The former means that it will wait for an Outlook process to start before it starts checking, although it doesn’t have to since it uses COM to instantiate an Outlook instance of its own anyway, and -wait means that it will loop around rather than performing one check and exiting.
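
Under the covers it is the Outlook COM object model doing the work; a minimal sketch of the draft check, assuming the seven day default (olFolderDrafts is 16) – this is illustrative, not the actual script:

$outlook = New-Object -ComObject 'Outlook.Application'
$drafts = @( $outlook.Session.GetDefaultFolder( 16 ).Items | Where-Object { $_.CreationTime -gt (Get-Date).AddDays( -7 ) } )
Write-Output "Found $($drafts.Count) draft emails created in the last 7 days"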

If it finds draft emails created in the last seven days, although this can be changed via the -withinDays option, a popup will be displayed, which will be on top of all other windows, asking if you want to open them:

[Screenshot: Outlook drafts reminder popup]

Clicking “Yes” will result in the emails being opened, giving you the opportunity to finally finish and send them. Selecting “No” will either cause the script to exit, if the -wait option isn’t specified, or put it to sleep until either a new Outlook instance appears, for instance because you close the current one and at some point start another one, or until the nag timer expires. The nag option, triggered by using the -nag parameter with a value in minutes, will cause the script to remind you, via the popup, that there are drafts that could probably do with your attention.

As I believe the best way to run this is to have it run at logon and then continue to check for draft emails, I added options to install and uninstall it in the registry so that it will be run at logon, to save you the hassle of doing this yourself. If you run the following command line, it will create a registry value “Outlook draft nagger” in HKCU\Software\Microsoft\Windows\CurrentVersion\Run, or in HKLM if the -allusers option is specified so that it runs for all users:

& '.\Find Outlook drafts.ps1' -waitForOutlook -withinDays 7 -wait -install "Outlook Drafts Checker" -nag 120

This will nag the user if there are drafts created in the last seven days as soon as Outlook is launched and then nag again either if Outlook is relaunched in that session or every two hours. Alternatively, it could be setup as a scheduled task if preferred but you lose some of its responsiveness such as being able to nag immediately if a new Outlook process for that user is detected.

If you need to remove this autorun, simply run with -uninstall “Outlook draft nagger”.

The script is available on GitHub here and you use it entirely at your own risk, although there’s not exactly a great deal of damage that it can wreak. None in fact, other than perhaps you finally finishing and sending an email that perhaps you shouldn’t, but don’t blame the script for that – after all, you can always delete draft emails rather than send them!



Memory Control Script – Fine Tuning Process Memory Usage

In part 1 of this series I introduced a script which consists of over 900 lines of PowerShell, although over 20% of that is comments, that ultimately just calls a single Windows API, namely SetProcessWorkingSetSizeEx, in order to make more memory available on a Windows computer by reducing the working set sizes of targeted processes. This is known as memory trimming, but I’ve always taken issue with the term since the dictionary definition of trimming means to remove a small part of something, whereas default memory trimming, if we use a hair cutting analogy, is akin to scalping the victim.

This “scalping” of working sets can be counterproductive: although more memory becomes available for other processes/users, the scalped processes quickly require some of this memory, which has potentially been paged out, which can lead to excessive hard page faults on a system when the trimmed memory is mapped back into the processes, and thus performance degradation despite there being more memory available.

So how do we address this such that we actually do trim excessive memory from processes but leave sufficient for them to continue operating without needing to retrieve that trimmed memory? Well, unfortunately it is not an exact science, but there are options to the script which can help prevent the negative effects of over trimming. This is in addition to the inactivity points mentioned in part 1 where the user’s processes are unlikely to be active so hopefully shouldn’t miss any of their memory – namely idle, disconnected or when the screen is locked.

Firstly, there is the parameter -above which will only trim processes whose working set exceeds the value given. The script has a default of 10MB for this value as my experience points to this being a sensible value below which there is no benefit to trimming. Feel free to play around with this figure although not on a production system.

Secondly, there is the -available parameter which will only trim processes when the available memory is below the given figure which can be an absolute value such as 500MB or a percentage such as 10%. The available memory figure is the ‘Available MBytes’ performance counter in the ‘Memory’ category. Depending on why you are trimming, this option can be used to only trim when available memory is relatively low although not so low that Windows itself indiscriminately trims processes. If I was trying to increase the user density on a Citrix XenApp or RDS server then I wouldn’t use this parameter.

Thirdly, there is a -background option, which can only be used in conjunction with the -thisSession argument, that will only trim the current user’s processes which do not own the foreground window, as returned by the GetForegroundWindow API; the theory is that non-foreground windows are hosting processes which are not actively being used so shouldn’t have a problem with their memory being trimmed.

Lastly, we can utilise the working set limit feature built into Windows, accessed via the same SetProcessWorkingSetSizeEx API. Two of the parameters passed to this function are the minimum and maximum working set sizes for the process being affected. When trimming, or scalping as I tend to call it, both of these are passed as -1, which tells Windows to remove as many pages as possible from the working set. However, when they are positive integers, this sets a limit instead such that working sets are adjusted to meet those limits. These limits can be soft or hard – soft limits effectively just apply that limit when the API is called but the limits can then be exceeded, whereas hard limits can never be breached. We can therefore use soft limits to reduce a working set to a given value without scalping it. Hard limits can be used to cap processes that leak memory, which will be covered in the next article, although there is a video here showing it for those who simply can’t wait.
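
As an illustration of the underlying call, here is a minimal sketch of a “scalp”, passing -1 for both sizes, against the current process; this is not the script itself and it assumes PowerShell 5.0+ for the ::new() syntax:

Add-Type -Namespace 'Win32' -Name 'Memory' -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx(IntPtr hProcess, IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize, uint Flags);
'@
$trim = [IntPtr]::new( -1 )        ## equivalent to (SIZE_T)-1 : remove as many pages as possible
$process = Get-Process -Id $pid    ## trimming our own process here for safety
[Win32.Memory]::SetProcessWorkingSetSizeEx( $process.Handle , $trim , $trim , 0 )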

Here is an example of using soft working set limits for an instance of the PowerShell_ISE process. We start with the process consuming around 286MB of memory as I have been using it (in anger, as you do!):

[Screenshot: PowerShell_ISE working set before trimming]

If we just use a regular trim, aka scalp, on it then the working set reduces to almost nothing:

[Screenshot: PowerShell_ISE working set just after trimming]

The -above parameter is actually superfluous here but I thought I’d demonstrate its use, although zero is not a sensible value to use in my opinion.

However, having trimmed it, if I return to the PowerShell_ISE window and have a look at one of my scripts in it, the working set rapidly increases by fetching memory from the page file (or the standby list if it hasn’t yet been written to the page file – see this informative article for more information):

[Screenshot: PowerShell_ISE working set after trimming and further use]

If I then actually run and debug a script, the working set goes higher still. However, I then switch to Microsoft Edge, to write this blog post, so PowerShell_ISE is still open but not being used. I therefore reckon that a working set of about 160MB is ample for it and I can set that via the following, where the OS trims the working set, by removing enough least recently used pages, to reach the working set figure passed to the SetProcessWorkingSetSizeEx API that the script calls:

[Screenshot: setting a soft maximum working set limit for PowerShell_ISE]

However, because I have not also specified the -hardMax parameter, the limit is a soft one and therefore can be exceeded if required, but I have still saved around 120MB from trimming that one process’s working set.

Useful, but are you really going to watch to see what the “resting” working set is for every process? Well, I know that I wouldn’t, so use this last technique for your main apps/biggest consumers, or just use one of the first three techniques. When I get some time, I may build this monitoring feature into the script so that it can trim even more intelligently but, since the script is on GitHub here, please feel free to have a go yourself.

Next time in this series I promise that I’ll show how the script can be used to stop leaky processes from consuming more memory than you believe they should.

Memory Control Script – Reclaiming Unused Memory

This is the first in a series of articles which describes the operation of a script I have written for controlling process memory use on Windows.

Here we will cover the use of the script to trim the working sets of processes such that more memory becomes available in order to run more processes or, in the case of Citrix XenApp and Microsoft RDS, to run more user sessions without having them use potentially slower page file memory (not to be confused with “virtual” memory!). The working set of a process is defined here as “the set of pages in the virtual address space of the process that are currently resident in physical memory”. Great, but what relevance does that have here? Well, what it means is that processes can grab memory but not necessarily actually need to use it. I’m not referring to memory leaks, although this script can deal with them too as we’ll see in a later article, but buffers and other pieces of memory that the developer(s) of an application have requested but, for whatever reason, aren’t currently using. That memory could be used by other processes, for other users on multi-session systems, but until the application returns it to the operating system, it can’t be reused. Cue memory trimming.

Memory trimming is where the OS forces processes to empty their working sets. They don’t just discard this memory, since the processes may need it at a later juncture and it could already contain data; instead the OS writes it to the page file for them such that it can be retrieved later if required. Windows will force memory trimming if available memory gets too low, but at that point it may be too late, and it is indiscriminate in how it trims.

Ok, so I reckon that it’s about time to introduce the memory control script that I’ve written, which is available here and requires PowerShell version 3.0 or higher. So what does it do? Trims memory from processes. How? Using the Microsoft SetProcessWorkingSetSizeEx API. When? Well, when do you think it should trim the memory? Probably not when the user is using the application, because that may cause slow response times if the memory trimmed is actually required such that it now has to be retrieved from the page file via hard page faults. So how do we know when the user (probably) isn’t using the application? Well, I’ve defined it as the following:

  1. No keyboard or mouse input for a certain time (the session is idle)
  2. The session is locked
  3. The session has become disconnected in the case of XenApp and RDS

As in, these are supported/built-in, but you are obviously at liberty to call the script whenever you want. They are achieved by calling the script via scheduled tasks but do not fret, dear reader, as the script itself will create, and delete, these scheduled tasks for you. They are created per user since the triggers only apply to a single user’s session. The idea here is that on XenApp/RDS, a logon action of some type, e.g. via GPO, would invoke the script with the right parameters to create the scheduled tasks and automatically remove them at logoff. In its simplest form we would run it at logon thus:

.\Trimmer.ps1 -install 600 -logoff

Where the argument to -install is in seconds and is the idle period that when exceeded will cause memory trimming to occur for that session. The scheduled tasks created will look something like this:

[Screenshot: the trimmer scheduled tasks]

Note that the tasks actually call wscript.exe with a vbs script to invoke the PowerShell, because I found that even invoking powershell.exe with the “-WindowStyle Hidden” argument still causes a window to pop up very briefly when the task runs, whereas this does not happen with the vbs approach as it uses the Run method of WScript.Shell and explicitly tells it not to show a window. The PowerShell script will create the vbs script in the same folder as itself.
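
The wrapper is tiny; generating it from PowerShell would look something like this sketch, where 0 tells the Run method to hide the window and False tells it not to wait (the real script builds its own command line, so this is illustrative only):

$vbsPath = Join-Path -Path ( Split-Path -Path $PSCommandPath -Parent ) -ChildPath 'Trimmer.vbs'
@"
CreateObject("WScript.Shell").Run "powershell.exe -NoProfile -ExecutionPolicy Bypass -File ""$PSCommandPath""", 0, False
"@ | Set-Content -Path $vbsPath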

The -logoff argument causes the script to stay running but all it is doing is waiting for the logoff to occur such that it can delete the scheduled tasks for this user.

By default it will only trim processes whose working sets are higher than 10MB since trimming memory from processes using less than this probably isn’t worthwhile although this can be changed by specifying a value with the -above argument.

So let’s see it working – here is a screenshot of task manager sorted on decreasing working set sizes when I have just been using Chrome:

[Screenshot: processes before trimming]

I then lock the screen and pretty much immediately unlock it and task manager now shows these as the highest memory consumers:

[Screenshot: processes after trimming]

If we look for the highest consuming process, pid 16320, we can see it is no longer at the top but is quite a way down the list as its working set is now 48MB, down from 385MB.

[Screenshot: the Chrome process working set after trimming]

This may grow when it is used again but if it doesn’t grow to the same level as it was prior to the trimming then we have some extra memory available. Multiply that by the number of processes trimmed, which here will just be those for the one user session since it is on Windows 10, and we can start to realise some savings. With tens of users on XenApp/RDS, or more, the savings can really mount up.

If you want to see what is going on in greater detail, run the script with -verbose and for the scheduled tasks, also specify the -logfile parameter with the name of a log file so the verbose output, plus any warnings or errors, will get written to this file. Add -savings to get a summary of how much memory has been saved.

Running it as a scheduled task is just one way to run it – you can simply run it on the command line without any parameters at all and it will trim all processes that it has access to.

In the next article in the series, I’ll go through some of the other available command line options which give more granularity/flexibility to the script and can cap leaky processes.

 

Send To Mail recipient replacement

It has bugged me for years that Outlook, up to and including 2016, blocks the main Outlook window if you right-click on files in explorer and choose “Mail recipient” from the Send To menu. So if you leave this draft email open, you won’t get any new emails (which isn’t necessarily a bad thing, for a while!). I therefore decided to fix it, or rather provide an alternative, by writing a PowerShell script.

The script is available here and it can be used to install itself into the running user’s SendTo menu by creating a shortcut that invokes powershell.exe on the script:

& '.\Send Outlook Email.ps1' -install "Mail recipient"

This will replace the existing “Mail recipient” shortcut, as long as you also specify the -overwrite option, but you can leave it in place and create a new shortcut for this script simply by giving a different name to the -install argument.

Should you want to, the original shortcut can be restored thus:

& '.\Send Outlook Email.ps1' -install "Mail recipient" -restore

or if you’ve created an additional shortcut, rather than replacing the default one, use -delete instead of -restore.

The additional features it provides are:

  1. It will make copies to “.txt” files of scripts, which would normally be blocked, such as .vbs or .cmd (the list of file types can be changed in the script)
  2. It will put binary files such as .exe files, which would normally be blocked, into a zip file and attach the zip file instead (these may still be blocked though)
  3. It will warn you if your total size of all attachments exceeds 20MB (this limit can be changed in the script) which may cause the send to fail depending on your mail server’s limits

The script can take various other parameters such as -subject, -body and -recipients but these can only be used when calling the script manually from a PowerShell session or another script since using it from explorer just calls the script with a list of file names to attach. However, you can change the default values of these variables in the script so that they will work when you use the send to menu item within explorer. There are comments in the script explaining how to achieve this.
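
At its heart the script drives Outlook via COM to build a non-modal draft email, which is presumably why the main window stays responsive; a minimal sketch, where olMailItem is 0 and the attachment paths are illustrative, not the actual script:

$outlook = New-Object -ComObject 'Outlook.Application'
$mail = $outlook.CreateItem( 0 )   ## 0 = olMailItem
$mail.Subject = 'Emailing: files'
ForEach( $file in @( 'C:\Temp\script.txt' , 'C:\Temp\binaries.zip' ) )
{
    $null = $mail.Attachments.Add( $file )
}
$mail.Display()   ## show the draft without blocking the main Outlook window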

By default, it uses the same subject and body text as the built-in “Email recipient” shortcut but if you don’t like this text, find the declaration of the “$noBody” variable in the script and change it from “$false” to “$true”. Here’s a screenshot of the email generated with that tweak made and with three files selected, an .exe, .txt and .ps1, when the Send To option was invoked:

[Screenshot: the generated Outlook send-to email]

Note that the .ps1 file has been morphed into a .txt file and the .exe has been placed into a zip file.

Oh, and my Outlook main window is fully operational whilst you add whatever body text, etc you want to this email before sending it.

 

 

Manipulating autorun logon entries with PowerShell

As I came to manually create a registry value for a script I wanted to run at logon, I thought that I may as well write a script that would add the item, to save people having to remember where to create the entries. It then morphed into something that would list or remove entries too, depending on the command line parameters specified.

The script is available here and can be run in a number of different ways, e.g.:

  • Show all existing autorun entries in the registry for the current user (if -registry is not specified then the file system will be used):
& "C:\Scripts\Autorun.ps1" -list -registry
  • Delete an existing autorun entry for the current user that runs calc.exe either by the name of the entry (the shortcut name or registry value name) or by regular expression matching on the command run:
& "C:\Scripts\Autorun.ps1" -name "Calculator" -remove
& "C:\Scripts\Autorun.ps1" -run Calc.* -remove
  • Add a new autorun entry to the file system that runs calc.exe for the current user:
& "C:\Scripts\Autorun.ps1" -name "Calculator" -run "calc.exe" -description "Calculator" 

The -allusers flag can be used with any of the above to make them operate for all users but the user running the script must have administrative rights.

When creating an autorun item in the file system, the icon for the shortcut can be specified via the -icon option.

Deletions will prompt for confirmation unless you specify -Confirm:$false and it will also prompt for confirmation when creating a new entry if the name of the entry already exists.

Specifying -verify will generate an error if the executable does not exist when creating a new entry.

If you are on a 64 bit system then you can specify the -wow6432node option when using -registry to work in the wow6432node area of the registry.

Exporting validated shortcuts to CSV file

In a recent Citrix XenApp 7.x consulting engagement, the customer wanted to have a user’s start menu on their XenApp desktop populated entirely via shortcuts created dynamically at logon by Citrix Receiver. Receiver has a mechanism whereby a central store can be configured to be used for shortcuts such that when the user logs on, the shortcuts they are allowed are copied from this central share to their start menu. For more information on this see https://www.citrix.com/blogs/2015/01/06/shortcut-creation-for-locally-installed-apps-via-citrix-receiver/

Before too long there were in excess of 100 shortcuts in this central store, so checking these individually by examining their properties in explorer was rather tedious to say the least, so I looked to automate it via good old PowerShell. As I already had logon scripts that were manipulating shortcuts, it was easy to adapt this code to enumerate the shortcuts in a given folder and then write the details to a csv file so it could easily be filtered in Excel to find shortcuts whose target didn’t exist, although the script will also output the details of these as it runs.
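
The core of it is nothing more exotic than the WScript.Shell COM object reading each .lnk file; a minimal sketch, with illustrative share and output paths, not the actual script:

$shell = New-Object -ComObject 'WScript.Shell'
Get-ChildItem -Path '\\fileserver\ReceiverShortcuts$' -Filter '*.lnk' -Recurse | ForEach-Object {
    $shortcut = $shell.CreateShortcut( $_.FullName )
    [pscustomobject]@{
        'Shortcut' = $_.FullName
        'Target' = $shortcut.TargetPath
        'Arguments' = $shortcut.Arguments
        'TargetExists' = ( ! [string]::IsNullOrEmpty( $shortcut.TargetPath ) -and ( Test-Path -Path $shortcut.TargetPath ) )
    }
} | Export-Csv -Path 'C:\Temp\shortcuts.csv' -NoTypeInformation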

I then realised that the script could have more uses than just this, for instance to check start menus on desktop machines so decided to share it with the wider community.

The get-help cmdlet can be used to see all the options but in its simplest form, just specify a -folder argument to tell it where the parent folder for the shortcuts is and a -csv with the name of the csv file you want it to write (it will overwrite any existing file of that name so be careful).

It can also check that the shortcut’s target exists on a remote machine. For instance, you can run the script on a Citrix Delivery Controller but have it check the targets on a XenApp server, via its administrative shares, by using the -computername option.

If you run it on a XenApp server with the “PreferTemplateDirectory” registry value set, use the -registry option instead of -folder and it will read this value from the registry and use that folder.

If you’re running it on a desktop to check that there are no bad shortcuts in the user’s own start menu, whether it is redirected or not, or in the all users start menu then specify the options -startmenu or -allusers respectively.

Finally, it can email the resultant csv file via an SMTP mail server using the -mailserver and -recipients options.

The script is available for download here.