Memory Control Script – Reclaiming Unused Memory

This is the first in a series of articles describing the operation of a script I have written for controlling process memory use on Windows.

Here we will cover using the script to trim the working sets of processes so that more memory becomes available to run more processes or, in the case of Citrix XenApp and Microsoft RDS, more user sessions, without forcing them to use potentially slower page file memory (not to be confused with “virtual” memory!). The working set of a process is defined here as “the set of pages in the virtual address space of the process that are currently resident in physical memory”. Great, but what relevance does that have here? Well, it means that processes can grab memory without necessarily needing to use it. I’m not referring to memory leaks, although this script can deal with those too as we’ll see in a later article, but to buffers and other pieces of memory that the developer(s) of an application have requested but, for whatever reason, aren’t currently using. That memory could be used by other processes, or by other users on multi-session systems, but until the application returns it to the operating system it can’t be reused. Cue memory trimming.

Memory trimming is where the OS forces processes to empty their working sets. The memory isn’t simply discarded, since the processes may need it again later and it may already contain data; instead, the OS writes it to the page file so that it can be retrieved later if required. Windows will force memory trimming itself if available memory gets too low, but by that point it may be too late, and it is indiscriminate in how it trims.

Ok, so I reckon it’s about time to introduce the memory control script I’ve written, which is available here and requires PowerShell version 3.0 or higher. So what does it do? It trims memory from processes. How? Using the Microsoft SetProcessWorkingSetSizeEx API. When? Well, when do you think it should trim the memory? Probably not while the user is actively using the application, because that could cause slow response times if the trimmed memory is actually needed and has to be retrieved from the page file via hard page faults. So how do we know when the user (probably) isn’t using the application? I’ve defined it as any of the following:

  1. No keyboard or mouse input for a certain time (the session is idle)
  2. The session is locked
  3. The session has become disconnected in the case of XenApp and RDS

These triggers are built in, but you are obviously at liberty to call the script whenever you want. They are implemented by calling the script via scheduled tasks, but do not fret, dear reader, as the script itself will create and delete these scheduled tasks for you. They are created per user since the triggers only apply to a single user’s session. The idea is that on XenApp/RDS, a logon action of some kind, e.g. via GPO, would invoke the script with the right parameters to create the scheduled tasks and automatically remove them at logoff. In its simplest form we would run it at logon thus:

.\Trimmer.ps1 -install 600 -logoff

The argument to -install is in seconds and is the idle period which, when exceeded, will cause memory trimming to occur for that session. The scheduled tasks created will look something like this:

trimmer scheduled tasks

Note that the tasks actually call wscript.exe with a vbs script to invoke PowerShell. I found that even invoking powershell.exe with the “-WindowStyle Hidden” argument still causes a window to pop up very briefly when the task runs, whereas this does not happen with the vbs approach since it uses the Run method of WScript.Shell and explicitly tells it not to show a window. The PowerShell script will create the vbs script in the same folder that it resides in itself.

The -logoff argument causes the script to stay running but all it is doing is waiting for the logoff to occur such that it can delete the scheduled tasks for this user.

By default it will only trim processes whose working sets are larger than 10MB, since trimming memory from processes using less than this probably isn’t worthwhile, although the threshold can be changed by specifying a value with the -above argument.
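
For the curious, the underlying API call can be exercised directly from PowerShell via P/Invoke. What follows is a minimal sketch of the technique rather than the script itself, assuming we want to empty the working set of every process above the same default 10MB threshold:

## Sketch only: trim working sets by calling SetProcessWorkingSetSizeEx.
## Passing -1 for both the minimum and maximum sizes asks Windows to empty the working set.
Add-Type -Namespace Win32 -Name Memory -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx(IntPtr hProcess, IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize, uint Flags);
'@

$thresholdBytes = 10MB   ## same default threshold as the -above parameter

foreach( $process in (Get-Process | Where-Object { $_.WorkingSet64 -gt $thresholdBytes }) )
{
    try
    {
        ## 0 for flags leaves the hard working set limit behaviour at its default
        if( -not [Win32.Memory]::SetProcessWorkingSetSizeEx( $process.Handle , [IntPtr](-1) , [IntPtr](-1) , 0 ) )
        {
            Write-Warning "Trim failed for $($process.Name) (pid $($process.Id))"
        }
    }
    catch
    {
        Write-Warning "Unable to open $($process.Name) (pid $($process.Id)): $_"
    }
}

You will, of course, only be able to trim processes that you have sufficient rights to open.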

So let’s see it working – here is a screenshot of Task Manager sorted on decreasing working set size when I have just been using Chrome.

processes before

I then lock the screen and pretty much immediately unlock it and task manager now shows these as the highest memory consumers:

processes after

If we look for the highest consuming process, pid 16320, we can see it is no longer at the top but is quite a way down the list as its working set is now 48MB, down from 385MB.

chrome was big

This may grow when it is used again but if it doesn’t grow to the same level as it was prior to the trimming then we have some extra memory available. Multiply that by the number of processes trimmed, which here will just be those for the one user session since it is on Windows 10, and we can start to realise some savings. With tens of users on XenApp/RDS, or more, the savings can really mount up.

If you want to see what is going on in greater detail, run the script with -verbose and, for the scheduled tasks, also specify the -logfile parameter with the name of a log file so that the verbose output, plus any warnings or errors, gets written to that file. Add -savings to get a summary of how much memory has been saved.
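
For example, an ad hoc run with full logging and a savings summary might look something like this (the log file path is just an example):

.\Trimmer.ps1 -verbose -logfile C:\Support\trimmer.log -savings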

Running it as a scheduled task is just one way to run it – you can simply run it on the command line without any parameters at all and it will trim all processes that it has access to.

In the next article in the series, I’ll go through some of the other available command line options which give more granularity and flexibility to the script and can cap leaky processes.



Finding Citrix PVS or Studio orphans

I recently released a script that cross references Citrix Provisioning Services device information with Delivery Controller and Active Directory data; I use it almost daily when working with PVS servers at version 7.7 or higher, since that’s when a native PowerShell interface appeared. See here for the original post. It allows me to easily and quickly health check environments and potentially fix issues that would otherwise need a lot of manual work and jumping around in various consoles. Whilst the script could already easily identify devices that only exist in PVS, by filtering in the grid view or Excel where the DDC (Desktop Delivery Controller) field/column is empty, I realised I could extend it to identify devices that exist on Delivery Controllers, and so are visible in Studio, but don’t exist in PVS. You may of course expect to find some devices in PVS that are not present on a DDC, and hence not in Studio, such as devices used for updating vDisks by booting them in maintenance mode, since you won’t want to make those available via StoreFront or Receiver.

Once you have the on-screen grid view, or the csv file open in Excel (or Google Sheets), show PVS devices not present on any DDC by simply filtering where the “DDC” column is empty – in the grid view this is done by clicking on the “Add Criteria” button. To show devices which are known to a DDC, and so visible in Studio, but not in PVS, filter where the “PVS Server” column is empty.

pvs orphans
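
If you’d rather stay in PowerShell than Excel, the same two checks can be run against the csv output along these lines (the csv file name is just an example; the “DDC” and “PVS Server” column names are as described above and assumed to match the csv headers):

$devices = Import-Csv -Path '.\pvs_devices.csv'

## Devices known to PVS but with no DDC - candidates for checking/removal in PVS
$devices | Where-Object { [string]::IsNullOrEmpty( $_.DDC ) }

## Devices known to a DDC (so visible in Studio) but with no PVS server
$devices | Where-Object { [string]::IsNullOrEmpty( $_.'PVS Server' ) }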

This of course assumes that you have specified the correct server names for your DDC and PVS servers via the -ddcs and -pvsservers options respectively. There’s no need to specify multiple servers for each if they share the same SQL database; only if they use different ones such as you might have for completely separate test and production environments. Comma separate them if you do specify multiple servers.

If you’ve got a mixture of PVS and MCS (or manual) machine catalogues then it will only display machines found on the DDCs you specify which are in PVS linked machine catalogues, unless you specify the -provisioningType parameter.

I’ve also added to the actions menu so that these potential orphans can then be removed from PVS or DDC if you select them in the grid view and then click “OK”.

remove orphans

I’ve also sneaked in a potentially handy feature where you can save the PVS and DDC servers to the registry so that you don’t have to specify them on the command line ever again (on that machine at least). This helps me, if nobody else, as I use the script at many different customers and I can’t always remember their specific server names, or sometimes specify the wrong ones. Save with -save and use these saved values with -registry, and an optional server set name via -serverSet so you can have different sets of servers, e.g. pre-production and production.

For example:

& '.\Get PVS device info.ps1' -ddcs ddc001 -pvsServers pvs001 -save

So next time you just need to run:

& '.\Get PVS device info.ps1' -registry

They are stored in HKCU so are per-user.
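
So if you work with more than one environment you might, for instance, save two different sets (the server and set names here are purely examples):

& '.\Get PVS device info.ps1' -ddcs prodddc001 -pvsServers prodpvs001 -save -serverSet Production
& '.\Get PVS device info.ps1' -ddcs testddc001 -pvsServers testpvs001 -save -serverSet PreProduction

and then recall whichever one you need:

& '.\Get PVS device info.ps1' -registry -serverSet PreProduction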

The script, amongst others, is available on GitHub here. It has to be run on a machine which has both the PVS and DDC PowerShell cmdlets available, such as one with the PVS and Studio consoles installed. You should also have the ActiveDirectory PowerShell module installed, particularly if you want to include AD group membership information via the -ADGroups option.

Citrix StoreFront Log Viewer Tool

Have you ever had the need to debug StoreFront? I have on a couple of occasions and, unfortunately, it wasn’t the easiest debugging exercise I’d ever undertaken. Changing the logging level is easy enough with the PowerShell cmdlet Set-DSTraceLevel. For example, run the following on a StoreFront server to enable verbose logging:

Add-PSSnapin Citrix.DeliveryServices.Framework.Commands

Set-DSTraceLevel -All -TraceLevel Verbose

This will update the web.config files and restart the Citrix services, and various log files will then be created in “C:\Program Files\Citrix\Receiver StoreFront\Admin\trace”. It will also cause debug statements to be produced which can be picked up with tools such as Sysinternals DebugView (dbgview).

The problem, in my experience, is that reading the log files, of which there are many, can be a bit of a chore. The log files are almost in XML format but they are not fully compliant, as they lack a top-level node, presumably because adding one would have a performance hit. Even if you can get them into XML, working with them isn’t particularly easy, although that may depend on what XML tool you use (I would typically use Internet Explorer since that’s all I can rely on being present on customer machines where I don’t want to start installing third party software).

Fortunately, PowerShell comes to our rescue (yet again) since it’s very easy in a script to make this almost-XML well formed so that it can be quickly parsed and each log record output to a csv file or an on-screen grid view where filtering and/or searching can then take place.
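
As a simple local illustration of the technique (the script itself does rather more, including reading from remote servers), you can wrap the raw file content in a root node and let PowerShell’s [xml] accelerator do the parsing, assuming the trace folder path shown above:

## Sketch only: add the missing top-level node so each trace file parses as XML
$traceFolder = 'C:\Program Files\Citrix\Receiver StoreFront\Admin\trace'

Get-ChildItem -Path $traceFolder -File | ForEach-Object {
    $rawContent = Get-Content -Path $_.FullName -Raw
    [xml]$parsed = '<root>' + $rawContent + '</root>'
    $parsed.root.ChildNodes | Select-Object -First 5   ## each child node is one log record
}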

The script is available here and can extract logs from multiple StoreFront servers, by accessing the logs via their C$ shares, and splice them together based on the time of each event. You can either specify a starting and ending date/time range via -start and -end respectively, specify -sinceBoot to include all entries since the last boot of each server, or use -last with a number and a units specifier such as ‘s’ for seconds, ‘m’ for minutes, ‘h’ for hours or ‘d’ for days, so “-last 8h” means “in the last eight hours”. For example, run the following to see all errors in the last two hours on the two specified StoreFront servers and display them on screen in a filterable grid view:

& '.\parse storefront log files.ps1' -computers storefront01,storefront02 -last 2h -subtypes error

If you have verbose logging enabled but only want to show warning and error entries then specify “-subtypes error,warning” since the default is to include all entries, including verbose ones.

Clicking “OK” at the bottom right of the grid view will copy any selected log lines into the clipboard, e.g. for web searches or logging with Citrix.

Specifying the -Verbose option gives information on what logs are being parsed, from which servers and for what time ranges.

Finally, don’t forget to change the StoreFront logging level back to something like “Error” rather than leaving it at “Verbose” as that is unlikely to help performance!
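
For example, to put it back once you have finished:

Add-PSSnapin Citrix.DeliveryServices.Framework.Commands

Set-DSTraceLevel -All -TraceLevel Error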

Citrix PVS device detail viewer with action pane


I find myself frequently using the script I wrote (see here) to check the status of PVS devices, and sometimes I then need to perform power actions on them, turn maintenance mode on or off, or maybe message logged-on users before performing those power actions (I am a nice person after all). Whilst we can perform most of those actions in the PVS console, if you are dealing with devices across multiple collections, sites or PVS instances then that can involve a lot of jumping around in the console. Plus, if you want to change maintenance mode settings or message logged on users then you need to do this from Citrix Studio, so you’ll need to launch that and go and find the PVS devices in there.

So I decided to put my WPF knowledge to work and built a very simple user interface in Visual Studio and then inserted it into the PVS device detail viewer script. Once you’ve run the script and got a list of the PVS devices, sorted and/or filtered as you desire, select those devices and then click on the “OK” button down in the bottom right hand side of the grid view. Ctrl-A will select all devices which can be useful if you’ve filtered on something like “Booted off latest” so you only have devices displayed which aren’t booting off the latest production vdisk. This will then fire up a user interface that looks like this, unless you’ve run the script with the -noMenu option or hit “Cancel” in the grid view.

pvs device actioner gui

All the devices you selected in the grid view will be selected automatically for you but you can deselect any before clicking on the button for an action. It will ask you to confirm the action before undertaking it.

pvs device viewer confirm

If you select the “Message Users” option then an additional dialog will be shown asking you for the text, caption and level of the message although you can pass these on the command line via -messageText and -MessageCaption options.

pvs device viewer message box

The “Boot” and “Power Off” options use PVS cmdlets rather than Delivery Controller ones since the devices may not be known to the DDC. “Shutdown” and “Restart” use the “Stop-Computer” and “Restart-Computer” cmdlets respectively and I have deliberately not used the -force parameter with them so if users are logged on, the commands will fail. Look in the window you invoked the script from for errors.
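
In other words, the calls the script makes amount to something like this (the device name is just an example); add -Force yourself, at your own risk, if you really do want logged-on users to be thrown off:

## Without -Force, these will fail with an error if any users are logged on
Stop-Computer -ComputerName 'CTXUAT01'
Restart-Computer -ComputerName 'CTXUAT01'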

You can keep clicking the action buttons until you exit the user interface so, for instance, it can be used to enable maintenance mode, message users asking them to log off, reboot the devices once they have logged off (or you have had enough of waiting for them to do so) and then turn maintenance mode off again if you want to put the devices back into service after the reboot.

I hope you find it as useful as I do, but note that you use the script entirely at your own risk. It is available here and requires PVS 7.7 or higher, XenApp 7.x and PowerShell 3.0 or later, with the PVS and Studio consoles installed on the machine you will run the script from (so that their PowerShell cmdlets are available).


Update to Citrix PVS device detail viewer

In using the script, introduced here, at a customer this week, I found a few bugs, as you do, and also added a few new features to make my life easier.

In terms of new features, I’ve added a -name command line option which will only show information for devices that match the regular expression you specify. Now don’t run away screaming because I’ve mentioned regular expressions as, contrary to popular belief, they can be straightforward (yes, really!). For instance, if you’ve got devices CTXUAT01, CTXUAT02 and so on that you just want to report on then a regex that will match that is “CTXUAT” – we can forget about matching the numbers unless you specifically need to only match certain of those devices.

Another option I needed was to display Citrix tag information, since I am providing a subset of servers, using the same naming convention as the rest, which have tag restrictions so that specific applications only run on specific servers. Using tags means I don’t have to create multiple delivery groups, which makes maintenance and support easier. Specify the -tags option and a column will be added with the list of tags for each device, if present.
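
For example, to report only on the CTXUAT devices and include their tag information, assuming the same script name and servers as in the earlier examples:

& '.\Get PVS device info.ps1' -ddcs ddc001 -pvsServers pvs001 -name CTXUAT -tags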

However, adding the -tags option was “interesting” because the column didn’t get added. A bug in my code – surely not! What I then found, thanks to web searches, is that versions of PowerShell prior to 5 have a limit of 30 columns, so any more than that silently get dropped. The solution? Upgrade to PowerShell version 5 or, if that’s not possible and you want the tag information, remove one of the other columns by changing the $columns variable. Yes, 30 columns is a lot for the script to produce, but I decided it was better to produce too much information rather than too little and let columns be removed later in Excel or the grid view.

I also found a bug, yes really, where if the vDisk configured for a device had been changed since it was booted then it would not be identified as not booting off the latest. That’s fixed so remember you can quickly find all devices not booting off the latest production version of the vDisk or booting off the wrong vDisk by filtering on the “Booted off Latest” column:

booted off latest

The script is still available here (GitHub? never heard of it :-))

Citrix Provisioning Services device detail viewer

Whilst struggling to find some devices in the PVS console that I thought I’d just added to a customer’s PVS server via the XenDesktop Setup wizard, I reckoned it should be relatively easy to knock up something that would quickly show me all the devices, their device collection and disk properties, and also cross reference them against a Citrix Delivery Controller to show machine catalogue, delivery group, registration state and so on. Note that I’m not trying to reinvent that wheel thing here as I know there are already some great PVS documentation scripts, such as those from Carl Webster (available here).

What I wanted was something that would let me quickly view and filter the information from multiple PVS servers, such as development and production instances. Whilst PowerShell can easily export to csv and you can then use Excel, or Google Sheets, to sort and filter, that is still a bit of a faff, so I use PowerShell’s great Out-GridView cmdlet, which gives you an instant graphical user interface with zero effort (not that using WPF in PowerShell is particularly difficult!). The grid view can be sorted and filtered, and columns you don’t want can be removed, without having to modify the script.
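
If you’ve not come across it, this is the zero-effort pattern in question: pipe any objects into Out-GridView and, with -PassThru, whatever rows are selected when “OK” is clicked come back down the pipeline (the example below just uses local processes to keep it self-contained):

## Instant GUI: sortable, filterable, and selected rows are returned via -PassThru
$selected = Get-Process |
    Select-Object -Property Name , Id , WorkingSet64 |
    Out-GridView -Title 'Pick some processes' -PassThru

$selected | Format-Table -AutoSize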

The script takes two parameters, which it will prompt for if not specified since they are mandatory:

-pvsServers
-ddcs

Both take comma separated lists of PVS servers and Desktop Delivery Controllers respectively although you can just specify a single server for each. If you’ve got multiple PVS servers using the same database then you only need to specify one of them. Ditto for the DDCs.

You can also specify a -csv argument with the name of a csv file if you want the output to go to a csv file; if you don’t then it will default to a filterable and sortable grid view.

Some hopefully useful extra information includes “Booted off latest” where devices with “false” in this column are those which have not been booted off the latest production version of their vDisk so may need rebooting. There’s also “Boot Time” which you can sort on in the grid view to find devices which are overdue a reboot, perhaps because they are not (yet) subject to a scheduled reboot. Plus you can quickly find those that aren’t in machine catalogues or delivery groups or where there is no account for them in Active Directory. You can also filter on devices which are booting off an override version of a vDisk which may be unintentional.

The script is available here and requires version 7.7 or higher of PVS since that is when the PowerShell cmdlets it uses were introduced. Run it from somewhere where you have the Citrix PVS and Studio consoles installed, like a dedicated management server – I’m a firm believer in not running these consoles on their respective servers since that can starve those servers of resources and thus adversely affect the environment. Ideally, also have the Active Directory PowerShell module (ActiveDirectory) installed so that each device’s status in AD can be checked.

I’ve just picked out the fields from PVS, Delivery Controllers and AD that are of interest to me but you should be able to add others in the script if you need to.

Scripted Reporting & Alerting of Citrix Provisioning Services Boot Times

Citrix PVS, formerly Ardence, is still one of my favourite software products. When it works, which is the vast majority of the time if it is well implemented, it’s great but how do you tell how well it is performing? If you’ve enabled event log generation for your PVS servers thus:

pvs event log server

then the Citrix Streaming Service will write boot times of your target devices to the application event log:

pvs boot event

So we can either filter in the event log viewer or use the script I’ve written, which searches the event log for these entries and finds the fastest, slowest, average (mean), median and mode values from one or more PVS servers, optionally creating a single csv file with the results. A time range can also be specified, such as the last 7 days.
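
As an aside, if you wanted to produce the same kind of summary yourself, the statistics are easy enough to derive in PowerShell. Here is a rough sketch using made-up boot times in seconds (this is not lifted from the script):

## Made-up sample data purely for illustration
$bootTimes = 21 , 24 , 25 , 26 , 26 , 26 , 30

$stats  = $bootTimes | Measure-Object -Minimum -Maximum -Average
$sorted = $bootTimes | Sort-Object
$median = $sorted[ [math]::Floor( $sorted.Count / 2 ) ]   ## simple median: middle element of the sorted list
$mode   = $bootTimes | Group-Object | Sort-Object -Property Count -Descending | Select-Object -First 1

"fastest {0} s slowest {1} s mean {2:N0} s median {3} s mode {4} s ({5} instances)" -f `
    $stats.Minimum , $stats.Maximum , $stats.Average , $median , $mode.Name , $mode.Count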

The script lends itself to being run via a scheduled task as it can either email the results to a specified list of recipients or it can send an email only when specific thresholds are exceeded, such as the average time being greater than say 2 minutes.

For instance, running the following:

& '.\Get PVS boot time stats.ps1' -last 7d -output c:\boot.times.csv -gridview

This will write the boot times, in seconds, for the last seven days on the PVS server where you are running the script to the specified file. It will also display the results in a sortable and filterable grid view and output a summary like this:

Got 227 events from 1 machines : fastest 21 s slowest 30 s mean 25 s median 25 s mode 26 s (39 instances)

Or we could run the following to query more servers and send an email via an SMTP mail server if the slowest time exceeds 5 minutes in the last week:

& '.\Get PVS boot time stats.ps1' -last 7d -output c:\boot.times.csv -mailserver yourmailserver -recipients someone@yourdomain.com -slowestAbove 300 -computers pvsserver1,pvsserver2

The script has integrated help, giving details on all the command line options available, and can be run standalone or via scheduled tasks.

The script can be downloaded from here; its built-in help can be accessed via F1 in PowerShell ISE or via Get-Help.

Update 13/02/18

Now with -chartView and -gridView options to give an on-screen chart and grid view respectively.