Showing Current & Historical User Sessions

One of my pet hates, other than hamsters, is when people log on to infrastructure servers, which provide a service to users either directly or indirectly, to run a console or command when that item is available on another server which isn’t providing user services. For instance, I find people log on to Citrix XenApp Delivery Controllers to run the Studio console when, in my implementations, there will always be a number of management servers where all of the required consoles and PowerShell cmdlets are installed. They compound the issue by then logging on to other infrastructure servers to run additional consoles, which is actually more effort for them than just launching the required console instance(s) on the aforementioned management server(s). To make matters even worse, I find they quite often disconnect these sessions rather than logging off and have the temerity to leave consoles running in these disconnected sessions! How not to be in my good books!

Even if I have to troubleshoot an issue on one of these infrastructure servers, I will typically access their event logs, services, etc. via the Computer Management MMC snap-in connected remotely, and if I need to run non-GUI commands then I’ll use PowerShell’s Enter-PSSession cmdlet to remote to the server, which has much less impact than getting a full blown interactive session via mstsc or similar.

To find these offenders, I used to run quser.exe, which is what the command “query user” calls, with the /server argument against various servers to check if people were logged on when they shouldn’t have been, but I thought that I really ought to script it to make it easier and quicker to run. I then also added the ability to select one or more of these sessions and log them off.

It also pulls in details of the “offending” user’s profile in case it is too big and needs trimming or deleting. I have written a separate script for user profile analysis and optional deletion which is also available in my GitHub repository.

For instance, running the following command:

& '.\Show users.ps1' -name '^ctx2[05]\d\d' -current

will result in a grid view similar to the one below:

[Screenshot: show users ordered]

It works by querying Active Directory via the Get-ADComputer cmdlet, running quser.exe against all machines named CTX20xx and CTX25yy, where xx and yy are numerical, and displaying the results in a grid view. Sessions selected in this grid view when the “OK” button is pressed will be logged off, although PowerShell’s built-in confirmation mechanism is used so if “OK” is accidentally pressed, the world probably won’t end because of it.

The script can also be used to show historical logons on a range of servers where the range can be specified in one of three ways:

  1. -last x[smhdwy] where x is a number and s=seconds, m=minutes, h=hours, d=days, w=weeks and y=years. For example, ‘-last 7d’ will show sessions logged on in the preceding 7 days (see the sketch after this list)
  2. -sinceboot
  3. -start “hh:mm:ss dd/MM/yyyy” -end “hh:mm:ss dd/MM/yyyy” (if the date is omitted then the current date is used)
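Purely as an illustration of the idea behind the -last specifier (this is a hedged sketch and not code lifted from the script), a number plus unit suffix can be converted into the earliest time to report on with a few lines of PowerShell:

# Hedged sketch only: convert a value like '7d' into the earliest logon time to report on
$last = '7d'   ## as would be passed to -last
if( $last -match '^(\d+)([smhdwy])$' )
{
    $number = [int]$Matches[1]
    $startTime = switch( $Matches[2] )
    {
        's' { (Get-Date).AddSeconds( -$number ) }
        'm' { (Get-Date).AddMinutes( -$number ) }
        'h' { (Get-Date).AddHours( -$number ) }
        'd' { (Get-Date).AddDays( -$number ) }
        'w' { (Get-Date).AddDays( -$number * 7 ) }
        'y' { (Get-Date).AddYears( -$number ) }
    }
}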

For example, running the following:

& '.\Show users.ps1' -ou 'contoso.com/Servers/Citrix XenApp/Production/Infrastructure Servers' -last 7d

gives something not totally unlike the output below where the columns can be sorted by clicking on the headings and filters added by clicking “Add criteria”:

[Screenshot: show users aged]

Note that the OU is specified in this example as a canonical name, so can be copied and pasted out of the properties tab for an OU in AD Users and Computers rather than you having to write it in distinguished name form, although it will accept that format too. It can take a -group option instead of -ou and will recursively enumerate the given group to find all computers and the -name option can be used with both -ou and -group to further restrict what machines are interrogated.

The results are obtained from the User Profile Service operational event log and can be written to file, rather than being displayed in a grid view, by using the -csv option.
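As a flavour of where that historical data comes from (a hedged illustration, not the script’s exact query), events can be pulled remotely from that log for a given time window with Get-WinEvent; the server name here is hypothetical:

# Hedged illustration: pull a week's worth of events from the User Profile Service operational log
$computer = 'CTX2001'   ## hypothetical server name
$filter = @{
    LogName   = 'Microsoft-Windows-User Profile Service/Operational'
    StartTime = (Get-Date).AddDays( -7 )
    EndTime   = Get-Date
}
Get-WinEvent -ComputerName $computer -FilterHashtable $filter -ErrorAction SilentlyContinue |
    Select-Object -Property TimeCreated , Id , @{ Name = 'UserSID' ; Expression = { $_.UserId } } , Message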

Sessions selected when “OK” is pressed will again be logged off although a warning will be produced instead if a session has already been logged off.

If you are looking for a specific user, then this can be specified via the -user option which takes a regular expression as the argument. For instance adding the following to the command line:

-user '(fredbloggs|johndoe)'

will return only sessions for usernames containing “fredbloggs” or “johndoe”.

Although I wrote it for querying non-XenApp/RDS servers, you can point it at those too, as long as the account you use has sufficient privileges, rather than using tools like Citrix Director or EdgeSight.

The script is available on GitHub here and use of it is entirely at your own risk, although if you run it with the -noprofile option it will not show the OK and Cancel buttons so logoff cannot be initiated from the script. It requires a minimum of version 3.0 of PowerShell, access to the Active Directory PowerShell module, and pulls data from servers running Windows Server 2008 R2 upwards.

If you are querying non-English operating systems, there may be an issue since the script parses the fixed-width output of the quser command by locating the column headers, namely ‘USERNAME’, ‘SESSIONNAME’, ‘ID’, ‘STATE’, ‘IDLE TIME’ and ‘LOGON TIME’ on an English OS. You may need to either edit the script or specify the column names via the -fieldNames option.
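To make that parsing approach concrete, here is a rough sketch of the technique (not the script’s exact code): the header line gives the start position of each column, and those positions are then used to slice up every subsequent line. The server name is hypothetical:

# Rough sketch of parsing quser.exe fixed-width output using the header line for column positions
$fieldNames = 'USERNAME','SESSIONNAME','ID','STATE','IDLE TIME','LOGON TIME'
$computer = 'CTX2001'   ## hypothetical server name
$output = quser.exe /server:$computer 2>$null
if( $output -and $output.Count -gt 1 )
{
    $header = $output[0]
    $columns = @( $fieldNames | ForEach-Object { $header.IndexOf( $_ ) } )
    $output | Select-Object -Skip 1 | ForEach-Object {
        $line = $_
        $session = [ordered]@{}
        For( $index = 0 ; $index -lt $columns.Count ; $index++ )
        {
            $start = [math]::Min( $columns[ $index ] , $line.Length )
            $end = if( $index -lt $columns.Count - 1 ) { [math]::Min( $columns[ $index + 1 ] , $line.Length ) } else { $line.Length }
            $session.Add( $fieldNames[ $index ] , $line.Substring( $start , $end - $start ).Trim() )
        }
        [pscustomobject]$session
    }
}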


Profile Cleaner Utility

We EUC consultants can spend a considerable amount of time deciding on and building the most suitable user profile mechanism for our Citrix, VMware and RDS deployments, but very little, if any, time is spent doing the same for infrastructure servers. I’m not saying that this is an issue – it isn’t generally – as most people take the out of the box default, which is local profiles. However, over time as people leave, we can get disk space issues caused by these stale profiles, and even when people haven’t left, their profiles can become large without them realising, which can potentially impact the performance of these servers since a machine with a full file system generally doesn’t function well. The script described below can of course also be used on persistent XenApp/RDS servers to check for and delete stale or oversized profiles there.

Having checked this manually for rather too long, I decided to write a script to give visibility of local profiles across a range of machines pulled from Active Directory, where the machines to interrogate can be selected by a regular expression matching their name, an organisational unit (e.g. copied to the clipboard from the properties of an OU in the AD Users and Computers MMC snap-in) or an AD group.

This actually turned out to be easier than I anticipated, for once, in that I didn’t have to go anywhere near the ProfileList registry key directly since there is a WMI class, Win32_UserProfile, which contains the required information, albeit with the profile owner as a SID rather than a username, but in PowerShell it’s easy to get the username for a SID. I’ve pulled out what I think are the most useful fields but if you were to use it, say, for persistent XenApp servers using roaming profiles then you might want to pull out more of the fields.
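For example (a generic illustration rather than code taken from the script, and the machine name is hypothetical), querying the class remotely and translating each SID to an account name only takes a few lines:

# Illustration only: list non-special local profiles on a remote machine and resolve each SID to an account name
$computer = 'CTX1001'   ## hypothetical machine name
Get-WmiObject -Class Win32_UserProfile -ComputerName $computer -Filter 'Special = FALSE' | ForEach-Object {
    $account = try { ([System.Security.Principal.SecurityIdentifier]$_.SID).Translate( [System.Security.Principal.NTAccount] ).Value } catch { $_.Exception.Message }
    [pscustomobject]@{
        'Computer'  = $computer
        'Account'   = $account
        'Path'      = $_.LocalPath
        'Last Used' = if( $_.LastUseTime ) { [Management.ManagementDateTimeConverter]::ToDateTime( $_.LastUseTime ) }
    }
}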

The script requires the Active Directory PowerShell module to be present where the script is run from since it will query AD and retrieve various AD properties for the domain users associated with the profiles, to make it easy to spot users who may have left because their AD account is disabled or their last AD logon was a long time ago.

Thanks to the great PowerShell Out-GridView cmdlet, it was straightforward to take the list of user profiles selected when the “OK” button was clicked in the grid view and then delete those profiles, albeit with PowerShell prompting for confirmation before the deletions. The deletion is achieved by calling the Delete() method of the Win32_UserProfile WMI object previously returned for that profile. Obviously the script will need to be run under an account that has the rights to remotely delete profiles.
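Stripped right down, and with the script’s error handling and confirmation prompting omitted, the select-then-delete pattern looks something like this (illustrative only, with a hypothetical machine name):

# Bare-bones illustration of the grid view selection and profile deletion - not the full script
$computer = 'CTX1001'   ## hypothetical machine name
$profiles = @( Get-WmiObject -Class Win32_UserProfile -ComputerName $computer -Filter 'Special = FALSE' )
$selected = @( $profiles | Select-Object -Property SID , LocalPath , LastUseTime | Out-GridView -Title "Profiles on $computer" -PassThru )
ForEach( $choice in $selected )
{
    $profileObject = $profiles | Where-Object { $_.SID -eq $choice.SID }
    ## the script itself prompts for confirmation via PowerShell's built-in mechanism before doing this
    $profileObject.Delete()
}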

It’s very simple to use; for example, running the script with the following options will result in a grid view where any profiles that you want to delete can be selected and then the OK button pressed to delete them:

& '.\Profile Cleaner.ps1' -excludeLocal -excludeUsers '[^a-z]SVC-[a-z]' -name '^CTX\d{4}'

[Screenshot: profiles to delete]

This will exclude all local, as in non-domain, accounts and any accounts that start with SVC-, as these may be service accounts that are best left well alone unless the profile size is of concern. It will do this on all servers named CTXxxxx, where xxxx is numerical, as specified by regular expression, aka regex, which really aren’t that scary, honest!

An OU, either in canonical or distinguished name format, or AD group can be specified via the -OU and -group options respectively. The -name option can also be specified with either of these to restrict what machines are returned from the OU or group specified.

It will write the profile information to a csv file if the -csv option is specified instead of displaying it in a grid view.

Run with -verbose to get more detail as it runs such as what machine it is querying. It may seem to run slowly but that is most likely to be because it has to traverse each user’s profile in order to determine its size.

The script is available for download from GitHub here and you use it entirely at your own risk.

This is very much an interactive tool – if you need an automated mechanism for removing profiles then I would recommend looking at the delprof2 tool from Helge Klein which is available here.

Memory Control Script – Capping Leaky Processes

In the third part of the series covering the features of a script I’ve written to control process working sets (aka “memory”), I will show how it can be used to prevent leaky processes from consuming more memory than you believe they should.

First off, what is a memory leak? For me, it’s trying to remember why I’ve gone into a room but, in computing terms, it is when a developer has dynamically allocated memory in their program but not subsequently informed the operating system that they have finished with that memory. Older programming languages, like C and C++, do not have built-in garbage collection, so memory which is no longer required is not automatically released. Note that just because a process’s memory increases but never decreases doesn’t actually mean that it is leaking – it could be holding on to the memory for reasons that only the developer knows.

So how do we stop a process from leaking? Well, short of terminating it, we can’t as such, but we can limit the impact by forcing it to relinquish other parts of its allocated memory (working set) in order to fulfil memory allocation requests. What we shouldn’t do is deny the memory allocations themselves, which we could actually do with hooking methods like Microsoft’s Detours library. This is because the developer, if they even bother to check the return status of a memory allocation request before using it (not checking is what leads to the infamous error “the memory referenced at 0x00000000 could not be read/written”, aka a null pointer dereference), probably can’t do a lot if the memory allocation fails other than output an error to that effect and exit.

What we can do, or rather the OS can do, is to apply a hard maximum working set limit to the process. What this means is that the working set cannot increase above the limit so if more memory is required, part of the existing working set must be paged out. The memory paged out is the least recently used so is very likely to be the memory the developer forgot to release so they won’t be using it again and it can sit in the page file until the process exits. Thus increased page file usage but decreased RAM usage which should help performance and scalability and reduce the need for reboots or manual intervention.

Applying a hard working set limit is easy with the script, the tricky part is knowing what value to set as the limit – too low and it might not just be leaked memory that is paged out so performance could be negatively affected due to hard page faults. Too high a limit and the memory savings, if the limit is ever hit, may not be worth the effort.

To set a hard working set limit on a process we run the script thus:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB

or if the process has yet to start we can use the waiting feature of the script along with the -alreadyStarted option in case the process has actually already started:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB -waitFor leakyprocess -alreadyStarted

You will then observe in task manager that its working set never exceeds 100MB.

To check that hard limits are in place, you can use the reporting option of the script since tools like task manager and SysInternals Process Explorer won’t show whether any limits are hard ones. Run the following:

.\trimmer.ps1 -report -above 0

which will give a report similar to this where you can filter where there is a hard working set limit in place:

[Screenshot: hard working set limit]

There is a video here which demonstrates the script in action and uses task manager to prove that the working set limit is adhered to.

One way to implement this for a user would be to have a logon script that uses the -waitFor option as above to wait for the process to start, together with -loop so that the script keeps running and picks up further new instances of the process to be controlled. To implement it for system processes, such as a leaky third party service or agent, use the same approach but in a computer start-up script.
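Pulling the parameters above together, such a logon script entry might look something like the following, where the UNC path and process name are purely examples:

powershell.exe -ExecutionPolicy Bypass -File "\\yourdomain\netlogon\trimmer.ps1" -processes leakyprocess -hardMax -maxWorkingSet 100MB -waitFor leakyprocess -alreadyStarted -loop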

Once implemented, check that hard page fault rates are not impacting performance because the limit you have imposed is too low.

The script is available here and use of it is entirely at your own risk.

Changing/Checking Citrix StoreFront Logging Settings

Enabling, capturing and diagnosing StoreFront logs is not something I have to do often but, when I do, I find it time consuming to enable, and disable, logging across multiple StoreFront servers and also to check on the status of logging, since Citrix provide cmdlets to change tracing levels but not to query them, as far as I can tell.

After looking at reported poor performance of several StoreFront servers at one of my customers, I found that two of them were set for verbose logging which wouldn’t have been helping. I therefore set about writing a script that would allow the logging (trace) level to be changed across multiple servers and also to report on the current logging levels. I use the plural as there are many discrete modules within StoreFront and each can have its own log level and log file.

So which module needs logging enabled? The quickest way, which is all the script currently supports, is to enable logging for all modules. The Citrix cmdlet that changes trace levels, namely Set-DSTraceLevel, can seemingly be used more granularly but I have found insufficient detail to be able to implement that in my script.

The script works with clustered StoreFront servers in that you can specify just one of the servers in the cluster via the -servers option together with the -cluster option which will (remotely) read the registry on that server to find where StoreFront is installed so that it can load the required cmdlets to retrieve the list of all servers in the cluster.

To set the trace level on all servers in a StoreFront cluster run the following:

& '.\StoreFront Log Levels.ps1' -servers storefront01 -cluster -traceLevel Verbose

The available trace levels are:

  • Off
  • Error
  • Warning
  • Info
  • Verbose

To show the trace levels, without changing them, on these servers and check that they are consistent on each server and across them, run the following:

& '.\StoreFront Log Levels.ps1' -servers storefront01 -cluster -grid

Which will give a grid view similar to this:

[Screenshot: storefront log settings]

It will also report the version of StoreFront installed although the -cluster option must be used and all servers in the cluster specified via -servers if you want to display the version for all servers.

The script is available here and you use it entirely at your own risk although I do use it myself on production StoreFront servers. Note that it doesn’t need to run on a StoreFront server as it will remote commands to them via the Invoke-Command cmdlet. It has so far been tested on StoreFront versions 3.0 and 3.5 and requires a minimum of PowerShell version 3.0.

Once you have the log files, there’s a script introduced here that stitches the many log files together and then displays them in a grid view, or csv, for easy filtering to hopefully quickly find anything relevant to the issue being investigated.

For those of an inquisitive nature, the retrieval side of the script works by calling the Get-DSWebSite cmdlet to get the StoreFront web site configuration, which includes the applications, and for each of these it finds the settings by examining the XML in each web.config file.
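As a rough idea of that second part only – this is a hedged sketch rather than the script’s code, the XML layout can vary between StoreFront versions and the install path below is just the assumed default (the script actually reads the real path from the registry) – the web.config files can be mined for trace switch values along these lines:

# Rough illustration only: pull switchValue attributes out of StoreFront web.config files
$storeFrontFolder = 'C:\Program Files\Citrix\Receiver StoreFront'   ## assumed default install path
Get-ChildItem -Path $storeFrontFolder -Filter 'web.config' -Recurse -ErrorAction SilentlyContinue | ForEach-Object {
    $file = $_.FullName
    $config = [xml]( Get-Content -Path $file -Raw )
    ForEach( $node in $config.SelectNodes( '//*[@switchValue]' ) )
    {
        [pscustomobject]@{
            'File'  = $file
            'Name'  = $node.GetAttribute( 'name' )
            'Level' = $node.GetAttribute( 'switchValue' )
        }
    }
}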

Don’t forget to return logging levels to what they were prior to your troubleshooting although I would recommend leaving them set as “Error” as opposed to “Off”.

Memory Control Script – Fine Tuning Process Memory Usage

In part 1 of this series I introduced a script, which consists of over 900 lines of PowerShell although over 20% of that is comments, that ultimately just calls a single Windows API, namely SetProcessWorkingSetSizeEx, in order to make more memory available on a Windows computer by reducing the working set sizes of targeted processes. This is known as memory trimming but I’ve always had an issue with this term since the dictionary definition of trimming means to remove a small part of something, whereas default memory trimming, if we use a hair cutting analogy, is akin to scalping the victim.

This “scalping” of working sets can be counter productive since, although more memory becomes available for other processes/users, the scalped processes quickly require some of this memory, which has potentially been paged out. That can lead to excessive hard page faults on a system when the trimmed memory is mapped back into the processes, and thus performance degradation despite there being more memory available.

So how do we address this such that we actually do trim excessive memory from processes but leave sufficient for them to continue operating without needing to retrieve that trimmed memory? Well, unfortunately it is not an exact science but there are options to the script which can help prevent the negative effects of over trimming. This is in addition to the inactivity points mentioned in part 1, where the user’s processes are unlikely to be active so hopefully shouldn’t miss any of their memory – namely idle, disconnected or when the screen is locked.

Firstly, there is the parameter -above which will only trim processes whose working set exceeds the value given. The script has a default of 10MB for this value as my experience points to this being a sensible value below which there is no benefit to trimming. Feel free to play around with this figure although not on a production system.

Secondly, there is the -available parameter which will only trim processes when the available memory is below the given figure which can be an absolute value such as 500MB or a percentage such as 10%. The available memory figure is the ‘Available MBytes’ performance counter in the ‘Memory’ category. Depending on why you are trimming, this option can be used to only trim when available memory is relatively low although not so low that Windows itself indiscriminately trims processes. If I was trying to increase the user density on a Citrix XenApp or RDS server then I wouldn’t use this parameter.
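For reference – this is just how such a check could be made, not the script’s own code – the counter value and a percentage equivalent are easy to obtain:

# Illustration only: read available memory and express it as a percentage of installed RAM
$availableMB = ( Get-Counter -Counter '\Memory\Available MBytes' ).CounterSamples[0].CookedValue
$totalMB = ( Get-CimInstance -ClassName Win32_ComputerSystem ).TotalPhysicalMemory / 1MB
$percentAvailable = [math]::Round( 100 * $availableMB / $totalMB , 1 )
"Available: {0:N0}MB of {1:N0}MB ({2}%)" -f $availableMB , $totalMB , $percentAvailable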

Thirdly, there is a -background option which will only trim processes for the current user – so it can only be used in conjunction with the -thisSession argument – whose windows are not the foreground window, as returned by the GetForegroundWindow API. The theory is that non-foreground windows are hosting processes which are not actively being used so shouldn’t have a problem with their memory being trimmed.

Lastly, we can utilise the working set limit feature built into Windows and accessed via the same SetProcessWorkingSetSizeEx API. Two of the parameters passed to this function are the minimum and maximum working set sizes for the process being affected. When trimming, or scalping as I tend to call it, both of these are passed as -1 which tells Windows to remove as many pages as possible from the working set. However, when they are positive integers, this sets a limit instead such that working sets are adjusted to meet those limits. These limits can be soft or hard – soft limits effectively just apply that limit when the API is called but the limits can then be exceeded, whereas hard limits can never be breached. We can therefore use soft limits to reduce a working set to a given value without scalping it. Hard limits can be used to cap processes that leak memory, which will be covered in the next article, although there is a video here showing it for those who simply can’t wait.
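For the curious, here is a minimal, standalone P/Invoke sketch of that API call – it is not lifted from the script, the process name is hypothetical and the flag values are those documented for the API:

# Minimal illustration of calling SetProcessWorkingSetSizeEx from PowerShell - not the script's own code
Add-Type -Namespace 'Win32' -Name 'Memory' -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx( IntPtr hProcess, UIntPtr dwMinimumWorkingSetSize, UIntPtr dwMaximumWorkingSetSize, uint Flags );
'@

$QUOTA_LIMITS_HARDWS_MIN_DISABLE = 0x2   ## minimum stays a soft limit
$QUOTA_LIMITS_HARDWS_MAX_ENABLE  = 0x4   ## maximum becomes a hard limit

$process = Get-Process -Name 'leakyprocess' | Select-Object -First 1   ## hypothetical process name
## Cap the working set at 100MB as a hard limit with a nominal 1MB soft minimum
[Win32.Memory]::SetProcessWorkingSetSizeEx( $process.Handle , [UIntPtr][uint64]1MB , [UIntPtr][uint64]100MB , ( $QUOTA_LIMITS_HARDWS_MIN_DISABLE -bor $QUOTA_LIMITS_HARDWS_MAX_ENABLE ) )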

Here is an example of using soft working set limits for an instance of the PowerShell_ISE process. We start with the process consuming around 286MB of memory as I have been using it (in anger, as you do!):

[Screenshot: powershell ise before trim]

If we just use a regular trim, aka scalp, on it then the working set reduces to almost nothing:

[Screenshot: powershell ise just after trim]

The -above parameter is actually superfluous here but I thought I’d demonstrate its use, although zero is not a sensible value to use in my opinion.

However, having trimmed it, if I return to the PowerShell_ISE window and have a look at one of my scripts in it, the working set rapidly increases by fetching memory from the page file (or the standby list if it hasn’t yet been written to the page file – see this informative article for more information):

[Screenshot: powershell ise after trim and usage]

If I then actually run and debug a script the working set goes yet higher again. However, I then switch to Microsoft Edge, to write this blog post, so PowerShell_ISE is still open but not being used. I therefore reckon that a working set of about 160MB is ample for it and thus I can set that via the following where the OS trims the working set, by removing enough least recently used pages, to reach the working set figure passed to the SetProcessWorkingSetSizeEx API that the script calls:

[Screenshot: powershell ise soft max working set limit]
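Reconstructed from the parameters already covered in this series (so treat it as illustrative rather than a copy of what is in the screenshot), the command is along these lines – note the absence of -hardMax:

.\trimmer.ps1 -processes powershell_ise -maxWorkingSet 160MB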

However, because I have not also specified the -hardMax parameter, the limit is a soft one and can therefore be exceeded if required, but I have still saved around 120MB from that one process by trimming its working set down to the limit.

Useful, but are you really going to watch to see what the “resting” working set is for every process? Well, I know that I wouldn’t, so use this last technique for your main apps/biggest consumers or just use one of the first three techniques. When I get some time, I may build this monitoring feature into the script so that it can trim even more intelligently but since the script is on GitHub here, please feel free to have a go yourself.

Next time in this series I promise that I’ll show how the script can be used to stop leaky processes from consuming more memory than you believe they should.

VMware integration added to Citrix PVS device detail viewer & actioner

You may be familiar with the script I wrote, previously covered here and available on GitHub here, that allows you to get a single pane view, either in csv or on-screen in a filterable and sortable grid view, of all your Provisioning Services devices together with information from Delivery controllers, such as machine catalogue and delivery group membership as well as registration and maintenance mode status. When using the grid view, you can select any number of devices to then get a GUI that allows operations like booting or shutting them down and removing from PVS and/or DDC.

When working at a customer recently, I came across a number of VMs in VMware that were named using the XenApp worker naming scheme but weren’t being shown in the PVS or Studio consoles. Being the inherently lazy person that I am, I didn’t fancy deleting these individually in VMware and Active Directory, if they even existed in the latter, so I decided that it would be useful to add extra functionality to the script by getting it to add VMs that match a specific naming pattern, so as not to pull in infrastructure VMs for example, but that haven’t already been found via the Citrix PVS and DDC data. So I implemented this, utilising VMware PowerCLI, and then also added a “Remove from Hypervisor” button to the action GUI so that these orphans can be removed in one go, including their hard drives.

To show VMs that don’t exist in either PVS or DDC in the grid view, simply add filters for where the DDC and PVS servers are empty.

[Screenshot: show orphaned VMs]

It will try to get AD account details too, such as the account creation and last logon dates and the description, in order to help you figure out what they are and whether they have recently been used. They may not exist in AD of course, but that will be apparent in the data displayed, unless you don’t have domain connectivity/rights or the ActiveDirectory PowerShell module available.

This additional functionality is enabled by specifying the -hypervisors argument on the command line and passing it a comma separated list of your vCenter servers. If you do not have cached credentials (e.g. via New-VICredentialStoreItem) or pass through authentication working then it will prompt for credentials for each connection. You must have already installed the VMware PowerCLI package corresponding to the version of vSphere that you are using. There are examples of the command line usage in the help built into the script.

I then realised that, in addition to the information already gathered that allows easy identification of devices booting off the wrong vDisk/vDisk version and devices that are overdue a reboot for example, I could also pull in the following VMware details, again to help identify where VMs are incorrectly configured:

  • Number of CPUs
  • Memory
  • Hard drives (the size of each assigned)
  • NICs (the type of each assigned, e.g. “vmxnet3”)
  • Hypervisor

You can then sort or filter in the grid view or csv to uncover misconfigured VMs.

[Screenshot: vmware info]

The downside to all this extra information is that there are now up to 42 (a coincidence!) columns of information to be displayed in the grid view but, unfortunately, versions of PowerShell prior to 5.0 can only display a maximum of 30 columns. Csv exports aren’t affected by this limitation though. As I am often heard saying to my kids, it’s better to have something and not need it rather than need something and not have it – you can remove columns in the grid view, by right clicking on any column header, or in Excel, or whatever you use to view the csv. If this will impact you, consider upgrading as there are a whole load more PowerShell features that you’re missing.

To restrict what VMs are returned by the Get-VM cmdlet, you will probably need to use the -name argument together with a regular expression (aka regex) which will only match your XenApp/XenDesktop workers. For instance, if your VMs are called CTX1001 through CTX1234 and also CTX5001 onwards then use something like the following:

'^CTX[15]\d{3}$'

The -name parameter is also used to restrict what PVS devices are included so you can just include a subset if you have, say, a sub-naming convention to name development XenApp servers differently to production ones, e.g. CTXD1234 versus CTXP4567, which will make it quicker.

To check that a regular expression you build matches what you expect before you run the script, there are on-line regex checkers available but I just use PowerShell. For instance, typing the following in a PowerShell session will display “True”:

'CTX1042' -match '^CTX[15]\d{3}$'

I also decided to add a progress indicator since, with hundreds of devices, it can take several minutes to collect all of the relevant data although data is cached where possible to minimise the number of remote calls required. This can be disabled with -noProgress.

If you do have orphaned VMs and you want to remove them, highlight them in the grid view and then click “OK” down in the bottom right hand corner. Ctrl-A can be used to select all items in the grid view. This will then give you the action GUI (ok, not the prettiest user interface ever but it does work!):

[Screenshot: pvs device actioner gui vm]

where you can power off the VMs if they are on and then delete them from the hypervisor and from AD, all without having to go to any product consoles, assuming that you are running the script under an account which has the necessary rights. When you quit this GUI, the devices that you originally selected in the grid view will be placed into the clipboard in case you need to paste them into a document, etc.

Using -save, -registry and, optionally, -serverset will also save/retrieve the server(s) specified by -hypervisors to the registry. This means that you don’t have to remember server names every time you run the script – handy when you deal with lots of different customers like I do.

Be aware that it needs to be run where the PVS and DDC cmdlets are available, so I would recommend installing it on a dedicated management server which does not host the PVS or DDC roles. You can then also use those consoles, and others you install, on there so that you don’t risk degrading the performance of key infrastructure servers. Also, don’t forget VMware PowerCLI and the AD PowerShell module (part of the RSAT feature).

Whilst I have checked the operation of this script as much as one man in West Yorkshire can, if you use it then you do so entirely at your own risk and I cannot be held responsible for any unintentional, or intentional, undesired effects. Always double, and even triple, check before you delete anything!

Having said that, I hope it is as useful for you as it is for me – for a reporting and status tool, I use it daily (weekends included!).

Memory Control Script – Reclaiming Unused Memory

This is the first in a series of articles which describes the operation of a script I have written for controlling process memory use on Windows.

Here we will cover the use of the script to trim the working sets of processes such that more memory becomes available in order to run more processes or, in the case of Citrix XenApp and Microsoft RDS, to run more user sessions, without having them use potentially slower page file memory (not to be confused with “virtual” memory!). The working set of a process is defined here as “the set of pages in the virtual address space of the process that are currently resident in physical memory”. Great, but what relevance does that have here? Well, what it means is that processes can grab memory but not necessarily actually need to use it. I’m not referring to memory leaks, although this script can deal with them too as we’ll see in a later article, but buffers and other pieces of memory that the developer(s) of an application have requested but, for whatever reason, aren’t currently using. That memory could be used by other processes, or by other users on multi-session systems, but until the application returns it to the operating system, it can’t be reused. Cue memory trimming.

Memory trimming is where the OS forces processes to empty their working sets. They don’t just discard this memory, since the processes may need it at a later juncture and it could already contain data; instead the OS writes it to the page file for them such that it can be retrieved at a later time if required. Windows will force memory trimming if available memory gets too low but at that point it may be too late and it is indiscriminate in how it trims.

Ok, so I reckon that it’s about time to introduce the memory control script that I’ve written, which is available here and requires PowerShell version 3.0 or higher. So what does it do? Trims memory from processes. How? Using the Microsoft SetProcessWorkingSetSizeEx API. When? Well, when do you think it should trim the memory? Probably not when the user is using the application because that may cause slow response times if the memory trimmed is actually required, such that it has to be retrieved from the page file via hard page faults. So how do we know when the user (probably) isn’t using the application? Well, I’ve defined it as the following:

  1. No keyboard or mouse input for a certain time (the session is idle)
  2. The session is locked
  3. The session has become disconnected in the case of XenApp and RDS

These are the supported/built-in triggers but you are obviously at liberty to call the script whenever you want. They are achieved by calling the script via scheduled tasks but do not fret, dear reader, as the script itself will create, and delete, these scheduled tasks for you. They are created per user since the triggers only apply to a single user’s session. The idea here is that on XenApp/RDS, a logon action of some type, e.g. via GPO, would invoke the script with the right parameters to create the scheduled tasks and automatically remove them at logoff. In its simplest form we would run it at logon thus:

.\Trimmer.ps1 -install 600 -logoff

Where the argument to -install is in seconds and is the idle period that when exceeded will cause memory trimming to occur for that session. The scheduled tasks created will look something like this:

[Screenshot: trimmer scheduled tasks]

Note that they actually call wscript.exe with a .vbs script to invoke the PowerShell because I found that even invoking powershell.exe with the “-WindowStyle Hidden” argument still causes a window to pop up very briefly when the task runs, whereas this does not happen with the vbs approach as it uses the Run method of WScript.Shell and explicitly tells it not to show a window. The PowerShell script will create the vbs script in the same folder as the script itself.
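The wrapper the script creates is not reproduced here, but the general technique looks something like this – illustrative only, with made up file names and paths:

# Illustration of the hide-the-window technique: write a tiny .vbs wrapper that launches PowerShell with no visible window
$vbsPath = Join-Path -Path $env:TEMP -ChildPath 'RunHidden.vbs'   ## hypothetical location
@'
Dim command
command = "powershell.exe -ExecutionPolicy Bypass -File ""C:\Scripts\Trimmer.ps1"" -thisSession"
CreateObject( "WScript.Shell" ).Run command, 0, False
'@ | Set-Content -Path $vbsPath
## the scheduled task action would then be: wscript.exe //B <path to RunHidden.vbs>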

The -logoff argument causes the script to stay running but all it is doing is waiting for the logoff to occur such that it can delete the scheduled tasks for this user.

By default it will only trim processes whose working sets are higher than 10MB since trimming memory from processes using less than this probably isn’t worthwhile although this can be changed by specifying a value with the -above argument.

So let’s see it working – here is a screenshot of task manager, sorted on decreasing working set sizes, when I have just been using Chrome.

[Screenshot: processes before]

I then lock the screen and pretty much immediately unlock it and task manager now shows these as the highest memory consumers:

[Screenshot: processes after]

If we look for the highest consuming process, pid 16320, we can see it is no longer at the top but is quite a way down the list as its working set is now 48MB, down from 385MB.

[Screenshot: chrome was big]

This may grow when it is used again but if it doesn’t grow to the same level as it was prior to the trimming then we have some extra memory available. Multiply that by the number of processes trimmed, which here will just be those for the one user session since it is on Windows 10, and we can start to realise some savings. With tens of users on XenApp/RDS, or more, the savings can really mount up.

If you want to see what is going on in greater detail, run the script with -verbose and for the scheduled tasks, also specify the -logfile parameter with the name of a log file so the verbose output, plus any warnings or errors, will get written to this file. Add -savings to get a summary of how much memory has been saved.

Running it as a scheduled task is just one way to run it – you can simply run it on the command line without any parameters at all and it will trim all processes that it has access to.

In the next article in the series, I’ll go through some of the other available command line options which give more granularity/flexibility to the script and can cap leaky processes.