Install at Boot on Citrix XenApp PVS Servers

The Problem

Whilst designing a XenApp 7.x infrastructure for a customer, a requirement surfaced that they wanted to update their three in-house developed applications, written by three separate teams, on a weekly basis. Since they did not have App-V packaging skills, weekly repackaging would have been cost prohibitive, so we looked at using Citrix AppDisks: we would hand Server 2012R2 virtual machines over to the customer with AppDisks in write mode, let them install and test their apps and then hand them back to us for sealing and promotion to UAT and thence Production. Unfortunately, we hit a number of issues with the AppDisk technology which meant that we had to seek an alternative way of delivering these applications.

The Solution

Using a computer startup PowerShell script, assigned via Group Policy to the OU containing the servers earmarked for the customer’s applications, I came up with a robust mechanism to install the applications during boot. The Server 2012R2 VMs with the Citrix VDA installed, which provided the published applications and desktops to the end users, booted from Provisioning Services with “Cache in device RAM with overflow to local disk”, so we needed to ensure that the software installation did not consume the RAM cache as this could have degraded performance. It was deemed too high risk to install the software explicitly to a drive other than the system drive, namely C:, even where the installers allowed it, particularly as some of the required prerequisite packages were quite old and out of support. We therefore added another virtual hard disk to the VMs which the installation script would find and format, then create symbolic links from the installation folders on the C: drive to folders on the additional drive, so that the software could be installed without using C: drive space.

The customer’s previous XenApp 6.x implementation had sprawled to over twenty different PVS vDisks, including ones the developers had used continuously in private, writeable, mode for many years, making them very different from the rigidly change controlled production images. The new solution therefore needed to keep the number of vDisks to an absolute minimum, so the install-on-boot apps would install on the single base image that was also used for COTS (Commercial Off The Shelf) apps.

Solution Detail

Support for Multiple Applications

Since there were three distinct applications, each requiring different installers to be run, and possibly more applications to come, I tried to keep the complexity low, particularly in terms of scripts, since maintaining multiple scripts could prove problematic for BAU operations. I therefore used AD groups to designate which XenApp servers got which applications, with the application name forming part of the group name, so it was simple, via good old regular expressions, to figure out which application a server should be getting by enumerating its AD group memberships via ADSI (so that the Active Directory PowerShell module is not required):

$filter = "(&(objectCategory=computer)(objectClass=computer)(cn=$env:COMPUTERNAME))"
$properties = ([adsisearcher]$filter).FindOne().Properties

## We will get computer group memberships to decide what to install
if( $properties )
{
    [string[]]$groups = @( $properties.memberof )
    if( ! $groups -or ! $groups.Count )
    {
        ## Already determined via computer name that we are a custom app server so having no groups means we do not know what to install
        $fatalError = "Failed to get any AD groups via $filter therefore unable to install any custom applications"
        throw $fatalError
    }
    Write-Verbose "Found $($groups.Count) groups via $filter"
    ForEach( $group in $groups )
    {
        ## match group name against application naming convention
    }
}
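As a hedged illustration of the group name parsing, the loop body might extract the application and environment with a regular expression. The group names and the XA_<environment>_<application> convention below are invented for the example, not the customer’s actual convention:

```powershell
## Illustrative only: these group names and the naming convention are assumptions
[string[]]$groups = @(
    'CN=XA_DEV_PayrollApp,OU=Citrix,DC=example,DC=local' ,
    'CN=SomeOtherGroup,OU=Groups,DC=example,DC=local' )

## Assumed convention: XA_<environment>_<application>
[string]$groupNamePattern = '^CN=XA_(DEV|UAT|PRD)_([^,]+),'

ForEach( $group in $groups )
{
    if( $group -match $groupNamePattern )
    {
        $environment = $Matches[ 1 ] ## e.g. DEV
        $application = $Matches[ 2 ] ## e.g. PayrollApp
        Write-Verbose "This server should get application `"$application`" ($environment)"
    }
}
```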

This was preferred to using different OUs for the different apps since only one copy of the startup script was then required and the chances of a server being in the wrong OU were reduced, especially in Production where a single OU was used both for servers providing these custom apps and for those not providing them. The script differentiated between the two types of server via the NetBIOS name, as a different naming convention was used for those providing custom apps, so the startup script could exit almost immediately if the NetBIOS name was not that of a custom apps server and thus not noticeably delay the boot.

In order to keep it simple from a Citrix Studio perspective, we used Citrix tags to designate each application so that we could use a single Delivery Group for all of the custom applications, with tag restrictions on Application Groups to direct them to the correct XenApp server. This was in preference to using multiple Delivery Groups. Unfortunately, there did not appear to be an easy and reliable method of getting tag information on the XenApp server during boot, otherwise we would have considered using tags alone as opposed to tags and AD groups.

Disk Preparation

The startup script enumerates local disks by way of the Get-Disk cmdlet, filtering on a friendly name of “VMware*”, since that is the hypervisor in use, and where the disk size is 4GB or less so that it does not try to use the non-persistent system drive or the persistent drive used for the PVS overflow disk. These filters are passed to the script as parameters with defaults to make the script more easily repurposed.

$disks = Get-Disk -FriendlyName $diskDriveName | Where-Object { $_.Size -le $agileDriveSize }

if( ! $disks -or $disks.GetType().IsArray )
{
    $fatalError = "Unable to find single drive of type $diskDriveName of size $($agileDriveSize / 1GB)GB or less"
    throw $fatalError
}

If the disk’s partition style is “raw” then it is initialised; otherwise all of its partitions are deleted in order to remove all vestiges of any previous application installation.

if( $disks.PartitionStyle -eq 'raw' )
{
    $disks = $disks | Initialize-Disk -PartitionStyle MBR -PassThru
}
else ## partitions exist so delete them
{
    $disks | Get-Partition | Remove-Partition -Confirm:$false
}

The drive is then formatted and a drive letter assigned, although the drive letter is not hard-coded, which removes the chance of clashing with other disks and optical drives.

$formatted = $disks | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel $label -Confirm:$false

if( ! $? -or ! $formatted -or [string]::IsNullOrEmpty( $formatted.DriveLetter ) )
{
    $fatalError = "Failed to format drive with label `"$label`""
    throw $fatalError
}

Finally, permissions are changed on the root of the new drive since non-admin users get write access by default which is not desired (or secure!).
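The permission change itself can be sketched with icacls; the decision to remove inheritance and the exact access granted below are assumptions for illustration, not the customer’s actual ACL (and the built-in group names shown are for an English language OS):

```powershell
## Hypothetical sketch: lock down the root of the new drive so non-admins cannot write to it
## In the real script the drive letter comes from $formatted.DriveLetter returned by Format-Volume
[string]$driveRoot = 'D:\'

## Remove inherited ACEs then grant full control to admins/SYSTEM and read/execute to users (assumed requirements)
$null = icacls.exe $driveRoot /inheritance:r /grant 'Administrators:(OI)(CI)F' 'SYSTEM:(OI)(CI)F' 'Users:(OI)(CI)RX'
if( ! $? )
{
    throw "Failed to set permissions on $driveRoot"
}
```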

As the underlying storage tier had built-in deduplication, we hoped/believed that the overhead of say ten instances of this additional disk for one application was nowhere near ten times the size of the data written to one disk.

If using the XenDesktop Setup Wizard within the PVS console to create one or more VMs with additional disks, ensure that the “UseTemplateCache” registry value is set to 0 on the machine where the PVS MMC snap-in is run, and that it is set before the mmc process is started, otherwise additional disks in your template VM will not appear in the newly created VMs. See this article for more information.


Symbolic Links

Before the installation script for the custom applications can be run, symbolic links have to be created from the system (C:) drive to the newly created drive in order to preserve the PVS RAM cache. This was achieved by having a simple text file for each application with one folder per line, where the folder was a location on C: in which that app would install software components. For instance, a line in the text file might be “\ProgramData\Chocolatey” which would result in the following PowerShell code being executed with $dest = “D:\ProgramData\Chocolatey” and $source = “C:\ProgramData\Chocolatey”:

if( ! ( Test-Path -Path $dest -PathType Container -ErrorAction SilentlyContinue ) )
{
    $newDir = New-Item -Path $dest -ItemType Directory -Force
    if( ! $newDir )
    {
        $fatalError = "Failed to create folder $dest"
        throw $fatalError
    }
}

## Creating symbolic links via New-Item requires PowerShell 5.0+
$link = New-Item -ItemType SymbolicLink -Path $source -Value $dest
if( ! $link )
{
    $fatalError = "Failed to create link $source to $dest"
    throw $fatalError
}

Where D: is the drive letter that was assigned to the 4GB additional drive.

One slight hiccup we encountered was that although the startup script was running with administrative privileges, it still did not have sufficient privilege to create the symbolic links (which also require PowerShell version 5.x). I therefore added functionality to the script to take a local copy of the running script and relaunch it under the system account using the SysInternals psexec utility.
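A sketch of that relaunch mechanism follows; the assumption here is that psexec.exe is on the path and its EULA handling is done via -accepteula, and the exact arguments are illustrative rather than the script’s actual ones:

```powershell
## Hypothetical sketch: copy the running script locally and re-run it as SYSTEM via psexec
[string]$localCopy = Join-Path -Path $env:temp -ChildPath (Split-Path -Path $PSCommandPath -Leaf)
Copy-Item -Path $PSCommandPath -Destination $localCopy -Force

## -s runs the process under the local system account
$process = Start-Process -FilePath 'psexec.exe' -ArgumentList @( '-accepteula' , '-s' ,
    'powershell.exe' , '-ExecutionPolicy' , 'Bypass' , '-File' , "`"$localCopy`"" ) -Wait -PassThru

if( ! $process -or $process.ExitCode -ne 0 )
{
    throw "Elevated copy of script failed with exit code $($process | Select-Object -ExpandProperty ExitCode)"
}
```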

Local Administrator Access

So that this solution was not onerous on BAU support and did not introduce delays for the customer’s developers, specific AD groups were added to the local administrators group during boot if the server was a development one, as opposed to UAT or Production. Whether a server was a development one, which the customer had full control over in terms of rebooting since it booted from a read-only PVS vDisk, was simply gleaned from the string “_DEV_” appearing at an expected position in the same AD group name used to determine which application was to be installed on it. On UAT and Production servers, no changes were made to the local administrators group membership.

There is a script here which uses the same code to change local group membership.

Application Installation

From our perspective, the installation of the customer’s applications was straightforward since they were responsible for the application installation scripts: once the additional drive was ready, the startup script simply had to call the customer’s script, checking for any exceptions so that these could be percolated up into our error handling mechanism. Having determined which of the three applications was to be installed on a particular XenApp server by way of its AD group membership, as previously described, the startup script would invoke the “XenApp.Boot.Time.Installer.ps1” script in the folder for that application, or throw a fatal error if the script did not exist, could not be accessed or raised an exception itself.
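The invocation and exception percolation can be sketched as a small function; the share layout of one folder per application is taken from the article but the function and parameter names are assumptions:

```powershell
## Sketch: call the customer's per-application installer and percolate any failure upwards
Function Invoke-CustomAppInstaller
{
    Param( [string]$installShare , [string]$application )

    ## Assumed layout: <share>\<application>\XenApp.Boot.Time.Installer.ps1
    $installerScript = Join-Path -Path $installShare -ChildPath ( Join-Path -Path $application -ChildPath 'XenApp.Boot.Time.Installer.ps1' )

    if( ! ( Test-Path -Path $installerScript -PathType Leaf -ErrorAction SilentlyContinue ) )
    {
        throw "Installer script `"$installerScript`" missing or inaccessible"
    }
    try
    {
        & $installerScript ## any exception it raises percolates up to our error handling
    }
    catch
    {
        throw "Installer script `"$installerScript`" failed: $_"
    }
}
```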

In the development environment, where the customer had full control over the share containing their install script and Chocolatey packages and also had local admin access to the XenApp servers hosting their applications, the customer could test their own script whenever they wanted by simply rebooting their development servers.

In the UAT and Production environments, the customer had no local administrator access to the servers or write access to the share containing their install script and packages – a change request was required for us to copy the files from the development shares into the production ones, albeit via another PowerShell script with simple prompts to reduce the risk of BAU operations making incorrect changes. The Delivery Groups containing these servers had a weekly reboot set to commence at 0200 on a Sunday.

One of the few limitations of this approach is that any software requiring a server reboot during installation cannot be supported: since the servers are PVS booted in shared mode, the reboot would cause the changes to be lost and an infinite cycle of reboots would ensue. However, the nature of the applications, particularly as they would not be in use during installation since the server would be starting up, meant this was unlikely to happen. It would be more likely that a change to a required component, such as the .NET Framework, would require a reboot, which is covered later.

Multiple Datacentre Considerations

The whole solution was provided out of two datacentres for resiliency, so the solution for these bespoke apps also had to fit within this. To that end, two shares were provided for the UAT and Production environments such that the startup script would examine the IPv4 address of the booting XenApp server and select the installation share local to that datacentre, with a mechanism to fall back to the share in the other datacentre if the local one was not available. Data was replicated between the two shares by the script that BAU operations used when promoting the customer’s scripts and packages from Development after a change request had been approved. This script had its own sanity checking, such as ensuring that there actually were changes, additions or removals to the files being promoted, as it kept a copy of the previous installation files; this was achieved by checksumming files rather than just comparing file sizes and datestamps.
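The share selection with fallback can be sketched as below; the subnets and share paths are invented for the example and the real script’s logic may differ:

```powershell
## Hypothetical sketch: pick the installation share local to this server's datacentre, falling back to the other
$ipAddress = ( Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.InterfaceAlias -notmatch 'Loopback' } | Select-Object -First 1 ).IPAddress

## Assumed subnet per datacentre, local share listed first, remote share second as the fallback
[hashtable]$sharesBySubnet = @{
    '10.1.' = @( '\\dc1fileserver\apps$' , '\\dc2fileserver\apps$' )
    '10.2.' = @( '\\dc2fileserver\apps$' , '\\dc1fileserver\apps$' )
}

$installShare = $null
ForEach( $subnet in $sharesBySubnet.Keys )
{
    if( $ipAddress -and $ipAddress.StartsWith( $subnet ) )
    {
        ## first reachable share wins, so local is preferred but remote is the fallback
        $installShare = $sharesBySubnet[ $subnet ] | Where-Object { Test-Path -Path $_ -ErrorAction SilentlyContinue } | Select-Object -First 1
        break
    }
}
if( ! $installShare )
{
    throw "Unable to find a reachable installation share for IP address $ipAddress"
}
```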

At the Citrix level, there were two machine catalogues containing the bespoke app servers – one for each datacentre although this did not affect the startup script at all other than as mentioned in the previous paragraph.

Testing PVS vDisk Changes

Since the bespoke app servers boot off the same vDisk as the normal XenApp servers, in order to reduce complexity, regular updating and patching of the base disk occurs which also needs to be incorporated into the bespoke app servers to ensure they continue functioning as expected. In the development environment, the bespoke app servers are not subject to an automated reboot schedule, unlike the normal servers, since the developers are in control of their servers. It was agreed with the customer that there would be a grace period of ten working days from when a new vDisk version was put into production before any development bespoke app servers which had not already been rebooted would be forcibly rebooted by BAU operations. In order to automate this, a “message of the day” (MOTD) mechanism was implemented which launches notepad with a text file when an administrator logs on, where the text file contains the date by which the reboot needs to be performed and the reasons why. The creation of this startup item and the contents of the text file are part of the script used to promote vDisks, so it is all automated, and when a server is rebooted the startup item is removed by the startup script so that erroneous reboot notifications are not shown at admin logon. A scheduled task could have been created to perform the forced reboot after ten days, and removed at boot by the startup script, but the customer did not want this functionality automated.
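The MOTD startup item can be sketched as follows; the file paths, wording and the use of a Startup folder shortcut are assumptions about how such a mechanism might be wired up:

```powershell
## Hypothetical sketch of the MOTD reboot reminder created by the vDisk promotion script
[string]$motdFile = Join-Path -Path $env:ProgramData -ChildPath 'MOTD\reboot.txt'

$null = New-Item -Path (Split-Path -Path $motdFile) -ItemType Directory -Force
## Deadline text is illustrative - the real script would calculate ten working days
Set-Content -Path $motdFile -Value "Please reboot this server by $((Get-Date).AddDays( 14 ).ToString( 'dd/MM/yyyy' )) to pick up the new vDisk version"

## Shortcut in the all-users Startup folder so notepad shows the message at admin logon
[string]$startupFolder = [Environment]::GetFolderPath( 'CommonStartup' )
$shell = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut( (Join-Path -Path $startupFolder -ChildPath 'Reboot reminder.lnk') )
$shortcut.TargetPath = 'notepad.exe'
$shortcut.Arguments = "`"$motdFile`""
$shortcut.Save()
```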

As mentioned earlier, there are also potentially customer instigated PVS base disk changes where the developers need to test a change to one of their components which requires a reboot, such as the .NET Framework or database drivers. This is catered for in development by making the development servers boot off a copy of the main vDisk, which most of the time is identical but can have a new version created if required. Once the new version with the potential changes has been created and promoted to a production version, just for the development servers, the customer can then test their applications. They subsequently either raise a change request for the change to be incorporated into the main vDisk, which will eventually be promoted to UAT and Production, or ask for the changes to be reverted if they do not need them, in which case the development servers are rebooted off the previous PVS production version and the newer disk version discarded.

Error Handling and SCOM Integration

Everything is error checked and double checked, with exception handlers writing to a log file which is the first port of call for troubleshooting. If a fatal error, as in an exception, is detected, the Citrix Broker Agent service is stopped and disabled by the script so that the affected server cannot be accessed by end users, since it cannot register with a Delivery Controller, in case the application is unavailable or in an inconsistent state.
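Taking the server out of service can be sketched in a few lines; “BrokerAgent” is the service name of the Citrix Desktop Service on a VDA, although confirming it on your VDA version before relying on it would be prudent:

```powershell
## Sketch: stop and disable the broker agent so the server cannot register with a Delivery Controller
$brokerService = Get-Service -Name 'BrokerAgent' -ErrorAction SilentlyContinue
if( $brokerService )
{
    Stop-Service -InputObject $brokerService -Force
    ## Disable so the service does not restart and re-register until the issue is resolved
    Set-Service -Name 'BrokerAgent' -StartupType Disabled
}
```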

A pair of events was implemented such that a successful boot and installation would log an information event, and a fatal error would raise an error event, with descriptive error text and a different event id, locally on the server. As SCOM was in use, a custom monitor and thence alert was implemented so that the BAU support team had pro-active notification of any installation failures, as these would need investigating and rectifying (which would generally mean another reboot once the issue had been identified and resolved).
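The event pair might be written as below; the event source name and the event ids are assumptions for illustration, chosen so a SCOM monitor can key on them:

```powershell
## Sketch: log a success or failure event for SCOM to monitor - source and ids are assumptions
[string]$eventSource = 'XenAppBootInstaller'
$fatalError = $null ## set by the error handling above when an exception was caught

if( ! [System.Diagnostics.EventLog]::SourceExists( $eventSource ) )
{
    New-EventLog -LogName Application -Source $eventSource
}

if( $fatalError )
{
    Write-EventLog -LogName Application -Source $eventSource -EntryType Error -EventId 1001 -Message $fatalError
}
else
{
    Write-EventLog -LogName Application -Source $eventSource -EntryType Information -EventId 1000 -Message 'Custom application installed successfully'
}
```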

Guide to my GitHub Scripts

This article, which will be updated as new scripts are added, serves as an index to the scripts I have uploaded to GitHub with a quick summary of what the script can do and links to explanatory blog articles. The scripts are split logically into a number of GitHub repositories, namely:


  1. DailyChecks.ps1 – allows you to get a summary of your Citrix XenApp/XenDesktop 7.x deployment emailed to you via a scheduled task to help spot issues. Blog Post
  2. End Disconnected sessions.ps1 – finds sessions disconnected over a given duration and logs them off, optionally terminating specified processes in case they are preventing logoff.
  3. Get PVS boot time stats.ps1 – pull PVS target device boot times from PVS server event logs to show fastest, slowest, mean, median and mode values with the option to send an email if thresholds are breached. Blog Post
  4. Get PVS device info.ps1 – retrieve PVS target device information from PVS servers and display their configuration along with corresponding data from Citrix Studio, Active Directory, VMware and the devices themselves such as last boot time & IP address. Selected devices can then have operations performed on them such as deleting from PVS/AD/Studio or rebooting. Blog Post
  5. Ghost Hunter.ps1 – find disconnected Citrix XenApp sessions which Studio/Director say still exist but do not and mark them such that they cannot prevent affected users from launching further published applications. Blog Post
  6. Show PVS audit trail.ps1 – collect PVS auditing events in a given date/time range and show on-screen or export to a csv file. Can also enable auditing if it is not already enabled.
  7. Show Studio Access.ps1 – show all users granted access to Citrix Studio and their access levels and optionally export to a csv file. It will recursively enumerate AD groups to show each individual user with Studio access.
  8. StoreFront Log Levels.ps1 – display and/or change the logging levels on Citrix StoreFront servers. It can operate on multiple servers from a single script invocation. Blog Post
  9. Parse storefront log files.ps1 – show Citrix StoreFront log files in a sortable and filterable consolidated view, optionally filtering on entry type and date ranges. Selected lines will be placed in the clipboard to enable further research. Blog Post
  10. Get Citrix admin logs.ps1 – retrieve the logs viewable in Studio in a given time window and write to a csv file or display in an on screen sortable/filterable grid view. The logs can be filtered on the user who performed the action, where the action was performed from, either Studio or Director, whether it was an admin or config change action and the type of action such as logoff or shadow.
  11. Get Citrix OData.ps1 – query the OData interface exposed by Citrix Delivery Controllers to retrieve information on sessions, errors, machines, etc. This is where Citrix Director gets its information from and also means that you don’t have to query SQL (which is unsupported). See here for information on what is available.
  12. Modify and launch file.ps1 – make modifications to a text file such as an ICA file, e.g. to change window sizes, and launch the newly created file. Can also install itself as an explorer SendTo context menu shortcut.
  13. Recreate PVS XML manifest.ps1 – create the XML manifest that PVS needs in order to import disks which have multiple versions. Can import from orphaned SQL data or examination of specified *.(a)vhd(x) files. Use when a disk has disappeared from the PVS console.
  14. Direct2Events.ps1 – Uses OData (like Citrix Director) to retrieve Citrix Virtual Apps and Desktops session information from a Delivery Controller and displays it in a WPF GUI, allowing troubleshooting and remediation without needing to go to different tools such as the PVS Console, VMware vSphere and Active Directory or connecting to the end-points.


  1. Change CPU priorities.ps1 – dynamically change the base priorities of processes which over consume CPU so other processes get preferential access to the CPU. If a process stops over consuming then its original base priority will be restored. Can include/exclude specific users, processes and sessions.
  2. Trimmer.ps1 – trim the working sets of processes to make more memory available for other processes/users on a system. Can trim on demand or when processes are unlikely to need the memory such as when a session is idle, disconnected or locked. Can also set hard working set limits to cap leaky processes. Blog Post Blog Post Blog Post
  3. Get installed software.ps1 – show the installed software on one or more computers where the computers are specified on the command line or via a csv file. Queries the registry rather than the win32_product WMI/CIM class which is faster and gives more complete results. Output can be to a csv file, an on screen grid view or standard output for piping into something else. If -uninstall is specified, items selected when OK is clicked in the grid view will be uninstalled. Similarly, a -remove option takes a comma separated list of package names or regular expressions and will run the uninstaller for them, silently if -silent is specified and the uninstall program is msiexec.exe.
  4. Group Membership Modifier.ps1 – add or remove a specified list of user accounts from local groups, such as Administrators or Remote Desktop Users, on one or more machines.
  5. Clone VHD.ps1 – create a new Hyper-V virtual machine from a .vhd/.vhdx file containing an existing VM, selecting the VM configuration in a GUI. Will integrate itself into Windows Explorer so you right-click on a virtual disk file and run it, elevating itself if required. Can make linked clones which can reduce disk space. Blog Post
  6. Fix Sysprep Appx errors.ps1 – parses sysprep logs looking for failures due to AppX packages causing sysprep to fail, removes them and runs sysprep again until successful.
  7. Show NTFS zone info.ps1 – Google Chrome and Internet Explorer store the URL of where downloaded files have come from in an NTFS Alternate Data Stream (ADS). This script shows these and optionally removes this information. Blog Post
  8. Profile Cleaner.ps1 – retrieve local profile information from one or more machines, queried from Active Directory OU, group or name, present them in an on-screen filterable/sortable grid view and delete any selected after prompting for confirmation. Options to include or exclude specific users and write the results to a csv file. Blog Post
  9. Show users.ps1 – Show current and historic logins including profile information, in a given time range or since boot, across a number of machines queried from Active Directory OU, group or name, write to csv file or display in an on-screen sortable/filterable grid view and logoff any selected sessions after confirmation. Works on RDS and infrastructure servers as well as XenApp. Blog Post
  10. Profile.ps1 – a PowerShell profile intended to be used on Server Core machines, with PowerShell set as the shell, which reports key configuration and status information during logon.
  11. Add firewall rules for dynamic SQL ports.ps1 – find all SQL instances and create firewall rules for them to work with dynamic ports
  12. Find Outlook drafts.ps1 – find emails in your Outlook drafts folder of a given age, prompt with the information with the option to open the draft. Designed to help you stop forgetting to complete and send emails. Has options to install & uninstall itself to launch at logon. Blog Post
  13. Outlook Leecher.ps1 – find SMTP email addresses in all your Outlook folders including calendars and write them to a csv file including context such as the subject and date of the email.
  14. Check Outlook recipient domains – an Outlook macro/function which will check the recipient addresses when sending an email and will warn if the email is going to more than a single external domain. Designed to help prevent accidental information leakage where someone may pick the wrong person when composing.
  15. Fix reminders – an Outlook macro/function which will find any non-all day Outlook meetings which have no reminder set, display the details in a popup and add a reminder for a number of minutes before the event as selected by the user. Blog Post.
  16. Check Skype Signed in.ps1 – uses the Lync 2013 SDK to check Skype for Business is signed in and will alert if it is not via a popup and playing an optional audio file. Can also pop up an alert if the client has been in “Do Not Disturb” in excess of a given period of time.
  17. Redirect Folders.ps1 – show existing folder redirections for the user running the script or set one or more folder redirections with a comma separated list of items of the form specialfolder=path. For example Music=H:\Music
  18. Check and start processes.ps1 – check periodically if each of a given list of processes is running and if not optionally start it, after an optional prompt is displayed. Any necessary parameters for each process can be specified after an optional semicolon character in the process name argument. Can install or uninstall itself to the per user or per machine registry run key so it runs at logon. Use it to launch and monitor key processes such as Outlook or Skype for Business (lync.exe).
  19. Autorun.ps1 – list, remove or add logon autoruns entries in the file system or registry for the user running the script or all users if the user has permissions. Can also operate on the RunOnce key and wow6432node on x64 systems. Uses regular expressions for matching the shortcut/registry value name and/or the command so knowing the exact names or commands is not required. Uses PowerShell’s built in confirmation mechanism before overwriting/deleting anything.
  20. Find and check IIS server certs.ps1 – find IIS servers via OUs or AD groups or specify via regular expression, specific servers or from the contents of a text file. Check the expiry date of any certificates in use and present a list of those expiring within a specified number of days in a grid view, write to csv file or send via email.
  21. wcrasher.cs.ps1 – compiles embedded C# code to produce an exe file (32 or 64 bit or even Itanium) which will crash when the “OK” button of the displayed dialogue box is clicked. Use it to check that the OS is configured the desired way for handling application crashes or to produce dumps for practicing analysis.
  22. WTSApi.ps1 – provides the function Get-WTSSessionInformation which is a wrapper for the WTSQuerySessionInformationW function from wtsapi32.dll with the WTSSessionInfoEx class parameter. This returns an array of session information items for the one or more computers passed to it which can be used in place of running quser.exe (“query user”) and having to parse its somewhat inconsistent output.
  23. Trim run history.ps1 – Remove items from the history of Explorer’s Start->Run menu, and Task Manager’s File->Run new task, either by specifying what to keep or what to remove via regular expression (which can be as simple as something like ‘mstsc’). Uses PowerShell’s builtin confirmation mechanism so by default will prompt before each deletion.
  24. Get Process Durations.ps1 – Retrieve process creation and termination events from the security event log, if auditing of these is enabled, and show the start and end times of the processes and command lines if that auditing is enabled too. Can optionally show how long after logon and/or boot processes started and can filter on specific processes and/or users. Output to csv format file, sortable/filterable grid view or the PowerShell pipeline.
  25. Analyse IIS log files.ps1 – Analyse IIS log files to show requests/min/sec, and min, max, average and median response times per time interval, usually seconds to aid in finding busy/overloaded periods for capacity planning, troubleshooting, etc.
  26. Check AD account expiry.ps1 – Find AD accounts with passwords or accounts expiring within the specified number of days or are locked out or disabled and optionally send an email containing the information. To help spot problems where account expiry could cause issues such as when used as service accounts.
  27. Check SQL account expiry.ps1 – Find SQL accounts with passwords expiring within the specified number of days and optionally send an email containing the information. Useful where these accounts are used as service accounts. Can also be used to send an email alert if the specified SQL server cannot be connected to.
  28. Download and Install Office 365 via ODT.ps1 – Download the latest version of the Office Deployment Kit and use that, once the executable has been extracted and its certificate checked, to download and install Office 365.
  29. Find loaded modules.ps1 – Examine loaded modules in all or specific processes by name or pid and show those where the module name/path or company name match a specified string/regex. Designed to help spot processes hooked by 3rd party software like Citrix, Ivanti, Lakeside, etc. Shows module versions so can also be used to play spot the difference between processes.
  30. Get Remote User Logon Times.ps1 – Use WMI to query computers to find out, since boot, when any remote desktop connections logged on. Gives finer granularity than “query user” (quser) and works on multiple computers in a single invocation.
  31. Kill elevated processes.ps1 – Check already running processes and then watch for process created events and if a process is in a specified list and has been launched elevated then terminate it and audit to the event log.
  32. Monitor process start stop.ps1 – Uses WMI/CIM to register for notifications when processes are started or stopped so effectively a process watcher.
  33. Network Profile Actioner.ps1 – Check network connection profiles and if any are connected on a public network, or nothing is connected so the computer is offline, set a registry value differently compared with private/domain network. Defaults to setting the registry such that the username is not displayed on the lock screen if the computer is on a public network or offline to aid with privacy protection.
  34. Power Watcher.ps1 – Designed to help set the most suitable power scheme when using an external power bank for a laptop as the laptop sees it as still being powered by an external power source so does not implement any power saving (on a Dell laptop).
  35. Show FSlogix volumes.ps1 – Show FSLogix currently mounted volume details & cross reference to FSLogix session information in the registry.
  36. Check and fix domain membership.ps1 – Check domain membership of the machine the script is running on and try to repair using Test-ComputerSecureChannel. Can be placed in a computer startup script with encrypted password.
  37. Convert graphics files.ps1 – bulk convert graphics files from one format to another
  38. Get file bitness.ps1 – show the file bitness of specified files or files in a folder including the .NET CPU specifications
  39. event aggregator.ps1 – retrieve all events from the 300+ event logs on one or more computers and show in sortable/filterable gridview and/or write to csv. Various filter in/out options available.
  40. Set Foreground Window.ps1 – find the main window for given one or more processes, by id or name, and optional argument matching, and set as the foreground window or perform another operation on them such as minimising or maximising. Written to solve a problem when a running process refused to show its window or even a taskbar icon for it.
  41. Get MSI Properties.ps1 – get any MSI property, such as ProductVersion, from one or more MSI files by reading the contents of the MSI file. Useful for finding out the version of an MSI file. Also gets summary information such as bitness.
  42. Get files modified since boot.ps1 – Find files modified or created since the last boot time without following symbolic links and junction points. Useful to find out what has consumed the Citrix Provisioning Services write cache but can also take arbitrary start and end times to find files modified/created in a given time window for troubleshooting.
  43. Pause Resume Processes.ps1 – Pause or resume processes using debugger API functions. Useful to stop applications being used outside of approved hours, or to suspend a resource guzzling application so it can be examined later without impacting other processes. Note that the process resuming a process must be the same one that paused it otherwise it will fail. Also, if the pausing process exits, the paused processes will exit; the script caters for this.
  44. Bincoder GUI.ps1 – base 64 encode and decode data to and from any file to allow the data to be copied over the Windows clipboard, e.g. to or from a remote session where file sharing sites, email, etc are not available.
  45. Get Extended File Properties.ps1 – Retrieve specified or all extended properties from a file, not just those in the version resource of the file
  46. Add computers to perfmon xml.ps1 – Take an XML template exported from perfmon with a single machine and duplicate all counters for a specified set of machines. This creates a new XML file which can be imported back in to perfmon to capture performance data across all the machines.
  47. Delete profile for group member.ps1 – Delete local user profiles for members of an Active Directory group which are not currently in use
  48. Fix shortcuts.ps1 – Find shortcuts with target or icon path or arguments matching a given regular expression and change to a new string.
  49. Get Account Lockout details.ps1 – Find all domain controllers and show account lockouts in a given time range and/or for a specific user including the machine where the lockout occurred.
  50. Get info via CIM.ps1 – Gather info from one or more computers via CIM and write to CSV files to aid health checking. A list of nearly 50 CIM classes is built in to assist relevant information gathering.
  51. Send to Clipboard.ps1 – Put contents of text or graphics files onto the clipboard – designed for use as a shortcut in explorer’s right click send to menu

General Scripts

  1. Regrecent.ps1 – find registry keys modified in a given time/date window and write the results to a csv file or in an on-screen sortable/filterable grid view. Can include and/or exclude keys by name/regular expression. Blog Post
  2. Leaky.ps1 – simulate a leaky process by causing the PowerShell host process for the script to consume working set memory at a rate and quantity specified on the command line.
  3. Twitter Statistics.ps1 – fetch Twitter statistics, such as the number of followers and tweets, for one or more Twitter handles without using the Twitter API
  4. Sendto Checksummer.ps1 – when a shortcut to this script, created by setting the shortcut target to ‘powershell.exe -file “path_to_the_script.ps1”’, is added to the user’s Explorer SendTo folder, a right-click option for calculating file checksums/hashes is available. The user will be prompted for which hashing algorithm to use and then the checksums of all selected files will be calculated and shown in a grid view where selected items will be copied to the clipboard when “OK” is clicked.
  5. Zombie Handle Generator.ps1 – opens handles to a given list of processes and then closes them after a given time period or after keyboard input. Used to simulate handle leaks to test other software. Can open process or thread handles.
  6. Sendto folder size.ps1 – shows the sizes of each folder/file selected in explorer, or passed directly on the command line. For each item then selected in the grid view, it will show the largest 50 files. If any files are selected when OK is pressed in that grid view, a prompt to delete will be shown and if Yes is clicked, the files will be deleted via the recycle bin. To install for explorer right-click use, add a shortcut to this script via Powershell.exe -file in the shell:sendto folder.
  7. Compare files in folders.ps1 – compare file attributes and checksums between files in two specified folders, and sub folders. Files selected in the grid view when OK is clicked will then have their differences shown in separate grid views.
  8. Query SQLite database.ps1 – query data from a SQLite database file or show all of the table names. Queries can be qualified with a “where” clause, the columns to return specified, or it defaults to all, and the results output to a csv file or are displayed in an on-screen filterable/sortable grid view.
  9. Find file type.ps1 – Looks at the content of files specified to determine what the type of a file actually is. File types identifiable include various zip formats, image and video formats and executables. It will also seek out files stored in Alternate Data Streams on NTFS volumes.
  10. Set photo dates.ps1  – Get the date/time created from image file metadata and set as the file’s creation date/time which can make it easier to see/sort picture files by the creation date of the image itself, not when the file was copied to the current folder it resides in.
  11. Shortcuts to csv.ps1 – Produce csv reports of the shortcuts in a given folder and sub-folders and optionally email the resulting csv file. Can check shortcuts locally (default) or on a remote server, e.g. for checking centralised Citrix XenApp/XenDesktop shortcuts. By default it will check that the target and working directory exist for a shortcut so the resulting csv file can be filtered on these columns to easily find bad shortcuts.
  12. Update dynamic dns.ps1 – Update a dynamic DNS provider if the external IP address has changed (the last known address is stored in the registry) and/or email the details to a given list of recipients.
  13. Find JSON attribute by name.ps1 – Find JSON attributes via name or regex and return the value(s). Saves having to navigate a potentially unknown object structure.
  14. Get chunk at offset.ps1 – display the text from a given file at a given offset within the file. Used with SysInternals Process Monitor (procmon) to see what is being written to a log file for any given procmon trace line.
  15. Digital Clock.ps1 – display a digital clock, stop watch (with 0.1 second granularity) or countdown timer with the ability to “mark” specific points, e.g. when timing a logon.
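Several of the SendTo scripts above are installed by placing a shortcut in the user’s SendTo folder. A sketch of creating such a shortcut programmatically follows; the script path and shortcut name are illustrative, so adjust them to wherever you keep the script:

```powershell
# Sketch: create the SendTo shortcut that several of the scripts above rely on.
# The script path is illustrative - point it at wherever you keep the script.
$scriptPath = 'C:\Scripts\Sendto folder size.ps1'
$sendTo     = [Environment]::GetFolderPath( 'SendTo' )

$shell    = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut( (Join-Path $sendTo 'Folder Size.lnk') )
$shortcut.TargetPath = 'powershell.exe'
$shortcut.Arguments  = "-file `"$scriptPath`""
$shortcut.Save()
```

The quoting around the script path matters as SendTo folder paths frequently contain spaces.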

Ivanti Scripts

  1. AMC configuration exporter.ps1 – Export the configuration of one or more AppSense/Ivanti DesktopNow Management Servers to csv or xml file.
  2. Get process module info.ps1 – Interrogate running processes to extract file and certificate information for their loaded modules which can be useful in composing Ivanti Application Control configurations.
  3. Ivanti UWM EM event processor.ps1 – Get Ivanti UWM EM event log entries and split into sortable table for durations to aid logon analysis. Display on screen in a sortable/filterable grid view or export to a CSV file.

VMware Scripts

  1. ESXi cloner.ps1 – Create one or more new VMware ESXi virtual machines from existing VMs nominated as templates. For use when not using vCenter, which has a built in templating mechanism. Can create linked clones to save on disk space and drastically speed up new VM creation. Can be used with or without a GUI.
  2. Get VMware or Hyper-V powered on vm details.ps1 – Retrieves details of all powered on virtual machines, or just those matching a name pattern, from either VMware vSphere/ESXi or Hyper-V and either displays them in an on-screen sortable & filterable grid view, sends them to standard output for further processing, or writes them to a text file that can be used in a custom field in the SysInternals BGinfo tool to show the IP addresses of these VMs on your desktop wallpaper, which is useful when they are on an isolated network or not registered in DNS.
  3. Power state change running VMs.ps1 – Pause or shutdown running VMs and the ESXi host – designed to be run by UPS shutdown software. Requires the VMware PowerCLI module.
  4. VMware GUI.ps1 – Allow users to view VMs and their details that they have access to in a WPF grid view and perform the following actions if they have permissions in VMware as well as being able to launch mstsc and VMware consoles:
        • Snapshots – take, delete, revert, consolidate
        • Power – on, off, suspend, shutdown/restart guest
        • Reconfigure – number of CPUs, amount of memory and change notes
        • Delete
        • Screenshot
        • Run scripts/cmdlets/exes
        • Mount/Unmount CDs
        • Connect/Disconnect NICs
        • Show events
        • Backup
  5. Set VMware guest info.ps1 – Set VM guest information, by connecting to vCenter or ESXi directly, so it can be retrieved in VMs. For example, set the VMware host running the VM in the guest so it knows who its parent is.

Memory Control Script – Capping Leaky Processes

In the third part of the series covering the features of a script I’ve written to control process working sets (aka “memory”), I will show how it can be used to prevent leaky processes from consuming more memory than you believe they should.

First off, what is a memory leak? For me, it’s trying to remember why I’ve gone into a room but in computing terms, it is when a developer has dynamically allocated memory in their programme but then not subsequently informed the operating system that they have finished with that memory. Older programming languages, like C and C++, do not have built in garbage collection so they are not great at automatically releasing memory which is no longer required. Note that just because a process’s memory increases but never decreases doesn’t actually mean that it is leaking – it could be holding on to the memory for reasons that only the developer knows.
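A trivial way to see this behaviour for yourself, similar in spirit to the Leaky.ps1 script listed earlier, is to keep allocating memory and never release the references; a minimal sketch:

```powershell
# Sketch: simulate a "leak" by allocating memory and keeping the references
# alive so the garbage collector can never reclaim them.
$hoard = [System.Collections.Generic.List[byte[]]]::new()
1..10 | ForEach-Object {
    $hoard.Add( [byte[]]::new( 10MB ) )   # 10MB per iteration, never freed
    '{0}: working set now {1:N0} MB' -f $_, ((Get-Process -Id $PID).WorkingSet64 / 1MB)
}
```

Watch the PowerShell process in Task Manager whilst this runs and you will see its working set climb with every iteration.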

So how do we stop a process from leaking? Well short of terminating it, we can’t as such, but we can limit the impact by forcing it to relinquish other parts of its allocated memory (working set) in order to fulfil memory allocation requests. What we shouldn’t do is deny the memory allocations themselves, which we could actually do with hooking methods such as Microsoft’s Detours library. This is because many developers don’t bother checking the return status of a memory allocation request before using it, which results in the infamous error “the memory referenced at 0x00000000 could not be read/written” (aka a null pointer dereference); even those that do check probably can’t do much when an allocation fails, other than output an error to that effect and exit.

What we can do, or rather the OS can do, is to apply a hard maximum working set limit to the process. What this means is that the working set cannot increase above the limit so if more memory is required, part of the existing working set must be paged out. The memory paged out is the least recently used so is very likely to be the memory the developer forgot to release so they won’t be using it again and it can sit in the page file until the process exits. Thus increased page file usage but decreased RAM usage which should help performance and scalability and reduce the need for reboots or manual intervention.
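Under the covers this is the Win32 SetProcessWorkingSetSizeEx call with the QUOTA_LIMITS_HARDWS_MAX_ENABLE flag. The sketch below shows the raw API rather than my script; the process name and the sizes are purely illustrative:

```powershell
# Minimal sketch of applying a hard working set maximum via the Win32 API.
# QUOTA_LIMITS_HARDWS_MAX_ENABLE (0x4) makes the maximum a hard limit;
# QUOTA_LIMITS_HARDWS_MIN_DISABLE (0x2) leaves the minimum as a soft limit.
Add-Type -Namespace Win32 -Name Memory -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx(
    IntPtr hProcess, IntPtr dwMinimumWorkingSetSize,
    IntPtr dwMaximumWorkingSetSize, uint Flags);
'@

$QUOTA_LIMITS_HARDWS_MIN_DISABLE = 0x2
$QUOTA_LIMITS_HARDWS_MAX_ENABLE  = 0x4

$process = Get-Process -Name 'leakyprocess'   # illustrative process name

# Soft minimum of 1MB, hard maximum of 100MB
[Win32.Memory]::SetProcessWorkingSetSizeEx( $process.Handle, [IntPtr]1MB, [IntPtr]100MB,
    $QUOTA_LIMITS_HARDWS_MIN_DISABLE -bor $QUOTA_LIMITS_HARDWS_MAX_ENABLE )
```

Note that you need sufficient rights to open the target process for this to succeed.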

Applying a hard working set limit is easy with the script, the tricky part is knowing what value to set as the limit – too low and it might not just be leaked memory that is paged out so performance could be negatively affected due to hard page faults. Too high a limit and the memory savings, if the limit is ever hit, may not be worth the effort.

To set a hard working set limit on a process we run the script thus:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB

or if the process has yet to start we can use the waiting feature of the script along with the -alreadyStarted option in case the process has actually already started:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB -waitFor leakyprocess -alreadyStarted

You will then observe in task manager that its working set never exceeds 100MB.

To check that hard limits are in place, you can use the reporting option of the script since tools like task manager and SysInternals Process Explorer won’t show whether any limits are hard ones. Run the following:

.\trimmer.ps1 -report -above 0

which will give a report similar to this where you can filter where there is a hard working set limit in place:

hard working set limit
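For the curious, the reason most tools can’t show this is that the hard flag only comes back from the Win32 GetProcessWorkingSetSizeEx call; a sketch of that query, run against the current PowerShell process as an example:

```powershell
# Sketch: query a process's working set limits and whether the maximum is hard.
Add-Type -Namespace Win32 -Name WorkingSet -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool GetProcessWorkingSetSizeEx(
    IntPtr hProcess, out IntPtr lpMinimumWorkingSetSize,
    out IntPtr lpMaximumWorkingSetSize, out uint Flags);
'@

$QUOTA_LIMITS_HARDWS_MAX_ENABLE = 0x4
$process = Get-Process -Id $PID          # query ourselves as an example
$min = [IntPtr]::Zero ; $max = [IntPtr]::Zero ; $flags = [uint32]0

if ([Win32.WorkingSet]::GetProcessWorkingSetSizeEx( $process.Handle, [ref]$min, [ref]$max, [ref]$flags )) {
    [pscustomobject]@{
        Process   = $process.Name
        MaxMB     = [long]$max / 1MB
        HardLimit = [bool]( $flags -band $QUOTA_LIMITS_HARDWS_MAX_ENABLE )
    }
}
```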

There is a video here which demonstrates the script in action and uses task manager to prove that the working set limit is adhered to.

One way to implement this for a user would be to have a logon script that uses the -waitFor option as above to wait for the process to start, together with -loop so that the script keeps running and picks up further new instances of the process to be controlled. To implement for system processes, such as a leaky third party service or agent, use the same approach but in a computer start-up script.

Once implemented, check that hard page fault rates are not impacting performance because the limit you have imposed is too low.

The script is available here and use of it is entirely at your own risk.

Changing/Checking Citrix StoreFront Logging Settings

Enabling, capturing and diagnosing StoreFront logs is not something I have to do often but when I do, I find it time consuming to enable, and disable, logging across multiple StoreFront servers and also to check on the status of logging, since Citrix provide cmdlets to change tracing levels but not, as far as I can tell, to query them.

After looking at reported poor performance of several StoreFront servers at one of my customers, I found that two of them were set for verbose logging which wouldn’t have been helping. I therefore set about writing a script that would allow the logging (trace) level to be changed across multiple servers and also to report on the current logging levels. I use the plural as there are many discrete modules within StoreFront and each can have its own log level and log file.

So which module needs logging enabled? The quickest way, which is all the script currently supports, is to enable logging for all modules. The Citrix cmdlet that changes trace levels, namely Set-DSTraceLevel, can seemingly be used more granularly but I have found insufficient detail to be able to implement that in my script.

The script works with clustered StoreFront servers: you can specify just one of the servers in the cluster via the -servers option, together with the -cluster option, which will (remotely) read the registry on that server to find where StoreFront is installed so that it can load the required cmdlets to retrieve the list of all servers in the cluster.

To set the trace level on all servers in a StoreFront cluster run the following:

& '.\StoreFront Log Levels.ps1' -servers storefront01 -cluster -traceLevel Verbose

The available trace levels are:

  • Off
  • Error
  • Warning
  • Info
  • Verbose

To show the trace levels, without changing them, on these servers and check that they are consistent on each server and across them, run the following:

& '.\StoreFront Log Levels.ps1' -servers storefront01 -cluster -grid

Which will give a grid view similar to this:

storefront log settings

It will also report the version of StoreFront installed although the -cluster option must be used and all servers in the cluster specified via -servers if you want to display the version for all servers.

The script is available here and you use it entirely at your own risk although I do use it myself on production StoreFront servers. Note that it doesn’t need to run on a StoreFront server as it will remote commands to them via the Invoke-Command cmdlet. It has so far been tested on StoreFront versions 3.0 and 3.5 and requires a minimum of PowerShell version 3.0.

Once you have the log files, there’s a script introduced here that stitches the many log files together and then displays them in a grid view, or csv, for easy filtering to hopefully quickly find anything relevant to the issue being investigated.

For those of an inquisitive nature, the retrieval side of the script works by calling the Get-DSWebSite cmdlet to get the StoreFront web site configuration which includes the applications and for each of these it finds the settings by examining the XML in each web.config file.
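A hedged sketch of that examination follows. The path and the XPath below are assumptions (StoreFront’s trace settings are standard .NET system.diagnostics configuration but the exact element layout varies by version), so verify them against your own web.config files:

```powershell
# Sketch: pull trace switch values out of a StoreFront web.config file.
# The path and XPath are assumptions - .NET-style <system.diagnostics> trace
# sources carry a switchValue attribute but verify against your version.
$webConfig = 'C:\inetpub\wwwroot\Citrix\Store\web.config'   # illustrative path
[xml]$config = Get-Content -Path $webConfig

$config.SelectNodes( '//source[@switchValue]' ) | ForEach-Object {
    [pscustomobject]@{
        Source     = $_.name
        TraceLevel = $_.switchValue
    }
}
```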

Don’t forget to return logging levels to what they were prior to your troubleshooting although I would recommend leaving them set as “Error” as opposed to “Off”.

VMware integration added to Citrix PVS device detail viewer & actioner

You may be familiar with the script I wrote, previously covered here and available on GitHub here, that allows you to get a single pane view, either in csv or on-screen in a filterable and sortable grid view, of all your Provisioning Services devices together with information from Delivery controllers, such as machine catalogue and delivery group membership as well as registration and maintenance mode status. When using the grid view, you can select any number of devices to then get a GUI that allows operations like booting or shutting them down and removing from PVS and/or DDC.

When working at a customer recently I came across a number of VMs in VMware that were named using the XenApp worker naming scheme but weren’t being shown in the PVS or Studio consoles. Being the inherently lazy person that I am, I didn’t fancy deleting these individually in VMware and Active Directory, if they even existed in the latter. I therefore decided it would be useful to extend the script to add in VMs that matched a specific naming pattern, so as not to pull in infrastructure VMs for example, but that hadn’t already been found in the Citrix PVS and DDC data. I implemented this utilising VMware PowerCLI and then also added a “Remove from Hypervisor” button to the action GUI so that these orphans can be removed in one go, including their hard drives.

To show VMs that don’t exist in either PVS or DDC in the grid view, simply add filters for where the DDC and PVS servers are empty.

show orphaned VMs

It will try to get AD account details too, such as the account creation and last logon dates and the description, in order to help you figure out what they are and whether they have recently been used. They may of course not exist in AD at all, but that will be apparent in the data displayed, unless you don’t have domain connectivity/rights or the ActiveDirectory PowerShell module available.

This additional functionality is enabled by specifying the -hypervisors argument on the command line and passing it a comma separated list of your vCenter servers. If you do not have cached credentials (e.g. via New-VICredentialStoreItem) or pass through authentication working then it will prompt for credentials for each connection. You must have already installed the VMware PowerCLI package corresponding to the version of vSphere that you are using. There are examples of the command line usage in the help built into the script.

I then realised that, in addition to the information already gathered which allows easy identification of devices booting off the wrong vDisk/vDisk version and devices that are overdue a reboot for example, I could also pull in the following VMware details, again to help identify where VMs are incorrectly configured:

  • Number of CPUs
  • Memory
  • Hard drives (the size of each assigned)
  • NICs (the type of each assigned, e.g. “vmxnet3”)
  • Hypervisor

You can then sort or filter in the grid view or csv to uncover misconfigured VMs.

vmware info

The downside to all this extra information is that there are now up to 42 (a coincidence!) columns of information to be displayed in the grid view but, unfortunately, versions of PowerShell prior to 5.0 can only display a maximum of 30 columns. Csv exports aren’t affected by this limitation though. As I am often heard saying to my kids, it’s better to have something and not need it rather than need something and not have it – you can remove columns in the grid view, by right clicking on any column header, or in Excel, or whatever you use to view the csv. If this will impact you, consider upgrading as there are a whole load more PowerShell features that you’re missing.

To restrict what VMs are returned by the Get-VM cmdlet, you will probably need to use the -name argument together with a regular expression (aka regex) which will only match your XenApp/XenDesktop workers. For instance, if your VMs are called CTX1001 through CTX1234 and also CTX5001 onwards then use something like the following:

-name '^CTX[15]\d{3}$'
The -name parameter is also used to restrict what PVS devices are included so you can just include a subset if you have, say, a sub-naming convention to name development XenApp servers differently to production ones, e.g. CTXD1234 versus CTXP4567, which will make it quicker.

To check that a regular expression you build matches what you expect before you run the script, there are on-line regex checkers available but I just use PowerShell. For instance, typing the following in a PowerShell session will display “True”:

'CTX1042' -match '^CTX[15]\d{3}$'

I also decided to add a progress indicator since, with hundreds of devices, it can take several minutes to collect all of the relevant data although data is cached where possible to minimise the number of remote calls required. This can be disabled with -noProgress.

If you do have orphaned VMs and you want to remove them, highlight them in the grid view and then click “OK” down in the bottom right hand corner. Ctrl-A can be used to select all items in the grid view. This will then give you the action GUI (ok, not the prettiest user interface ever but it does work!):

pvs device actioner gui vm

where you can power off the VMs if they are on and then delete them from the hypervisor and from AD, all without having to go to any product consoles, assuming that you are running the script under an account which has the necessary rights. When you quit this GUI, the devices that you originally selected in the grid view will be placed onto the clipboard in case you need to paste them into a document, etc.

Using -save, -registry and, optionally, -serverset will also save/retrieve the server(s) specified by -hypervisors to the registry. This means that you don’t have to remember server names every time you run the script – handy when you deal with lots of different customers like I do.

Be aware that the script needs to be run where the PVS and DDC cmdlets are available, so I would recommend installing them on a dedicated management server which does not host the PVS or DDC roles. You can then also use those consoles, and any others you install, on there without risking degrading the performance of key infrastructure servers. Also, don’t forget VMware PowerCLI and the AD PowerShell module (part of the RSAT feature).

Whilst I have checked the operation of this script as much as one man in West Yorkshire can, if you use it then you do so entirely at your own risk and I cannot be held responsible for any unintentional, or intentional, undesired effects. Always double, and even triple, check before you delete anything!

Having said that, I hope it is as useful for you as it is for me – for a reporting and status tool, I use it daily (weekends included!).

Finding Citrix PVS or Studio orphans

I recently released a script, which I use almost daily when working with PVS servers at version 7.7 or higher since that’s when a native PowerShell interface appeared, that cross references Citrix Provisioning Services device information to Delivery Controller and Active Directory. See here for the original post. This allows me to easily and quickly health check and potentially fix issues that would otherwise need a lot of manual work and jumping around in various consoles. Whilst the script could already easily identify devices that only existed in PVS, by filtering in the grid view or Excel where the DDC (Desktop Delivery Controller) field/column is empty, I realised I could extend the script to identify devices that exist on Delivery Controllers, so visible in Studio, but don’t exist in PVS. You may of course expect to find some devices in PVS but not present on a DDC, and hence Studio, such as devices used for updating vDisks via booting in maintenance mode since you won’t want to make those available via StoreFront or Receiver.

Once you have the on screen grid view or csv file open in Excel (or Google Sheets), show PVS devices not present on DDCs by simply filtering, via the “Add Criteria” button, where the “DDC” column is empty. To show devices which are known to a DDC, so visible in Studio, but not in PVS, filter where the “PVS Server” column is empty.

pvs orphans
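If you prefer to stay in PowerShell rather than Excel, the same filtering can be applied to the exported csv; the column names below are as described above and the file name is illustrative:

```powershell
# Sketch: find potential orphans in a csv exported by the script.
# Column names "DDC" and "PVS Server" are as described above.
$devices = Import-Csv -Path '.\pvs.devices.csv'   # illustrative file name

# In PVS but not known to any Delivery Controller
$notOnDDC = $devices | Where-Object { [string]::IsNullOrEmpty( $_.DDC ) }

# Known to a DDC (visible in Studio) but not in PVS
$notInPVS = $devices | Where-Object { [string]::IsNullOrEmpty( $_.'PVS Server' ) }
```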

This of course assumes that you have specified the correct server names for your DDC and PVS servers via the -ddcs and -pvsservers options respectively. There’s no need to specify multiple servers for each if they share the same SQL database; only if they use different ones such as you might have for completely separate test and production environments. Comma separate them if you do specify multiple servers.

If you’ve got a mixture of PVS and MCS (or manual) machine catalogues then it will only display machines found on the DDCs you specify which are in PVS linked machine catalogues, unless you specify the -provisioningType parameter.

I’ve also added to the actions menu so that these potential orphans can then be removed from PVS or DDC if you select them in the grid view and then click “OK”.

remove orphans

I’ve also sneaked in a potentially handy feature where you can save the PVS and DDC servers to the registry so that you don’t have to specify them on the command line ever again (on that machine at least). This helps me, if nobody else, as I use the script at many different customers and I can’t always remember their specific server names, or sometimes specify the wrong ones. Save with -save and use these saved values with -registry, and an optional server set name via -serverSet so you can have different sets of servers, e.g. pre-production and production.

For example:

& '.\Get PVS device info.ps1' -ddcs ddc001 -pvsServers pvs001 -save

So next time you just need to run:

& '.\Get PVS device info.ps1' -registry

They are stored in HKCU so are per-user.

The script, amongst others, is available on GitHub here. It has to be run on a machine which has both the PVS and DDC PowerShell cmdlets available, such as one with the PVS and Studio consoles installed. It also needs the ActiveDirectory PowerShell module, particularly if you want to include AD group membership information via the -ADGroups option.

Citrix Provisioning Services device detail viewer

Whilst struggling to find some devices in the PVS console that I thought that I’d just added to a customer’s PVS server via the XenDesktop Setup wizard, I reckoned it should be relatively easy to knock up something that would quickly show me all the devices, their device collection, disk properties and then also cross reference to a Citrix Delivery Controller to show machine catalogue, delivery group, registration state and so on. Note that I’m not trying to reinvent that wheel thing here as I know there are already some great PVS documentation scripts such as those from Carl Webster (available here).

What I wanted was something that would let me quickly view and filter the information from multiple PVS servers, such as development and production instances. Whilst PowerShell can easily export to csv and you can then use Excel, or Google Sheets, to sort and filter, that is still a little bit of a faff, so I use PowerShell’s great Out-GridView cmdlet. It gives you an instant graphical user interface with zero effort (not that using WPF in PowerShell is particularly difficult!) which can be sorted and filtered, and columns you don’t want can be removed without having to modify the script.

The script takes two parameters which it will prompt for if not specified as they are mandatory:

-pvsServers <comma separated list of PVS servers>
-ddcs <comma separated list of Delivery Controllers>

Both take comma separated lists of PVS servers and Desktop Delivery Controllers respectively although you can just specify a single server for each. If you’ve got multiple PVS servers using the same database then you only need to specify one of them. Ditto for the DDCs.

You can also specify a -csv argument with the name of a csv file if you do want output to go to a csv file but if you don’t then it will default to a filterable and sortable grid view.

Some hopefully useful extra information includes “Booted off latest” where devices with “false” in this column are those which have not been booted off the latest production version of their vDisk so may need rebooting. There’s also “Boot Time” which you can sort on in the grid view to find devices which are overdue a reboot, perhaps because they are not (yet) subject to a scheduled reboot. Plus you can quickly find those that aren’t in machine catalogues or delivery groups or where there is no account for them in Active Directory. You can also filter on devices which are booting off an override version of a vDisk which may be unintentional.

The script is available here and requires version 7.7 or higher of PVS since that is when the PowerShell cmdlets it uses were introduced. Run it from somewhere where you have installed the Citrix PVS and Studio consoles, like a dedicated management server – I’m a firm believer in not running these on their respective servers since that can starve those servers of resources and thus adversely affect the environment. Ideally, also have the Active Directory PowerShell module (ActiveDirectory) installed too so that each device’s status in AD can be checked.

I’ve just picked out the fields from PVS, Delivery Controllers and AD that are of interest to me but you should be able to add others in the script if you need to.

Scripted Reporting & Alerting of Citrix Provisioning Services Boot Times

Citrix PVS, formerly Ardence, is still one of my favourite software products. When it works, which is the vast majority of the time if it is well implemented, it’s great but how do you tell how well it is performing? If you’ve enabled event log generation for your PVS servers thus:

pvs event log server

then the Citrix Streaming Service will write boot times of your target devices to the application event log:

pvs boot event

So we can filter in the event log viewer or use the script I’ve written which searches the event log for these entries and finds the fastest, slowest, average, median and mode values from one or more PVS servers and optionally creates a single csv file with the results. A time range can also be specified, such as the last 7 days.
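Extracting the events is the bulk of the work; the statistics themselves are straightforward once you have the boot durations as plain numbers. A sketch, with illustrative values in seconds:

```powershell
# Sketch: compute the summary statistics from an array of boot times (seconds).
$bootTimes = 21, 25, 26, 26, 30, 24, 25      # illustrative values

$measured = $bootTimes | Measure-Object -Minimum -Maximum -Average
$sorted   = $bootTimes | Sort-Object
$median   = $sorted[ [int][math]::Floor( ($sorted.Count - 1) / 2 ) ]
$mode     = ($bootTimes | Group-Object | Sort-Object Count -Descending)[0]

"fastest {0} s slowest {1} s mean {2:N0} s median {3} s mode {4} s ({5} instances)" -f `
    $measured.Minimum, $measured.Maximum, $measured.Average, $median, $mode.Name, $mode.Count
```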

The script lends itself to being run via a scheduled task as it can either email the results to a specified list of recipients or it can send an email only when specific thresholds are exceeded, such as the average time being greater than say 2 minutes.

For instance, running the following:

& '.\Get PVS boot time stats.ps1' -last 7d -output c:\boot.times.csv -gridview

Will write the boot times to file, in seconds, for the last seven days on the PVS server where you are running the script. It will also display the results in a sortable and filterable gridview and output a summary like this:

Got 227 events from 1 machines : fastest 21 s slowest 30 s mean 25 s median 25 s mode 26 s (39 instances)

Or we could run the following to query more servers and send an email via an SMTP mail server if the slowest time exceeds 5 minutes in the last week:

& '.\Get PVS boot time stats.ps1' -last 7d -output c:\boot.times.csv -mailserver yourmailserver -recipients you@yourdomain.com -slowestAbove 300 -computers pvsserver1,pvsserver2

The script has integrated help, giving details on all the command line options available, and can be run standalone or via scheduled tasks.

The script can be downloaded from here. Full help is built in and can be accessed via F1 in PowerShell ISE or Get-Help.

Update 13/02/18

Now with -chartView and -gridView options to give an on-screen chart and grid view respectively.


Getting the PVS RAM cache usage as a percentage

Citrix Provisioning Services has long been one of my favourite products (or Ardence, as it was originally called before being purchased by Citrix – a name that still appears in many places in the product). It has steadily improved over time and the cache to RAM with overflow to disk feature is great, but how do you know how much of the RAM cache has been used? We care about this because if our overflow disk isn’t on SSD storage then using the overflow file could degrade performance.

The PVS status tray program doesn’t tell us this – it just displays the sum of the free disk space on the drive where the overflow file (vdiskdif.vhdx) resides and the RAM cache size, together with the usage of the overflow disk, but not the RAM cache usage.

[Screenshot: PVS status tray cache display]

There are a number of articles out there that show you either how to get the non-paged pool usage, which gives a rough indication, or to use the free Microsoft Poolmon utility to retrieve the non-paged pool usage for the device driver that implements the cache. There’s a great article here on how to do this. Poolmon is also very useful for finding what drivers are causing memory leaks although now that most servers are 64 bit, there isn’t the problem there used to be where non-paged pool memory could become exhausted and cause BSoDs.

However, once we have learnt what the RAM cache usage is, how do we get that as a percentage of the RAM cache configured for this particular vdisk? I looked at the C:\personality.ini file on a PVS booted VM (where the same information is also available in “HKLM\System\CurrentControlSet\services\bnistack\PVSAgent”) but it doesn’t have anything that correctly tells us the cache size. There is a “WriteCacheSize” value but this doesn’t seem to bear any relation to the actual cache size so I don’t use it.
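The cache type at least can be read reliably from the file. This sketch parses a $WriteCacheType line of the kind found in personality.ini (the file content here is illustrative, not a complete real file):

```powershell
## Illustrative personality.ini content - a real file contains more entries
$personality = @'
[StringData]
$WriteCacheType=9
$DiskName=XenApp-Base
'@ -split "`r?`n"

[int]$writeCacheType = -1
ForEach( $line in $personality )
{
    if( $line -match '^\$WriteCacheType=(\d+)' )
    {
        $writeCacheType = [int]$Matches[1] ## 9 means cache in RAM, overflow to disk
        break
    }
}
```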

With the release of PVS 7.7 came a full object based PowerShell interface, so it is now very easy to interrogate PVS to find (and change!) all sorts of information, including the properties of a vdisk such as its RAM cache size (if it is set to Cache in RAM, overflow to disk, which is type 9 if you look at the $WriteCacheType entry in personality.ini). So in a health reporting script that I’m running for all XenApp servers (modified from the script available here), I run the following to build a hash table of the RAM cache sizes for all vdisks:

[string]$PVSServer = 'Your_PVS_Server_name'
[string]$PVSSiteName = 'Your_PVS_Site_name'
[hashtable]$diskCaches = @{}
Invoke-Command -ComputerName $PVSServer { Add-PSSnapin Citrix.PVS.SnapIn ;
    Get-PvsDiskInfo -SiteName $Using:PVSSiteName } | ForEach-Object {
        if( $_.WriteCacheType -eq 9 ) ## cache in RAM, overflow to disk
        {
            $diskCaches.Add( $_.Name , $_.WriteCacheSize ) ## size in MB
        }
    }

Note that this requires PowerShell 3.0 or higher because of the “$using:” syntax.

Later on when I am processing each XenApp server I can run poolmon.exe on that server remotely and then calculate the percentage of RAM cache used by retrieving the cache size from the hash table I’ve built by using the vdisk for the XenApp server as the key into the table.

## $vdisk is the vdisk name for this particular XenApp server
## $server is the XenApp server we are processing
$thisCache = $diskCaches.Get_Item( $vdisk ) ## get cache size from our hash table
[string]$poolmonLogfile = 'D:\poolmon.log'
$results = Invoke-Command -ComputerName $server -ScriptBlock `
	{ Remove-Item $using:poolmonLogfile -Force -EA SilentlyContinue ; `
	C:\tools\poolmon.exe -n $using:poolmonLogfile ; `
	Get-Content $using:poolmonLogfile -EA SilentlyContinue | `
		?{ $_ -like '*VhdR*' } }

if( ! [string]::IsNullOrEmpty( $results ) )
{
    $PVSCacheUsedActual = [math]::Round( ($results -split "\s+")[6] / 1MB ) ## bytes used, converted to MB
    $PVSCacheUsed = [math]::Round( ( $PVSCacheUsedActual / $thisCache ) * 100 ) ## percentage of configured RAM cache
    ## Now do what you want with $PVSCacheUsed
}

Finding out the usage of the overflow to disk file is just a matter of getting the size of the vdiskdif.vhdx file which is achieved in PowerShell using the Get-ChildItem cmdlet and then accessing the “Length” attribute.

(Get-ChildItem "\\$server\d$\vdiskdif.vhdx" -Force).Length

We can then get the free space figure for the drive containing the overflow file using the following:

Get-WmiObject Win32_LogicalDisk -ComputerName $Server `
	-Filter "DeviceID='D:'" | Select-Object -ExpandProperty FreeSpace

So now I’ve got a script I can run as a scheduled task to email a report of the status of all XenApp servers including their PVS cache usage.

[Screenshot: XenApp server health report]

First Experiences with XenApp 7.8 & App-V 5.1


I’m currently working on a new XenApp rollout for a customer where we’ve been eagerly awaiting the 7.8 release to have a look at the App-V integration given that it promised to remove the need for separate App-V server infrastructure.

I’m not going to go into details here of how you make App-V applications available natively in XenApp/XenDesktop as that is covered elsewhere such as here. That article also covers troubleshooting and how to enable logging.

How it appears to work

When the Citrix Desktop Service (BrokerAgent) starts on your XenApp server, it communicates with a Delivery Controller and writes the details to the “ApplicationStartDetails” REG_MULTI_SZ value in “HKLM\SOFTWARE\Policies\Citrix\AppLibrary”. Now why it writes to the policies key when we’re not actually setting anything to do with App-V in policies I don’t know but a key is a key (unless it’s a value!). A typical line in this value looks like this:

56c1d895-e3d8-4dcc-a303-b0162a97c87b;\\ourappvserver\appvshare\thisapp\thisapp.appv;de0a5cd1-3264-4418-82dd-4bdf5959a29d;957c71c9-a732-401b-b354-17c493decac8;This App

Where the fields are semicolon delimited thus:

App-V App GUID;Package UNC;App-V Package GUID;App-V Version GUID;Published App Name
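To illustrate, splitting the sample line above on semicolons yields the individual fields (the variable names are mine, purely for illustration):

```powershell
## Sample ApplicationStartDetails line from above
$line = '56c1d895-e3d8-4dcc-a303-b0162a97c87b;\\ourappvserver\appvshare\thisapp\thisapp.appv;de0a5cd1-3264-4418-82dd-4bdf5959a29d;957c71c9-a732-401b-b354-17c493decac8;This App'
$fields = $line -split ';'
$appVAppGuid      = $fields[ 0 ]  ## the GUID later passed to CtxAppVLauncher.exe
$packageUNC       = $fields[ 1 ]  ## UNC path to the .appv package
## $fields[ 2 ] and $fields[ 3 ] are further GUIDs identifying the App-V package
$publishedAppName = $fields[ -1 ] ## the published application name is the last field
```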

The BrokerAgent then downloads all .appv packages that you’ve added to your delivery groups to the “%SystemRoot%\Temp\CitrixAppVPkgCache” folder. This happens regardless of whether App-V has been configured with a Shared Content Store. As it happens at boot, the packages should be locally cached by the time any user logs on who might want to run one of the published App-V applications, so you’re trading system drive disk space for speed of launch. I’ve yet to see how this impacts on PVS cache in RAM so we may look at whether we can pre-populate the cache in the PVS master image so that we don’t lose write cache when .appv packages are downloaded once the image is booted into shared mode.

There is a gotcha here though: because Citrix use PowerShell to integrate with App-V, if your PowerShell execution policy does not allow local scripts to be run, such as being set to “Restricted” which is the default, then the App-V integration will not work. This can be seen in the above cache folder not populating and in apps erroring when launched. To get around this, we set the execution policy to “RemoteSigned” in the base PVS image so we didn’t have to rely on group policy being applied before the BrokerAgent starts.

We’re giving users all of their applications via Receiver generated shortcuts, which is where the next small issue arises: the shortcuts that Receiver (actually SelfService.exe) generates for App-V apps run SelfService.exe, so effectively a new logon session is created (which can be seen by running quser.exe) to host the App-V application. Ultimately, Citrix call their own launcher process, CtxAppVLauncher.exe, which sits in the VDA folder and is installed with the VDA by default. This then uses PowerShell to launch the App-V application from the %AllUsersProfile%\App-V folder (using sparse files so disk space is efficiently managed). You do still need the Microsoft App-V client installed though, since that is what actually runs the App-V package, as you’d expect.

This second logon all takes time though, so we decided to cut out the middle man, SelfService.exe, and make the shortcut run CtxAppVLauncher.exe directly, which takes the App-V app GUID as its single argument. This we do with a PowerShell script, run at logon (actually via AppSense Environment Manager), that was originally designed to check that pinned Receiver shortcuts were still valid and to update their icons, as these are dynamically created, and named, at each logon (we’re using mandatory profiles). It was extended to find shortcuts for App-V apps, by matching the application name in the shortcut target with the data in the “ApplicationStartDetails” registry value, and to change them to run CtxAppVLauncher.exe instead of SelfService.exe, with the App-V app GUID from the registry value as the argument.
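A minimal sketch of the re-pointing logic, assuming the default VDA installation folder (the function name and default path are mine for illustration; the result would be applied to the .lnk file via the WScript.Shell COM object):

```powershell
## Compute the replacement target for a Receiver-generated App-V shortcut
Function Get-AppVShortcutTarget
{
    Param
    (
        [string]$appVAppGuid , ## from the ApplicationStartDetails registry value
        [string]$vdaFolder = 'C:\Program Files\Citrix\Virtual Desktop Agent' ## assumed default install folder
    )
    @{
        'TargetPath' = Join-Path -Path $vdaFolder -ChildPath 'CtxAppVLauncher.exe'
        'Arguments'  = $appVAppGuid ## CtxAppVLauncher.exe takes the App-V app GUID as its single argument
    }
}

$newShortcut = Get-AppVShortcutTarget -appVAppGuid '56c1d895-e3d8-4dcc-a303-b0162a97c87b'
## Then set TargetPath and Arguments on the shortcut object returned by
## (New-Object -ComObject WScript.Shell).CreateShortcut( $lnkPath ) and call its Save() method
```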

It does seem slightly strange though that we’ve had to go to these lengths to create locally launched App-V apps although the results are quite impressive in that the apps launch almost instantly due to the caching.

There may be further posts on the App-V integration depending on what else we unearth. Looking at FTAs (File Type Associations) is definitely on the agenda.