XenApp/XenDesktop 7.x Availability & Health Summary Script

This script started life because I became aware that my (former) colleagues in customer technical support were performing manual checks for customers at the start of the working day, so it seemed obvious to me to automate as much of that as possible.

There are already some great scripts out there that will give you very detailed machine-by-machine health, but I wanted something that would give an overview of the environment(s). Many of the environments I work in have many hundreds of machines, so one or two being unavailable at any one time isn’t necessarily a disaster, but wading through an email listing 200+ machines to get a feel for overall health can be error prone.

The email that the script sends starts with a summary:
[Screenshot: Citrix daily checks health summary]
Below that are a series of tables giving specific details on each of these items, as well as a per-delivery group summary including scheduled reboot information, shown separately for XenApp and XenDesktop since you probably want to see different information for each.

[Screenshot: per-delivery group health check summary]

It will also show the following in separate tables, together with delivery group and catalogue information for each machine:

  • PVS devices with the highest number of retries, which might suggest problems with storage, networking or both if the numbers are high.
  • File share usage and percentage free space for a list of UNCs passed to the script.
  • Availability metrics for application groups and desktops which are tag restricted since the high level per-delivery group statistics can’t give this information.
  • Machines not powered on (an -excludedMachines option is available if you want/need to exclude machine names which are expected to be powered off, such as PVS maintenance mode masters).
  • Unregistered powered on machines which are not in maintenance mode.
  • Machines with the highest number of sessions.
  • Machines with the highest load balancing indexes.

The “powered on machines failed to return boot time” table may indicate machines in a bad state of health, such as having fallen off the domain, being stuck at boot, hung, etc.
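
As an aside, the sort of remote query involved in fetching a boot time looks something like this; a sketch of the idea only, since the script’s own plumbing may differ, and the machine name is made up:

## Sketch only - the script's own method may differ; 'ctxworker001' is a made-up name
Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName 'ctxworker001' |
    Select-Object -ExpandProperty LastBootUpTime   ## errors or times out if the machine is in a bad way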

The “users disconnected more than xxx minutes” table is designed to show users whose sessions have failed to be terminated by settings in Citrix policy, which I have seen at some customers (I have a separate script on GitHub to help rectify this). It also cross references each user’s session to the User Profile Service event log on the server where Citrix thinks the disconnected session resides, to check whether the session actually still exists, as I have seen cases where it has already been logged off. I call these “ghost” sessions and they can cause a problem if an affected user tries to launch another application that would session share on that server, since they will get an error as there is no session to share. I came across a workaround for this, which is to set the “hidden” flag for that session so that it won’t be considered for session sharing and, yes, there is a script for that on GitHub too.

If your machines are not power managed by Citrix, so the Power State shows as “unmanaged” in Studio, the -vCentres option can be used with a comma separated list of vCentres, which allows the script to get the power state from VMware instead. VMware PowerCLI must be installed for this to work.
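
Under the covers this amounts to the kind of PowerCLI call shown below; a minimal sketch, not the script’s actual code, and the server and machine names are illustrative:

## Sketch only; vCentre and machine names are illustrative
Connect-VIServer -Server 'vcentre01'   ## prompts for credentials unless cached or passed through
Get-VM -Name 'ctxworker001' | Select-Object Name , PowerState   ## PoweredOn / PoweredOff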

Options wise, the script accepts the following, although not all are mandatory and many take defaults (there are a few others but I’ve omitted them as they’re not especially interesting). You can also tab complete the options if running interactively and only need to specify enough of an option name for it to be unambiguous:

Option Purpose
-ddcs Comma separated list of Delivery Controllers (only specify one per SQL connection)
-pvss Comma separated list of PVS servers (only specify one per SQL connection)
-vCentres Comma separated list of VMware vCentres
-UNCs Comma separated list of file shares to report on capacity & free space
-mailserver Address of SMTP server to use to send the email
-proxyMailServer If the SMTP server does not allow relaying via the machine where you run the script, use this option to proxy it via an allowed machine
-from The sender of the email. The default is the name of the machine running the script, but this may fail as it isn’t a valid email address
-subject The subject of the email. The default includes the date/time
-qualifier Prepended to the subject. E.g. “Production” or “Test”
-recipients Comma separated list of email recipients
-excludedMachines A regular expression where matching machines are excluded
-disconnectedMinutes Report sessions disconnected over this time which should be greater than any setting in Citrix policy. Default is 480 (8 hours)
-lastRebootedDaysAgo Report on machines which have not been rebooted in more than this number of days. The default is 7 days
-topCount Report this number of machines per category. Default is 5
-excludedTags Comma separated list of Citrix tags to exclude if machines are tagged

It must be run where the Citrix Delivery Controller and PVS PowerShell cmdlets are available locally, which means anywhere the Studio and PVS consoles are installed. I tend to have these installed on dedicated management servers so as not to risk compromising the performance of production servers like Delivery Controllers.

If you don’t have scheduled reboots set and don’t want to report on workers not rebooted in a given timeframe then pass zero to the -lastRebootedDaysAgo option.

I tend to schedule it to run at least a couple of times a day for customers: once early in the morning, so issues spotted can be rectified before the busier periods, and again just before midday, when I think usage will be at its maximum, so overloaded servers, etc. can more easily be spotted and capacity increased if necessary. A typical command line to run it as a scheduled task is:

-ddcs ctxddc001 -pvss ctxpvs001 -UNCs \\clus01\AppV,\\clus01\commonfiles,\\clus01\usersettings -mailserver smtp.org.uk -recipients guy.leech@somewhere.uk -excludedMachines "\\(win10|win7)"

The script is available on GitHub here, requires version 3.0 of PowerShell as a minimum and is purely passive, other than sending an email, so the risks associated with it are very low, although you use it entirely at your own risk. Note that it also requires the “Guys.Common.Functions.psm1” module, which should be placed in the same folder as the script itself and is available in the same GitHub repository.

VMware integration added to Citrix PVS device detail viewer & actioner

You may be familiar with the script I wrote, previously covered here and available on GitHub here, that gives you a single pane view, either in csv or on screen in a filterable and sortable grid view, of all your Provisioning Services devices, together with information from Delivery Controllers such as machine catalogue and delivery group membership, as well as registration and maintenance mode status. When using the grid view, you can select any number of devices to get a GUI that allows operations like booting or shutting them down and removing them from PVS and/or DDC.

When working at a customer recently I came across a number of VMs in VMware that were named using the XenApp worker naming scheme but weren’t shown in the PVS or Studio consoles. Being the inherently lazy person that I am, I didn’t fancy deleting these individually in VMware and Active Directory (if they even existed in the latter), so I decided it would be useful to extend the script to add in any VMs that matched a specific naming pattern, so as not to pull in infrastructure VMs for example, and that hadn’t already been found in the Citrix PVS and DDC data. I implemented this utilising VMware PowerCLI and also added a “Remove from Hypervisor” button to the action GUI so that these orphans can be removed in one go, including their hard drives.

To show VMs that don’t exist in either PVS or DDC in the grid view, simply add filters for where the DDC and PVS servers are empty.

[Screenshot: grid view filtered to show orphaned VMs]

It will try to get AD account details too, such as the account creation and last logon dates and the description, to help you figure out what the machines are and whether they have been used recently. They may not exist in AD at all, of course, but that will be apparent in the data displayed, unless you don’t have domain connectivity/rights or the ActiveDirectory PowerShell module available.

This additional functionality is enabled by specifying the -hypervisors argument on the command line and passing it a comma separated list of your vCenter servers. If you do not have cached credentials (e.g. via New-VICredentialStoreItem) or pass-through authentication working then it will prompt for credentials for each connection. You must have already installed the VMware PowerCLI package corresponding to the version of vSphere that you are using. There are examples of the command line usage in the help built into the script.
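
For example, to cache credentials so that the script doesn’t prompt, something like this, where the server and account are placeholders:

## Cache vCenter credentials for the current user; server and account are placeholders
New-VICredentialStoreItem -Host 'vcentre01' -User 'mydomain\svc-powercli' -Password 'NotThisOne!'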

I then realised that, in addition to the information already gathered that allows easy identification of devices booting off the wrong vDisk/vDisk version and devices that are overdue a reboot, for example, I could also pull in the following VMware details, again to help identify where VMs are incorrectly configured:

  • Number of CPUs
  • Memory
  • Hard drives (the size of each assigned)
  • NICs (the type of each assigned, e.g. “vmxnet3”)
  • Hypervisor

You can then sort or filter in the grid view or csv to uncover misconfigured VMs.

[Screenshot: VMware information columns in the grid view]

The downside to all this extra information is that there are now up to 42 (a coincidence!) columns of information to be displayed in the grid view but, unfortunately, versions of PowerShell prior to 5.0 can only display a maximum of 30 columns. Csv exports aren’t affected by this limitation though. As I am often heard saying to my kids, it’s better to have something and not need it than to need something and not have it; you can remove columns in the grid view, by right clicking on any column header, or in Excel, or whatever you use to view the csv. If this limitation will impact you, consider upgrading as there are a whole load more PowerShell features that you’re missing.

To restrict what VMs are returned by the Get-VM cmdlet, you will probably need to use the -name argument together with a regular expression (aka regex) which will only match your XenApp/XenDesktop workers. For instance, if your VMs are called CTX1001 through CTX1234 and also CTX5001 onwards then use something like the following:

'^CTX[15]\d{3}$'

The -name parameter is also used to restrict which PVS devices are included, so you can include just a subset if you have, say, a sub-naming convention that names development XenApp servers differently from production ones, e.g. CTXD1234 versus CTXP4567, which will also make it quicker.

To check that a regular expression you build matches what you expect before you run the script, there are online regex checkers available, but I just use PowerShell. For instance, typing the following in a PowerShell session will display “True”:

'CTX1042' -match '^CTX[15]\d{3}$'

I also decided to add a progress indicator since, with hundreds of devices, it can take several minutes to collect all of the relevant data although data is cached where possible to minimise the number of remote calls required. This can be disabled with -noProgress.
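
The progress indicator is just PowerShell’s standard mechanism; a sketch of the idea only, with invented variable names:

## Sketch of the idea only - variable names are invented
For( $i = 0 ; $i -lt $devices.Count ; $i++ )
{
    Write-Progress -Activity 'Collecting PVS device data' -Status $devices[ $i ].Name -PercentComplete ( 100 * $i / $devices.Count )
    ## gather the data for this device here
}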

If you do have orphaned VMs and you want to remove them, highlight them in the grid view and then click “OK” in the bottom right hand corner. Ctrl-A can be used to select all items in the grid view. This will then give you the action GUI (OK, not the prettiest user interface ever, but it does work!):

[Screenshot: PVS device actioner GUI]

In this GUI you can power off the VMs if they are on and then delete them from the hypervisor and from AD, all without having to go to any product consoles, assuming you are running the script under an account which has the necessary rights. When you quit this GUI, the devices that you originally selected in the grid view will be placed into the clipboard in case you need to paste them into a document, etc.

Using -save, -registry and, optionally, -serverset will also save/retrieve the server(s) specified by -hypervisors to/from the registry. This means that you don’t have to remember server names every time you run the script, which is handy when you deal with lots of different customers like I do.

Be aware that it needs to be run where the PVS and DDC cmdlets are available, so I would recommend installing these on a dedicated management server which does not host the PVS or DDC roles. You can then also use those consoles, and others you install, on there so that you don’t risk degrading the performance of key infrastructure servers. Also, don’t forget VMware PowerCLI and the AD PowerShell module (part of the RSAT feature).

Whilst I have checked the operation of this script as much as one man in West Yorkshire can, if you use it then you do so entirely at your own risk and I cannot be held responsible for any unintentional, or intentional, undesired effects. Always double, and even triple, check before you delete anything!

Having said that, I hope it is as useful for you as it is for me: as a reporting and status tool, I use it daily (weekends included!).

Finding Citrix PVS or Studio orphans

I recently released a script that cross references Citrix Provisioning Services device information with Delivery Controller and Active Directory data; I use it almost daily when working with PVS servers at version 7.7 or higher, since that’s when a native PowerShell interface appeared. See here for the original post. It allows me to easily and quickly health check, and potentially fix, issues that would otherwise need a lot of manual work and jumping around various consoles. Whilst the script could already easily identify devices that only existed in PVS, by filtering in the grid view or Excel where the DDC (Desktop Delivery Controller) field/column is empty, I realised I could extend it to identify devices that exist on Delivery Controllers, and so are visible in Studio, but don’t exist in PVS. You may of course expect to find some devices in PVS but not present on a DDC, and hence not in Studio, such as devices used for updating vDisks via booting in maintenance mode, since you won’t want to make those available via StoreFront or Receiver.

Once you have the on screen grid view, or the csv file open in Excel (or Google Sheets), show PVS devices not present on DDCs by simply filtering where the “DDC” column is empty (in the grid view, via the “Add Criteria” button). To show devices which are known to a DDC, and so visible in Studio, but not in PVS, filter where the “PVS Server” column is empty.

[Screenshot: filtering to show PVS orphans]

This of course assumes that you have specified the correct server names for your DDC and PVS servers via the -ddcs and -pvsservers options respectively. There’s no need to specify multiple servers for each if they share the same SQL database; only do so if they use different ones, such as you might have for completely separate test and production environments. Comma separate them if you do specify multiple servers.

If you’ve got a mixture of PVS and MCS (or manual) machine catalogues then it will only display machines found on the DDCs you specify which are in PVS linked machine catalogues, unless you specify the -provisioningType parameter.

I’ve also added to the actions menu so that these potential orphans can then be removed from PVS or DDC if you select them in the grid view and then click “OK”.

[Screenshot: removing orphans via the actions menu]

I’ve also sneaked in a potentially handy feature where you can save the PVS and DDC servers to the registry so that you don’t have to specify them on the command line ever again (on that machine at least). This helps me, if nobody else, as I use the script at many different customers and can’t always remember their specific server names, or sometimes specify the wrong ones. Save with -save, use the saved values with -registry, and add an optional server set name via -serverSet so you can have different sets of servers, e.g. pre-production and production.

For example:

& '.\Get PVS device info.ps1' -ddcs ddc001 -pvsServers pvs001 -save

So next time you just need to run:

& '.\Get PVS device info.ps1' -registry

They are stored in HKCU so are per-user.

The script, amongst others, is available on GitHub here. It has to be run on a machine which has both the PVS and DDC PowerShell cmdlets available, such as one with the PVS and Studio consoles installed. You’ll also need the ActiveDirectory PowerShell module, particularly if you want to include AD group membership information via the -ADGroups option.

Updated Hyper-V Clone from Disk Script

Back in 2015 I wrote a PowerShell script, which has a GUI, to create new Hyper-V VMs from existing VHD and VHDX files. The original blog post, “Poor man’s Hyper-V cloning from VHD/VHDX”, is available here.

Now that I’ve finally got a Windows 10 laptop with Hyper-V capability, I’ve made a few additions to the script since I’m using Hyper-V, and thus the script, pretty much daily now. For example, I have SysPrep’d Windows 10 and Server 2016 disks so I can quickly create fresh new VMs simply by right or double clicking the relevant VHDX file in explorer. If you select the linked clone option, so the whole source disk does not have to be copied, the creation, and optional automatic starting, takes only a few seconds.

These enhancements include:

  1. Install (and uninstall) option to integrate into explorer’s right click menu without having to go near the registry yourself (apparently it scares some people!).
  2. Addition of a notes field which will go into the notes for the VM, if specified.
  3. Option to hide the parent PowerShell script window so it looks cleaner, although this may make any errors more difficult to track down.

I hope you find it as useful as I do.

The updated script is available here.

When even Process Monitor isn’t enough

I was recently tasked to investigate why an App-V 5.1 application was giving a licence error at launch on XenApp 7.8 (on Server 2008R2) when the same application installed locally worked fine. I therefore ran up the trusty Process Monitor (procmon) tool to get traces on the working and non-working systems so I could look for differences. As I knew what the licence file was called, I homed in quickly on this in the traces. In the working trace, you could see it open the licence file, via a CreateFile operation, and then read from the file. In the App-V version, however, there was no read from the file (a ReadFile operation), yet no CreateFile operation was failing, so I couldn’t understand why it wasn’t even attempting to read from a file that it could apparently access. The same happened when running as an administrator, so it didn’t look like a file permission issue.

Now whilst procmon is a simply awesome tool, such that life without it would be unimaginably difficult, it does unfortunately only tell you about a few of the myriad Microsoft API calls. In order to understand even more of what a process is doing under the hood, you need to use an API monitor program that has the ability to hook any available API call. To this end I used WinAPIOverride (available here). What I wanted was to find the calls to CreateFile for the licence file and then see what happened after that, again comparing the good and bad traces.

WinAPIOverride can launch a process, but the process needs to be inside the App-V bubble for the app in order for it to function correctly. We therefore run the following PowerShell to get a PowerShell prompt inside the bubble for our application, which is called “Medallion”:

## Find the App-V package for our application
$app = Get-AppvClientPackage | ?{ $_.Name -eq 'Medallion' }
## Launch PowerShell inside that package's virtual environment (the "bubble")
Start-AppvVirtualProcess -AppvClientObject $app powershell.exe

We can then launch WinAPIOverride64.exe in this new PowerShell prompt, tell it what executable to run and then run it:

[Screenshot: launching the executable from WinAPIOverride]

Note that you may not be able to browse to the executable name so you may have to type it in manually.

Once we tell it to run, it allows us to specify which APIs we want details on, by clicking on the “Monitoring Files Library” button before we click “Resume”.

[Screenshot: WinAPIOverride monitoring files library]

You need to know the module (dll) which contains the API that you want to monitor. In this case it is kernel32.dll which we can glean from the MSDN manual page for the CreateFile API call (see here).

[Screenshot: selecting kernel32.dll APIs to monitor]

Whilst you can use the search facility to find the specific APIs that you want to monitor and tick just those, I decided initially to monitor everything in kernel32.dll, knowing that it would generate a lot of data but that we could search for what we wanted if necessary.

So I resumed the process, saw the usual error about the licence file being corrupt, stopped the API monitor trace and set about finding the CreateFile call for the licence file to see what it revealed. What I actually found was that CreateFile was not being called for the licence file at all; searching for the licence file in the trace revealed that it was instead being opened by a legacy API called OpenFile. The documentation for this API (here) says the following:

you cannot use the OpenFile function to open a file with a path length that exceeds 128 characters

Guess how long the full path of our licence file is? 130 characters! So it would seem we’re doomed with this API call, which we could see was failing in the API monitor anyway:

[Screenshot: the failing OpenFile call in the API monitor trace]
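
Incidentally, checking a path length before chasing a theory like this is trivial in PowerShell; the path below is made up purely to illustrate, and .Length simply returns the character count:

## Made-up path purely to illustrate; .Length returns the number of characters
'C:\ProgramData\App-V\A1B2C3D4\E5F6A7B8\Root\VFS\ProgramFilesX86\Medallion\licences\medallion.lic'.Length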

I suspect that we don’t see this in procmon because the OpenFile call fails before it gets converted to a CreateFile call and thence hits the procmon filter driver.

The workaround, given that we found the installation wouldn’t work in any folder other than c:\Medallion (so we couldn’t install it to, say, C:\M), was to shorten the package installation root by running the following as an admin:

Set-AppvClientConfiguration -PackageInstallationRoot '%SystemDrive%\A'

This changes the folder where App-V packages are cached from “C:\ProgramData\App-V” to “C:\A”, which saves us 16 characters. The C:\A folder needed to be created and given the same permissions and owner (SYSTEM) as the original folder. I then unloaded and reloaded the App-V package so it got cached to the \A folder, whereupon it all worked properly.
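
The unload and reload is just the standard App-V client cmdlets; roughly the following, where the package source path is a placeholder:

## Unload the cached copy then re-add it so it caches under the new, shorter root
Get-AppvClientPackage -Name 'Medallion' | Remove-AppvClientPackage
Add-AppvClientPackage -Path '\\server\content\Medallion.appv' | Publish-AppvClientPackage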

Exporting validated shortcuts to CSV file

In a recent Citrix XenApp 7.x consulting engagement, the customer wanted a user’s start menu on their XenApp desktop populated entirely via shortcuts created dynamically at logon by Citrix Receiver. Receiver has a mechanism whereby a central store can be configured for shortcuts such that, when the user logs on, the shortcuts they are allowed are copied from this central share to their start menu. For more information see https://www.citrix.com/blogs/2015/01/06/shortcut-creation-for-locally-installed-apps-via-citrix-receiver/

Before too long there were in excess of 100 shortcuts in this central store, and checking these individually by examining their properties in explorer was rather tedious to say the least, so I looked to automate it via good old PowerShell. As I already had logon scripts that manipulated shortcuts, it was easy to adapt that code to enumerate the shortcuts in a given folder and write their details to a csv file, so it could easily be filtered in Excel to find shortcuts whose target didn’t exist, although the script also outputs the details of these as it runs.
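
The core technique is only a few lines; a simplified sketch of the approach, not the actual script, with placeholder paths:

## Read each shortcut's target via the WScript.Shell COM object and check the target exists
$shell = New-Object -ComObject 'WScript.Shell'
Get-ChildItem -Path '\\fileserver\shortcuts$' -Filter '*.lnk' -Recurse | ForEach-Object {
    $shortcut = $shell.CreateShortcut( $_.FullName )
    [pscustomobject]@{
        'Shortcut' = $_.FullName
        'Target' = $shortcut.TargetPath
        'TargetExists' = ( ! [string]::IsNullOrEmpty( $shortcut.TargetPath ) -and ( Test-Path -Path $shortcut.TargetPath ) )
    }
} | Export-Csv -Path 'c:\temp\shortcuts.csv' -NoTypeInformation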

I then realised that the script could have more uses than just this, for instance to check start menus on desktop machines so decided to share it with the wider community.

The get-help cmdlet can be used to see all the options but, in its simplest form, just specify a -folder argument to tell it the parent folder for the shortcuts and -csv with the name of the csv file you want it to write (it will overwrite any existing file of that name, so be careful).
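
For example, where the script name and paths are placeholders for wherever you’ve saved it:

& '.\Check shortcuts.ps1' -folder '\\fileserver\shortcuts$' -csv 'c:\temp\shortcuts.csv'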

It can also check that the shortcut’s target exists on a remote machine. For instance, you can run the script on a Citrix Delivery Controller but have it check the targets on a XenApp server, via its administrative shares, by using the -computername option.

If you run it on a XenApp server with the “PreferTemplateDirectory” registry value set, use the -registry option instead of -folder and it will read this value from the registry and use that folder.

If you’re running it on a desktop to check that there are no bad shortcuts in the user’s own start menu, whether it is redirected or not, or in the all users start menu then specify the options -startmenu or -allusers respectively.

Finally, it can email the resultant csv file via an SMTP mail server using the -mailserver and -recipients options.

The script is available for download here.

Poor man’s Hyper-V cloning from VHD/VHDX

If you use Server 2012 Hyper-V but don’t have the luxury of System Center Virtual Machine Manager to provision new virtual machines for you, then I offer here a PowerShell script, with GUI, that will do it for you from a master VHD/VHDX. I’ll also show you how to integrate it into explorer so you can just right click on a VHD/VHDX file and clone it from there.

I use Hyper-V at home and have SysPrep’d VHDs of all the Operating Systems I typically use but let’s not get into the debate, started by one Mark Russinovich, as to whether we actually need to SysPrep anymore – I’m old school so I do it out of habit more than anything.

So what we need to do to use an existing VHD to create a new VM is (a minimal PowerShell sketch of these steps follows the list):

  1. Copy the VHD/VHDX to a new file or create a new differencing disk from it (aka a linked clone)
  2. Create a new VM with the new VHD/VHDX file as its disk
  3. Connect the VM to a network (optional)
  4. Start the VM (optional)
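
As promised, here is a minimal sketch of those steps, showing the linked clone variant; the names, paths and virtual switch are illustrative, not taken from the script:

## Minimal sketch only; paths, names and the virtual switch are illustrative
$parent = 'G:\Masters\Server2012R2.vhdx'
$disk   = 'G:\Hyper-V\Hyper-V Hard Drives\NewVM.vhdx'
New-VHD -Path $disk -ParentPath $parent -Differencing   ## linked clone; use Copy-Item instead for a full copy
New-VM -Name 'NewVM' -VHDPath $disk -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'External virtual switch'
Start-VM -Name 'NewVM'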

We start this process by right clicking the VHD that we want to copy in explorer:

[Screenshot: right clicking a VHD in explorer]

and selecting the “Create VM” option, which comes from a registry key I’ve created, covered later, that runs the PowerShell script with the VHD/VHDX file as one of its arguments.

This then gives the user interface constructed on the fly by PowerShell:

[Screenshot: the Hyper-V VHD cloner GUI]

All we have to do at this point, if we are happy with the defaults, is enter the new virtual machine name and either click the “Create” button, hit Alt-C or press enter. Note that the VHD/VHDX destination folder is pre-populated with my personal default so I don’t have to type it in or use the folder browser; this is done via the registry value we’ll see later, and we can set other defaults there too if we wish. The name of the new VHD/VHDX file will be the machine name you entered, with a “- <number>” suffix if you create more than one VM at a time. If the destination VHD/VHDX file already exists then an error will be shown, unless the -force option is specified, which will cause the VHD/VHDX to be overwritten, so use it wisely.

Depending on your storage, the file copy for a non-linked clone may take a while, but you’ll find a PowerShell window behind the GUI with a rudimentary progress indicator. You will also get a message box upon completion unless you specify the -quiet switch.

[Screenshot: completion message box]

Also, if you run it as an administrator but with UAC enabled (what I like to call the “sensible” option when running as an administrator, which I avoid as much as I can anyway), the script will detect this and run itself again, elevated, so you should expect to see a UAC elevation prompt. Without elevation, it will probably fail due to lack of permissions.

I’ve tested it only on Server 2012 R2 English with PowerShell 4.0, and it certainly works reliably for me, but I thought I’d share it lest it help anyone else save time when creating new VMs from VHD/VHDX files. It won’t work on Server 2008/2008 R2 as they don’t have PowerShell support for Hyper-V (out of the box, anyway).

It’s quick to use as it has been designed to be driven via the keyboard, so you can tab through the fields and hit “enter” to start the creation process at any point. There are also keyboard accelerators, activated by hitting the “Alt” key. Hitting the escape key will quit immediately.

Note that it won’t work with AVHD/AVHDX files or snapshots, as the disk cloning is done completely outside of Hyper-V; the Hyper-V PowerShell cmdlets are only used to check whether a VM of the name you entered already exists and, if not, to create the new VM, set the generation and number of processors and start it if requested.

The script can be downloaded here and is used entirely at your own risk. Any error messages will be displayed in a message box.

To integrate it into explorer, create a new key, with the name you want to appear in the right click menu (mine is “Create VM” as shown above), under “HKEY_CLASSES_ROOT\Windows.VhdFile\shell”. Inside the key you create, create another key called “command” and set the default value of this “command” key to something like the following (obviously substituting your preferred default destination folder for my G: drive example, or leaving out the -Folder option if you are happy to always enter or browse to a folder):

powershell.exe -command "& 'c:\scripts\Clone VHD.ps1' -Verbose -ShowUI -Folder \"G:\Hyper-V\Hyper-V Hard Drives\" -SourceVHD \"%1\""

So it looks like this in regedit; make sure you get all the single and double quote marks correct, otherwise spaces in any argument will stop it working:

[Screenshot: the command key in regedit]

A .reg file can be downloaded here; just change the path in it to wherever you save the PowerShell script, and the argument to the -Folder switch to your default folder for storing Hyper-V hard drives.
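
Alternatively, if you’d rather script the registry changes than click around in regedit, something along these lines would do it; a sketch only, using my example paths, and it needs to be run elevated:

## Sketch only - substitute your own script path and default folder; run elevated
New-PSDrive -Name HKCR -PSProvider Registry -Root HKEY_CLASSES_ROOT | Out-Null
New-Item -Path 'HKCR:\Windows.VhdFile\shell\Create VM\command' -Force | Out-Null
Set-ItemProperty -Path 'HKCR:\Windows.VhdFile\shell\Create VM\command' -Name '(default)' -Value 'powershell.exe -command "& ''c:\scripts\Clone VHD.ps1'' -Verbose -ShowUI -Folder \"G:\Hyper-V\Hyper-V Hard Drives\" -SourceVHD \"%1\""'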

The script can also be run without the GUI in a PowerShell session, e.g.:

& 'c:\scripts\Clone VHD.ps1' -SourceVHD "c:\folder\source.vhdx" -Name "My new VM" -Switch "External virtual switch" -Start -LinkedClone -Folder "d:\hyper-v\hyper-v hard drives"

where you need to change paths and virtual switches to match your system. Full help is available for it via Get-Help.

If the script won’t run due to the PowerShell execution policy then run “Set-ExecutionPolicy -ExecutionPolicy RemoteSigned” as an administrative user, but please understand the implications of this before doing so. If it still won’t set the execution policy because of group policy then, as an administrator, change the “ExecutionPolicy” value in “HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell” to “RemoteSigned”, although this is only a temporary workaround as the original value will eventually be reapplied by group policy refresh.