When even Process Monitor isn’t enough

I was recently tasked to investigate why an App-V 5.1 application was giving a licence error at launch on XenApp 7.8 (on Server 2008 R2) when the same application installed locally worked fine. I therefore ran up the trusty Process Monitor (procmon) tool to get traces on the working and non-working systems so that I could look for differences. As I knew what the licence file was called, I homed in quickly on this in the traces. In the working trace, you could see the application open the licence file, via a CreateFile operation, and then read from it. In the App-V version, however, there was no ReadFile operation against the file, yet no CreateFile operation was failing either, so I couldn’t understand why it wasn’t even attempting to read from a file that it apparently could access. The same happened when running as an administrator, so it didn’t look like a file permission issue.

Now whilst procmon is a simply awesome tool, such that life without it would be unimaginably difficult, it does unfortunately only show you a small subset of the myriad Microsoft API calls. To understand even more of what a process is doing under the hood, you need an API monitor program that can hook any available API call. To this end I used WinAPIOverride (available here). What I wanted was to find the calls to CreateFile for the licence file and then see what happened after that, again comparing the good and bad traces.

WinAPIOverride can launch a process but it needs to be inside the App-V bubble for the app in order for it to be able to function correctly. We therefore run the following PowerShell to get a PowerShell prompt inside the bubble for our application which is called “Medallion”:

## Get the App-V package object and start a PowerShell prompt inside its bubble
$app = Get-AppvClientPackage | Where-Object { $_.Name -eq 'Medallion' }
Start-AppvVirtualProcess -AppvClientObject $app powershell.exe

We can then launch WinAPIOverride64.exe in this new PowerShell prompt, tell it what executable to run and then run it:

[Image: WinAPIOverride launch dialog]

Note that you may not be able to browse to the executable name so you may have to type it in manually.

Once we tell it to run, it will allow us to specify what APIs we want to get details on by clicking on the “Monitoring Files Library” button before we click “Resume”.

[Image: the “Monitoring Files Library” button in WinAPIOverride]

You need to know the module (dll) which contains the API that you want to monitor. In this case it is kernel32.dll which we can glean from the MSDN manual page for the CreateFile API call (see here).

[Image: selecting kernel32.dll in the monitoring files library]

Whilst you can use the search facility to find the specific APIs that you want to monitor and just tick those, I decided initially to monitor everything in kernel32.dll, knowing that it would generate a lot of data but that I could search it for what I wanted if necessary.

So I resumed the process, saw the usual error about the licence file being corrupt, stopped the API monitor trace and set about finding the CreateFile API call for the licence file to see what it revealed. What I actually found was that CreateFile was not being called for the licence file at all; searching the trace for the licence file revealed that it was being opened by a legacy API called OpenFile instead. Looking at the details for this API (here), the documentation says the following:

you cannot use the OpenFile function to open a file with a path length that exceeds 128 characters

Guess how long the full path for our licence file is? 130 characters! So it would seem we are doomed with this API call, which we could see failing in the API monitor trace anyway:

[Image: failing OpenFile call for the Medallion licence file]

I suspect that we don’t see this in procmon as the OpenFile call fails before it gets converted to a CreateFile call and thence hits the procmon filter driver.
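If you want to convince yourself of the limit outside the application, you can call OpenFile directly from PowerShell via P/Invoke. The following is a minimal sketch with a made-up path; the OFSTRUCT definition comes from the Windows SDK, where the szPathName buffer is fixed at OFS_MAXPATHNAME (128) characters, which is exactly where the limit comes from:

Add-Type -Namespace Win32 -Name Kernel32 -MemberDefinition @'
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct OFSTRUCT
{
    public byte cBytes;
    public byte fFixedDisk;
    public ushort nErrCode;
    public ushort Reserved1;
    public ushort Reserved2;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)] // OFS_MAXPATHNAME
    public string szPathName;
}
[DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
public static extern int OpenFile(string lpFileName, out OFSTRUCT lpReOpenBuff, uint uStyle);
'@

$longPath = 'C:\' + ( 'x' * 130 ) + '\licence.lic' ## hypothetical path over 128 characters
$ofstruct = New-Object -TypeName 'Win32.Kernel32+OFSTRUCT'
[Win32.Kernel32]::OpenFile( $longPath , [ref]$ofstruct , 0 ) ## 0 = OF_READ
## OpenFile returns HFILE_ERROR (-1) on failure, as it does here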

The workaround was to shorten the package installation root, since we had found that the application wouldn’t install to any folder other than C:\Medallion, so we couldn’t simply install it to, say, C:\M. Run the following as an admin:

Set-AppvClientConfiguration -PackageInstallationRoot '%SystemDrive%\A'

This changes the folder where App-V packages are cached from “C:\ProgramData\App-V” to “C:\A”, which saves us 16 characters. The C:\A folder needed to be created and given the same permissions and owner (SYSTEM) as the original folder. I then unloaded and reloaded the App-V package so it got cached to the \A folder, whereupon it all worked properly.
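For reference, the unload and reload can be scripted too. This is a minimal sketch rather than the exact sequence used at the time, and the package source path is hypothetical:

## Confirm the new cache root has taken effect
Get-AppvClientConfiguration -Name PackageInstallationRoot

## Remove and re-add the package so that it gets cached under C:\A
Get-AppvClientPackage -Name 'Medallion' | Remove-AppvClientPackage
Add-AppvClientPackage -Path '\\server\share\Medallion.appv' | Publish-AppvClientPackage -Global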


Manipulating autorun logon entries with PowerShell

As I came to manually create a registry value for a script I wanted to run at logon, I thought that I may as well write a script that would add the item, to save people having to remember where to create the entries. It then morphed into something that would list or remove entries too, depending on the command line parameters specified.

The script is available here and can be run in a number of different ways, e.g.:

  • Show all existing autorun entries in the registry for the current user (if -registry is not specified then the file system will be used):
& "C:\Scripts\Autorun.ps1" -list -registry
  • Delete an existing autorun entry for the current user that runs calc.exe either by the name of the entry (the shortcut name or registry value name) or by regular expression matching on the command run:
& "C:\Scripts\Autorun.ps1" -name "Calculator" -remove
& "C:\Scripts\Autorun.ps1" -run Calc.* -remove
  • Add a new autorun entry to the file system that runs calc.exe for the current user:
& "C:\Scripts\Autorun.ps1" -name "Calculator" -run "calc.exe" -description "Calculator" 

The -allusers flag can be used with any of the above to make them operate for all users but the user running the script must have administrative rights.

When creating an autorun item in the file system, the icon for the shortcut can be specified via the -icon option.

Deletions will prompt for confirmation unless you specify -Confirm:$false. It will also prompt for confirmation when creating a new entry if an entry of that name already exists.

Specifying -verify will generate an error if the executable does not exist when creating a new entry.

If you are on a 64 bit system then you can specify the -wow6432node option when using -registry to work in the wow6432node area of the registry.
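Under the hood, per-user registry autorun entries live in the standard Run key, with the HKLM: equivalent used for all users, so the core operations boil down to something like the following minimal sketch (illustrative only, not the script itself):

## Per-user autorun entries; the same key under HKLM: is the all-users equivalent
$runKey = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run'

## List existing entries, dropping the PS* metadata properties
Get-ItemProperty -Path $runKey | Select-Object -Property * -ExcludeProperty PS*

## Add an entry that runs calc.exe at logon
Set-ItemProperty -Path $runKey -Name 'Calculator' -Value 'calc.exe'

## Remove it again
Remove-ItemProperty -Path $runKey -Name 'Calculator'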

Script to update Dynamic DNS registrations

If you use Dynamic DNS services like I do, then you are probably familiar with the need to keep your DNS record(s) updated with your external IP address. Whilst there are plenty of client programs around for this, I’m a little paranoid about running relatively unknown software on my systems so I looked at the feasibility of doing it via a PowerShell script.

The script, available here, has been tested with FreeDNS although it should work with other providers too (I hope). It can use one of three methods to update the registration by calling a URL, but it will not do so unless the IP address has changed, as some providers can block you if you update too often. You can override this behaviour with the -force switch; by default it will send an update anyway if the IP address hasn’t changed for 25 days, and you can change the number of days with the -notupdated option.

The easiest method to use is a randomized update token which is unique to your host name and does not need any authentication or passing of any other parameters. You get this URL from your provider and then just specify it on the command line to the script via the -url option.

The second method is to specify a URL of the form https://freedns.afraid.org/nic/update?hostname=+hostname&myip=+ip where the “+hostname” will be replaced with the host name provided via the -hostname argument and “+ip” will be the external IP address discovered by the script. This method needs authentication so -username and -password must be specified which will be those for your Dynamic DNS account.

The last method is a URL of the form https://+username:+password@freedns.afraid.org/nic/update?hostname=+hostname&myip=+ip which is similar to the second method but also passes the -username and -password arguments in the URL itself.
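Stripped right down, the second method amounts to something like the following minimal sketch. The ipify service used here to discover the external IP address is an assumption for illustration, as is the host name; the script itself may discover the address differently:

## Discover our current external IP address (api.ipify.org returns it as plain text)
$externalIP = Invoke-RestMethod -Uri 'https://api.ipify.org'

## Substitute the placeholders and call the provider's update URL with our account credentials
$hostname = 'myhost.example.com' ## hypothetical dynamic DNS host name
$updateURL = "https://freedns.afraid.org/nic/update?hostname=$hostname&myip=$externalIP"
Invoke-WebRequest -Uri $updateURL -Credential (Get-Credential) -UseBasicParsing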

I run this hourly via a scheduled task. The action for the scheduled task is to run “powershell.exe” with the following command line, where obviously you change the options to suit your account and the method you are using:

-ExecutionPolicy Bypass -NoProfile -File "C:\Scripts\Update dynamic dns.ps1" -url http://sync.afraid.org/u/blahblahblah/ -logfile c:\temp\dyndns.log -verbose -history

The optional -history switch writes your external IP address to a registry value whose name is the date/time the IP address was first detected as changed, to the key “HKEY_CURRENT_USER\SOFTWARE\Guy Leech\DynDNS\History” (unless you specify a different key via the -regkey option).


Exporting validated shortcuts to CSV file

In a recent Citrix XenApp 7.x consulting engagement, the customer wanted to have a user’s start menu on their XenApp desktop populated entirely via shortcuts created dynamically at logon by Citrix Receiver. Receiver has a mechanism whereby a central store can be configured to be used for shortcuts such that when the user logs on, the shortcuts they are allowed are copied from this central share to their start menu. For more information on this see https://www.citrix.com/blogs/2015/01/06/shortcut-creation-for-locally-installed-apps-via-citrix-receiver/

Before too long there were in excess of 100 shortcuts in this central store, so checking these individually by examining their properties in Explorer was rather tedious to say the least, and I looked to automate it via good old PowerShell. As I already had logon scripts that manipulated shortcuts, it was easy to adapt this code to enumerate the shortcuts in a given folder and then write the details to a csv file so that it could easily be filtered in Excel to find shortcuts whose target didn’t exist, although the script also outputs the details of these as it runs.

I then realised that the script could have more uses than just this, for instance to check start menus on desktop machines so decided to share it with the wider community.

The get-help cmdlet can be used to see all the options but in its simplest form, just specify a -folder argument to tell it where the parent folder for the shortcuts is and a -csv with the name of the csv file you want it to write (it will overwrite any existing file of that name so be careful).
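The core of the validation is straightforward: read each shortcut’s target via the WScript.Shell COM object and test whether it exists. A minimal sketch, with example folder and csv paths, checking local targets only:

## Read every .lnk under the given folder and flag targets that don't exist
$shell = New-Object -ComObject 'WScript.Shell'
Get-ChildItem -Path 'C:\ShortcutStore' -Filter '*.lnk' -Recurse | ForEach-Object {
    $target = $shell.CreateShortcut( $_.FullName ).TargetPath
    [pscustomobject]@{
        'Shortcut'      = $_.FullName
        'Target'        = $target
        'Target Exists' = ( ! [string]::IsNullOrEmpty( $target ) -and ( Test-Path -Path $target ) )
    }
} | Export-Csv -Path 'C:\Temp\shortcuts.csv' -NoTypeInformation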

It can also check that the shortcut’s target exists on a remote machine. For instance, you can run the script on a Citrix Delivery Controller but have it check the targets on a XenApp server, via its administrative shares, by using the -computername option.

If you run it on a XenApp server with the “PreferTemplateDirectory” registry value set, use the -registry option instead of -folder and it will read this value from the registry and use that folder.

If you’re running it on a desktop to check that there are no bad shortcuts in the user’s own start menu, whether it is redirected or not, or in the all users start menu then specify the options -startmenu or -allusers respectively.

Finally, it can email the resultant csv file via an SMTP mail server using the -mailserver and -recipients options.

The script is available for download here.

Getting the PVS RAM cache usage as a percentage

Citrix Provisioning Services has long been one of my favourite products (or Ardence, as it was originally known before being purchased by Citrix; that name still appears in many places in the product). It has steadily improved over time and the cache to RAM with overflow to disk feature is great, but how do you know how much of the RAM cache has been used? We care about this because if our overflow disk isn’t on SSD storage then use of the overflow file could cause performance degradation.

The PVS status tray program doesn’t tell us this: the cache size it displays is the sum of the free disk space on the drive where the overflow file (vdiskdif.vhdx) resides plus the RAM cache size, and the usage it displays is that of the overflow file, not the RAM cache.

[Image: PVS status tray cache display]

There are a number of articles out there that show you either how to get the non-paged pool usage, which gives a rough indication, or to use the free Microsoft Poolmon utility to retrieve the non-paged pool usage for the device driver that implements the cache. There’s a great article here on how to do this. Poolmon is also very useful for finding what drivers are causing memory leaks although now that most servers are 64 bit, there isn’t the problem there used to be where non-paged pool memory could become exhausted and cause BSoDs.

However, once we have learnt what the RAM cache usage is, how do we get that as a percentage of the RAM cache configured for this particular vdisk? I looked at the C:\personality.ini file on a PVS-booted VM (the same information is also available in “HKLM\System\CurrentControlSet\services\bnistack\PVSAgent”) but it doesn’t have anything that correctly tells us the cache size. There is a “WriteCacheSize” value but this doesn’t seem to bear any relation to the actual cache size so I don’t use it.
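Should you want to look at what is there for yourself, the registry side can be queried with a quick sketch like this (the value names are assumed to mirror the personality.ini entries):

## On a PVS-booted VM: show the write cache values the bnistack driver exposes
Get-ItemProperty -Path 'HKLM:\System\CurrentControlSet\services\bnistack\PVSAgent' |
    Select-Object -Property '*WriteCache*'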

With the release of PVS 7.7 came a full object-based PowerShell interface, so it is now very easy to interrogate PVS to find (and change!) all sorts of information, including the properties of a vdisk such as its RAM cache size (if it is set to cache to RAM with overflow to disk, which is type 9 if you look at the $WriteCacheType entry in personality.ini). So in a health reporting script that I’m running for all XenApp servers (modified from the script available here), I run the following to build a hash table of the RAM cache sizes of all vdisks:

[string]$PVSServer = 'Your_PVS_Server_name'
[string]$PVSSiteName = 'Your_PVS_Site_name'
[hashtable]$diskCaches = @{}
Invoke-Command -ComputerName $PVSServer -ScriptBlock {
    Add-PSSnapin Citrix.PVS.SnapIn
    Get-PvsDiskInfo -SiteName $using:PVSSiteName
} | ForEach-Object {
    if( $_.WriteCacheType -eq 9 ) ## cache in RAM, overflow to disk
    {
        $diskCaches.Add( $_.Name , $_.WriteCacheSize ) ## cache size in MB
    }
}

Note that this requires PowerShell 3.0 or higher because of the “$using:” syntax.

Later on when I am processing each XenApp server I can run poolmon.exe on that server remotely and then calculate the percentage of RAM cache used by retrieving the cache size from the hash table I’ve built by using the vdisk for the XenApp server as the key into the table.

## $vdisk is the vdisk name for this particular XenApp server
## $server is the XenApp server we are processing
$thisCache = $diskCaches.Get_Item( $vdisk ) ## get cache size (MB) from our hash table
[string]$poolmonLogfile = 'D:\poolmon.log'
$results = Invoke-Command -ComputerName $server -ScriptBlock `
	{ Remove-Item $using:poolmonLogfile -Force -EA SilentlyContinue ; `
	C:\tools\poolmon.exe -n $using:poolmonLogfile ; `
	Get-Content $using:poolmonLogfile -EA SilentlyContinue | `
		?{ $_ -like '*VhdR*' } }

if( ! [string]::IsNullOrEmpty( $results ) )
{
    ## Element [6] of the whitespace-split VhdR line is the number of bytes in use
    $PVSCacheUsedActual = [math]::Round( ($results -split "\s+")[6] / 1MB )
    $PVSCacheUsed = [math]::Round( ( $PVSCacheUsedActual / $thisCache ) * 100 )
    ## Now do what you want with $PVSCacheUsed
}

Finding out the usage of the overflow to disk file is just a matter of getting the size of the vdiskdif.vhdx file, which is achieved in PowerShell using the Get-ChildItem cmdlet and then accessing the “Length” property.

(Get-ChildItem "\\$server\d$\vdiskdif.vhdx" -Force).Length

We can then get the free space figure for the drive containing the overflow file using the following:

Get-WmiObject Win32_LogicalDisk -ComputerName $server `
	-Filter "DeviceID='D:'" | Select-Object -ExpandProperty FreeSpace
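Putting those two together gives the overflow file’s usage as a percentage of the total space it could grow into; a minimal sketch, assuming the overflow file lives on the D: drive as above:

## Overflow file size as a percentage of itself plus the remaining free space on its drive
$overflowBytes = (Get-ChildItem "\\$server\d$\vdiskdif.vhdx" -Force).Length
$freeBytes = Get-WmiObject Win32_LogicalDisk -ComputerName $server `
    -Filter "DeviceID='D:'" | Select-Object -ExpandProperty FreeSpace
$overflowUsedPercent = [math]::Round( ( $overflowBytes / ( $overflowBytes + $freeBytes ) ) * 100 )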

So now I’ve got a script I can run as a scheduled task to email a report of the status of all XenApp servers including their PVS cache usage.

[Image: Citrix health report]

Embedding files in an AppSense Environment Manager configuration

If you use AppSense EM for copying files from central locations to your end-points to use in logon actions, I’ve come up with a nice and easy way to embed these files into the configuration itself so that there is no need for or reliance on a file server to copy these down to the end-point. What you lose is the ability to compare file timestamps and so on since we will be dynamically creating the content on the end-point so its timestamp will be the current time, more or less.

We achieve this by simply encoding the source file, which may be a binary such as a wallpaper image, inserting that encoded data into a PowerShell custom action which decodes it and writes it to a file, and then using the file we’ve just created in other actions, such as setting the “wallpaper” registry value.

To encode a file, we use the following lines of PowerShell, which we put in a .ps1 file somewhere since this code isn’t going into the EM configuration:

$inputFile = 'Path to your file for encoding'
[byte[]]$contents = Get-Content -Path $inputFile -Encoding Byte
[System.Convert]::ToBase64String( $contents ) | Set-Content -Path c:\encoded.txt -NoClobber
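The more adventurous among you might want to wrap this into a PowerShell script that takes parameters for ease of reuse; a minimal sketch, with hypothetical script and parameter names:

## Encode-File.ps1 (hypothetical name): base64 encode any file ready for embedding
Param(
    [Parameter(Mandatory=$true)][string]$inputFile ,
    [string]$outputFile = 'c:\encoded.txt'
)
[byte[]]$contents = Get-Content -Path $inputFile -Encoding Byte
[System.Convert]::ToBase64String( $contents ) | Set-Content -Path $outputFile -NoClobber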

This gives us a text file “c:\encoded.txt” whose contents we need to embed into an EM custom action, so open the file in your preferred text editor, select all of the content and copy it to the clipboard. It should look something like this:

[Image: base64 encoded file content]

Now in your EM configuration create your custom action like the example below, where you paste the text copied above into the definition of the variable $encoded between the quotes, making sure that it all goes on one line.

$encoded = 'Paste your encoded data in here'
$newfile = ( $env:temp + '\myfile.jpg' )
[System.Convert]::FromBase64String($encoded) | Set-Content -Path $newfile -Encoding Byte

This will result in a file “myfile.jpg” in the user’s temporary folder, which can then be used as required. Obviously use the same file extension as the file that was originally encoded.

If the file needs to be written somewhere where the user doesn’t have write access then simply run the custom action as system.

And that’s all there is to it – nice and easy thanks to good old PowerShell. I haven’t tried it for huge files but it certainly works fine for files that are up to hundreds of KB in size.
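Part of the reason for that is that piping the decoded byte array sends each byte down the pipeline individually, which gets slow for big files. If you ever do need larger files, writing the whole array in a single .NET call should be considerably faster; a minimal alternative sketch for the decode step:

## Alternative decode: write the byte array in one call instead of piping it to Set-Content
$newfile = Join-Path -Path $env:temp -ChildPath 'myfile.jpg'
[System.IO.File]::WriteAllBytes( $newfile , [System.Convert]::FromBase64String( $encoded ) )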

The base64 encoding will result in data which is four thirds the size of the original file.

If it’s a PowerShell script that you want to embed, then I’ll show you a different technique for doing this in a later post which allows that script to be used asynchronously outside of EM without the need to create scheduled tasks or similar.

The origins of AppSense

A long time ago (1998) in a city far, far away (London; I live in West Yorkshire, some 200 miles north), I was a consultant working as part of a team implementing a new Citrix environment for a private bank. I think it was probably MetaFrame 1.0 on NT 4.0 Terminal Server Edition but I may be wrong – I started with WinFrame 1.5 in 1995, albeit with an OEM version called NTrigue from Insignia Solutions, based in High Wycombe, who Citrix went on to acquire (and with it Keith Turnbull and Jon Rolls). All was going well until I noticed that one particular user had somewhere approaching fifty executable files in their home drive and was periodically running them on our shiny new (physical) servers. Back then, there wasn’t anything like AppSense Performance Manager, as AppSense hadn’t yet come into existence; four physical processors was about the most you could put in a server (and that was very expensive) and a few gigabytes of RAM cost more than the server itself, so resources were at a premium and needed protecting. We therefore had a problem in that line of business applications were competing with all manner of “fun” software – probably what might be classed as malware these days.

My background is in software development – six years as a UNIX developer after graduation in 1988 and I wrote my first programs, initially in BASIC and then 6502 assembly language, in 1980 on a Commodore Pet – so I set about developing something that would stop this user from running any of these applications in order to preserve server resources.

So EStopper from ESoft was born, bearing in mind that the two most difficult challenges in software development are not actually writing the code itself but deciding what to call the product and what icon to use.

The very first version used Image File Execution Options (IFEO) in the registry coupled with File Type Associations (FTAs), so it was very much a blacklisting approach. You will probably have used the admin interface for this at some point – it was called “regedit” (my career started out in the non-GUI days so I couldn’t, and still can’t, code GUIs). All of the development for version 1.x was done out of hours, as I had a regular day job as a consultant, so it happened in hotel rooms, on trains and late at night at home after my wife had gone to bed. For version 2.0 I was allocated a whole month in which I had to learn how to write GUIs, which cured me of ever wanting to be a full-time developer again!

From here came the idea of Trusted Ownership, which obviates the need for whitelisting or blacklisting, thus simplifying deployment so that an out of the box/default configuration can give instant protection from all external executable threats. I’m not sure how I came up with this idea but by this point the first full-time developer had been recruited, as we’d sold the product to a couple of our “tame” Citrix accounts (as an “independent” organisation called iNTERNATA), and whilst he was a fantastic coder, he did his best work in the early hours of the morning so I’d get phone calls around 3am asking about some aspect of Windows NT security – and I needed my beauty sleep even back then!


Auditing of who was denied access to what was in the product from the very beginning but after seeing a denial of something called “porn.exe” in the event logs, I thought that a feature to take a copy of denied executables, purely for research/disciplinary purposes of course, was a good idea. And, no, I never did investigate “porn.exe” (yes, really) although archiving is still a very useful feature today to use when understanding a new environment so denied content can be examined without having to recreate it.

So finally AppSense the company was born in 1999, with dedicated sales and development resources, and the EStopper product was rebranded as AppSense since that was the only product the company originally had. Even when Performance Manager joined the stable a few years later, the installer for what was eventually rebranded to Application Manager was still called AppSense.msi for a while.

And the rest, as they say, is history although the origins of Environment Manager are also “Quite Interesting” which I’ll leave to another time.