Script to update Dynamic DNS registrations

If you use Dynamic DNS services like I do, then you are probably familiar with the need to keep your DNS record(s) updated with your external IP address. Whilst there are plenty of client programs around for this, I’m a little paranoid about running relatively unknown software on my systems so I looked at the feasibility of doing it via a PowerShell script.

The script, available here, has been tested with FreeDNS although it should (I hope) work with other providers too. It updates the registration by calling a URL, using one of three methods, but will not do so unless the IP address has changed, since some providers can block you if you update too often. You can override this behaviour with the -force switch; by default, the script will also send an update if the IP address hasn’t changed for 25 days, and you can change that number of days with the -notupdated option.

The easiest method to use is a randomized update token which is unique to your host name and does not need any authentication or passing of any other parameters. You get this URL from your provider and then just specify it on the command line to the script via the -url option.

The second method is to specify a URL of the form https://freedns.afraid.org/nic/update?hostname=+hostname&myip=+ip where “+hostname” will be replaced with the host name provided via the -hostname argument and “+ip” with the external IP address discovered by the script. This method needs authentication, so -username and -password must also be specified, these being the credentials for your Dynamic DNS account.
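As an illustration of what the script presumably does with such a URL, here is a minimal sketch; the variable names and the external IP lookup service are my own inventions, not the script’s actual code:

## Hypothetical sketch of method two: substitute the placeholders then call the URL with credentials
[string]$url = 'https://freedns.afraid.org/nic/update?hostname=+hostname&myip=+ip'
[string]$externalIP = Invoke-RestMethod -Uri 'https://api.ipify.org' ## assumption: any "what is my IP" service will do
$url = $url -replace '\+hostname' , $hostname
$url = $url -replace '\+ip' , $externalIP
$credential = New-Object System.Management.Automation.PSCredential( $username , ( ConvertTo-SecureString $password -AsPlainText -Force ) )
Invoke-WebRequest -Uri $url -Credential $credential -UseBasicParsing | Out-Null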

The last method is a URL of the form https://+username:+password@freedns.afraid.org/nic/update?hostname=+hostname&myip=+ip which is similar to the second method but also passes the -username and -password arguments in the URL itself.

I run this hourly via a scheduled task. The action for the scheduled task is to run “powershell.exe” with the following arguments, where obviously you change the options to suit your account and the method you are using:

-ExecutionPolicy Bypass -NoProfile -File "C:\Scripts\Update dynamic dns.ps1" -url http://sync.afraid.org/u/blahblahblah/ -logfile c:\temp\dyndns.log -verbose -history

The optional -history switch writes your external IP address to a registry value whose name is the date/time at which the changed IP address was first detected, under the key “HKEY_CURRENT_USER\SOFTWARE\Guy Leech\DynDNS\History” (unless you specify a different key via the -regkey option).
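Recording such a history entry only takes a couple of lines of PowerShell; a rough sketch of the idea (the exact code in the script may well differ):

[string]$regKey = 'HKCU:\SOFTWARE\Guy Leech\DynDNS\History'
if( ! ( Test-Path -Path $regKey ) )
{
    New-Item -Path $regKey -Force | Out-Null
}
## The value name is the date/time the change was detected and the data is the new external IP address
New-ItemProperty -Path $regKey -Name ( Get-Date -Format s ) -Value $externalIP -PropertyType String -Force | Out-Null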


Exporting validated shortcuts to CSV file

In a recent Citrix XenApp 7.x consulting engagement, the customer wanted to have a user’s start menu on their XenApp desktop populated entirely via shortcuts created dynamically at logon by Citrix Receiver. Receiver has a mechanism whereby a central store can be configured to be used for shortcuts such that when the user logs on, the shortcuts they are allowed are copied from this central share to their start menu. For more information on this see https://www.citrix.com/blogs/2015/01/06/shortcut-creation-for-locally-installed-apps-via-citrix-receiver/

Before too long there were in excess of 100 shortcuts in this central store, so checking these individually by examining their properties in Explorer was rather tedious, to say the least, and I looked to automate it via good old PowerShell. As I already had logon scripts that were manipulating shortcuts, it was easy to adapt this code to enumerate the shortcuts in a given folder and write the details to a csv file so that it could easily be filtered in Excel to find shortcuts whose target didn’t exist, although the script also outputs the details of these as it runs.
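The core of the approach only takes a few lines; here is a simplified sketch of the idea, using the WScript.Shell COM object (the actual script does rather more, such as the remote target checking described below):

$shellObject = New-Object -ComObject WScript.Shell
Get-ChildItem -Path $folder -Filter '*.lnk' -Recurse | %{
    $shortcut = $shellObject.CreateShortcut( $_.FullName )
    [pscustomobject]@{
        'Shortcut' = $_.FullName
        'Target' = $shortcut.TargetPath
        'Arguments' = $shortcut.Arguments
        'TargetExists' = [bool]( $shortcut.TargetPath -and ( Test-Path -Path $shortcut.TargetPath ) )
    }
} | Export-Csv -Path $csv -NoTypeInformation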

I then realised that the script could have more uses than just this, for instance to check start menus on desktop machines so decided to share it with the wider community.

The Get-Help cmdlet can be used to see all of the options but, in its simplest form, just specify a -folder argument to tell it where the parent folder for the shortcuts is and -csv with the name of the csv file you want it to write (it will overwrite any existing file of that name, so be careful).

It can also check that the shortcut’s target exists on a remote machine. For instance, you can run the script on a Citrix Delivery Controller but have it check the targets on a XenApp server, via its administrative shares, by using the -computername option.

If you run it on a XenApp server with the “PreferTemplateDirectory” registry value set, use the -registry option instead of -folder and it will read this value from the registry and use that folder.

If you’re running it on a desktop to check that there are no bad shortcuts in the user’s own start menu, whether it is redirected or not, or in the all users start menu then specify the options -startmenu or -allusers respectively.

Finally, it can email the resultant csv file via an SMTP mail server using the -mailserver and -recipients options.
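For example, a typical invocation might look something like this, although note that the script name, paths and server names here are purely illustrative:

& '.\Check shortcuts.ps1' -folder '\\fileserver\ReceiverShortcuts' -csv c:\temp\shortcuts.csv -computername xenapp01 -mailserver smtp.mycompany.com -recipients supportteam@mycompany.com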

The script is available for download here.

Getting the PVS RAM cache usage as a percentage

Citrix Provisioning Services has long been one of my favourite products (or Ardence, as it originally was before being purchased by Citrix, and that name still appears in many places in the product). It has steadily improved over time and the cache to RAM with overflow to disk feature is great, but how do you know how much of the RAM cache has been used? We care about this because, if our overflow disk isn’t on SSD storage, then using the overflow file could cause performance degradation.

The PVS status tray program doesn’t tell us this: it just displays, as the cache size, the sum of the RAM cache size and the free disk space on the drive where the overflow file (vdiskdif.vhdx) resides, and, as the usage, that of the overflow file, not the RAM cache.

[Screenshot: PVS status tray cache display]

There are a number of articles out there that show you either how to get the non-paged pool usage, which gives a rough indication, or how to use the free Microsoft Poolmon utility to retrieve the non-paged pool usage for the device driver that implements the cache. There’s a great article here on how to do this. Poolmon is also very useful for finding which drivers are causing memory leaks although, now that most servers are 64 bit, there isn’t the problem there used to be where non-paged pool memory could become exhausted and cause BSoDs.

However, once we have learnt what the RAM cache usage is, how do we get that as a percentage of the RAM cache configured for this particular vdisk? I looked at the C:\personality.ini file on a PVS booted VM (the same information is also available in “HKLM\System\CurrentControlSet\services\bnistack\PVSAgent”) but it doesn’t have anything that correctly tells us the cache size. There is a “WriteCacheSize” value but this doesn’t seem to bear any relation to the actual cache size so I don’t use it.

With the release of PVS 7.7 came a full object based PowerShell interface, so it is now very easy to interrogate PVS to find (and change!) all sorts of information, including the properties of a vdisk such as its RAM cache size (if it is set for cache to RAM, overflow to disk, which is type 9 if you look at the $WriteCacheType entry in personality.ini). So, in a health reporting script that I’m running for all XenApp servers (modified from the script available here), I run the following to build a hash table of the RAM cache sizes for all vdisks:

[string]$PVSServer = 'Your_PVS_Server_name'
[string]$PVSSiteName = 'Your_PVS_Site_name'
[hashtable]$diskCaches = @{}
Invoke-Command -ComputerName $PVSServer { Add-PSSnapin Citrix.PVS.SnapIn ; `
	Get-PvsDiskInfo -SiteName $Using:PVSSiteName } | %{
    if( $_.WriteCacheType -eq 9 ) ## cache in RAM, overflow to disk
    {
        $diskCaches.Add( $_.Name , $_.WriteCacheSize ) ## size is in MB
    }
}

Note that this requires PowerShell 3.0 or higher because of the “$using:” syntax.

Later on, when I am processing each XenApp server, I can run poolmon.exe on that server remotely and then calculate the percentage of RAM cache used, retrieving the cache size from the hash table I’ve built by using the vdisk for the XenApp server as the key into the table.

## $vdisk is the vdisk name for this particular XenApp server
## $server is the XenApp server we are processing
$thisCache = $diskCaches.Get_Item( $vdisk ) ## get cache size (MB) from our hash table
[string]$poolmonLogfile = 'D:\poolmon.log'
$results = Invoke-Command -ComputerName $server -ScriptBlock `
	{ Remove-Item $using:poolmonLogfile -Force -EA SilentlyContinue ; `
	C:\tools\poolmon.exe -n $using:poolmonLogfile ; `
	Get-Content $using:poolmonLogfile -EA SilentlyContinue | `
		?{ $_ -like '*VhdR*' } } ## VhdR is the pool tag used by the PVS RAM cache driver

if( ! [string]::IsNullOrEmpty( $results ) )
{
    $PVSCacheUsedActual = [math]::Round( ( $results -split "\s+" )[6] / 1MB ) ## bytes used by the driver, converted to MB
    $PVSCacheUsed = [math]::Round( ( $PVSCacheUsedActual / $thisCache ) * 100 ) ## percentage of the configured RAM cache
    ## Now do what you want with $PVSCacheUsed
}

Finding out the usage of the overflow to disk file is just a matter of getting the size of the vdiskdif.vhdx file, which is achieved in PowerShell using the Get-ChildItem cmdlet and then accessing the “Length” property.

(Get-ChildItem "\\$server\d$\vdiskdif.vhdx" -Force).Length

We can then get the free space figure for the drive containing the overflow file using the following:

Get-WmiObject -Class Win32_LogicalDisk -ComputerName $server -Filter "DeviceID='D:'" | `
	Select-Object -ExpandProperty FreeSpace
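Putting those two together, something like the following (the drive letter is from my environment) gives figures that can sit alongside the RAM cache percentage in the report:

[uint64]$overflowSize = ( Get-ChildItem -Path "\\$server\d$\vdiskdif.vhdx" -Force ).Length
[uint64]$freeSpace = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $server -Filter "DeviceID='D:'" | Select-Object -ExpandProperty FreeSpace
"{0}: overflow file is {1:N1} GB with {2:N1} GB free on its drive" -f $server , ( $overflowSize / 1GB ) , ( $freeSpace / 1GB )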

So now I’ve got a script I can run as a scheduled task to email a report of the status of all XenApp servers including their PVS cache usage.

[Screenshot: Citrix health report]

Embedding files in an AppSense Environment Manager configuration

If you use AppSense EM to copy files from central locations to your end-points for use in logon actions, I’ve come up with a nice and easy way to embed these files into the configuration itself so that there is no need for, or reliance on, a file server from which to copy them down to the end-point. What you lose is the ability to compare file timestamps and the like, since we will be dynamically creating the content on the end-point, so its timestamp will be the current time, more or less.

We achieve this by simply encoding the source file, which may be a binary such as a wallpaper image, inserting that encoded data into a PowerShell custom action which decodes it and writes it to a file, and then using the file we’ve just created in other actions, such as setting the “Wallpaper” registry value.

To encode a file, we use the following lines of PowerShell, which we put in a .ps1 file somewhere, since this code isn’t going into the EM configuration:

$inputFile = 'Path to your file for encoding'
[byte[]]$contents = Get-Content -Path $inputFile -Encoding Byte
[System.Convert]::ToBase64String( $contents ) | Set-Content -Path c:\encoded.txt -NoClobber
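The more adventurous among you might want to wrap this into a PowerShell script that takes parameters for ease of reuse; a minimal sketch, with parameter names of my own invention:

Param
(
    [Parameter(Mandatory=$true,Position=0)]
    [string]$inputFile ,
    [Parameter(Mandatory=$false,Position=1)]
    [string]$outputFile = 'c:\encoded.txt'
)
## Read the file as raw bytes and write its base64 encoding to the output text file
[byte[]]$contents = Get-Content -Path $inputFile -Encoding Byte
[System.Convert]::ToBase64String( $contents ) | Set-Content -Path $outputFile -NoClobber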

This gives us a text file “c:\encoded.txt” whose contents we need to embed into an EM custom action, so open the file in your preferred text editor, select all of the content and copy it to the clipboard. It should look something like this:

[Screenshot: sample base64 encoded data]

Now, in your EM configuration, create your custom action like the example below, pasting the text copied above into the definition of the variable $encoded, between the quotes, and making sure that it all goes on one line.

$encoded = 'Paste your encoded data in here'
$newfile = ( $env:temp + '\myfile.jpg' )
[System.Convert]::FromBase64String($encoded) | Set-Content -Path $newfile -Encoding Byte

This will result in a file “myfile.jpg” in the user’s temporary folder which can then be used as required. Obviously, use the same file extension as that of the file that was originally encoded.

If the file needs to be written somewhere where the user doesn’t have write access then simply run the custom action as system.

And that’s all there is to it – nice and easy thanks to good old PowerShell. I haven’t tried it with huge files but it certainly works fine for files that are up to hundreds of KB in size.

The base64 encoding will result in data which is four thirds the size of the original file, so, for example, a 300KB wallpaper becomes roughly 400KB of text in the configuration.

If it’s a PowerShell script that you want to embed, then I’ll show you a different technique for doing this in a later post which allows that script to be used asynchronously outside of EM without the need to create scheduled tasks or similar.

The origins of AppSense

A long time ago (1998) in a city far, far away (London; I live in West Yorkshire, some 200 miles north), I was a consultant working as part of a team implementing a new Citrix environment for a private bank. I think it was probably MetaFrame 1.0 on NT 4.0 Terminal Server Edition but I may be wrong – I started with WinFrame 1.5 in 1995, albeit with an OEM version called NTrigue from Insignia Solutions, based in High Wycombe, which Citrix went on to acquire (and with it Keith Turnbull and Jon Rolls). All was going well until I noticed that one particular user had somewhere approaching fifty executable files in their home drive and was periodically running them on our shiny new (physical) servers. Back then, there wasn’t anything like AppSense Performance Manager, as AppSense hadn’t yet come into existence; four physical processors was about the most you could put in a server (and that was very expensive) and a few gigabytes of RAM cost more than the server itself, so resources were at a premium and needed protecting. We therefore had a problem in that line of business applications were competing with all manner of “fun” software – probably what might be classed as malware these days.

My background is in software development – six years as a UNIX developer after graduating in 1988, having written my first programs, initially in BASIC and then 6502 assembly language, in 1980 on a Commodore PET – so I set about developing something that would stop this user from running any of these applications in order to preserve server resources.

So EStopper from ESoft was born, bearing in mind that the two most difficult challenges in software development are not actually writing the code itself but deciding what to call the product and what icon to use.

The very first version used Image File Execution Options (IFEO) in the registry, coupled with File Type Associations (FTAs), so it was very much a blacklisting approach. You will probably have used the admin interface for this at some point – it was called “regedit” (my career started out in the non-GUI days so I couldn’t, and still can’t, code GUIs). All of the development for version 1.x was done out of hours, as I had a regular day job as a consultant, so it happened in hotel rooms, on trains and late at night at home after my wife had gone to bed. For version 2.0 I was allocated a whole month in which I had to learn how to write GUIs, which cured me of ever wanting to be a full-time developer again!

From here came the idea of Trusted Ownership, which obviates the need for whitelisting or blacklisting, thus simplifying deployment, so an out of the box/default configuration can give instant protection from all external executable threats. I’m not sure how I came up with this idea but by this point the first full-time developer had been recruited, as we’d sold the product to a couple of our “tame” Citrix accounts (as an “independent” organisation called iNTERNATA), and whilst he was a fantastic coder, he did his best work in the early hours of the morning, so I’d get phone calls around 3am asking about some aspect of Windows NT security – and I needed my beauty sleep, even back then!


Auditing of who was denied access to what was in the product from the very beginning but, after seeing a denial of something called “porn.exe” in the event logs, I thought that a feature to take a copy of denied executables, purely for research/disciplinary purposes of course, was a good idea. And, no, I never did investigate “porn.exe” (yes, really), although archiving is still a very useful feature today when getting to understand a new environment, since denied content can be examined without having to recreate it.

So AppSense the company was finally born in 1999, with dedicated sales and development resources, and the EStopper product was rebranded as AppSense, since that was the only product the company originally had. Even when Performance Manager joined the stable a few years later, the installer for what was eventually rebranded as Application Manager was still called AppSense.msi for a while.

And the rest, as they say, is history although the origins of Environment Manager are also “Quite Interesting” which I’ll leave to another time.

First Experiences with XenApp 7.8 & App-V 5.1

Background

I’m currently working on a new XenApp rollout for a customer where we’ve been eagerly awaiting the 7.8 release to have a look at the App-V integration given that it promised to remove the need for separate App-V server infrastructure.

I’m not going to go into details here of how you make App-V applications available natively in XenApp/XenDesktop as that is covered elsewhere such as here. That article also covers troubleshooting and how to enable logging.

How it appears to work

When the Citrix Desktop Service (BrokerAgent) starts on your XenApp server, it communicates with a Delivery Controller and writes the details to the “ApplicationStartDetails” REG_MULTI_SZ value in “HKLM\SOFTWARE\Policies\Citrix\AppLibrary”. Now why it writes to the policies key when we’re not actually setting anything to do with App-V in policies I don’t know but a key is a key (unless it’s a value!). A typical line in this value looks like this:

56c1d895-e3d8-4dcc-a303-b0162a97c87b;\\ourappvserver\appvshare\thisapp\thisapp.appv;de0a5cd1-3264-4418-82dd-4bdf5959a29d;957c71c9-a732-401b-b354-17c493decac8;This App

Where the fields are semicolon delimited thus:

App-V App GUID;Package UNC;App-V Package GUID;Published App Name

The BrokerAgent then downloads all of the .appv packages that you’ve added to your delivery groups to the “%SystemRoot%\Temp\CitrixAppVPkgCache” folder. This happens regardless of whether App-V has been configured with a Shared Content Store. As it happens at boot, the packages should be locally cached by the time users who might want to run one of the published App-V applications log on, so you’re trading system drive disk space for speed of launch. I’ve yet to see how this impacts the PVS cache in RAM, so we may look at whether we can pre-populate the cache in the PVS master image so that we don’t lose write cache when .appv packages are downloaded after the image is booted into shared mode.

There is a gotcha here though: because Citrix use PowerShell to integrate with App-V, if your PowerShell execution policy does not allow local scripts to be run, such as being set to “Restricted” (the default), then the App-V integration will not work, which can be seen in the above cache folder not populating and apps erroring when launched. To get around this, we set the execution policy to “RemoteSigned” in the base PVS image so that we didn’t have to rely on group policy being applied before the BrokerAgent starts.
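For example, running something like this once whilst building the base image:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine -Force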

We’re giving users all of their applications via Receiver generated shortcuts, which is where the next small issue arises: the shortcuts that Receiver (actually SelfService.exe) generates for App-V applications run SelfService.exe, so effectively a new logon session is created to host the App-V application, which can be seen by running quser.exe. Ultimately, Citrix call their own launcher process, CtxAppVLauncher.exe, which sits in the VDA folder and is installed with the VDA by default. This then uses PowerShell to launch the App-V application from the %AllUsersProfile%\App-V folder (using sparse files so that disk space is managed efficiently). You do still need to have the Microsoft App-V client installed though, since that’s what runs the App-V package, as you’d kind of expect.

This second logon takes time though, so we decided to cut out the middle man, SelfService.exe, and make the shortcut run CtxAppVLauncher.exe directly, which takes the App-V app GUID as its single argument. We do this with a PowerShell script, run at logon (actually via AppSense Environment Manager), that was originally designed to check that pinned Receiver shortcuts were still valid and to update their icons, as these are dynamically created, and named, at each logon (we’re using mandatory profiles). It was extended to find shortcuts for App-V apps, by matching the application name in the shortcut target with the data found in the “ApplicationStartDetails” registry value, and then to change them to run CtxAppVLauncher.exe, instead of SelfService.exe, passing the App-V app GUID found in this registry value.
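A heavily simplified sketch of that shortcut fix-up logic follows; note that the start menu path, the matching on the shortcut’s base name, the field position used for the GUID and the CtxAppVLauncher.exe path are all assumptions made for illustration, not the actual production code:

$shellObject = New-Object -ComObject WScript.Shell
[string[]]$startDetails = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Citrix\AppLibrary' | Select-Object -ExpandProperty 'ApplicationStartDetails'
[string]$launcher = 'C:\Program Files\Citrix\Virtual Desktop Agent\CtxAppVLauncher.exe' ## assumed VDA folder
Get-ChildItem -Path "$env:APPDATA\Microsoft\Windows\Start Menu" -Filter '*.lnk' -Recurse | %{
    $shortcut = $shellObject.CreateShortcut( $_.FullName )
    if( $shortcut.TargetPath -like '*\SelfService.exe' )
    {
        [string]$appName = $_.BaseName
        [string]$detail = $startDetails | ?{ ( $_ -split ';' )[-1] -eq $appName } | Select-Object -First 1
        if( ! [string]::IsNullOrEmpty( $detail ) )
        {
            $shortcut.TargetPath = $launcher
            $shortcut.Arguments = ( $detail -split ';' )[0] ## assumption: the first field is the App-V app GUID
            $shortcut.Save()
        }
    }
}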

It does seem slightly strange that we’ve had to go to these lengths to get locally launched App-V apps, although the results are quite impressive in that the apps launch almost instantly due to the caching.

There may be further posts on the App-V integration depending on what else we unearth. Looking at FTAs (File Type Associations) is definitely on the agenda.


The Taming of the Print Server

The Problem

A customer recently reported to me that sometimes their users complained (users complaining – now there’s a rarity!) that printing was slow.

The Investigation

When I logged on to the print server, I observed that the CPU of this four vCPU virtual machine was almost constantly at 100% and, digging in with taskmgr, I saw that it was mostly being consumed by four explorer processes belonging to four different users who were actually disconnected (in some cases, for many days). Getting one of these users to reconnect, I saw that they had a window open on “Devices and Printers” and, with over six hundred printers defined on this print server, my theory was that it was spending its time constantly trying to update statuses and the like for all of these printers.

The Solution

Log off the users and set idle and disconnected session timeouts so that this doesn’t happen again! Well, that’s great, but what if the users are actually logged on, checking print queues and the like, as administrators have a tendency to do?

What we have to remember here is that consumption of CPU isn’t necessarily a bad thing as long as the “right” processes get preferential access to the CPU. So how do we define the “right” processes on this print server? Well, we know it’s probably not the explorer processes hosting the “Devices and Printers” applet, so why don’t we ensure that these don’t get the CPU when more deserving processes need it?

Therefore, what we do here is lower the base priorities of the explorer processes. This means that if a process (technically, a thread in a process) arrives in the CPU queue with a higher base priority than a lower priority thread, it gets onto the CPU first.

You can do this manually with taskmgr by right-clicking on the process and changing its base priority, but that’s not the most fun way to spend your day, although I may be wrong.

[Screenshot: Task Manager setting explorer.exe to a lower base priority]

So I wrote a few lines of good old PowerShell to find all processes of a given name, for all users, and change their base priorities to that specified on the command line. This must be run elevated since the “-IncludeUserName” parameter requires it. That parameter is used to filter out processes owned by SYSTEM or a service account, not that explorer processes are likely to be thus owned, so that the script can be used for any process, since we shouldn’t mess with operating system processes as that can cause deadlocks and similarly catastrophic issues. Also, I would strongly recommend that you never use the “RealTime” priority as that could cause severe resource shortages if the process granted it is CPU hungry.

I first implemented this in the All Users Startup folder, so that it would run at logon for everyone once their explorer process had launched, but I felt slightly nervous about this in case explorer got restarted mid-session.

I therefore implemented the script as a scheduled task, running every ten minutes under an administrative account whether that user was logged on or not, which looked for all explorer processes and set their base priorities to “Idle”, the lowest priority, so that when any spooler thread required CPU resource it would get it in preference to the lower priority explorer threads. However, if the explorer threads needed CPU and nothing else did, then they would still get the CPU despite their low priority, so potentially there are no losers. Users might experience a slightly unresponsive explorer at times of peak load but that’s a small price to pay for happier users, I hope you’ll agree.

Param
(
    [Parameter(Mandatory=$true,Position=0)]
    [string]$processName ,
    [Parameter(Mandatory=$false,Position=1)]
    [ValidateSet('Normal','Idle','High','RealTime','BelowNormal','AboveNormal')]
    [string]$priority = 'Idle'
)
## -IncludeUserName requires elevation; filter out SYSTEM and service account processes then set the priority
Get-Process -Name $processName -IncludeUserName | `
    ?{ $_.UserName -notlike '*\SYSTEM' -and $_.UserName -notlike '* SERVICE' } | `
    %{ $_.PriorityClass = $priority }

So we just create a scheduled task to run under an administrator account, whether that user is logged on or not, passing in a single positional/named parameter of “Explorer”; the base priority defaults to “Idle” if not specified. If you implement this on a non-English system then you may need to change the account names above to match the local equivalents of “SYSTEM”, “LOCAL SERVICE” and “NETWORK SERVICE”. Job done, as we say in the trade.

Oh, and I always like to create a folder for my scheduled tasks to keep them separate from the myriad of other, mostly built-in, ones.
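For illustration, the task could be created from the command line like this, although the task folder, name, script path and account shown are mine rather than prescriptive:

schtasks.exe /Create /TN "\Guy Leech\Lower Explorer Priority" /TR "powershell.exe -ExecutionPolicy Bypass -NoProfile -File C:\Scripts\Set-ProcessPriority.ps1 Explorer" /SC MINUTE /MO 10 /RU yourdomain\youradmin /RP * /RL HIGHEST

The /RP * switch prompts for the account’s password when the task is created so that it can run whether that user is logged on or not.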

[Screenshot: Task Scheduler showing a custom folder for the task]