Checking SQL Account Password Expiry


“Who uses SQL authentication these days?” I hear you ask. Well, unfortunately there are some legacy applications out there that only work with a SQL account, not an Active Directory one. Not ideal, but that’s life if you want to use that vendor’s application. But what if your corporate security policy insists that passwords are changed regularly which, let’s face it, isn’t exactly unreasonable in this day of heightened security awareness? Oh, and perhaps we should also change passwords when people who know them leave an organisation – now there’s a radical idea!

A Scripted Solution

I’m no SQL expert but it didn’t take me long to figure out a T-SQL query to return SQL account details, assuming you run it with an account that has sufficient SQL privileges to query accounts. That said, I found that a non-privileged SQL account can query itself, which is ideal for my first use case as we have no privileged access to the SQL instance.

Putting this into PowerShell is relatively easy, although I tend to use the SQL classes available in .NET rather than relying on the SQLPS module having been installed. A little more code is required to use them, but with the advantage that the script should run wherever the .NET Framework is installed.
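To give a flavour of the approach, here is a minimal sketch, not the actual script, of querying login expiry details via the .NET System.Data.SqlClient classes; the server\instance name and the exact query are my assumptions:

```powershell
## Minimal sketch, not the actual script: query SQL login expiry details via
## .NET classes so the SQLPS module is not needed.
## The server\instance name here is an assumption - change to suit.
$connection = New-Object -TypeName System.Data.SqlClient.SqlConnection
$connection.ConnectionString = 'Server=SQL01\instance01;Database=master;Integrated Security=True'

$query = @'
SELECT name , is_disabled ,
       LOGINPROPERTY( name , 'DaysUntilExpiration' ) AS DaysUntilExpiration ,
       LOGINPROPERTY( name , 'IsMustChange' )        AS MustChangePassword
FROM   sys.sql_logins
'@

$command = New-Object -TypeName System.Data.SqlClient.SqlCommand -ArgumentList $query , $connection
$datatable = New-Object -TypeName System.Data.DataTable

$connection.Open()
$datatable.Load( $command.ExecuteReader() )  ## run the query and capture the rows
$connection.Close()

$datatable
```

Note that LOGINPROPERTY only returns a value for DaysUntilExpiration when the login has CHECK_EXPIRATION enabled, so password policy enforcement needs to be on for the expiry data to be meaningful.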

Passing Credentials

When I write any script that takes credentials, I’m acutely aware that if someone is sniffing process command lines, either live with task manager, Process Explorer, etc. or trawling through historic data, depending on what process auditing options are set, then passing a clear text password on the command line isn’t a great idea.

Therefore there are a number of ways to make it more difficult, but not impossible, for the password to be obtained by a third-party.

  1. The -password option can be omitted if the %SQLval1% environment variable has been set to contain the clear text password as the script will use that instead. However, this still requires the plain text password to be stored somewhere although file system permissions can be used to restrict access. If mail server credentials need to be specified then the environment variable %_MVal12% can be used to store that.
  2. PowerShell includes the cmdlet ConvertTo-SecureString which, together with ConvertFrom-SecureString, can take a plain text string and encrypt it, so the script has the ability to encrypt a plain text password such that it can later be passed via the -hashedPassword option to the script (or -mailHashedPassword if this is the password for the account required to connect to the mail server). This encrypted password can only be decrypted, and therefore used, on the same machine as it was encrypted on and by the same user. For example, to encrypt the password we can run this, which places it into the clipboard:
& '.\Check SQL account expiry.ps1' -encryptPassword -password thepassword|Set-Clipboard

and then we can paste that into the argument for the -hashedPassword option when we call the script, such as via a scheduled task. This still isn’t 100% secure because an administrator could reset the password on the account used to encrypt the password, log on as that account and decrypt it, but it’s a whole lot better than passing a clear text password on a command line!
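Under the hood this is the standard Windows Data Protection API (DPAPI) round trip, which can be sketched as follows (the variable and account names are mine, not the script’s):

```powershell
## Sketch of the DPAPI round trip the script relies on, not the script itself.
## ConvertFrom-SecureString with no -Key parameter encrypts with DPAPI,
## tying the result to this machine and this user account.
$secure    = ConvertTo-SecureString -String 'thepassword' -AsPlainText -Force
$encrypted = ConvertFrom-SecureString -SecureString $secure   ## long hex string, safer to store than plain text

## Later, in another process run as the same user on the same machine:
$decryptedSecure = ConvertTo-SecureString -String $encrypted  ## fails for any other user or machine

## 'sqluser' is a hypothetical account name for illustration
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList 'sqluser' , $decryptedSecure
```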

There is also a -mailOnFail argument so that if the connection to the SQL server fails then an email can be sent by the script in case the potential outage or account problem hasn’t been detected already.

Additionally, there is -includeDisabled which also includes accounts that cannot currently log in, either because they are disabled or because they are set to require a password change at first use, since password expiry for an account is a little moot if that account is already disabled. The -accountName argument takes a regular expression to limit which accounts are reported.

Example Usage

The script includes full PowerShell compatible help including examples but I’ll include one here so you get the gist of it:

& '.\Check SQL account expiry.ps1' -sqlServer SQL01\instance01 -recipients someone@example.com -mailServer smtp.example.com -mailOnFail -expireWithinDays 10

This will connect to the specified SQL server instance as the user running the script and email a list of all SQL accounts which expire within the next 10 days to the given recipient. If the script fails to connect to the SQL server then an email will also be sent containing details of the error.

The Output

If no mail parameters, such as the mail server and recipients, are specified then the output will go to an onscreen grid view, unless -nogridview is specified whereupon the account objects will be output directly so they can be piped into another script or manipulated manually.

With mail parameters set, an email should be sent which looks similar to the following, where I’ve overridden the default of 7 days for the expiry check to 39 days to ensure that it captures some data for demonstration purposes:

sql account expiry email

If the script fails to connect to SQL and the -mailOnFail argument is specified then an email similar to the following will be sent:

sql script failed to connect

If you just want the script to check that connections to SQL can be made, specify the -accountName argument with the name of an account that doesn’t exist, such as “non-existent”, so that it won’t email any expiring accounts, just connection failures. Privileged credentials – namely running the script under a Windows account that has the securityadmin server role or supplying a privileged SQL account via -username – aren’t required for this, since a SQL account can query itself.

The Script

The digitally signed script is available on GitHub here and you use it at your own risk although it is purely passive so cannot inflict any damage.

Finding Zombie Processes

Apologies in advance if this article gets a tad scary, not because of the threat of flesh-eating zombies appearing but because of some of the tools I’ll be using, although I’ll show you how easy they actually are to use.

So What is a Zombie Process?

There seem to be a few different views out there as to what constitutes a zombie process on Windows operating systems.

There’s the school of thought here that they are processes which have open handles to non-existent items, such as processes which have exited. The article links to a handy tool on GitHub which can identify these processes. This led me to write a PowerShell script which opens handles, via the OpenProcess() API, to a given process and then waits for user input or a specified amount of time before closing them. Whilst it is waiting, if the process(es) to which it has handles open are terminated then the PowerShell process running my script is deemed by the tool to be a Zombie.

self induced zombies

It’s definitely a useful tool but my interpretation of a Zombie process is one that should be dead, and thus gone completely, but isn’t, so this doesn’t fit the case here in my opinion. My PowerShell process is alive and well, and living in West Yorkshire, but it has handles to what are dead processes – so almost Zombies.

Helge Klein wrote a great post on finding the causes of a lack of session re-use on RDS/Citrix servers, that is where the session ids returned by the quser.exe command are relatively high numbers unless the server has just been rebooted. This is a subject close to my heart, as I see it frequently in my consultancy work, and it is what initially led to my Zombie hunt. I’ve had some success with this technique in finding Zombie sessions, mainly where processes show in task manager with a status of “Suspended”. I haven’t had any joy though in using the SysInternals handle.exe utility to close all open handles and thus free up the session (which is also potentially dangerous). These “Suspended” processes are closer to Zombies, in my view of the world, as they should be dead but aren’t: they are generally present in sessions which have been logged off, so all processes in those sessions should have been terminated.

I’ve yet to find a way in PowerShell to find processes that task manager shows as suspended, as that doesn’t seem to be in any property returned by Get-Process or by a WMI/CIM query on the win32_process class. The latter has an “ExecutionState” property which ought to be what I need but it is always empty, so appears not to be implemented. However, I have had some success in looking for processes whose session id is for a session that no longer exists, or which have no threads or open handles, as that seems to be a symptom of some of these “Suspended” processes. I thought it would be a case of finding that all threads for a given process were suspended, but how does that relate to processes flagged as suspended in task manager which have no threads at all? Perhaps task manager assumes that any process with no threads must be suspended, since it has nothing that could be scheduled to run on a CPU.
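The heuristics described above can be sketched in a few lines. Using csrss.exe instances to enumerate the valid sessions is my assumption (every live session has one), not necessarily how any particular tool does it:

```powershell
## Sketch of the heuristics described above, not a finished detector.
## Assumption: every live session has a csrss.exe instance, so its session
## ids form the set of currently valid sessions.
[array]$validSessions = Get-Process -Name csrss -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty SessionId -Unique

## Flag processes in a session that no longer exists, or with no threads
## or no open handles - possible "Suspended"/Zombie candidates
Get-Process | Where-Object { ( $_.SessionId -notin $validSessions ) -or
        $_.Threads.Count -eq 0 -or $_.HandleCount -eq 0 } |
    Select-Object -Property Id , Name , SessionId , HandleCount ,
        @{ name = 'ThreadCount' ; expression = { $_.Threads.Count } }
```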

Microsoft have a debugger extension command, “!zombies”, but I’ve yet to get it to show any Zombie processes.

My own definition of a Zombie process is one that is not visible in any user mode tools, like task manager or Process Explorer, but still has an EPROCESS block in the kernel address space. These are tricky to find given their inherent lack of visibility; you have to look for symptoms of their existence, such as Zombie sessions causing a lack of session re-use.

Finding Zombie Processes

So how do we see these EPROCESS blocks which reside in the kernel address space? Why, with a debugger of course. However, we would normally look at this in kernel dumps which we usually get after a BSoD (although there are some neat ways of converting hypervisor snapshot memory files and saved state files into dump files for VMware and Hyper-V that I’ve used).

We can point a debugger at a running machine but only if it has been started in debug mode which isn’t generally how they should be booted in BAU situations.

Fortunately, SysInternals comes to the rescue, again, with a tool called livekd which, in conjunction with Windows debuggers like windbg (a debugger with a GUI) or kd (a command line debugger), can debug a live system. It’s easy to use: we just need to tell it where windbg or kd are (they are installed into the same folder as each other) and also decide what we are going to do about symbols, which are required to convert raw memory addresses into meaningful function names. If you don’t do anything about symbols then livekd will prompt with some defaults, but I prefer to first set the _NT_SYMBOL_PATH environment variable to “srv*c:\Symbols*” which uses Microsoft’s online symbol server and stores the downloaded symbols in the c:\Symbols folder.
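For example, to set the symbol path for the current PowerShell session before launching livekd:

```powershell
## The trailing * with no upstream URL makes the debugger default to
## Microsoft's public symbol server, caching downloads in c:\Symbols
$env:_NT_SYMBOL_PATH = 'srv*c:\Symbols*'
```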

So to start windbg to debug the local system, we run the following, elevated, obviously changing folders depending on where livekd.exe and windbg.exe live. Note that I’m using a PowerShell session, not a (legacy) cmd one:

& 'C:\Program Files\sysinternals\livekd.exe' -k 'C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\windbg.exe' -w -b

This will then start windbg which should look something like this:

windbg via livekd

Don’t worry about the error for symbols for livekd.sys as we can safely ignore this.

We’re now ready to view the process entries contained in EPROCESS blocks, which we achieve by running the “!process 0 0” debugger extension command, where the first zero instructs it to dump all processes, rather than the current process or a specific pid (which we would specify in hex, not decimal), and the second is a flags value which tells it to dump just basic information. We type this in at the bottom, at the “0: kd>” prompt.

This should then give something similar to the following, although it may take some time to complete if there are a lot of processes (if you’re in a remote session then minimising the windbg window can make it complete faster, since it has less screen updating to do).

windbg livekd bang process

To quit the debugger, enter “q”.

So there we have it: all of the processes as far as the kernel is concerned. But hang on, that’s not exactly easy to digest, is it, and what does all the data mean? The Cid is actually the process id, albeit in hex, and thus the ParentCid is the parent process id for the process.

Rather than manually wading through this data, I wrote a quick PowerShell script that uses regular expressions to parse it (the debugger output can be copied and pasted into a text file, or saved directly to file via the “Write Window Text to File” option on the windbg Edit menu), correlates these processes against currently running processes via the Get-Process cmdlet and then outputs the data as below, where I’ve filtered the grid view to rows where “Running” is false:
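The parsing technique can be sketched like this, assuming the “!process 0 0” output has been saved to a text file (the file path is mine) and that the field layout matches current windbg versions:

```powershell
## Sketch of the parsing technique, not the full script.
## Assumes "!process 0 0" output saved to the file below and that each
## process record contains a line like:
##   SessionId: 1  Cid: 1a2c    Peb: 00691000  ParentCid: 0f60
$running = @( Get-Process | Select-Object -ExpandProperty Id )   ## the user mode view

Get-Content -Path 'C:\temp\bangprocess.txt' |
    Select-String -Pattern 'Cid:\s*(?<cid>[0-9a-f]+)\s+Peb:.*ParentCid:\s*(?<parentcid>[0-9a-f]+)' |
    ForEach-Object {
        $processId = [Convert]::ToInt32( $_.Matches[0].Groups['cid'].Value , 16 )  ## Cid is hex
        [pscustomobject]@{
            Pid       = $processId
            ParentPid = [Convert]::ToInt32( $_.Matches[0].Groups['parentcid'].Value , 16 )
            Running   = ( $processId -in $running )   ## false for potential Zombies
        }
    } | Out-GridView -Title 'Kernel process list vs user mode'
```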

selfservice zombies

Ignore the fact that it flags the System process as being a Zombie as it isn’t – I’ll filter that out presently.

So here we can see lots of Citrix SelfService.exe Zombie processes since there are none shown in task manager and yet the processed !process output has over 25 instances in the EPROCESS table. We don’t have any Zombie sessions here though, since the Zombie processes all exist in active or disconnected sessions. I have discovered Zombie sessions too using this technique where there are processes shown, and typically the same executables, for sessions that no longer exist and aren’t shown in task manager, etc.

You will potentially get a few false positives for processes that have exited between when the !process was executed and then when the script was run to process its output. However, I’m working on a script to automate the whole process, (weak) pun intended, by using the command line debugger kd.exe, rather than windbg, to run the !process command and then pipe its output straight into a PowerShell script to process it immediately.

The script will be made available in the publicly accessible ControlUp Script Based Actions (SBA) library although all of the available SBAs there can be used standalone without ControlUp software.


Guide to my GitHub Scripts

This article, which will be updated as new scripts are added, serves as an index to the scripts I have uploaded to GitHub with a quick summary of what the script can do and links to explanatory blog articles. The scripts are split logically into a number of GitHub repositories, namely:


  1. DailyChecks.ps1 – allows you to get a summary of your Citrix XenApp/XenDesktop 7.x deployment emailed to you via a scheduled task to help spot issues. Blog Post
  2. End Disconnected sessions.ps1 – finds sessions disconnected over a given duration and logs them off, optionally terminating specified processes in case they are preventing logoff.
  3. Get PVS boot time stats.ps1 – pull PVS target device boot times from PVS server event logs to show fastest, slowest, mean, median and mode values with the option to send an email if thresholds are breached. Blog Post
  4. Get PVS device info.ps1 – retrieve PVS target device information from PVS servers and display their configuration along with corresponding data from Citrix Studio, Active Directory, VMware and the devices themselves such as last boot time & IP address. Selected devices can then have operations performed on them such as deleting from PVS/AD/Studio or rebooting. Blog Post
  5. Ghost Hunter.ps1 – find disconnected Citrix XenApp sessions which Studio/Director say still exist but do not and mark them such that they cannot prevent affected users from launching further published applications. Blog Post
  6. Show PVS audit trail.ps1 – collect PVS auditing events in a given date/time range and show on-screen or export to a csv file. Can also enable auditing if it is not already enabled.
  7. Show Studio Access.ps1 – show all users granted access to Citrix Studio and their access levels and optionally export to a csv file. It will recursively enumerate AD groups to show each individual user with Studio access.
  8. StoreFront Log Levels.ps1 – display and/or change the logging levels on Citrix StoreFront servers. It can operate on multiple servers from a single script invocation. Blog Post
  9. Parse storefront log files.ps1 – show Citrix StoreFront log files in a sortable and filterable consolidated view, optionally filtering on entry type and date ranges. Selected lines will be placed in the clipboard to enable further research. Blog Post
  10. Get Citrix admin logs.ps1 – retrieve the logs viewable in Studio in a given time window and write to a csv file or display in an on screen sortable/filterable grid view. The logs can be filtered on the user who performed the action, where the action was performed from, either Studio or Director, whether it was an admin or config change action and the type of action such as logoff or shadow.
  11. Get Citrix OData.ps1 – query the OData interface exposed by Citrix Delivery Controllers to retrieve information on sessions, errors, machines, etc. This is where Citrix Director gets its information from and also means that you don’t have to query SQL (which is unsupported). See here for information on what is available.
  12. Modify and launch file.ps1 – make modifications to a text file such as an ICA file, e.g. to change window sizes, and launch the newly created file. Can also install itself as an explorer SendTo context menu shortcut.
  13. Recreate PVS XML manifest.ps1 – create the XML manifest that PVS needs in order to import disks which have multiple versions. Can import from orphaned SQL data or examination of specified *.(a)vhd(x) files. Use when a disk has disappeared from the PVS console.
  14. Direct2Events.ps1 – Uses OData (like Citrix Director) to retrieve Citrix Virtual Apps and Desktops Session information from a Delivery Controller and displays in a WPF GUI allowing troubleshooting and remediation without needing to go to different tools such as PVS Console, VMware vSphere and Active Directory or connecting to the end-points


  1. Change CPU priorities.ps1 – dynamically change the base priorities of processes which over consume CPU so other processes get preferential access to the CPU. If a process stops over consuming then its original base priority will be restored. Can include/exclude specific users, processes and sessions.
  2. Trimmer.ps1 – trim the working sets of processes to make more memory available for other processes/users on a system. Can trim on demand or when processes are unlikely to need the memory such as when a session is idle, disconnected or locked. Can also set hard working set limits to cap leaky processes. Blog Post Blog Post Blog Post
  3. Get installed software.ps1 – show the installed software on one or more computers where the computers are specified on the command line or via a csv file. Queries the registry rather than the win32_product WMI/CIM class which is faster and gives more complete results. Output can be to a csv file, an on screen grid view or standard output for piping into something else. If -uninstall is specified, items selected when OK is clicked in the grid view will be uninstalled. Similarly, a -remove option takes a comma separated list of package names or regular expressions and will run the uninstaller for them, silently if -silent is specified and the uninstall program is msiexec.exe.
  4. Group Membership Modifier.ps1 – add or remove a specified list of user accounts from local groups, such as Administrators or Remote Desktop Users, on one or more machines.
  5. Clone VHD.ps1 – create a new Hyper-V virtual machine from a .vhd/.vhdx file containing an existing VM, selecting the VM configuration in a GUI. Will integrate itself into Windows Explorer so you right-click on a virtual disk file and run it, elevating itself if required. Can make linked clones which can reduce disk space. Blog Post
  6. Fix Sysprep Appx errors.ps1 – parses sysprep logs looking for failures due to AppX packages causing sysprep to fail, removes them and runs sysprep again until successful.
  7. Show NTFS zone info.ps1 – Google Chrome and Internet Explorer store the URL of where downloaded files have come from in an NTFS Alternate Data Stream (ADS). This script shows these and optionally removes this information. Blog Post
  8. Profile Cleaner.ps1 – retrieve local profile information from one or more machines, queried from Active Directory OU, group or name, present them in an on-screen filterable/sortable grid view and delete any selected after prompting for confirmation. Options to include or exclude specific users and write the results to a csv file. Blog Post
  9. Show users.ps1 – Show current and historic logins including profile information, in a given time range or since boot, across a number of machines queried from Active Directory OU, group or name, write to csv file or display in an on-screen sortable/filterable grid view and logoff any selected sessions after confirmation. Works on RDS and infrastructure servers as well as XenApp. Blog Post
  10. Profile.ps1 – a PowerShell profile intended to be used on Server Core machines, with PowerShell set as the shell, which reports key configuration and status information during logon.
  11. Add firewall rules for dynamic SQL ports.ps1 – find all SQL instances and create firewall rules for them to work with dynamic ports
  12. Find Outlook drafts.ps1 – find emails in your Outlook drafts folder of a given age, prompt with the information with the option to open the draft. Designed to help you stop forgetting to complete and send emails. Has options to install & uninstall itself to launch at logon. Blog Post
  13. Outlook Leecher.ps1 – find SMTP email addresses in all your Outlook folders including calendars and write them to a csv file including context such as the subject and date of the email.
  14. Check Outlook recipient domains – an Outlook macro/function which will check the recipient addresses when sending an email and will warn if the email is going to more than a single external domain. Designed to help prevent accidental information leakage where someone may pick the wrong person when composing.
  15. Fix reminders – an Outlook macro/function which will find any non-all day Outlook meetings which have no reminder set, display the details in a popup and add a reminder for a number of minutes before the event as selected by the user. Blog Post.
  16. Check Skype Signed in.ps1 – uses the Lync 2013 SDK to check Skype for Business is signed in and will alert if it is not via a popup and playing an optional audio file. Can also pop up an alert if the client has been in “Do Not Disturb” in excess of a given period of time.
  17. Redirect Folders.ps1 – show existing folder redirections for the user running the script or set one or more folder redirections with a comma separated list of items of the form specialfolder=path. For example Music=H:\Music
  18. Check and start processes.ps1 – check periodically if each of a given list of processes is running and if not optionally start it, after an optional prompt is displayed. Any necessary parameters for each process can be specified after an optional semicolon character in the process name argument. Can install or uninstall itself to the per user or per machine registry run key so it runs at logon. Use it to launch and monitor key processes such as Outlook or Skype for Business (lync.exe).
  19. Autorun.ps1 – list, remove or add logon autoruns entries in the file system or registry for the user running the script or all users if the user has permissions. Can also operate on the RunOnce key and wow6432node on x64 systems. Uses regular expressions for matching the shortcut/registry value name and/or the command so knowing the exact names or commands is not required. Uses PowerShell’s built in confirmation mechanism before overwriting/deleting anything.
  20. Find and check IIS server certs.ps1– find IIS servers via OUs or AD groups or specify via regular expression, specific servers or from the contents of a text file. Check the expiry date of any certificates in use and present a list of those expiring within a specified number of days in a grid view, write to csv file or send via email.
  21. wcrasher.cs.ps1 – compiles embedded C# code to produce an exe file (32 or 64 bit or even Itanium) which will crash when the “OK” button of the displayed dialogue box is clicked. Use it to check that the OS is configured the desired way for handling application crashes or to produce dumps for practicing analysis.
  22. WTSApi.ps1 – provides the function Get-WTSSessionInformation which is a wrapper for the WTSQuerySessionInformationW function from wtsapi32.dll with the WTSSessionInfoEx class parameter. This returns an array of session information items for the one or more computers passed to it which can be used in place of running quser.exe (“query user”) and having to parse its somewhat inconsistent output.
  23. Trim run history.ps1 – Remove items from the history of Explorer’s Start->Run menu, and Task Manager’s File->Run new task, either by specifying what to keep or what to remove via regular expression (which can be as simple as something like ‘mstsc’). Uses PowerShell’s built in confirmation mechanism so by default will prompt before each deletion.
  24. Get Process Durations.ps1 – Retrieve process creation and termination events from the security event log, if auditing of these is enabled, and show the start and end times of the processes and command lines if that auditing is enabled too. Can optionally show how long after logon and/or boot processes started and can filter on specific processes and/or users. Output to csv format file, sortable/filterable grid view or the PowerShell pipeline.
  25. Analyse IIS log files.ps1 – Analyse IIS log files to show requests/min/sec, and min, max, average and median response times per time interval, usually seconds to aid in finding busy/overloaded periods for capacity planning, troubleshooting, etc.
  26. Check AD account expiry.ps1 – Find AD accounts with passwords or accounts expiring within the specified number of days or are locked out or disabled and optionally send an email containing the information. To help spot problems where account expiry could cause issues such as when used as service accounts.
  27. Check SQL account expiry.ps1 – Find SQL accounts with passwords expiring within the specified number of days and optionally send an email containing the information. Useful where these accounts are used as service accounts. Can also be used to send an email alert if the specified SQL server cannot be connected to.
  28. Download and Install Office 365 via ODT.ps1 – Download the latest version of the Office Deployment Kit and use that, once the executable has been extracted and its certificate checked, to download and install Office 365.
  29. Find loaded modules.ps1 – Examine the loaded modules of all or specific processes, by name or pid, and show those where the module name/path or company name match a specified string/regex. Designed to help spot processes hooked by 3rd party software like Citrix, Ivanti, Lakeside, etc. Shows module versions so can also be used to play spot the difference between processes.
  30. Get Remote User Logon Times.ps1 – Use WMI to query computers to find out, since boot, when any remote desktop connections logged on. Gives finer granularity than “query user” (quser) and works on multiple computers in a single invocation.
  31. Kill elevated processes.ps1 – Check already running processes and then watch for process created events; if a process is in a specified list and has been launched elevated then terminate it and audit to the event log.
  32. Monitor process start stop.ps1 – Uses WMI/CIM to register for notifications when processes are started or stopped so effectively a process watcher.
  33. Network Profile Actioner.ps1 – Check network connection profiles and if any are connected on a public network, or nothing is connected so the computer is offline, set a registry value differently compared with private/domain network. Defaults to setting the registry such that the username is not displayed on the lock screen if the computer is on a public network or offline to aid with privacy protection.
  34. Power Watcher.ps1 – Designed to help set the most suitable power scheme when using an external power bank for a laptop as the laptop sees it as still being powered by an external power source so does not implement any power saving (on a Dell laptop).
  35. Show FSlogix volumes.ps1 – Show FSLogix currently mounted volume details & cross reference to FSLogix session information in the registry.
  36. Check and fix domain membership.ps1 – Check domain membership of the machine the script is running on and try to repair using Test-ComputerSecureChannel. Can be placed in a computer startup script with encrypted password.
  37. Convert graphics files.ps1 – bulk convert graphics files from one format to another
  38. Get file bitness.ps1 – show the file bitness of specified files or files in a folder including the .NET CPU specifications
  39. event aggregator.ps1 – retrieve all events from the 300+ event logs on one or more computers and show in a sortable/filterable grid view and/or write to csv. Various filter in/out options available.
  40. Set Foreground Window.ps1 – find the main window for given one or more processes, by id or name, and optional argument matching, and set as the foreground window or perform another operation on them such as minimising or maximising. Written to solve a problem when a running process refused to show its window or even a taskbar icon for it.
  41. Get MSI Properties.ps1 – get any MSI property, such as ProductVersion, from one or more MSI files by reading the contents of the MSI file. Useful for finding out the version of an MSI file. Also gets summary information such as bitness.
  42. Get files modified since boot.ps1 – Find files modified or created since the last boot time without following symbolic links and junction points. Useful to find out what has consumed the Citrix Provisioning Services write cache but can also take arbitrary start and end times to find files modified/created in a given time window for troubleshooting.
  43. Pause Resume Processes.ps1 – Pause or resume processes using debugger API functions. Useful to stop applications being used outside of approved hours, or to pause a resource guzzling application impacting other processes so it can be examined later. Note that the process resuming a process must be the same one that paused it otherwise it will fail; also, if the pausing process exits, the paused processes will exit. The script caters for this.
  44. Bincoder GUI.ps1 – base 64 encode and decode data to and from any file to allow the data to be copied over the Windows clipboard, e.g. to or from a remote session where file sharing sites, email, etc are not available.
  45. Get Extended File Properties.ps1 – Retrieve specified or all extended properties from a file, not just those in the version resource of the file
  46. Add computers to perfmon xml.ps1 – Take an XML template exported from perfmon with a single machine and duplicate all counters for a specified set of machines. This creates a new XML file which can be imported back in to perfmon to capture performance data across all the machines.
  47. Delete profile for group member.ps1 – Delete local user profiles for members of an Active Directory group which are not currently in use
  48. Fix shortcuts.ps1 – Find shortcuts with target or icon path or arguments matching a given regular expression and change to a new string.
  49. Get Account Lockout details.ps1 – Find all domain controllers and show account lockouts in a given time range and/or for a specific user including the machine where the lockout occurred.
  50. Get info via CIM.ps1 – Gather info from one or more computers via CIM and write to CSV files to aid health checking. A list of nearly 50 CIM classes is built in to assist relevant information gathering.
  51. Send to Clipboard.ps1 – Put contents of text or graphics files onto the clipboard – designed for use as a shortcut in explorer’s right click send to menu

General Scripts

  1. Regrecent.ps1 – find registry keys modified in a given time/date window and write the results to a csv file or in an on-screen sortable/filterable grid view. Can include and/or exclude keys by name/regular expression. Blog Post
  2. Leaky.ps1 – simulate a leaky process by causing the PowerShell host process for the script to consume working set memory at a rate and quantity specified on the command line.
  3. Twitter Statistics.ps1 – fetch Twitter statistics, such as the number of followers and tweets, for one or more Twitter handles without using the Twitter API
  4. Sendto Checksummer.ps1 – when a shortcut to this script, by setting the shortcut target to ‘powershell.exe -file “path_to_the_script.ps1”’, is added to the user’s Explorer SendTo folder, a right-click option for calculating file checksums/hashes is available. The user will be prompted for which hashing algorithm to use and then the checksums of all selected files will be calculated and shown in a grid view where selected items will be copied to the clipboard when “OK” is clicked.
  5. Zombie Handle Generator.ps1 – opens handles to a given list of processes and then closes them after a given time period or after keyboard input. Used to simulate handle leaks to test other software. Can open process or thread handles.
  6. Sendto folder size.ps1 – shows the sizes of each folder/file selected in explorer, or passed directly on the command line. For each item then selected in the grid view, it will show the largest 50 files. If any files are selected when OK is pressed in that grid view, a prompt to delete will be shown and if Yes is clicked, the files will be deleted via the recycle bin. To install for explorer right-click use and add a shortcut to this script via Powershell.exe -file in the shell:sendto folder.
  7. Compare files in folders.ps1 – compare file attributes and checksums between files in two specified folders, and sub folders. Files selected in the grid view when OK is clicked will then have their differences shown in separate grid views.
  8. Query SQLite database.ps1 – query data from a SQLite database file or show all of the table names. Queries can be qualified with a “where” clause, the columns to return specified, or it defaults to all, and the results output to a csv file or are displayed in an on-screen filterable/sortable grid view.
  9. Find file type.ps1 – Looks at the content of files specified to determine what the type of a file actually is. File types identifiable include various zip formats, image and video formats and executables. It will also seek out files stored in Alternate Data Streams on NTFS volumes.
  10. Set photo dates.ps1  – Get the date/time created from image file metadata and set as the file’s creation date/time which can make it easier to see/sort picture files by the creation date of the image itself, not when the file was copied to the current folder it resides in.
  11. Shortcuts to csv.ps1 – Produce csv reports of the shortcuts in a given folder and sub-folders and optionally email the resulting csv file. Can check shortcuts locally (default) or on a remote server, e.g. for checking centralised Citrix XenApp/XenDesktop shortcuts. By default it will check that the target and working directory exist for a shortcut so the resulting csv file can be filtered on these columns to easily find bad shortcuts.
  12. Update dynamic dns.ps1 – Update dynamic DNS provider if the external IP address has changed (stored in the registry) to update the address or email the details to a given list of recipients.
  13. Find JSON attribute by name.ps1 – Find JSON attributes via name or regex and return the value(s). Saves having to navigate a potentially unknown object structure.
  14. Get chunk at offset.ps1 – display the text from a given file at a given offset within the file. Used with SysInternals Process Monitor (procmon) to see what is being written to a log file for any given procmon trace line.
  15. Digital Clock.ps1 – display a digital clock, stop watch (with 0.1 second granularity) or countdown timer with the ability to “mark” specific points, e.g. when timing a logon clock


  1. AMC configuration exporter.ps1 – Export the configuration of one or more AppSense/Ivanti DesktopNow Management Servers to csv or xml file.
  2. Get process module info.ps1 – Interrogate running processes to extract file and certificate information for their loaded modules which can be useful in composing Ivanti Application Control configurations.
  3. Ivanti UWM EM event processor.ps1 – Get Ivanti UWM EM event log entries and split into sortable table for durations to aid logon analysis. Display on screen in a sortable/filterable grid view or export to a CSV file.


  1. ESXi cloner.ps1 – Create one or more new VMware ESXi virtual machines from existing VMs nominated as templates. For use when not using vCenter, which has a built-in templating mechanism. Can create linked clones to save on disk space and drastically speed up new VM creation. Can be used with or without a GUI.
  2. Get VMware or Hyper-V powered on vm details.ps1 – Retrieves details of all powered on virtual machines, or just those matching a name pattern, from either VMware vSphere/ESXi or Hyper-V and either displays them in an on screen sortable & filterable grid view, standard output for further processing or writes to a text file that can be used in a custom field in SysInternals BGinfo tool to show IP addresses of these VMs on your desktop wallpaper which is useful when they are on an isolated network or not registered in DNS.
  3. Power state change running VMs.ps1 – Pause or shutdown running VMs and the ESXi host – designed to be run by UPS shutdown software. Requires the VMware PowerCLI module.
  4. VMware GUI.ps1 – Allow users to view VMs and their details that they have access to in a WPF grid view and perform the following actions if they have permissions in VMware as well as being able to launch mstsc and VMware consoles:
        • Snapshots – take, delete, revert, consolidate
        • Power – on, off, suspend, shutdown/restart guest
        • Reconfigure – number of CPUs, amount of memory and change notes
        • Delete
        • Screenshot
        • Run scripts/cmdlets/exes
        • Mount/Unmount CDs
        • Connect/Disconnect NICs
        • Show events
        • Backup
  5. Set VMware guest info.ps1 – Set VM guest information, by connecting to vCenter or ESXi directly, so it can be retrieved within VMs. For example, set the name of the VMware host running the VM in the guest so it knows who its parent is.

Dynamically Creating Process Monitor Filters


I recently had the need to automate the use of SysInternals’ Process Monitor such that no manual intervention is required to initiate the capture, with a filter, and then to process the results, in PowerShell of course. Searching around, I found that the format of a procmon configuration (.pmc) file didn’t appear to be documented anywhere and, being a binary format, could prove tricky, and time-consuming, to fully reverse engineer. Indeed, web searches showed others looking for ways to dynamically create these configuration files, which contain the filters as well as included columns, but apparently without success.

Of course, one could run it without a filter but that will make for potentially much larger trace files, which could impact free disk space and performance and would take longer to process in PowerShell. I therefore set about trying to figure out how I could add a process id (PID) filter for a specific process via a script and I present the research and relevant script parts here for the benefit of others.

Isolating the Relevant Section of the Configuration File

In order to see if it was feasible to take an existing procmon configuration file containing a PID filter and change it, I performed a binary comparison between two configuration files I had manually saved from the procmon user interface. In terms of the filter parameters they contained, they were identical except one was for a PID of 123456 and the other for a PID of 567890, e.g.:

procmon controlup pid filter

To perform a binary comparison, I used the built-in Windows File Compare utility fc.exe. Note that when calling this from PowerShell, you must append the .exe to the end of the command since “fc” is a built-in alias for the Format-Custom cmdlet which is not what we want to call. The results looked like this:

procmon pmc binary comparison

This instantly gave me hope that what I was trying to accomplish was achievable, since there were only nine differences. Being the sad geek that I am from my 6502 hand assembly days on Commodore computers, I already knew that hex 31 is the ASCII code for the digit 1, hex 32 is 2 and so on, so the first six differing bytes in the first column represented the PID 123456 and those in the second column 567890. But what about the last three bytes which are different? Well, 123456 in hex is 01E240 and 567890 is 08AA52, which we can see stored in those last three differing bytes, albeit in little endian format.
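We can sanity check those values in PowerShell:

```powershell
## Confirm the hex values seen in the binary comparison
'{0:X6}' -f 123456                                ## gives 01E240
'{0:X6}' -f 567890                                ## gives 08AA52

## And the little endian byte order that BitConverter produces on Intel hardware
[System.BitConverter]::GetBytes( 123456 )[0..2]   ## 64 226 1, i.e. 0x40 0xE2 0x01
```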

If we look at the area around these differences in a hex editor (I use XVI32 which has served me well for many years), then we get some context and more information:

xvi32 pmc file

Where the selected character is the start of the “123456” PID string. Notice, however, that after each ASCII character there is a null (00), meaning the characters are actually Unicode (16 bit) rather than ASCII (8 bit). This is for information only; it doesn’t cause an issue, as we’ll see presently. Also, the (crudely) highlighted portion shows that there is a Unicode null terminator character (00 00) after the “6” of “123456”, followed by the PID in little endian format.

So my thinking was that I could produce a template configuration file with a placeholder PID of 123456 and then replace that with the actual PID I wanted procmon to trace. One potential issue was that PIDs can be between 1 and 6 digits long and I didn’t want to risk changing the size/layout of the file since that may have broken procmon. Fortunately I found that procmon was quite happy accepting a PID with leading zeroes such as “000123” so that meant as long as I padded my PID to six digits, procmon would still work.

Dynamically Creating a Configuration File

Whilst it is easy with the procmon GUI to set the filters how you want them and also include or exclude display columns and then save this as a .pmc file, I had the added complication that this script, because it is run as a Script Based Action (SBA) by ControlUp’s Real-Time console, needed to be self-contained so had to have the .pmc configuration file embedded within the script itself.

This fortunately is easy to achieve since we can base64 encode the binary file, which converts it to text that we then just assign to a variable within the script. To base64 encode a file and place it in the clipboard so that it can be pasted into a script, run the following:

[System.Convert]::ToBase64String( [byte[]][System.IO.File]::ReadAllBytes( 'c:\temp\demo.pmc' ) ) | Clip.exe

and we then paste it into the value of a variable assignment in our script, remembering to place the terminating quote character at the end of what will be a very long line:

base64 encoded

At run time, we can convert this base64 encoded text back to binary data simply by running the following:

[byte[]]$filter = [System.Convert]::FromBase64String( $procmonFilter )

I initially just used XVI32 to determine at what point the “123456” string appeared in the file data and placed that offset into a variable, but as I tweaked the filter I had to keep using XVI32 to see what the new offset was, which became laborious. I therefore wrote a function which returns the offset of a Unicode string within a PowerShell byte array, or -1 if it is not found. Using that function, I ended up with the following code snippet which finds the offset of the “FilterRules” block in the config file (see hex view above), finds “123456” after that offset and replaces the placeholder PID with ours from the $processId variable:

posh to replace pmc pid
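The offset-finding function and the replacement can be sketched like this; the helper name, dummy data and PID below are illustrative, not the actual script's code:

```powershell
## Assumption: $filter would hold the decoded .pmc bytes; dummy data is used here for illustration
[byte[]]$filter = [System.Text.Encoding]::Unicode.GetBytes( 'xxFilterRules..123456..binarypid' )
[int]$processId = 4242

## Return the byte offset of a Unicode (UTF-16LE) string within a byte array, or -1 if not found
Function Get-UnicodeStringOffset
{
    Param( [byte[]]$data , [string]$search , [int]$startOffset = 0 )
    [byte[]]$needle = [System.Text.Encoding]::Unicode.GetBytes( $search )
    For( [int]$i = $startOffset ; $i -le $data.Count - $needle.Count ; $i++ )
    {
        [bool]$found = $true
        For( [int]$j = 0 ; $j -lt $needle.Count ; $j++ )
        {
            if( $data[ $i + $j ] -ne $needle[ $j ] ) { $found = $false ; break }
        }
        if( $found ) { return $i }
    }
    return -1
}

## Find "123456" after the "FilterRules" block and overwrite it with our zero padded PID
[int]$offset = Get-UnicodeStringOffset -data $filter -search 'FilterRules'
$offset = Get-UnicodeStringOffset -data $filter -search '123456' -startOffset $offset
[byte[]]$newPidBytes = [System.Text.Encoding]::Unicode.GetBytes( '{0:D6}' -f $processId )
[Array]::Copy( $newPidBytes , 0 , $filter , $offset , $newPidBytes.Count )

## Also patch the 3 byte little endian PID stored after the Unicode null terminator (00 00)
[Array]::Copy( [System.BitConverter]::GetBytes( $processId ) , 0 , $filter , $offset + $newPidBytes.Count + 2 , 3 )
```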

The in memory configuration file, contained in the $filter variable, can then be written to a .pmc file and the full path to the file specified as an argument to procmon.exe via the “/LoadConfig” option. The arguments highlighted below are the ones I used to capture a trace where I used “/RunTime” which runs the capture for a given number of seconds and then terminates procmon, and thus the trace, cleanly. You could also run it without “/RunTime” and call procmon.exe again with a “/Terminate” argument when you have finished the capture. If you just kill the procmon.exe or procmon64.exe processes then the trace file will not be closed cleanly and will not be usable.

procmon automated options
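Putting that together looks roughly like this; the paths and the 60 second runtime are my examples, while the switches are procmon's documented ones:

```powershell
$procmon     = 'C:\temp\procmon.exe'   ## example paths only
$pmcFile     = 'C:\temp\filter.pmc'
$backingFile = 'C:\temp\trace.pml'

## Write the patched configuration bytes ($filter) to disk and capture for 60 seconds
[System.IO.File]::WriteAllBytes( $pmcFile , $filter )
Start-Process -FilePath $procmon -Wait -ArgumentList @(
    '/AcceptEula' , '/Quiet' , '/Minimized' ,
    '/LoadConfig' , "`"$pmcFile`"" ,
    '/BackingFile' , "`"$backingFile`"" ,
    '/RunTime' , '60' )
```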

Once this has finished capturing and exited, it will leave a binary trace in the file argument given to the “/BackingFile” option so to convert this to a CSV file that PowerShell can read in using the Import-Csv cmdlet, we run procmon again thus:

procmon.exe /Quiet /AcceptEula /OpenLog `"$backingFile`" /Minimized /SaveAs `"$csvTrace`" /SaveApplyFilter /LoadConfig `"$pmcFile`"

where $backingFile is the .pml trace, $csvTrace is the csv file that we want it to produce and $pmcFile is the configuration file we constructed and wrote to disk. Notice the quoting of the variables in case they contain spaces.

Downloading Procmon

But what if procmon isn’t already on the system where the script needs to run? Technically I could’ve used the same base64 encoding technique to embed it within the script but this would tie it to a specific version of procmon and may also fall foul of the licence agreement as it could be construed as distributing procmon. Thankfully, for many years, the SysInternals tools have been individually available for download, so the following will download procmon to the file specified in the $procmon variable (proxy willing):

(New-Object System.Net.WebClient).DownloadFile( '' , $procmon )

However, depending on security settings, sometimes running of the downloaded executable will produce a dialogue box warning that the file may be untrusted. We can prevent that by running the following:

Unblock-File -Path $procmon

The Finished Script

Will be made available shortly in the ControlUp Script Library. In fact, it’s likely there will be a number of Script Based Actions authored that utilise this dynamic filtering method.

Whilst this technique has shown how a custom PID filter can be dynamically constructed and used, the same techniques could be used to set other filters. The only caveat is that the patterns, such as “123456”, would need to be unique as the simple mechanism presented here cannot determine the column or relation for the filter rule.

Ghost Hunting in XenApp 7.x

The easily scared amongst you needn’t worry as what I am referring to here are disconnected XenApp sessions which Citrix believes are still alive on a specific server but have actually already ended, as in been logged off. “Does this cause problems though or is it just cosmetic?” I can hear you ask. Well, if a user tries to launch another application which is available on the same worker then it will cause a problem because XenApp will try to use session sharing, unless disabled, but there is no longer a session to share so the application launch fails. These show as “machine failures” in Director. Trying to log off the actually non-existent session, such as via Director, won’t fix it because there is no session to log off. Restarting the VDA on the affected machine also doesn’t cause the ghost session to be removed.

So how does one reduce the impact of these “ghost” sessions? In researching this, I came across this article from @jgspiers detailing the “hidden” flag which can be set for a session, albeit not via Studio or Director, such that session sharing is disabled for that one session.

I therefore set about writing a script that would query Citrix for disconnected sessions, via Get-BrokerSession, cross reference each of these to the XenApp server it was flagged as running on by running quser.exe, and report those which didn’t actually have a session on that server. In addition, the script also tries to get the actual logoff time from the User Profile Service event log on that server and checks to see if the user has any other XenApp sessions, since that is a partial indication that they are not being hampered by the ghost session.

If the -hide flag is passed to the script then the “hidden” flag will be set for ghost sessions found.
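The core of the approach can be sketched as follows; this is a simplified illustration rather than the actual script, it assumes the Citrix Broker snap-in is available and that the quser output parsing has been reduced to a simple string match:

```powershell
## Simplified sketch: find disconnected sessions Citrix knows about but the server does not
Add-PSSnapin -Name Citrix.Broker.Admin.V2 -ErrorAction SilentlyContinue

ForEach( $session in ( Get-BrokerSession -SessionState Disconnected ) )
{
    [string]$server = $session.MachineName -replace '^.*\\'        ## strip the domain prefix
    [string]$user   = $session.UntrustedUserName -replace '^.*\\'
    ## If quser on the server shows no session for this user, Citrix has a ghost
    if( -not ( quser.exe /server:$server 2>$null | Select-String -SimpleMatch $user ) )
    {
        Write-Warning -Message "Ghost session for $user on $server"
        $session | Set-BrokerSession -Hidden $true                 ## what the -hide flag does
    }
}
```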

The script can email a list of the ghost sessions if desired, by specifying the -recipients and -mailserver options (and -proxymailserver if the SMTP mail server does not allow relaying from where you run the script) and if a history file is specified, via the -historyFile option, then it will only email when there is a new ghost session discovered.

ghosted sessions example

I have noticed that the “UserName” field returned by Get-BrokerSession is quite often blank for these ghost sessions and the user name is actually in the “UntrustedUserName” field, about which the documentation states “This may be useful where the user is logged in to a non-domain account, however the name cannot be verified and must therefore be considered untrusted”, but it doesn’t explain why the UserName field is blank since all logons are domain ones via StoreFront non-anonymous applications.

If running the script via a scheduled task, which I do at a frequency of every thirty minutes, with -hide, also specify the -forceIt flag otherwise the script will hang as it will prompt to confirm that you want to set any new ghost sessions to hidden.

The script is available on GitHub here and you use it at your own risk, although I’ve been running it for one of my larger customers for months without issue; in fact, we no longer have reports of users failing to launch applications, which we had previously tracked down to the farm being haunted by these ghosts, although it rarely affects more than 1% of disconnected sessions. This is on XenApp 7.13.

Showing Current & Historical User Sessions

One of my pet hates, other than hamsters, is when people logon to infrastructure servers, which provide a service to users either directly or indirectly, to run a console or command when that item is available on another server which isn’t providing user services. For instance, I find people logon to Citrix XenApp Delivery Controllers to run the Studio console where, in my implementations, there will always be a number of management servers where all of the required consoles and PowerShell cmdlets are installed. They compound the issue by then logging on to other infrastructure servers to run additional consoles which is actually more effort for them than just launching the required console instance(s) on the aforementioned management server(s). To make matters even worse, I find they quite often disconnect these sessions rather than logoff and have the temerity to leave consoles running in these disconnected sessions! How not to be in my good books!

Even if I have to troubleshoot an issue on one of these infrastructure servers, I will typically remotely access their event logs, services, etc. via the Computer Management MMC snap-in connected remotely and if I need to run non-GUI commands then I’ll use PowerShell’s Enter-PSSession cmdlet to remote to it which is much less of an impact than getting a full blown interactive session via mstsc or similar.

To find these offenders, I used to run quser.exe, which is what the command “query user” calls, with the /server argument against various servers to check if people were logged on when they shouldn’t have been, but I thought that I really ought to script it to make it easier and quicker to run. I then also added the ability to select one or more of these sessions and log them off.

It also pulls in details of the “offending” user’s profile lest that’s too big and needs trimming or deleting. I have written a separate script for user profile analysis and optional deletion which is also available in my GitHub repository.

For instance, running the following command:

 & '.\Show users.ps1' -name '^cxt2[05]\d\d' -current

will result in a grid view similar to the one below:

show users ordered


It works by querying Active Directory via the Get-ADComputer cmdlet, running quser.exe against all machines whose names match the regular expression (here, machines named cxt20xx and cxt25yy, where xx and yy are numeric) and displaying the results in a grid view. Sessions selected in this grid view when the “OK” button is pressed will be logged off, although PowerShell’s built-in confirmation mechanism is used so if “OK” is accidentally pressed, the world probably won’t end because of it.
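A minimal sketch of that mechanism, assuming the Active Directory PowerShell module is available (the real script does far more, including the grid view and logoff handling):

```powershell
## Find AD computers matching the name pattern and query the sessions on each
Import-Module -Name ActiveDirectory

Get-ADComputer -Filter * |
    Where-Object { $_.Name -match '^cxt2[05]\d\d' } |
    ForEach-Object { quser.exe /server:$($_.Name) 2>$null }
```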

The script can also be used to show historical logons on a range of servers where the range can be specified in one of three ways:

  1. -last x[smhdwy] where x is a number and s=seconds, m=minutes, h=hours, d=days, w=weeks and y=years. For example, ‘-last 7d’ will show sessions logged on in the preceding 7 days
  2. -sinceboot
  3. -start “hh:mm:ss dd/MM/yyyy” -end “hh:mm:ss dd/MM/yyyy” (if the date is omitted then the current date is used)

For example, running the following:

& '.\Show users.ps1' -ou ' XenApp/Production/Infrastructure Servers' -last 7d

gives something not totally unlike the output below where the columns can be sorted by clicking on the headings and filters added by clicking “Add criteria”:

show users aged

Note that the OU is specified in this example as a canonical name, so can be copied and pasted out of the properties tab for an OU in AD Users and Computers rather than you having to write it in distinguished name form, although it will accept that format too. It can take a -group option instead of -ou and will recursively enumerate the given group to find all computers and the -name option can be used with both -ou and -group to further restrict what machines are interrogated.

The results are obtained from the User Profile Service operational event log and can be written to file, rather than being displayed in a grid view, by using the -csv option.

Sessions selected when “OK” is pressed will again be logged off although a warning will be produced instead if a session has already been logged off.

If you are looking for a specific user, then this can be specified via the -user option which takes a regular expression as the argument. For instance adding the following to the command line:

-user '(fredbloggs|johndoe)'

will return only sessions for usernames containing “fredbloggs” or “johndoe”

Although I wrote it for querying non-XenApp/RDS servers, as long as the account you use has sufficient privileges, you can point it at these rather than using tools like Citrix Director or Edgesight.

The script is available on GitHub here and use of it is entirely at your own risk although if you run it with the -noprofile option it will not give the OK and Cancel buttons so logoff cannot be initiated from the script. It requires a minimum of version 3.0 of PowerShell, access to the Active Directory PowerShell module and pulls data from servers from 2008R2 upwards.

If you are querying non-English operating systems, there may be an issue since the script parses the fixed width output of the quser command by locating the column headers, namely ‘USERNAME’, ‘SESSIONNAME’, ‘ID’, ‘STATE’, ‘IDLE TIME’ and ‘LOGON TIME’ on an English OS. You may need to either edit the script or specify the column names via the -fieldNames option.
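That header-based parsing could be sketched like this; the function name and the canned sample data are mine, not the script's:

```powershell
## Field names as they appear on an English OS
[string[]]$fieldNames = 'USERNAME','SESSIONNAME','ID','STATE','IDLE TIME','LOGON TIME'

## Parse fixed width quser output by using the header line to find each column's offset
Function ConvertFrom-QuserOutput
{
    Param( [string[]]$output , [string[]]$fields )
    [string]$header = $output[0]
    [int[]]$starts = $fields | ForEach-Object { $header.IndexOf( $_ ) }
    ForEach( $line in ( $output | Select-Object -Skip 1 ) )
    {
        $session = [ordered]@{}
        For( [int]$i = 0 ; $i -lt $fields.Count ; $i++ )
        {
            [int]$start = $starts[ $i ]
            [int]$end = if( $i -lt $fields.Count - 1 ) { [math]::Min( $starts[ $i + 1 ] , $line.Length ) } else { $line.Length }
            $session[ $fields[ $i ] ] = if( $start -lt $line.Length ) { $line.Substring( $start , $end - $start ).Trim() } else { '' }
        }
        [pscustomobject]$session
    }
}

## Demo on canned output; in practice pass: quser.exe /server:someserver 2>$null
ConvertFrom-QuserOutput -fields $fieldNames -output @(
    ' USERNAME   SESSIONNAME  ID  STATE  IDLE TIME  LOGON TIME' ,
    ' guyl       console      1   Active none       01/01/2024 09:00' )
```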

Memory Control Script – Capping Leaky Processes

In the third part of the series covering the features of a script I’ve written to control process working sets (aka “memory”), I will show how it can be used to prevent leaky processes from consuming more memory than you believe they should.

First off, what is a memory leak? For me, it’s trying to remember why I’ve gone into a room but in computing terms, it is when a developer has dynamically allocated memory in their programme but then not subsequently informed the operating system that they have finished with that memory. Older programming languages, like C and C++, do not have built in garbage collection so they are not great at automatically releasing memory which is no longer required. Note that just because a process’s memory increases but never decreases doesn’t actually mean that it is leaking – it could be holding on to the memory for reasons that only the developer knows.

So how do we stop a process from leaking? Well, short of terminating it, we can’t as such, but we can limit the impact by forcing it to relinquish other parts of its allocated memory (working set) in order to fulfil memory allocation requests. What we shouldn’t do is deny the memory allocations themselves, which we could actually do with hooking methods like Microsoft’s Detours library. This is because if the developer even bothers checking the return status of a memory allocation request before using it (not doing so results in the infamous error “the memory referenced at 0x00000000 could not be read/written”, aka a null pointer dereference), they probably can’t do a lot when the allocation fails other than outputting an error to that effect and exiting.

What we can do, or rather the OS can do, is apply a hard maximum working set limit to the process. What this means is that the working set cannot increase above the limit, so if more memory is required, part of the existing working set must be paged out. The memory paged out is the least recently used, so it is very likely to be the memory the developer forgot to release; they won’t be using it again and it can sit in the page file until the process exits. The result is increased page file usage but decreased RAM usage, which should help performance and scalability and reduce the need for reboots or manual intervention.
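Under the hood this is the Win32 SetProcessWorkingSetSizeEx API. A hedged illustration follows; the process name and limits are made up and this is not the actual trimmer.ps1 code:

```powershell
## Expose the Win32 API that applies working set limits (flag values per the Windows SDK)
Add-Type -Namespace 'Win32' -Name 'Kernel32' -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx(IntPtr hProcess, IntPtr dwMinimumWorkingSetSize, IntPtr dwMaximumWorkingSetSize, uint Flags);
'@

[uint32]$QUOTA_LIMITS_HARDWS_MIN_DISABLE = 0x2   ## minimum is a soft limit
[uint32]$QUOTA_LIMITS_HARDWS_MAX_ENABLE  = 0x4   ## maximum is a hard limit

$process = Get-Process -Name 'leakyprocess'      ## hypothetical process name

## Cap the working set at 100MB; the OS will page out least recently used memory to stay under it
[bool]$result = [Win32.Kernel32]::SetProcessWorkingSetSizeEx( $process.Handle ,
    [IntPtr]1MB , [IntPtr]100MB ,
    ( $QUOTA_LIMITS_HARDWS_MIN_DISABLE -bor $QUOTA_LIMITS_HARDWS_MAX_ENABLE ) )
```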

Applying a hard working set limit is easy with the script, the tricky part is knowing what value to set as the limit – too low and it might not just be leaked memory that is paged out so performance could be negatively affected due to hard page faults. Too high a limit and the memory savings, if the limit is ever hit, may not be worth the effort.

To set a hard working set limit on a process we run the script thus:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB

or if the process has yet to start we can use the waiting feature of the script along with the -alreadyStarted option in case the process has actually already started:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB -waitFor leakyprocess -alreadyStarted

You will then observe in task manager that its working set never exceeds 100MB.

To check that hard limits are in place, you can use the reporting option of the script since tools like task manager and SysInternals Process Explorer won’t show whether any limits are hard ones. Run the following:

.\trimmer.ps1 -report -above 0

which will give a report similar to this where you can filter where there is a hard working set limit in place:

hard working set limit

There is a video here which demonstrates the script in action and uses task manager to prove that the working set limit is adhered to.

One way to implement this for a user would be to have a logon script that uses the -waitFor option as above to wait for the process to start, together with -loop so that the script keeps running and picks up further new instances of the process to be controlled. To implement it for system processes, such as a leaky third party service or agent, use the same approach but in a computer start-up script.

Once implemented, check that hard page fault rates are not impacting performance because the limit you have imposed is too low.

The script is available here and use of it is entirely at your own risk.

Changing/Checking Citrix StoreFront Logging Settings

Enabling, capturing and diagnosing StoreFront logs is not something I have to do often, but when I do, I find it time consuming to enable, and disable, logging across multiple StoreFront servers and also to check on the status of logging, since Citrix provide cmdlets to change tracing levels but not, as far as I can tell, to query them.

After looking at reported poor performance of several StoreFront servers at one of my customers, I found that two of them were set for verbose logging which wouldn’t have been helping. I therefore set about writing a script that would allow the logging (trace) level to be changed across multiple servers and also to report on the current logging levels. I use the plural as there are many discrete modules within StoreFront and each can have its own log level and log file.

So which module needs logging enabled? The quickest way, which is all the script currently supports, is to enable logging for all modules. The Citrix cmdlet that changes trace levels, namely Set-DSTraceLevel, can seemingly be used more granularly, but I have found insufficient detail to be able to implement that in my script.

The script works with clustered StoreFront servers: specify just one of the servers in the cluster via the -servers option together with the -cluster option. The script will then (remotely) read the registry on that server to find where StoreFront is installed so that it can load the required cmdlets and retrieve the list of all servers in the cluster.

To set the trace level on all servers in a StoreFront cluster run the following:

& '.\StoreFront Log Levels.ps1' -servers storefront01 -cluster -traceLevel Verbose

The available trace levels are:

  • Off
  • Error
  • Warning
  • Info
  • Verbose

To show the trace levels, without changing them, on these servers and check that they are consistent on each server and across them, run the following:

& '.\StoreFront Log Levels.ps1' -servers storefront01 -cluster -grid

Which will give a grid view similar to this:

storefront log settings

It will also report the version of StoreFront installed although the -cluster option must be used and all servers in the cluster specified via -servers if you want to display the version for all servers.

The script is available here and you use it entirely at your own risk although I do use it myself on production StoreFront servers. Note that it doesn’t need to run on a StoreFront server as it will remote commands to them via the Invoke-Command cmdlet. It has so far been tested on StoreFront versions 3.0 and 3.5 and requires a minimum of PowerShell version 3.0.

Once you have the log files, there’s a script introduced here that stitches the many log files together and then displays them in a grid view, or csv, for easy filtering to hopefully quickly find anything relevant to the issue being investigated.

For those of an inquisitive nature, the retrieval side of the script works by calling the Get-DSWebSite cmdlet to get the StoreFront web site configuration which includes the applications and for each of these it finds the settings by examining the XML in each web.config file.
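As an illustration of that last point, something along these lines pulls trace switch values out of a web.config; the path and the exact XML structure are assumptions as they vary by StoreFront version and module:

```powershell
## Illustrative only: path and XML layout are assumptions, not verified against every StoreFront version
[xml]$webConfig = Get-Content -Path 'C:\inetpub\wwwroot\Citrix\StoreWeb\web.config' -Raw

## .NET diagnostics trace sources typically carry their level in a switchValue attribute
$webConfig.SelectNodes( '//*[@switchValue]' ) |
    Select-Object -Property name , switchValue
```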

Don’t forget to return logging levels to what they were prior to your troubleshooting although I would recommend leaving them set as “Error” as opposed to “Off”.

Citrix PVS device detail viewer with action pane


I find myself frequently using the script I wrote, see here, to check the status of PVS devices and then sometimes I need to perform power actions on them, turn maintenance mode on or off or maybe message users on them before performing power actions (I am a nice person after all). Whilst we can perform most of those actions in the PVS console, if you are dealing with devices across multiple collections, sites or PVS instances then that can involve a lot of jumping around in the PVS console. Plus if you want to change maintenance mode settings or message logged on users then you need to do this from Citrix Studio so you’ll need to launch that and go and find the PVS devices in there.

So I decided to put my WPF knowledge to work and built a very simple user interface in Visual Studio which I then inserted into the PVS device detail viewer script. Once you’ve run the script and got a list of the PVS devices, sorted and/or filtered as you desire, select those devices and then click the “OK” button in the bottom right-hand corner of the grid view. Ctrl-A selects all devices, which can be useful if you’ve filtered on something like “Booted off latest” so that only devices not booting off the latest production vdisk are displayed. This then fires up a user interface that looks like the screenshot below, unless you’ve run the script with the -noMenu option or hit “Cancel” in the grid view.

pvs device actioner gui
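The general technique of embedding Visual Studio-designed XAML in a PowerShell script looks something like the sketch below. The XAML, the element names and the function name here are illustrative rather than what the actual script contains:

```powershell
# Hedged sketch of the XAML-in-a-script technique: paste the XAML designed
# in Visual Studio into a here-string, then hydrate it with XamlReader at
# run time. Requires WPF (PresentationFramework), so Windows only.
$deviceActionerXaml = @'
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="PVS Device Actioner" Width="300" Height="200">
    <StackPanel>
        <Button x:Name="btnBoot" Content="Boot" Margin="5"/>
        <Button x:Name="btnShutdown" Content="Shutdown" Margin="5"/>
    </StackPanel>
</Window>
'@

function Show-DeviceActioner {
    param( [Parameter(Mandatory)][string]$Xaml )

    Add-Type -AssemblyName PresentationFramework    # WPF, Windows only
    $reader = [System.Xml.XmlNodeReader]::new( [xml]$Xaml )
    $window = [Windows.Markup.XamlReader]::Load( $reader )
    # Find named elements and wire up their click handlers
    $window.FindName( 'btnBoot' ).Add_Click( { Write-Verbose 'Boot requested' } )
    [void]$window.ShowDialog()
}

# On a machine with WPF available:
# Show-DeviceActioner -Xaml $deviceActionerXaml
```

The appeal of this approach is that the UI can be redesigned in Visual Studio and the revised XAML pasted straight back into the script.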

All the devices you selected in the grid view will be selected automatically for you but you can deselect any before clicking on the button for an action. It will ask you to confirm the action before undertaking it.

pvs device viewer confirm

If you select the “Message Users” option then an additional dialog will be shown asking you for the text, caption and level of the message, although you can pass these on the command line via the -messageText and -MessageCaption options.

pvs device viewer message box

The “Boot” and “Power Off” options use PVS cmdlets rather than Delivery Controller ones since the devices may not be known to the DDC. “Shutdown” and “Restart” use the “Stop-Computer” and “Restart-Computer” cmdlets respectively and I have deliberately not used the -force parameter with them so if users are logged on, the commands will fail. Look in the window you invoked the script from for errors.
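A hedged sketch of how that dispatch might look follows; Invoke-DeviceAction and its parameters are my invention for illustration, not the script's actual code. Note the deliberate absence of -Force:

```powershell
# Illustrative sketch of dispatching the "Shutdown" and "Restart" actions.
# Stop-Computer / Restart-Computer are deliberately called without -Force,
# so they fail, with an error in the invoking window, if users are logged on.
function Invoke-DeviceAction {
    param(
        [Parameter(Mandatory)][string[]]$ComputerName ,
        [Parameter(Mandatory)][ValidateSet('Shutdown','Restart')][string]$Action
    )
    switch( $Action ) {
        'Shutdown' { Stop-Computer    -ComputerName $ComputerName } # no -Force
        'Restart'  { Restart-Computer -ComputerName $ComputerName } # no -Force
    }
}
```

Adding -Force to those two cmdlet calls would make them terminate user sessions, which is exactly the behaviour being avoided here.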

You can keep clicking the action buttons until you exit the user interface. For instance, it can be used to enable maintenance mode, message users asking them to log off, reboot the devices once they have logged off (or you have had enough of waiting for them to do so) and then turn off maintenance mode, if you want to put them back into service after the reboot.

I hope you find it as useful as I do but note that you use the script entirely at your own risk. It is available here and requires version 7.7 or higher of PVS, XenApp 7.x and PowerShell 3.0 or later, with the PVS and Studio consoles installed on the machine you run the script from (so that their PowerShell cmdlets are available too).


Regrecent comes to PowerShell

About 20 years ago, after I found out that registry keys have last-modified timestamps, I wrote a tool in C++ called regrecent which showed keys that had been modified in a given time window. If you can still find it, this tool does work today, although, being 32-bit, it will only show changes under Wow6432Node on 64-bit systems.

Whilst you might like to use Process Monitor (or regmon before it), or similar, to look for registry changes, that approach requires you to know in advance that the registry needs monitoring. So what do you do if you need to look back at what changed in the registry yesterday, when you weren’t running these great tools, because your system or an application has started to misbehave since then? Hence the need for a tool that can show you the timestamps, although you can actually get at them from regedit by exporting a key as a .txt file, which includes the modification time for each key in the output.
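For the curious, those timestamps have to be fetched via the Win32 API, since the .NET RegistryKey class doesn’t expose them. A minimal sketch of the usual approach, P/Invoking RegQueryInfoKey, looks something like this (the namespace, type and function names are mine, not necessarily what my script uses):

```powershell
# Hedged sketch: expose advapi32!RegQueryInfoKey, whose final parameter
# returns the key's last-write time as a FILETIME.
Add-Type -Namespace RegRecent -Name Native -MemberDefinition @'
[DllImport("advapi32.dll", EntryPoint = "RegQueryInfoKeyW", CharSet = CharSet.Unicode)]
public static extern int RegQueryInfoKey(
    IntPtr hKey,
    System.Text.StringBuilder lpClass, ref uint lpcchClass, IntPtr lpReserved,
    out uint lpcSubKeys, out uint lpcbMaxSubKeyLen, out uint lpcbMaxClassLen,
    out uint lpcValues, out uint lpcbMaxValueNameLen, out uint lpcbMaxValueLen,
    out uint lpcbSecurityDescriptor, out long lpftLastWriteTime);
'@

function Get-RegistryKeyLastWrite {
    param( [Parameter(Mandatory)][Microsoft.Win32.RegistryKey]$Key )

    [uint32]$unused = 0
    [long]$lastWrite = 0
    $null = [RegRecent.Native]::RegQueryInfoKey(
        $Key.Handle.DangerousGetHandle(), $null, [ref]$unused, [IntPtr]::Zero,
        [ref]$unused, [ref]$unused, [ref]$unused, [ref]$unused,
        [ref]$unused, [ref]$unused, [ref]$unused, [ref]$lastWrite )
    [datetime]::FromFileTimeUtc( $lastWrite )
}
```

On Windows you could then call, say, `Get-RegistryKeyLastWrite -Key (Get-Item HKLM:\SOFTWARE\Microsoft)` and recurse through subkeys, comparing each timestamp against the requested time window.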

The PowerShell script I wrote to replace the venerable regrecent.exe, available here, can be used in a number of different ways:

The simplest form is to show keys changed in the last n seconds/minutes/hours/days/weeks/years by specifying the number followed by the first letter of the unit. For example, the following shows all keys modified in the last two hours:

.\Regrecent.ps1 -key HKLM:\System\CurrentControlSet -last 2h

We can specify the time range with a start date/time and an optional end date/time, with the current time used if no end is specified.

.\Regrecent.ps1 -key HKLM:\Software -start "25/01/17 04:05:00"

If just a date is specified then midnight is assumed and if no date is given then the current date is used.

We can also exclude (and include) a list of keys based on matching a regular expression to filter out (or in) keys that we are not interested in:

.\regrecent.ps1 -key HKLM:\System\CurrentControlSet -last 36h -exclude '\\Enum','\\Linkage'

Notice that we have to escape the backslashes in the key names above because the backslash has special meaning in regular expressions. You don’t have to specify backslashes at all, but then the search strings would match text anywhere in the key name rather than at the start of a subkey name (which may of course be what you actually want).
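A quick demonstration of the effect, using a hypothetical list of key names rather than the script itself:

```powershell
# Illustrative only: show how -match with escaped backslashes excludes keys.
# '\\Enum' matches a literal backslash followed by "Enum" in a key path.
$keys = @(
    'HKLM:\System\CurrentControlSet\Enum\PCI' ,
    'HKLM:\System\CurrentControlSet\Services\Tcpip' ,
    'HKLM:\System\CurrentControlSet\Services\Tcpip\Linkage'
)
$excludes = '\\Enum' , '\\Linkage'

$keys | Where-Object {
    $key = $_
    # Keep the key only if no exclusion pattern matches it
    -not ( $excludes | Where-Object { $key -match $_ } )
}
# Leaves only HKLM:\System\CurrentControlSet\Services\Tcpip
```

Without the doubled backslashes, a pattern like 'Enum' would also match a key whose name merely contained that text, such as a value of "EnumDevices" anywhere in the path.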

To specify a negation on the time range, so show keys not changed in the given time window, use the -notin argument.

If you want to capture the output to a csv file for further manipulation, use the following options:

.\regrecent.ps1 -key HKCU:\Software -last 1d -delimiter ',' -escape | Out-File registry.changes.csv -Encoding ASCII

I hope you find this as useful as I do for troubleshooting.