VMware {code} Power Sessions Online April/May…

We’re excited to announce the launch of our VMware {code} Power Sessions Online program! Each week, we’ll be live streaming a new technical talk on our YouTube channel. If you are interested in giving a talk of your own, please see the details here for further instructions. April 2020: Simplify Kubernetes development with Octant.

Update to Citrix PVS device detail viewer

While using the script, introduced here, at a customer this week, I found a few bugs (as you do) and also added a few new features to make my life easier.

In terms of new features, I’ve added a -name command line option which will only show information for devices that match the regular expression you specify. Now don’t run away screaming because I’ve mentioned regular expressions as, contrary to popular belief, they can be straightforward (yes, really!). For instance, if you’ve got devices CTXUAT01, CTXUAT02 and so on that you just want to report on then a regex that will match that is “CTXUAT” – we can forget about matching the numbers unless you specifically need to only match certain of those devices.
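To show just how straightforward that is, here’s a quick sketch (the device names are invented for illustration) of how PowerShell’s -match operator treats a plain string as a regular expression that matches anywhere in the name, which is essentially what the -name option does:

```powershell
# A plain string is a valid regex that matches anywhere in the target string
$devices = @( 'CTXUAT01' , 'CTXUAT02' , 'CTXPRD01' , 'WEBSRV01' )

# Keep only the devices whose names match the pattern
$matched = @( $devices | Where-Object { $_ -match 'CTXUAT' } )

$matched   # CTXUAT01 and CTXUAT02

# Anchors and character classes are there if you do need to match only certain devices
$devices | Where-Object { $_ -match '^CTXUAT0[12]$' }
```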

Another option I needed was to display Citrix tag information since I am providing a subset of servers, using the same naming convention as the rest of the servers, where there are tag restrictions so that specific applications only run off specific servers. Using tags means I don’t have to create multiple delivery groups, which makes maintenance and support easier. Specify a -tags option and a column will be added with the list of tags for each device, if present.

However, adding the -tags option was “interesting” because the column didn’t get added. A bug in my code – surely not! What I then found, thanks to web searches, is that versions of PowerShell prior to 5 have a limit of 30 columns, so any more than that silently get dropped. The solution? Upgrade to PowerShell version 5 or, if that’s not possible and you want the tag information, remove one of the other columns by changing the $columns variable. Yes, 30 columns is a lot for the script to produce but I decided it was better to produce too much information, rather than too little, and then let columns be removed later in Excel or the grid view. I also found a bug, yes really, where if the vDisk configured for a device had been changed since it was booted then it would not be identified as not booting off the latest. That’s fixed, so remember you can quickly find all devices not booting off the latest production version of the vDisk, or booting off the wrong vDisk, by filtering on the “Booted off Latest” column. The script is still available here (GitHub? never heard of it :-)).

The origins of AppSense

A long time ago (1998) in a city far, far away (London, and I live in West Yorkshire some 200 miles north), I was a consultant working as part of a team implementing a new Citrix environment for a private bank. I think it was probably MetaFrame 1.0 on NT 4.0 Terminal Server Edition but I may be wrong – I started with WinFrame 1.5 in 1995, albeit with an OEM version called NTrigue from Insignia Solutions, based in High Wycombe, who Citrix went on to acquire (and with it Keith Turnbull and Jon Rolls). All was going well until I noticed that one particular user had somewhere approaching fifty executable files in their home drive and was periodically running them on our shiny new (physical) servers.
Back then, there wasn’t anything like AppSense Performance Manager, as AppSense hadn’t yet come into existence, four physical processors was about the most you could put in a server (and that was very expensive) and a few gigabytes of RAM cost more than the server itself, so resources were at a premium and needed protecting. We therefore had a problem in that line of business applications were competing with all manner of “fun” software – probably what might be classed as malware these days. My background is in software development – six years as a UNIX developer after graduation in 1988, and I wrote my first programs, initially in BASIC and then 6502 assembly language, in 1980 on a Commodore Pet – so I set about developing something that would stop this user from running any of these applications in order to preserve server resources. So EStopper from ESoft was born, bearing in mind that the two most difficult challenges in software development are not actually writing the code itself but what to call a product and what icon to use. The very first version used Image File Execution Options (IFEO) in the registry coupled with File Type Associations (FTAs), so was very much a blacklisting approach. You will have probably used the admin interface for this at some point – it was called “regedit” (my career started out in the non-GUI days so I couldn’t, and still can’t, code GUIs). All of the development for version 1.x was done out of hours, as I had a regular day job as a consultant, so was in hotel rooms, trains and late at night at home after my wife had gone to bed. For version 2.0 I was allocated a whole month where I had to learn how to write GUIs, which cured me of ever wanting to be a full time developer again! From here came the idea of Trusted Ownership, which obviates the need for whitelisting or blacklisting, thus simplifying deployment so an out of the box/default configuration can give instant protection from all external executable threats.
I’m not sure how I came up with this idea but by this point the first full-time developer had been recruited, as we’d sold the product to a couple of our “tame” Citrix accounts (as an “independent” organisation called iNTERNATA), and whilst he was a fantastic coder, he did his best work in the early hours of the morning so I’d get phone calls around 3am asking about some aspect of Windows NT security – and I needed my beauty sleep even back then! Auditing of who was denied access to what was in the product from the very beginning but, after seeing a denial of something called “porn.exe” in the event logs, I thought that a feature to take a copy of denied executables, purely for research/disciplinary purposes of course, was a good idea. And, no, I never did investigate “porn.exe” (yes, really) although archiving is still a very useful feature today when understanding a new environment, so denied content can be examined without having to recreate it. So finally AppSense the company was born in 1999, with dedicated sales and development resources, and the EStopper product was rebranded to be called AppSense as that was the only product they originally had. Even when Performance Manager joined the stable a few years later, the installer for what eventually was rebranded to Application Manager was still called AppSense.msi for a while. And the rest, as they say, is history, although the origins of Environment Manager are also “Quite Interesting” which I’ll leave to another time.

The Taming of the Print Server

The Problem

A customer recently reported to me that sometimes their users complained (users complaining – now there’s a rarity!) that printing was slow.
The Investigation

When I logged on to the print server I observed that the CPU of this four vCPU virtual machine was near constantly at 100% and, digging in with taskmgr, saw that it was mostly being consumed by four explorer processes for four different users who were actually disconnected (in some cases, for many days). Getting one of these users to reconnect, I saw that they had a window open on “Devices and Printers” and, with over six hundred printers defined on this print server, my theory was that it was spending its time constantly trying to update statuses and the like for all these printers.

The Solution

Log off the users and set idle and disconnected session timeouts so that this didn’t happen again! Well, that’s great but what if the users are actually logged on, checking print queues and the like, as administrators have a tendency to do? What we have to remember here is that consumption of CPU isn’t necessarily a bad thing as long as the “right” processes get preferential access to the CPU. So how do we define the “right” processes on this print server? Well, we know it’s probably not the explorer processes hosting the “Devices and Printers” applet, so why don’t we ensure that these don’t get the CPU first so that more deserving processes can have the CPU resource? Therefore, what we do here is to lower the base priorities of the explorer processes. This means that if a thread in another process arrives in the CPU queue with a higher base priority then it gets on the CPU before the lower priority thread. You can do this manually with taskmgr by right-clicking on the process and changing its base priority but that’s not the most fun way to spend your day, although I may be wrong. So I wrote a few lines of good old PowerShell to find all processes of a given name for all users and then change their base priorities to that specified on the command line.
This must be run elevated since the “-IncludeUserName” parameter requires it. This parameter is used in order to filter out processes owned by SYSTEM or a service account, not that explorer processes are likely to be thus owned, so that the script example can be used for any process, since we shouldn’t mess with operating system processes as it can cause deadlocks and similar catastrophic issues. Also, I would strongly recommend that you never use the “RealTime” priority as that could cause severe resource shortages if the process granted it is CPU hungry. I first implemented this in the All Users Startup folder so it would run at logon for everyone once their explorer process had launched, but I felt slightly nervous about this in case explorer got restarted mid-session. I therefore implemented the script as a scheduled task that ran every ten minutes, under an administrative account whether that user was logged on or not, which looked for all explorer processes and set their base priorities to “Idle”, which is the lowest priority, so when any spooler thread required CPU resource it would get it in preference to the lower priority explorer threads. However, if these explorer threads needed CPU and nothing else needed it then they would get the CPU, despite their low priority, so potentially there are no losers. Users might experience a slightly unresponsive explorer at times of peak load but that’s a small price to pay to get happier users, I hope you’ll agree.

Param
(
    [Parameter(Mandatory=$true,Position=0)]
    [string]$processName ,
    [Parameter(Mandatory=$false,Position=1)]
    [ValidateSet('Normal','Idle','High','RealTime','BelowNormal','AboveNormal')]
    [string]$priority = 'Idle'
)

Get-Process -Name $processName -IncludeUserName | ?{ $_.UserName -notlike '*\SYSTEM' -and $_.UserName -notlike '* SERVICE' } | %{ $_.PriorityClass = $priority }

So we just create a scheduled task to run under an administrator account, whether we are logged on or not, passing in a single positional/named parameter of “Explorer” as the base priority defaults to “Idle” if not specified. If you implement this on a non-English system then you may need to change the account names above to match “SYSTEM”, “LOCAL SERVICE” and “NETWORK SERVICE”. Job done, as we say in the trade.
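As a hypothetical sketch of that scheduled task (the task name, folder, script path and account are all placeholders – substitute your own), it could be created from an elevated prompt with schtasks, which supports running whether the user is logged on or not when a password is supplied:

```shell
rem Runs every 10 minutes under an admin account, logged on or not (/rp * prompts for the password)
schtasks /create /tn "\Guy's Tasks\Lower Explorer Priority" /sc minute /mo 10 ^
    /ru contoso\svc-admin /rp * /rl highest ^
    /tr "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Set-ProcessPriority.ps1 Explorer"
```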

Oh, and I always like to create a folder for my scheduled tasks to keep them separate from the myriad of other, mostly built-in, ones.

Advanced Procmon Part 2 – Filtering inclusions

In part 1 I showed some of the exclusions I would typically add to procmon’s filters to help reduce the volume of data you have to sift through. Having been working on a customer site last week where I had to try to find the cause of a disappearing “My Documents” folder, I think that it’s time to finally write part 2 so here goes.

What I had been asked to troubleshoot was why the hidden attribute on the user’s documents folder, which was being redirected, to a folder on the user’s H: drive (a DFS share, not that this is relevant other than it is not a local folder), was being set during a user’s XenApp session which made it rather difficult for them to work.

The first thing to check was what was doing the folder redirection, which is where the first procmon filters were used. I spend a lot of time trying to trace where a Windows application, or the operating system itself, stores a particular setting so that I can then automate its setting in some way. When it boils down to it, a setting is almost always going to be stored either in the registry or in a file, which means we can use procmon to filter on the setting of the registry value or the writing of a file.

For example, let’s use procmon to see where Notepad (one of my favourite and most used apps) stores the font if you change it. I set my include filters as shown below – note that the excludes don’t really matter at this point:

I then change the font to “Candara” in the Format->Font menu and click “OK”

What we notice is that procmon doesn’t show anything after I click “OK” so have I got the filters wrong or is it saving it somewhere else (a network database for instance)? No, what we’re seeing is the result of a developer who has decided that changed settings will only get written back when the application exits normally rather than as soon as they are changed. It’s more efficient this way but does mean that changed settings won’t get written back if the application exits abnormally like if it crashes or it is terminated via task manager.

Note that the “WriteFile” operation isn’t always listed in the drop down list so if this is the case, select “IRP_MJ_WRITE” instead.

So after I exit notepad, procmon then shows me the following where I’ve highlighted the line where we can see the font being set to what I picked previously.

And there we have it – how to find where an application/process stores settings, if it does persist them in a human readable form of course. It might encrypt them or store them as Unicode, although the latter is relatively easy to spot, though not to search for, as you’ll see zero byte padding for every character if you look at the data in regedit, as shown below for a value in my mail profile.
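That zero byte padding is just how UTF-16, which Windows calls Unicode, encodes ASCII-range characters – each character takes two bytes, the second of which is zero. A quick PowerShell illustration:

```powershell
# UTF-16LE ("Unicode" in Windows parlance) stores ASCII-range characters in two bytes each,
# the second byte being zero - which is the padding you see in regedit's binary view
$bytes = [System.Text.Encoding]::Unicode.GetBytes( 'Font' )
$bytes   # 70 0 111 0 110 0 116 0
```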

What you might have to contend with is an application, like a Microsoft Office application for instance, writing a large number of settings in one go, rather than just the one you changed. What I do here, when the entry is a text entity like a folder name or some arbitrary string data, is to specify a string that is unlikely to be used anywhere else so I can either search for it in procmon or in the registry or file system directly. For instance last week I needed to know where Excel 2013 stored the folder that you specify in the “At startup, open all files in” setting in the Advanced->General options so I specified it as “H:\Elephant” and went on an elephant hunt (ok, search) …

So back to the tale of the hiding of the Documents folder. Using the above technique and because I know that redirected folder settings are written to “HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders” (and potentially “Shell Folders” in the same key but the former takes priority if the value exists in both keys), I set the procmon filters to the following in an admin session on my customer’s test XenApp server:

At this point I didn’t know what the process was that was setting it so I didn’t filter on a specific process. Note that if multiple users had logged on during the procmon trace then it may have confused matters so you can filter on the session id too within procmon should you deem it necessary. You can right-click on the column headers in procmon, select “Select Columns” and then add columns like session id and user name although be careful if filtering on the latter as it may be a system process doing what you’re investigating rather than one running as that user.

This procmon trace told me that it was a process called EMUser.exe that was setting the “Personal” (Documents/My Documents) registry value which is part of the AppSense Environment Manager product. I checked the AppSense configuration and it was indeed setting the redirection but it wasn’t changing the attributes in any way that I could see.

The next part of the puzzle is to figure what I needed to filter on to be able to spot the changing of the attributes to hidden. This I did with the help of the good old attrib utility by creating a test folder and then running attrib to set and unset the hidden attribute on this folder so I could see what procmon reported.

Note my use of “echo %time%” – it can be a good idea to know what time you performed an operation in case there are many entries in a trace – I sometimes wait for the taskbar clock to hit an exact minute before I click something I want to trace the consequences of so I can search for that time in the log (and then exclude lines before that point if necessary by right clicking on that line and selecting “Exclude Events Before”).

So what we learn from the above is that when the hidden attribute is set, it will be the operation “SetBasicInformationFile” with “FileAttributes” in the “Detail” column containing the letter “H” since we see “HSDN” when I set hidden and system and we don’t see “H” when I subsequently remove the hidden attribute.

Back on the trail of the hidden documents folder, my filters became thus:

If “SetBasicInformationFile” is not present in the drop down list then pick “IRP_MJ_SET_INFORMATION” instead.

I then logged on to the XenApp server with my test user account and very quickly noticed that the Documents folder was not visible, so I stopped the trace at this point and found that it was Lync.exe (Communicator) that had set the hidden attribute. Next I reset the filter and just filtered on Lync.exe so I could try and figure out what Lync was doing just before it decided to set the hidden attribute. What I found was that it was querying the “Cache” value in the aforementioned “User Shell Folders” key. Looking at this value, I noticed that it too was set to “H:\Documents” which didn’t feel right given that it is normally set to the temporary internet files folder in the user’s local profile.

This is where you have to try and second guess developers or at least come up with a theory and then prove or disprove it. My theory was that Lync was querying what the Cache folder was set to, since it needed to use it, but had decided that it was a system folder that a user shouldn’t see so set it to hidden.

So what I did to try and both prove and fix this was to change the AppSense configuration to redirect the cache folder back to the temporary internet files folder in the user’s local profile (so “%USERPROFILE%\AppData\Local\Microsoft\Windows\Temporary Internet Files”). I did this and the documents folder no longer became hidden.

Thank you procmon – it would have been very difficult without you!

Teeing up your output

I was on customer site this week running some good old cmd scripts on a Windows Server that output to standard output, or stdout as we say in the Linux/Unix world, so the output showed in the cmd prompt that I’d run them from. But as well as seeing the results as they happened, I also needed to record the output to a file in case I needed to refer back to any specific piece of it later, given that it was running against nearly 6000 users.

Enter the Unix/Linux “tee” command that outputs to standard output as well as to a file. This is available through packages such as Cygwin but when on customer site you typically need access to a standalone executable that doesn’t need installing that you can just copy to the required system and run. To that end, here is my implementation of the “tee” utility that I wrote many years ago that will serve the purpose required.

At its simplest, it just takes an output file name that is overwritten with whatever it is piped to it so is invoked thus:

yourscript.cmd your parameters | tee outputfile.log

If you also want to catch errors then you need to do the following which is “stolen” directly from Unix shell scripting:

yourscript.cmd your parameters 2>&1 | tee outputfile.log

This works because “2” is the file descriptor for standard error (stderr) and “1” is the file descriptor for standard output (stdout).

By default it overwrites the output file but specify a -a argument and it will append to an existing file; use -b if you have long lines since it reads a line at a time. Run with -? to see a usage message.

You can download the tool here but as ever it comes with absolutely no warranty and you use it entirely at your own risk (not that there’s any risk but I have to say that, don’t I?).

Yes, I know that tee, or tee-object, exists in PowerShell 3.0 and higher but sadly we still can’t guarantee that we have this on customer systems, much as I like PowerShell.
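For completeness, where PowerShell 3.0 or later is available, the built-in Tee-Object gives the same console-plus-file behaviour (the demo data below is invented; in real use you would pipe your script into it, stderr redirection and all):

```powershell
# Tee-Object writes each object to the file and also passes it through the pipeline,
# so you see output as it happens and keep a copy for later
$logfile = Join-Path ( [System.IO.Path]::GetTempPath() ) 'tee-demo.log'
$output = @( 'first line' , 'second line' | Tee-Object -FilePath $logfile )

# In real use this would be something like:
#   .\yourscript.cmd your parameters 2>&1 | Tee-Object -FilePath outputfile.log
# and Tee-Object -Append is the equivalent of the -a argument to my tee utility
```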

Transferring HP Recovery Media to Bootable USB Storage

Now that most desktops and laptops don’t ship with separate recovery media, like they did in the old days, and the cost of buying it afterwards is not insignificant, what happens if your hard drive completely fails, thus taking with it the aforementioned recovery media?

I kind of accidentally had this issue recently on a new laptop I was setting up so wondered if I could get the recovery media transferred from the hard disk to a bootable USB stick and then boot off this USB stick to perform the recovery to what was effectively a brand new hard drive. It was fortunately very easy to get this to work so here’s what you do:

1. Get a blank USB stick/drive – for the recent HP laptop with Windows 8.1 I purchased, I used a 32GB stick although 16GB may just have worked.
2. Format as NTFS – the main installation file is over 12GB but the maximum file size on FAT32 partitions is “only” 4GB so this is why FAT32 cannot be used.
3. I’d taken an image of the laptop as it arrived, so before booting into Windows for the first time, so I mounted that on the system where I was preparing the bootable (not the destination laptop although you could use it). If your original recovery partition is still available you could use that instead.
4. Copy all of the files/folders from the Recovery partition to the root of the USB stick. These are the folders you should see (note that they are hidden):
5. On the USB stick, rename the file “\recovery\WindowsRE\winUCRD.wim” to “winre.wim” (this is the file that bcdedit shows as being the boot device in the \boot\BCD file)
6. Make the USB stick bootable by running the following, obviously changing the drive letter as appropriate:
bootsect /nt60 e: /mbr

If it’s a Windows 8.x device then it may be configured for SecureBoot in which case you may need to enter the BIOS and disable this temporarily just whilst you are performing the recovery in order to get it to boot from USB. Don’t forget to change it back to the original settings once the restore is complete.

I’ll now keep this bootable around just in case the hard drive should fail or otherwise get hosed in such a way that the HP supplied recovery media will not work. At well under £10 currently for a USB 2.0 32GB USB stick, it’s a small price to pay.

Note that the recovery media is protected by a software mechanism that means that you cannot apply it to a different hardware model so this is not a means to clone illegal, activated, copies of Windows!

Reasons for Reboots – Part 1

So you’re quite happily working away having installed an update to an application that you’re not currently running only to find that the installer demands a reboot at the end of the installation anyway. “Why”, you ask yourself, “I wasn’t even using the program”. Over the next few posts, I will cover the main mechanisms that Windows uses to update in-use files and how you can sometimes safely make the required updates without rebooting.

In this article, we’ll look at the registry value “PendingFileRenameOperations” found in the key “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager” although the value may not be present if there are no pending file renames to be performed at the next boot. This value consists of pairs of files – the first file of the pair is the source file and the second file is the destination. If the second file is the empty string then the source file is to be deleted.
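Since the data is just a flat REG_MULTI_SZ of alternating source/destination strings, the pairing is easy to decode by hand or in script. Here’s a hedged sketch (the function name is my own and the sample data is invented, though modelled on the Google Drive example discussed below) of how the pairs could be pulled apart:

```powershell
# PendingFileRenameOperations is a flat list where each odd entry is a source file
# and the entry after it is the destination; an empty destination means "delete".
# Paths are stored in native NT form, so typically start with \??\
Function ConvertTo-RenamePairs
{
    Param( [string[]]$data )
    For( $index = 0 ; $index -lt $data.Count ; $index += 2 )
    {
        [pscustomobject]@{
            'Source'      = $data[ $index ] -replace '^\\\?\?\\' , ''
            'Destination' = $data[ $index + 1 ] -replace '^\\\?\?\\' , ''
            'Operation'   = if( [string]::IsNullOrEmpty( $data[ $index + 1 ] ) ) { 'Delete' } else { 'Move' }
        }
    }
}

# Invented sample data - on a real system it would come from:
# (Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager').PendingFileRenameOperations
$sample = @( '\??\C:\Program Files\Google\Drive\googledrivesync.exe' , '' ,
             '\??\C:\Program Files\Google\Drive\TBM5CAD.tmp' , '\??\C:\Program Files\Google\Drive\googledrivesync.exe' )
$pairs = @( ConvertTo-RenamePairs $sample )
```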

In the example above, there are three Adobe Flash files that are to be deleted, presumably as they were in use when the Flash update was applied and the replacement files have different names (since the file names seem to include the version number in this example). Now whilst these files may have been in use when the update was applied, they may not be now so we can use SysInternals/Microsoft Process Explorer to see if these files are still in use or we can just try deleting the files anyway since if they are in use we will get an error.

In the Process Explorer handle search results above we can see that nothing currently has this file open so we could do the deletion now. If a process did have a handle open to the file that needed deleting then we would have to make a judgement call as to whether we could safely terminate the process holding the handle open, and restart it if required. Do not use the Process Explorer functionality that allows you to close a handle though as this may cause the process owning the handle to malfunction or crash since the application developer for that application believes they are in control of the handle’s lifespan and probably won’t expect, or cater for, external interference.

Next in the PendingFileRenameOperations value above, googledrivesync.exe is to be deleted and then the file “C:\Program Files\Google\Drive\TBM5CAD.tmp” is to be moved to googledrivesync.exe. Actually, the delete operation here is superfluous as the file move will overwrite googledrivesync.exe anyway. Here we could terminate googledrivesync.exe, do the file move manually as an administrator and then restart (the updated) googledrivesync.exe.

You might think that if PendingFileRenameOperations just included file delete operations, rather than moves, then the reboot could be delayed until a convenient time. However, I have seen many occasions where a file to be deleted is one that is still in use because the installer has failed to stop/restart whatever was using the old, to be deleted, file, such that the system, although it claims to be running the updated software, is in fact still running some of the old components, which may cause problems. I always check PendingFileRenameOperations after an installation even if the installer hasn’t requested a reboot. These files are typically “rollback” files which are usually located in the %systemdrive%\Config.msi folder and have a “.rbf” extension. To find out what they are, file wise, since they will have been renamed, take a copy, add a .dll extension and then view the properties in Explorer.

Note that regedit does not allow you to have empty strings when modifying a REG_MULTI_SZ value, so if you want to edit the value, do not use regedit: export it to a .reg file, modify that in a text editor like notepad and re-import it, otherwise the value will become corrupt. If there is no data (file names) left in the value then the value itself should be deleted. Don’t click “OK” when viewing the value data even if you haven’t changed it (my best practice is always to click “Cancel” on anything where you’ve not made a change anyway) otherwise you will get the warning below, which I personally feel is a (long standing) bug in regedit since REG_MULTI_SZ values can quite evidently contain empty strings:

and the value will then look something like this:

which is unfortunately now rather broken and will potentially update/delete the wrong files!

You can add your own entries into the PendingFileRenameOperations using an old Microsoft command line tool called InUse.exe which is available here if you need it.

If you set Process Monitor to log from boot, you can see the PendingFileRenameOperations value being processed. It’s not very interesting though! You should see that it is processed in a top down manner – updates to the value when setting file moves/deletions are usually appended to the value so the most recent updates will feature at the end, not the start, of the value’s data.

Introduction

In Part 1 we covered how to make a very basic native Windows bootable image and Part 2 went further to show customisation methods. In this part I’ll discuss some further enhancements we can make, namely:

• Making a multi-image bootable

Over the (many) years that I’ve been making Windows bootables, frequently used for recovery purposes on physical devices, the need to add drivers to the images has reduced greatly since each Windows release seems to cope better with the array of PCs, Servers and laptops that I encounter (or perhaps there is less and less “exotic” hardware out there). However, given the extracted driver package for a specific device, e.g. storage controller or network card (note that wireless doesn’t work out of the box with Windows PE), it is incredibly easy to add the drivers to the bootable image once it has been mounted, to c:\temp\wimp in this case, which we covered in part 2. So to add the drivers simply run the following command once your wim file has been mounted:

Dism /Image:C:\temp\wimp /Add-Driver /Driver:C:\drivers\mydriver.inf

This should then parse the driver’s .inf file specified and copy all of the required files from the specified folder into the mounted image. If you have more than one driver to add and they are in a folder hierarchy then you can use a single command to add them all – use the /Recurse option instead of /Driver. If any of the drivers are unsigned then using the /ForceUnsigned option may help.
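As a concrete example of the /Recurse variant (the folder path is illustrative):

```shell
rem Add every driver found beneath C:\drivers to the mounted image in one go
Dism /Image:C:\temp\wimp /Add-Driver /Driver:C:\drivers /Recurse
```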

To see what 3rd party, as in non-Microsoft, drivers are in your image run the following:

Dism /Image:C:\temp\wimp /Get-Drivers

Once you have finished adding drivers then remember to commit and unmount the image so that the changes you have made are written back to the .wim file and then make the bootable USB or ISO as before although for USB all you need to do is to copy the updated boot.wim file into the \sources folder.
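The commit and unmount step mentioned above boils down to the following, using the same mount folder as earlier (on older Windows versions the verbs may be /Unmount-Wim and /Commit as a suffix, so check dism /? if this form isn’t accepted):

```shell
rem Write the changes back into the .wim and release the mount folder
Dism /Unmount-Image /MountDir:C:\temp\wimp /Commit

rem Or use /Discard instead of /Commit to abandon the changes
```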

Given that the out of the box experience with vanilla WinPE is a command prompt which not everyone is comfortable with, an easy way to provide a Windows style start menu is to use the excellent Nu2menu tool. It uses a simple XML configuration file to build a menu of tools – when adding a new tool I usually just copy and paste an existing entry and then modify it as necessary. There are sample XML snippets on the site and one in the download zip file. This then can look something like the following which also shows you the sorts of utilities that can be worth obtaining/purchasing to put into your recovery suite, such as the excellent Explorer++:

To launch nu2menu.exe automatically at boot, my startnet.cmd file simply has the following line:

start "" \programs\nu2menu\nu2menu.exe

Where \programs\nu2menu is a folder in the .wim image where I have copied the nu2menu executable, xml configuration file and the image to use for the start menu button itself which is in a file “nu2go.bmp”.

Making a multi-image bootable

So now we have the knowledge to create an all singing, all dancing, Windows PE bootable that can be used for all kinds of things Windows wise, but what if I’ve got a physical Linux system I need to do some offline servicing of? This is where we can add grub4dos to the existing bootable media (USB drives have to be FAT32 format rather than NTFS for this) to allow other operating systems to be booted, such as Linux and DOS, by the following steps:

1. Rename “bootmgr” to “bootmgr8”
2. Extract Grub4Dos grldr and menu.lst to the root of the bootable media from here
3. Rename grldr to bootmgr
4. Modify the menu.lst file as required (see below)

For example, the following entry in menu.lst will give an option to boot your existing WinPE image:

title WinPE plus BartPE
find --set-root /bootmgr8
chainloader /bootmgr8

And the following will boot an Ubuntu (I used build 14.04, 64 bit) ISO image /Ubuntu.iso already present on the media:

title Ubuntu Live from ISO
find --set-root /ubuntu.iso
map --sectors-per-track=0 --heads=0 /ubuntu.iso (0xff) || map --sectors-per-track=0 --heads=0 --mem /ubuntu.iso (0xff)
map --hook
root (0xff)
kernel /casper/vmlinuz.efi boot=casper iso-scan/filename=/ubuntu.iso noprompt noeject noswap noapm nomodeset locale=en_GB
initrd /casper/initrd.lz

Note that as it is booting from the ISO, which it loads into memory first, it can be quite slow to boot on physical systems and can spend several minutes at the stage below:

It does, however, have the advantage that, because no files need to be extracted from the ISO to the bootable media, it can be updated simply by copying a newer ISO over the top of the old one on the bootable media.

To boot to DOS (very occasionally I need this for tasks like BIOS updates on very old systems), I have the following:

title DosFlashDisk
kernel /dos/memdisk
initrd /dos/idecdrom.img

Where the idecdrom.img file is a 1.44MB floppy disk image I created years ago from an actual floppy disk and memdisk comes from here. You may be able to obtain a FreeDOS image if you don’t have a bootable floppy disk and drive from which to make an image.

Note also that we can have multiple WinPE images booting from the same USB/ISO, e.g. 32 and 64 bit variants (although I usually just have a 32 bit variant since that works fine on 64 bit systems), by using bcdedit to manipulate the \boot\BCD file on the USB stick (not in the .wim file itself). To show the current boot entries, simply run the following (where G: is my USB stick mount point):

bcdedit /store G:\boot\bcd

We can then create new entries and modify them to point to other .wim files on our boot media. See here for a quick guide to using bcdedit for this, but always remember to specify the /store option as well so that bcdedit manipulates the BCD file on the boot media and not the one on your local Windows system!
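As a sketch of what that looks like, the commands below add a second entry for a hypothetical 64 bit image called boot64.wim (both that file name and the {default} starting entry are assumptions; use the GUIDs and file names that your own store shows):

```
Rem Copy an existing entry; bcdedit prints the GUID of the new entry it creates
bcdedit /store G:\boot\bcd /copy {default} /d "WinPE x64"

Rem Point the new entry (substitute the GUID printed above) at the second .wim
bcdedit /store G:\boot\bcd /set {NewGUID} device osdevice ramdisk=[boot]\sources\boot64.wim,{ramdiskoptions}
bcdedit /store G:\boot\bcd /set {NewGUID} device ramdisk=[boot]\sources\boot64.wim,{ramdiskoptions}
bcdedit /store G:\boot\bcd /set {NewGUID} osdevice ramdisk=[boot]\sources\boot64.wim,{ramdiskoptions}
```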

That’s all for now folks – have fun making bootable media!

Making a native Windows bootable USB stick (Part 2)

In part 1 I explained how to make a very basic bootable USB or ISO using the Microsoft Assessment and Deployment Kit. However, when booted that just gives a command prompt, often erroneously referred to as a “DOS prompt” (16 bit apps won’t even run on a 64 bit OS!), which, although powerful in its own right, particularly for those of us who shun GUIs and embrace command lines, doesn’t give a great user experience. I will therefore cover how we can customise the image, which can be done for a variety of reasons:

1. Add required drivers (needed less often these days due to the large driver set provided by Microsoft)
2. Add startup scripts to give information about the environment such as disk volumes and network configuration (but not wireless (yet))
3. Add extra tools (e.g. defragmentation, backup, anti-virus, etc.)

The first thing we have to do is to mount the boot image (WIM) file so that we can manipulate it. If you have ever installed an operating system from Vista onwards then you will already have used a WIM file as this is usually the largest file on installation media and is effectively a file system within a file. See this link for a detailed explanation if you are so inclined.

To mount the WIM file we use the multi-talented DISM.exe (Deployment Image Servicing and Management) tool in the Deployment and Imaging Tools Environment administrative command prompt that I introduced in part 1. We must first create a folder in a local file system to act as our mount point, which is “c:\temp\wimp” in my examples below. So now we run:

dism /mount-wim /wimfile:"c:\WinPE\media\sources\boot.wim" /index:1 /MountDir:c:\temp\wimp

Which should then give us some folders in the mount point as shown below:

We can now add what we want to this folder hierarchy and once we’re done we simply unmount it, again using DISM.exe, and then create the bootable image again, exactly as described already in part 1.

Note that if you are adding files which you might want to update frequently, such as pattern files for an offline anti-virus scanner like Trend Micro’s Sysclean, then I tend not to add these to the WIM file but just keep them in a folder on my bootable USB stick.

Now onto the customisation. There are a number of official ways to customise the image but by far the easiest, in my opinion, is to edit the c:\temp\wimp\Windows\System32\startnet.cmd script file, which is what is invoked when the WIM file is booted. By default it just contains the “wpeinit” command, which initialises the WinPE environment.

My usual startnet.cmd file looks like this:

@echo off
title Guy's Win8.1 x86 WinPE environment
color 0e
echo Initialising ...
wpeinit
\programs\bginfo\bginfo.exe \programs\bginfo\bginfo.bgi /accepteula /silent /timer:0
ipconfig
diskpart /s %SystemDrive%\programs\tools\diskpart.txt
start "" \programs\nu2menu\nu2menu.exe

Which results in the following at boot:

Where BGInfo is the excellent Microsoft/SysInternals Background Info tool that displays system information, such as processor, memory and disk details, as the wallpaper. The file bginfo.bgi is a configuration file that I saved previously by running BGInfo interactively and saving the settings to it.

The diskpart.txt file is just a list of commands to run the diskpart tool such that it will show us information about the hard drives on the system. It contains the following:

list disk
list vol
exit

The \programs folder is one I have created myself and then added in all of the extra, third party, tools that I want to use in the booted image. We’ll cover how to hook this into the excellent NU2menu tool in the next thrilling instalment. Also, look out for information on how we can extend the capabilities of the bootable so that it will also boot live Linux distributions and even DOS all from a single USB stick or ISO.

It’s a good idea not to use drive letters in scripts, etc. because, although the system drive is usually X:, I hate making assumptions/hard coding, so I either don’t put a drive letter in at all, as above, or use the %systemdrive% environment variable. Be aware, though, that anything you change or add on this X: drive will be lost as soon as you shut down, since it is dynamically created from the WIM file, so use a persistent drive, such as a USB stick, if you need to keep a file.

Don’t forget to unmount the WIM file when you have finished customising it and before you make any bootable images from it. Ensure that no command prompts, explorer windows, etc. are in your c:\temp\wimp folder, or subfolders, and nothing has files open in this folder otherwise the image will not unmount properly. Then run:

dism /unmount-wim /mountdir:"c:\temp\wimp" /commit

Note that if you want to discard your changes then specify the /discard option in place of /commit.

Lastly, I always put file/path arguments in double quotes not only in case there are spaces in them but also so that file name completion works, by default via the <TAB> key, so the chances of a typo in a path name are greatly reduced.