Memory Control Script – Capping Leaky Processes

In the third part of the series covering the features of a script I’ve written to control process working sets (aka “memory”), I will show how it can be used to prevent leaky processes from consuming more memory than you believe they should.

First off, what is a memory leak? For me, it’s trying to remember why I’ve gone into a room but in computing terms, it is when a developer has dynamically allocated memory in their program but then not subsequently informed the operating system that they have finished with that memory. Older programming languages, such as C and C++, do not have built-in garbage collection, so memory that is no longer required is not automatically released. Note that just because a process’s memory increases but never decreases doesn’t necessarily mean that it is leaking: it could be holding on to the memory for reasons that only the developer knows.
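If you want a test target to experiment with, here is a minimal sketch of a deliberately leaky process in PowerShell, using .NET’s Marshal class to allocate unmanaged memory that is never freed. The chunk size and interval are arbitrary illustration values:

# A deliberately leaky test process: allocates 10MB of unmanaged memory
# every second and never calls FreeHGlobal, so the working set grows until
# something intervenes
while( $true )
{
    $chunk = [System.Runtime.InteropServices.Marshal]::AllocHGlobal( 10MB )
    ## touch one byte per page so the allocation actually enters the working set
    for( $offset = 0 ; $offset -lt 10MB ; $offset += 4KB )
    {
        [System.Runtime.InteropServices.Marshal]::WriteByte( $chunk , $offset , 1 )
    }
    Start-Sleep -Seconds 1
}

Run that in its own PowerShell process and watch its working set climb in Task Manager.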

So how do we stop a process from leaking? Short of terminating it, we can’t as such, but we can limit the impact by forcing it to relinquish other parts of its allocated memory (working set) in order to fulfil new memory allocation requests. What we shouldn’t do is deny the memory allocations themselves, which we could actually do with hooking methods such as Microsoft’s Detours library. This is because there is usually little a developer can do when an allocation fails other than output an error to that effect and exit, and that assumes they even bother checking the return status of the allocation before using it; if they don’t, the result is the infamous error “the memory referenced at 0x00000000 could not be read/written” (aka a null pointer dereference).

What we can do, or rather the OS can do, is to apply a hard maximum working set limit to the process. This means that the working set cannot increase above the limit, so if more memory is required, part of the existing working set must be paged out. The memory paged out is the least recently used, so it is very likely to be the memory the developer forgot to release; since they won’t be using it again, it can sit in the page file until the process exits. The trade-off is increased page file usage for decreased RAM usage, which should help performance and scalability and reduce the need for reboots or manual intervention.
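The documented Windows API for this is SetProcessWorkingSetSizeEx with the QUOTA_LIMITS_HARDWS_MAX_ENABLE flag; whether or not the script uses exactly this call, a minimal P/Invoke sketch shows the mechanism. The process name and the 1MB/100MB values are illustrative, and it assumes a single instance of the process:

# P/Invoke the Win32 API that applies working set limits. The
# QUOTA_LIMITS_HARDWS_MAX_ENABLE flag is what makes the maximum a hard
# limit rather than a suggestion to the memory manager
Add-Type -Namespace 'Win32' -Name 'Kernel32' -MemberDefinition @'
    [DllImport( "kernel32.dll" , SetLastError = true )]
    public static extern bool SetProcessWorkingSetSizeEx( IntPtr hProcess ,
        UIntPtr dwMinimumWorkingSetSize , UIntPtr dwMaximumWorkingSetSize , uint Flags );
'@

$QUOTA_LIMITS_HARDWS_MIN_DISABLE = 0x2
$QUOTA_LIMITS_HARDWS_MAX_ENABLE  = 0x4

$process = Get-Process -Name 'leakyprocess'   ## illustrative process name
[Win32.Kernel32]::SetProcessWorkingSetSizeEx( $process.Handle ,
    [UIntPtr][uint64]1MB , [UIntPtr][uint64]100MB ,
    ( $QUOTA_LIMITS_HARDWS_MIN_DISABLE -bor $QUOTA_LIMITS_HARDWS_MAX_ENABLE ) )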

Applying a hard working set limit is easy with the script; the tricky part is knowing what value to set as the limit. Too low and it might not just be leaked memory that is paged out, so performance could be negatively affected by hard page faults. Too high and the memory savings, if the limit is ever hit, may not be worth the effort.

To set a hard working set limit on a process we run the script thus:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB

or, if the process has yet to start, we can use the waiting feature of the script along with the -alreadyStarted option in case the process has actually already started:

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB -waitFor leakyprocess -alreadyStarted

You will then observe in Task Manager that its working set never exceeds 100MB.

To check that hard limits are in place, you can use the reporting option of the script, since tools like Task Manager and Sysinternals Process Explorer won’t show whether any limits are hard ones. Run the following:

.\trimmer.ps1 -report -above 0

which will give a report similar to the one below, which you can filter to show processes that have a hard working set limit in place:

[Screenshot: report output showing the hard working set limit]
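If you want to verify a limit independently of the script, the companion GetProcessWorkingSetSizeEx API returns the limits together with a flags value you can test. A minimal sketch, again using the hypothetical leakyprocess:

# Read back the working set limits and flags for a process to confirm
# that the maximum is a hard limit (flag 0x4, QUOTA_LIMITS_HARDWS_MAX_ENABLE)
Add-Type -Namespace 'Win32' -Name 'Kernel32Query' -MemberDefinition @'
    [DllImport( "kernel32.dll" , SetLastError = true )]
    public static extern bool GetProcessWorkingSetSizeEx( IntPtr hProcess ,
        out UIntPtr lpMinimumWorkingSetSize , out UIntPtr lpMaximumWorkingSetSize , out uint Flags );
'@

$minimum = [UIntPtr]::Zero ; $maximum = [UIntPtr]::Zero ; $flags = [uint32]0
$process = Get-Process -Name 'leakyprocess'   ## illustrative process name
if( [Win32.Kernel32Query]::GetProcessWorkingSetSizeEx( $process.Handle , [ref]$minimum , [ref]$maximum , [ref]$flags ) )
{
    "Max working set $([uint64]$maximum / 1MB) MB, hard limit: $(( $flags -band 0x4 ) -ne 0)"
}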

There is a video here which demonstrates the script in action and uses Task Manager to prove that the working set limit is adhered to.

One way to implement this for a user would be to have a logon script that uses the -waitFor option as above to wait for the process to start, together with -loop so that the script keeps running and picks up further new instances of the process to be controlled; see the example below. To implement it for system processes, such as a leaky third-party service or agent, use the same approach in a computer start-up script.
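For example, a logon script might contain a line such as the following, combining the options already shown (the process name and limit are illustrative):

.\trimmer.ps1 -processes leakyprocess -hardMax -maxWorkingSet 100MB -waitFor leakyprocess -alreadyStarted -loop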

Once implemented, check that hard page fault rates are not impacting performance because the limit you have imposed is too low.
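One way to keep an eye on this is PowerShell’s Get-Counter cmdlet, sketched below. Note that the system-wide Pages Input/sec counter approximates hard faults (pages read from disk), while the per-process Page Faults/sec counter includes soft faults too, so treat the latter as an upper bound. The process name is illustrative:

# Sample fault rates every 5 seconds: system-wide pages read from disk
# (hard faults) and the per-process fault rate (hard + soft faults)
Get-Counter -Counter '\Memory\Pages Input/sec' , '\Process(leakyprocess)\Page Faults/sec' -SampleInterval 5 -Continuous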

The script is available here and use of it is entirely at your own risk.

Author: guyrleech

I wrote my first program, in BASIC, in 1980, was a Unix developer after graduation from Manchester University (Computer Science) and then became a consultant, initially with Citrix WinFrame, in 1995 and later into Terminal Server/Services and thence EUC. I currently hold the Citrix CTP, Microsoft MVP, VMware vExpert and Parallels VIPP awards. I invented and wrote the first few versions of the security product which is now Ivanti Application Control (formerly AppSense Application Manager). I now work as a freelance consultant-cum-developer, live in West Yorkshire, England; have a wife, three children, one grandchild and two dogs and was a keen competitive runner until health problems put an end to that fun.
