Jeremy Davis
Sitecore, C# and web development
Article printed from: https://blog.jermdavis.dev/posts/2025/memory-dump-docker-sitecore

Memory diagnostics for Sitecore under Docker

It's more manual, but it still works...

Published 24 February 2025

When I was writing about memory analysis for Sitecore recently I focused on a site running under Azure PaaS. But what if you're working in Docker locally? A good question, it turns out...

In an ideal world, this should be easy...

Working locally with an instance of Sitecore installed directly on your computer, this is pretty simple - you can either attach the Performance Profiler tools, or attach the debugger and look at the Diagnostic Tools pane. Either of these can give you useful data.

But if you fire up a recent Visual Studio and attach the debugger to the website process in a container, you'll get this:

The Diagnostic Tools pane in Visual Studio showing that it 'does not support the current configuration' when attached to a Docker process

It doesn't support being attached to a Docker-based process. And the Performance Profiler UI doesn't have an option to attach to a container.

So one approach that's worked for me is to do something slightly fancier, with some extra tooling:

Stealing a trick from Azure PaaS

For Sitecore on Azure we used a memory dump - so can that work for us in a Docker container? Yes it can! Though it does need a bit more effort.

For me, the most "developer friendly" way to grab a dump at runtime is with the SysInternals ProcDump tool. That's a free download, and it gives you a decent command line for triggering process dumps in different scenarios.

You can download the tool and add it to your base Docker image if you want (via your CM role's Dockerfile - but see below if you're going that way). But for a very simple developer test / blog post write-up, it's probably easiest to just copy it manually into the container you're debugging. I copied the files into a sub-folder of the Deploy/Website data volume for a Sitecore Docker instance:

The files for ProcDump copied into the deploy/website folder of a Sitecore Docker instance

That's a quick and easy way to make the files available in the container. But remember to clear the "you downloaded this from the internet" security flag on the ProcDump zip file before you extract the files and copy them...
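
If you'd rather script that step, a short bit of PowerShell can do the download, clear the security flag and extract the tool into the deploy folder. This is just a sketch - the destination path assumes a fairly standard solution layout, so adjust it to match your own folders:

# Fetch ProcDump from the Sysinternals site (this was the download URL at the time of writing)
Invoke-WebRequest -Uri "https://download.sysinternals.com/files/Procdump.zip" -OutFile ".\Procdump.zip"

# Clear the "you downloaded this from the internet" flag before extracting
Unblock-File -Path ".\Procdump.zip"

# Extract into a sub-folder of the deploy volume, so the files show up inside the container
Expand-Archive -Path ".\Procdump.zip" -DestinationPath ".\docker\deploy\website\tool"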

Now you need to get inside the running container to run ProcDump. So from the command line you can run:

docker exec -it <your-container-name> powershell


And that will get you a shell running inside your Sitecore container. From there you can change to the folder with ProcDump in it with cd .\tool\, and we're ready to capture some data.
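
Incidentally, if you're not sure what name to pass to docker exec, listing the running containers will show it. The exact names depend on your compose project, so treat this as an example - pick out the entry for your CM service:

docker ps --format "{{.Names}}"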

The key command line options to grab a dump now are:

.\procdump.exe -accepteula -ma w3wp "c:\deploy"


Using -accepteula bypasses any "are you sure you agree to the license" business. -ma asks for a full dump of the specified process, so we get all the data available. (You can take smaller dumps - see the documentation for further info.) Specifying w3wp for the process to dump means we get a copy of just what IIS (and hence our Sitecore website) is up to. And finally the "c:\deploy" specifies where we want the dump file written.

The Docker volume that maps the docker\deploy\website folder on your physical disk into the container is mounted at c:\deploy - it doesn't go straight to the IIS webroot folder due to a limitation of mapped volumes under Windows. (You can only map a volume to an empty folder.) So we need to write the dump there for it to show up on your host computer.

When you run that you'll see something like:

A terminal window showing the output of ProcDump running to capture a dump.

And you'll see the dump pop up on your disk in amongst your deployed code:

Windows Explorer showing the dump file written to disk in the shared docker volume for deploying to Sitecore

And now you can follow the same process as before, to load that dump into Visual Studio and take a look at what's going on with the memory.

Fancier business

ProcDump does have some clever extra features, such as waiting until a specific condition is met before doing anything. The one I've found most useful is "dump when a memory threshold is exceeded", which uses the -m switch.

For example, using -m 3000 ProcDump will sit and wait until your instance of IIS has used more than 3GB of RAM and then take the process dump:

A console window showing ProcDump running in 'wait for the process to use 3GB RAM' mode, and eventually triggering

That's really helpful if you want ProcDump to trigger once you get close to a memory problem due to some automated test against your instance.
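
Putting that together, the command is the same as the one above with the threshold switch added - the 3000 here is just an example value, so pick whatever limit suits your test:

.\procdump.exe -accepteula -ma -m 3000 w3wp "c:\deploy"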

But you can do other clever things, like triggering on exceptions being thrown, or when CPU usage (or other performance counters) exceeds a specified value. So you can get fairly fancy if you need to.
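
As a rough sketch of what those look like, something along these lines should work - though double-check the switches against the ProcDump documentation before relying on them:

# Take a dump when an exception is thrown (the 1 makes it trigger on first-chance exceptions too)
.\procdump.exe -accepteula -ma -e 1 w3wp "c:\deploy"

# Take a dump when CPU usage stays above 80% for 10 consecutive seconds
.\procdump.exe -accepteula -ma -c 80 -s 10 w3wp "c:\deploy"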

But overall, this gives you a lot more power for diagnosing issues in your local Docker-hosted code.

A better setup?

The examples above are as simple as possible to make for an easy read - but if you plan to use this for anything serious it would make sense to adjust this very minimal setup.

Writing the dumps to c:\deploy isn't really the best plan - inside the container there's a script which copies data from this folder into the website root, and you don't really want that to happen to a big dump file. So there are a couple of ways to fix this:

  • Change the entrypoint script's file-copy code to ignore dump files by default
    Inside the container, Watch-Directory.ps1 handles the copying - it's called by the entrypoint script with some parameters passed in. The defaults for which files are ignored are set in the definition of the $DefaultExcludedFiles variable in the watch script. That can be adjusted to add dump files:
    ...
    [Parameter(Mandatory = $false)]
    [array]$DefaultExcludedFiles = @(
          "*.user", "*.cs", "*.csproj", "packages.config",
          "*ncrunch*", ".gitignore", ".gitkeep", ".dockerignore",
          "*.example", "*.disabled",
          "*.dmp"),
    ...
    
    Now the files will appear in your deployment folder, but there will be no risk of them getting copied elsewhere.
  • Make your own folder and map it out of the container
    If your Dockerfile for the CM role creates a new folder (say c:\dump) and you make ProcDump.exe write to that folder, then it can be mapped out of the container fairly easily. A new volume can be added to the docker-compose.override.yml file's CM service:
    ...
      volumes:
        - ${LOCAL_DEPLOY_PATH}\platform:C:\deploy
        - ${LOCAL_DATA_PATH}\cm:C:\inetpub\wwwroot\App_Data\logs
        - ${HOST_LICENSE_FOLDER}:c:\license
        - ${LOCAL_DATA_PATH}\dump:c:\dump
    ...
    
    You'll have to remember to create \docker\data\dump (and likely drop a .gitkeep file in it, so it doesn't get deleted) as well. And now the dump files will appear in that folder.
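
    As a quick sketch of that step (assuming the usual docker\data folder layout), a couple of PowerShell lines will create the folder and the placeholder file:

    # Create the host folder for the new volume to map to
    New-Item -ItemType Directory -Path ".\docker\data\dump" -Force | Out-Null

    # Add a .gitkeep file so the otherwise-empty folder survives in source control
    New-Item -ItemType File -Path ".\docker\data\dump\.gitkeep" -Force | Out-Null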

And it's not just Docker

I should note this technique can also work if you're hosting your containers with Kubernetes. It does require some different configuration of course...
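
For example, getting a shell to run ProcDump means kubectl exec rather than docker exec. A rough sketch, assuming a Windows pod for your CM role - the pod and namespace names here are placeholders:

kubectl exec -it -n <your-namespace> <your-cm-pod> -- powershell

Getting the dump file back out of the pod is different too - kubectl cp can copy it down to your machine once it has been written.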
