Jeremy Davis
Sitecore, C# and web development
Article printed from: https://blog.jermdavis.dev/posts/2018/spotting-common-challenges-when-youre-doing-performance-tracing

Spotting common challenges when you're doing performance tracing

Published 05 February 2018
Updated 06 February 2018
Performance Sitecore ~5 min. read

I find myself doing quite a lot of work on performance for Sitecore websites at the moment. Whenever I do a similar job for a group of clients, I start to spot patterns in the sites I'm working on – and it struck me that there are some common performance issues that can be spotted just from the overview graphs you see when you collect trace data.

So to try and help you all improve the sites you ship, here are three that I've come across in a few projects recently:

1. The hard-working CPU

Hard Working CPU

What do you see:
Wide, mostly solid rectangles on the CPU-time graph for your threads. Sometimes all the requests look similar, and sometimes you'll see some short ones and some long ones.

Why does it look like this?
Simply put, you're working the CPU hard when a request looks like this. There are lots of potential reasons for this, but some common scenarios are:

  • Poor queries against the Sitecore content APIs. Especially where a big chunk of data gets returned, and is then filtered by Linq-to-Objects in memory. For example, work hard to avoid queries that look like this:

    var results = Sitecore.Context.Item
                    .Axes
                    .SelectItems("//*")
                    .Where(i => i.TemplateID == requiredTemplateID);

    because it's a classic performance bottleneck – it can fetch a lot of items from the database (which is itself a problem due to query time and cache churn) but then it filters them in memory, using up lots more CPU time and discarding a chunk of the data that Sitecore worked hard to fetch.
  • Complicated algorithms. Where you're doing big calculations, organising lots of data or parsing big chunks of XML/JSON.
  • Slow rendering code. Building HTML through lots of string concatenations, doing a large number of single item lookups etc.

What should you do about it?
Use your profiling tool to look at what bits of code consume most CPU time, and use that information to optimise your code where you can.

Try to push effort off to the database server for data queries by including the best filter clauses you can in the query itself. Where possible, reduce the scope of API queries to reduce the number of items to process. Try to make best use of both Sitecore's data and HTML caches to reduce the CPU effort involved in rendering your pages. And maybe you could replace a big API query with a ContentSearch query instead? Or make use of custom fields in your search index to pre-compute (at index time) that complex lookup you need, and just index the answer?
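For instance, the template filter from the Axes example above can be pushed into the Sitecore query itself, so unwanted items are never fetched and then thrown away. A minimal sketch, assuming requiredTemplateID holds the ID of the template you're after:

```csharp
// Filter by template inside the Sitecore query string, so the unwanted
// items are never materialised in memory only to be discarded by Linq.
var query = string.Format("//*[@@templateid='{0}']", requiredTemplateID);
var results = Sitecore.Context.Item.Axes.SelectItems(query);
```

The same caveat about broad "//*" axes still applies - narrowing the query scope (or moving to ContentSearch) helps even more than filtering alone.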

2. The memory ramp

Memory Ramp

What do you see:
The graph of memory in use over time rises as long as the app keeps running. Garbage collection cycles may cause a brief drop, but overall the upward trend continues as the app runs.

Why does it look like this?
The .Net runtime's Garbage Collector is good, but there are still ways for your app to leak (or appear to leak) memory. Something going on in your code is preventing the GC from taking back control of some of the memory you're using. In desktop applications this can be kind-of normal (if you keep adding more text and images to a Word document, Word has to keep asking for more memory...), but for web applications it's less common for that sort of state data to stick around in memory between requests. Some common scenarios that might lead to high memory usage are:

  • Memory is being referenced by a static field, event or object. Statics don't get garbage collected, as they live for the entire duration of the app-pool's life. So if a static keeps a reference to some other objects, those objects can't be garbage collected either. Imagine a "cache" object in static scope which never removes any data due to age, but just adds new things as it sees them.
  • Your code is allocating lots of big chunks of memory. Single objects over 85KB in size go on the "large object heap", where garbage collection behaves a bit differently. This heap is not usually compacted when memory is released, so while spaces are freed up, the total heap size allocated tends to grow more than the normal heap. That's because if an 86KB object is freed and then an 87KB object is allocated, the new one doesn't fit in the old space so more heap memory is required for the new allocation. You can spot this behaviour because the overall heap size grows, whilst the size of all your managed objects does not necessarily grow at the same rate.
  • Your code is doing interop with something unmanaged, and the external code relies on pointers to absolute memory locations in managed memory space. This requires objects to be "pinned" in place (so the GC cannot automatically compact them), which can lead to similar heap issues as large objects. Generally we web developers don't write code like this very much, but you might be using a 3rd party library which does. I've seen image manipulation libraries in native code cause this sort of issue in the past.
  • Failure to use IDisposable objects correctly. If code doesn't call Dispose() at an appropriate point, these objects can hang on to memory until the .Net runtime gets around to calling their finaliser method. You don't know how long it will be before that happens, so even if they're no longer held in memory by references, their heap space can still be occupied.
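To illustrate the first bullet, here's a minimal sketch (the names are hypothetical) of a static "cache" that can only grow. Because the dictionary is static it lives for the whole app-pool lifetime, so everything it references stays reachable and the GC can never reclaim it:

```csharp
using System.Collections.Generic;

// Anti-pattern: a static cache with no eviction policy. Every entry
// added here is kept alive for the lifetime of the app-pool, so the
// heap graph only ever grows.
public static class NaiveCache
{
    private static readonly Dictionary<string, byte[]> _items =
        new Dictionary<string, byte[]>();

    public static void Add(string key, byte[] data)
    {
        _items[key] = data; // entries are added but never aged out or removed
    }
}
```

A real cache needs some bound - a size limit, sliding expiry, or weak references - so the GC can eventually get the memory back.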

What should you do about it?
You can use your trace tool's memory usage recording to look and see what objects are causing the heap size to grow. Try and work out why the GC cannot release or compact them. Look for statics and the allocation of really big chunks of data, and try to refactor your code to reduce these issues. Look for objects that implement IDisposable and make sure they're implemented and used correctly.
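On the IDisposable point, a using block is the usual fix - a minimal sketch, with a made-up file path:

```csharp
// A using block calls Dispose() deterministically when the block exits,
// so the stream's handle and buffers are released immediately rather
// than occupying heap space until the finaliser eventually runs.
using (var stream = System.IO.File.OpenRead(@"C:\temp\example.dat"))
{
    // ... read from the stream ...
}
```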

Most memory profiling tools (including Microsoft's) have ways to help you find both how the number of objects changes over time, and why particular objects are staying in memory.

3. The waiting-for-stuff sawtooth

Waiting Sawtooth

What do you see:
The CPU-time trace for your request has a definite castle-crenelations look to it, where the thread goes from busy to quiet to busy again while it answers a single http request. This might be many short cycles, or fewer longer cycles.

Why does it look like this?
This is usually down to threads waiting for locks or for some sort of IO operation. Locks might be accesses to thread-safe collections, or use of lock(){} blocks in your code. Waiting for IO will most likely be calls to blocking methods like Read(), or async code which blocks on completion (for example via .Wait() or .Result) rather than awaiting it.

What should you do about it?
If your code includes locks, you'll need to consider if it's possible to refactor to remove or reduce the scope of these operations. Ideally, avoid code which needs to explicitly lock anything. If you have to have lock(){} blocks, try to ensure they wrap the smallest (and fastest) section of code you possibly can. You should also be very careful of any code which needs to lock two things - this is classic "deadlock" territory, and those issues can be really hard to debug...
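As a sketch of keeping lock scope small (ComputeSlowly is a hypothetical expensive operation), the slow work runs outside the lock and only the shared dictionary access is protected:

```csharp
using System.Collections.Generic;

public class CachedLookup
{
    private static readonly object _sync = new object();
    private static readonly Dictionary<string, string> _values =
        new Dictionary<string, string>();

    public string GetValue(string key)
    {
        string value;
        // Hold the lock only for the dictionary read...
        lock (_sync)
        {
            if (_values.TryGetValue(key, out value))
            {
                return value;
            }
        }
        // ...do the expensive work with no lock held...
        value = ComputeSlowly(key);
        // ...and re-acquire it briefly to store the result.
        lock (_sync)
        {
            _values[key] = value;
        }
        return value;
    }

    private string ComputeSlowly(string key)
    {
        // placeholder for some slow calculation or lookup
        return key.ToUpperInvariant();
    }
}
```

Note the trade-off: two threads might compute the same value concurrently, but neither ever blocks behind the slow work - which is usually the better deal for a web app.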

Where you're dealing with IO, the ideal scenario is to get rid of the need to block entirely. If you can refactor your code to allow operations to complete asynchronously in parallel with other operations then this is probably the best bet. C#'s async features can be very helpful for this. Otherwise, consider if you can reduce the effort involved in reads and writes - can you reduce the number of small operations by doing one big one, to increase efficiency? Or is it possible to cache data in memory and reduce the amount of I/O you're doing overall?
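A minimal sketch of the async approach - the thread goes back to the pool while the reads are in flight, instead of sitting idle in a blocking Read() call (the file paths are just illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;

public class AsyncIoExample
{
    public async Task<string> LoadBothAsync(string firstPath, string secondPath)
    {
        using (var first = new StreamReader(firstPath))
        using (var second = new StreamReader(secondPath))
        {
            // Start both reads before awaiting either, so the two
            // IO operations overlap rather than running sequentially.
            var firstTask = first.ReadToEndAsync();
            var secondTask = second.ReadToEndAsync();
            return await firstTask + await secondTask;
        }
    }
}
```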

Conclusions

As with anything performance related, I'd repeat the key theme from my Manchester User Group talk: Check your performance early and often. It's much easier (and hence cheaper) to change things early on in your project development cycle than later. And hopefully the comments above will help you spot some common problem patterns before they become a launch-day crisis for you...