A few weeks back I wrote about spotting site performance challenges in the patterns you might see in trace data. But over the years I've noticed another set of repeating patterns that can be relevant here: the ways a development team can find itself thinking and acting in the run-up to a project hitting problems.
If any of these resonate with you and your team, maybe it's time to take a step back and think about how you can improve things?
In the midst of a challenging project, it's very easy to let non-functional issues like speed get pushed to the back of your mind. That makes it very easy to miss them when they first appear: when a strong focus on other issues is combined with a decent development environment (a fast machine, load from only one user, and a smallish set of test content), the initial signs are hard to spot. Performance has to be pretty bad to distract from a focus on implementing new features.
So things tend to unravel once the code gets moved onwards: once the performance-specific tests begin, or the code gets hit with internet-scale load. As the content grows and the number of users increases, the code can grind to a halt. And late in the project, fixing these issues is harder work.
So: this is why I've said in a few posts before that it's important to set aside some time in your normal development cycle to think about performance issues. Put a few hours into each of your sprints to focus on how well the code will cope with load. Spin up the diagnostic tools in Sitecore and look at how many items a page actually reads.
A relatively small amount of effort with those tools can give you a useful indication of whether you have any issues you need to worry about.
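To illustrate the idea behind that kind of measurement, here's a minimal, hypothetical sketch in Python (deliberately not Sitecore's actual diagnostic API) of counting how many content items a single page render touches:

```python
from contextlib import contextmanager

class ItemReadCounter:
    """Hypothetical stand-in for a diagnostic tool that counts
    how many content items one page render actually reads."""
    def __init__(self):
        self.reads = 0

    def get_item(self, item_id, database):
        # A real CMS would fetch from the content database here;
        # for illustration we just record that a read happened.
        self.reads += 1
        return database.get(item_id)

@contextmanager
def measure_page(counter):
    # Reset before the render; inspect counter.reads afterwards.
    counter.reads = 0
    yield counter

# Usage: a pretend page render that reads three items.
database = {"home": "Home page", "nav": "Navigation", "footer": "Footer"}
counter = ItemReadCounter()
with measure_page(counter):
    for item_id in ("home", "nav", "footer"):
        counter.get_item(item_id, database)
print(counter.reads)  # → 3
```

If the equivalent number for one of your real pages is in the hundreds or thousands, that's exactly the early warning sign worth investigating before the load tests find it for you.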
The world is full of clever developers. And most of us enjoy focusing on challenging problems, and learning about interesting solutions to them. That's a good thing much of the time, but it can introduce some risk. As enterprise software gets more complex, it gets harder and harder for the average developer to understand all the details of it, but at the same time we're all pretty used to the "please just get this sorted" demands that can come down the chain of command. And that sometimes means just doing our best with something we don't sufficiently understand.
Under those sorts of circumstances it's not uncommon to find a group of enthusiastic developers who are struggling to gain the depth of experience that a challenging project might need. Whether that's because they're ASP.Net developers who "we'll hire now to fill a gap and train later", or because they're fresh out of a training course but have little real-world development experience, the outcome can be similar: not using aspects of the software in the way they were designed.
I've harped on before about the risks of using broad content queries in Sitecore code. These are a classic example of the sort of issue that gets created by developers who are enthusiastic but don't have a lot of experience. A query like that is easy to write, and it can get the right answer with a minimum of developer effort. But of course, as the content tree grows it gets progressively more expensive. And it's that sort of understanding that can be missing in some development teams.
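To make that cost concrete, here's a small illustrative sketch (in Python, not Sitecore's C# query API) of why a descendant-style "search the whole subtree" query does work proportional to everything beneath the node you start from:

```python
class Node:
    """A trivially simple content-tree node for illustration."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def descendant_query(root):
    """Visit every node under root. The work done by a
    descendant-style query grows with the whole subtree,
    regardless of how few items actually match."""
    visited = 0
    stack = [root]
    while stack:
        node = stack.pop()
        visited += 1
        stack.extend(node.children)
    return visited

def build_tree(depth, breadth):
    # Build a uniform tree: breadth children per node, to the given depth.
    if depth == 0:
        return Node("leaf")
    return Node("branch", [build_tree(depth - 1, breadth) for _ in range(breadth)])

small = build_tree(depth=2, breadth=3)   # 1 + 3 + 9 = 13 nodes
large = build_tree(depth=4, breadth=3)   # 1 + 3 + 9 + 27 + 81 = 121 nodes
print(descendant_query(small), descendant_query(large))  # → 13 121
```

The query that felt instant against a 13-item test tree does nearly ten times the work against the larger one, and real content trees keep growing after launch. A targeted query scoped to a known path avoids that trap.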
So: Alongside trying to add some time for performance review into your projects, it's also a good idea to try and make sure you have a sensible process for code review. Whether that's pair programming between experienced developers and newer staff or a formalised "all new features get reviewed before merging" approach, it's important to try and spot code patterns that can lead to performance issues, and educate developers about how to avoid them.
The other extreme from the "lack of skill" problem is where experience tips over into over-confidence. Sometimes when a problem comes up, an experienced team may try to solve it in a "clever" way that works around Sitecore rather than with it. Having seen a few of these, I can tell you it's not uncommon for the clever approach to end up being more of a problem than a solution in the long run.
If your project is wrapping framework functionality up in layers of custom caching, or ripping out standard providers or plug-ins in favour of custom code, you'd do well to ask yourself a simple question: Can we prove this is working better?
Now don't get me wrong – trying to improve or optimise the platform you're working on is not a bad thing in itself. But where I've seen teams fall down is in blind faith: a dogged determination to believe that what they're doing is making things better, without the measurements to back that up.
So: as mentioned above, meaningful testing and measurement are key to ensuring a project that includes these sorts of changes actually works. If you're building code to wrap or replace bits of the underlying framework in order to get better functionality or performance, you need to be able to prove to yourself that what you built actually does better than the default approach.
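As a sketch of what "prove it" can look like, here's a minimal, hypothetical benchmark harness in Python; the lookup functions and data are invented stand-ins for "the platform's default behaviour" and "our clever custom cache":

```python
import time

def benchmark(fn, iterations=10_000):
    """Time a callable over many iterations. This is the minimal
    kind of evidence needed before claiming a custom layer wins."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

# Hypothetical stand-ins for the two approaches being compared.
data = {i: str(i) for i in range(1000)}
cache = {}

def default_lookup():
    # The platform's out-of-the-box behaviour.
    return data[500]

def cached_lookup():
    # The team's custom caching wrapper.
    if 500 not in cache:
        cache[500] = data[500]
    return cache[500]

default_time = benchmark(default_lookup)
cached_time = benchmark(cached_lookup)
# Keep the custom layer only if the numbers actually support it.
print(f"default: {default_time:.4f}s, cached: {cached_time:.4f}s")
```

The point isn't the specific numbers; it's that both approaches get measured the same way, under conditions you can repeat, so the "is it actually better?" conversation is about data rather than faith. For real projects you'd do this with production-like content volumes and concurrent load, not a microbenchmark.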
It's also important to have the guts to walk away from an approach that isn't working. There's no shame in learning that what you tried doesn't work and moving on to something better...