A few weeks back I wrote about spotting site performance challenges in the patterns you might see in trace data. But over the years I've noticed another set of repeating patterns that can be relevant here: the ways a development team can find itself thinking and acting in the run-up to a project hitting problems.
If any of these resonate with you and your team, maybe it's time to take a step back and think about how you can improve things?
Things tend to unravel once the code moves on from development: when the performance-specific tests begin, or the code gets hit with internet-scale load. As the size of the content grows and the number of users increases, the code can grind to a halt. And late in the project, fixing these issues is much harder work.
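One cheap way to see this failure mode before launch day is to bulk-generate content in a development environment, so your tree looks more like production will. A minimal sketch, using Sitecore's item API, with a placeholder template ID you'd swap for one of your real page templates:

```csharp
// A minimal sketch for bulk-generating test content, so queries and components
// get exercised against a production-sized tree. The template ID below is a
// placeholder - substitute one of your real page templates.
using Sitecore.Data;
using Sitecore.Data.Items;
using Sitecore.SecurityModel;

public static class TestContentGenerator
{
    private static readonly TemplateID PageTemplate =
        new TemplateID(new ID("{11111111-1111-1111-1111-111111111111}"));

    public static void AddChildren(Item parent, int count)
    {
        // SecurityDisabler lets a scripted job create items without a user context.
        using (new SecurityDisabler())
        {
            for (int i = 0; i < count; i++)
            {
                parent.Add("TestItem" + i, PageTemplate);
            }
        }
    }
}
```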
So: This is why I've said in a few posts before that it's important to set aside some time in your normal development cycle to think about performance issues. Put a few hours into each of your sprints where you can focus on how well the code will cope with load. Spin up the diagnostic tools in Sitecore and look at how many items a page actually reads.
A relatively small amount of effort with those tools can give you a useful indication of whether you have any issues you need to worry about.
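As a sketch of the kind of quick spot check I mean (the class name and logging format here are just illustrative):

```csharp
// A quick spot check: how many items does this component's data access actually
// touch, and how long does it take? Run it against a realistically sized tree.
using System.Diagnostics;
using Sitecore.Data.Items;
using Sitecore.Diagnostics;

public static class PerformanceSpotChecks
{
    public static void LogDescendantCost(Item root)
    {
        var timer = Stopwatch.StartNew();
        Item[] descendants = root.Axes.GetDescendants();
        timer.Stop();

        Log.Info(
            $"Read {descendants.Length} items under '{root.Paths.FullPath}' in {timer.ElapsedMilliseconds}ms",
            typeof(PerformanceSpotChecks));
    }
}
```

If a single page is reading hundreds of items, you've found something worth investigating long before the load tests do.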
Under those sorts of circumstances it's not uncommon to find a group of enthusiastic developers who are struggling to gain the depth of experience that a challenging project might need. Whether that's because they're ASP.Net developers hired on a "fill the gap now, train them later" basis, or because they're fresh out of a training course with little real-world development experience, the outcome can be similar: not using aspects of the software in the way they were designed.
I've harped on before about the risks of using content queries which include `//*` in Sitecore code. These are a classic example of the sort of issue that gets created by developers who are enthusiastic but don't have a lot of experience. It's an easy query to write, and it can get the right answer with a minimum of developer effort. But of course, as the content tree grows it gets progressively more expensive. And it's that sort of understanding that can be missing in some development teams.
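To illustrate the difference, here's a sketch of the "easy" descendant-axis version alongside a search-index alternative. The index name and the "Product" template are assumptions for the example, not anything from a real project:

```csharp
using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;
using Sitecore.Data.Items;

public static class ProductLookups
{
    // The "easy" version: this descendant-axis query walks the whole tree under
    // the root item on every call, so its cost grows as editors add content.
    public static Item[] FindProductsTheSlowWay(Item siteRoot)
    {
        return siteRoot.Axes.SelectItems("//*[@@templatename='Product']");
    }

    // A cheaper pattern for "find all items of type X": ask a search index
    // instead of crawling the content tree item by item.
    public static SearchResultItem[] FindProductsViaIndex()
    {
        var index = ContentSearchManager.GetIndex("sitecore_web_index");
        using (var context = index.CreateSearchContext())
        {
            return context.GetQueryable<SearchResultItem>()
                .Where(result => result.TemplateName == "Product")
                .ToArray();
        }
    }
}
```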
So: Alongside trying to add some time for performance review into your projects, it's also a good idea to try and make sure you have a sensible process for code review. Whether that's pair programming between experienced developers and newer staff or a formalised "all new features get reviewed before merging" approach, it's important to try and spot code patterns that can lead to performance issues, and educate developers about how to avoid them.
If your project is wrapping framework functionality up in layers of custom caching, or ripping out standard providers or plug-ins in favour of custom code, you'd do well to ask yourself a simple question: Can we prove this is working better?
Now don't get me wrong – trying to improve or optimise the platform you're working on is not a bad thing in itself. But where I've seen teams fall down is in blind faith and dogged determination that what they're doing is actually making things better.
So: As mentioned above, meaningful testing and measurement are key to ensuring a project that includes these sorts of changes actually works. If you're building code to wrap or replace bits of the underlying framework in order to get better functionality or performance, you need to be able to prove to yourself that what you built is actually performing better than the default approach.
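Even a crude side-by-side timing is better than blind faith. A sketch, where the two delegates are hypothetical stand-ins for "the default approach" and "our clever replacement":

```csharp
using System;
using System.Diagnostics;

public static class ProveItHelper
{
    // A crude but honest comparison: run both implementations the same number
    // of times and see which one actually wins. The delegates are stand-ins
    // for whatever framework feature the project has wrapped or replaced.
    public static void Compare(Func<object> defaultApproach,
                               Func<object> customApproach,
                               int iterations = 1000)
    {
        var defaultTimer = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) defaultApproach();
        defaultTimer.Stop();

        var customTimer = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) customApproach();
        customTimer.Stop();

        Console.WriteLine($"Default: {defaultTimer.ElapsedMilliseconds}ms / {iterations} calls");
        Console.WriteLine($"Custom:  {customTimer.ElapsedMilliseconds}ms / {iterations} calls");
    }
}
```

If the custom numbers don't beat the defaults under realistic load, that's your answer.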
It's also important to have the guts to walk away from an approach that isn't working. There's no shame in learning that what you tried doesn't work and moving on to something better...