Jeremy Davis
Sitecore, C# and web development

Patching Kubernetes config

Published 29 March 2021
Kubernetes Sitecore ~3 min. read

Deploying Sitecore (or anything else) in containers has been a big learning curve for me. Every so often I come across a new aspect of the whole business that I've not seen before. This week, another agency's work showed me a new thing which might help with making changes to Kubernetes config. The approaches I'd seen to deployments involved pushing all of the Kubernetes config each time you want to release, but it turns out you may not need to do that...

The underlying challenge

All the early examples I'd seen for running container releases used the kubectl apply command to send the full set of configuration data out to a Kubernetes cluster. Sitecore provide you with a big pile of config files in their container release:

[Image: Config Files]

These files specify a load of configuration settings, but a critical one to consider for the release process is the image tag (version) which you want Kubernetes to pull and run for each of your roles. In your config files, there's a specific bit of YAML which describes this:

[Image: Image Tags]
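
If you've not seen these files, the relevant fragment of each role's deployment spec looks something like this (a sketch – the exact structure varies a little between roles, and the registry and image names here are placeholders):

spec:
  template:
    spec:
      containers:
      - name: sitecore-xp1-cm
        image: myrepository.azurecr.io/myclient-xp1-cm:latest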

The simplest thing to do here is specify latest as the version tag. If you deploy this config, Kubernetes will pull whatever image in your container registry is currently tagged latest – usually the most recent build. Which is fine if you're just pushing out the newest thing – but it doesn't work if you need to deploy another version. Maybe you need to roll back? Or maybe your QA team is swapping between different feature builds because they're testing multiple bits of work in a different order than they were built...

So to be able to release a specific version of your website to Kubernetes, you need to change this config to describe a specific version tag, not a generic one like "latest".

Where I'd started from

When I first put the deployment process together for this particular project, I went with what I thought was a fairly straightforward solution to this challenge.

When DevOps builds the containers, it tags them with the release details, using the build number and the source branch they were built from:

[Image: Applying Tags]
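
I can't paste the pipeline itself here, but the effect is roughly this (a PowerShell sketch with hypothetical values – in the real pipeline the build number and branch come from Azure DevOps variables like Build.BuildId and Build.SourceBranchName):

# Sketch: tag each role's image as "<build number>-<branch>" and push it
$buildNumber = "8823"            # e.g. the pipeline's Build.BuildId
$branchName = "example-branch"   # e.g. the pipeline's Build.SourceBranchName
$tag = "$buildNumber-$branchName"

docker tag myclient-xp1-cm "myrepository.azurecr.io/myclient-xp1-cm:$tag"
docker push "myrepository.azurecr.io/myclient-xp1-cm:$tag"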

The next build step used a bit of PowerShell to edit the default Kubernetes config files, replacing the default image tag with the same tag that the build had just applied to the images. The build then put all the Kubernetes config files into the release artefact, so they would be available when the release was run.
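
I don't have that script to hand, but the gist was something along these lines (a sketch – the folder, filter and tag are illustrative):

# Sketch: swap the default "latest" tag for this build's tag in every config file
$tag = "8823-example-branch"
Get-ChildItem -Path .\k8s -Filter *.yaml -Recurse | ForEach-Object {
    (Get-Content $_.FullName -Raw) -replace ':latest', ":$tag" | Set-Content $_.FullName
}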

So the release pipeline can download the artefact and apply the config files it contains, deploying the right set of images for that particular release.
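
The deploy step itself then boils down to something like this (folder and namespace names are illustrative):

kubectl apply -f .\k8s\ -n myNamespace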

An alternative approach?

But it turns out this may not be necessary. The kubectl command has a feature for patching configuration – changing part of it without deploying everything. So maybe the approach above can be refactored?

I've not had a chance to try this properly yet but:

The idea of applying a patch here is that you could separate your configuration into two parts. You can do an initial deployment using the files Sitecore supplies, with your platform-related changes in them. This could stick with the latest tag – since for an initial deployment you're not worrying about a particular release.

But then subsequently, you don't need to apply this full config again. You could create a patch which just sets the image tags. Maybe something like:

spec:
  template:
    spec:
      containers:
      - name: sitecore-xp1-cm
        image: myrepository.azurecr.io/myclient-xp1-cm:8823-example-branch


Since the image tag is the last bit of this text, it becomes much easier to generate at build time – no need for search-and-replace; simple string concatenation would work. And then you can put that fragment into your release artefact instead of the full configuration file. That does a better job of separating your infrastructure config from your release-specific config, which might be an improvement in the long term...
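
Generating that fragment at build time could then be as simple as this (another sketch, reusing the hypothetical names from above):

# Sketch: append this release's tag to a fixed template and write the patch file
$tag = "8823-example-branch"
$patch = @"
spec:
  template:
    spec:
      containers:
      - name: sitecore-xp1-cm
        image: myrepository.azurecr.io/myclient-xp1-cm:$tag
"@
Set-Content -Path .\cm-patch.yaml -Value $patch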

The release process can apply this with something like:

kubectl patch deployment cm -n myNamespace --patch-file ThatPatchAbove.yaml


It turns out you can also do this sort of patching without any files on disk – you express the patch as part of the command line, using a JSON representation of the YAML above:

kubectl patch deployment cm -n myNamespace -p '{"spec":{"template":{"spec":{"containers":[{"name":"sitecore-xp1-cm","image":"myrepository.azurecr.io/myclient-xp1-cm:8823-example-branch"}]}}}}'


So maybe it would be possible to have a release that didn't need to include any of the Kubernetes config in the release artefact? That's something I need to find time to test...

And another thing...

It turns out you can also use this trick to acquire a patched version of your config file if you want. You can specify a "dry run" parameter (so the command doesn't change the cluster – it just works out what the result of the operation would be) and ask kubectl to output the result to the console. That gives you back the result of applying your patch to a specific config. That might be the current state of the deployment (e.g. as specified by patch deployment cm above), or you can specify a source file with the -f parameter.

kubectl patch -n myNamespace -f cm.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"sitecore-xp1-cm","image":"myrepository.azurecr.io/myclient-xp1-cm:8823-example-branch"}]}}}}' --dry-run=client -o yaml > cm-patched.yaml


And that seems like a better solution for getting a file with the right version in it than my hacky "search and replace with PowerShell" original...
