Jeremy Davis
Sitecore, C# and web development
Article printed from: https://blog.jermdavis.dev/posts/2021/broken-unicorn-synchronisation

Broken Unicorn synchronisation

Published 18 January 2021

Ever had a tool that works reliably suddenly stop working? I had a problem like that recently, and it led to some experimentation that I think I may need to come back to later. So this is mostly so I can remember what I was doing when I get back to this. But as we move toward a more "platform agnostic" world, with more use of .Net Core on Linux, maybe there's something here that might help you too...

Setting the scene

I'm working on a client project right now, which has a pile of "new to me" technology in it. The big ones have been Docker and Kubernetes. But the client doesn't own TDS licenses, so serialisation for the solution uses Unicorn. Development has been chugging away quietly for a while – and the deployments to our internal QA infrastructure were working fine. Our DevOps build was collecting up all the Unicorn serialised items, ensuring they got to the target server, and then using the PowerShell script for "remote sync" to ask Unicorn on the CM box to pull in all the changed items.

Nothing surprising there.

But then the client sorted out some of their internal infrastructure for the project, and tried to run a deployment over to their kit instead of ours. And this one failed. The Unicorn sync step would not run.

After a bit of digging and thinking we hit on the likely cause: our internal releases were using Microsoft's hosted release agents running Windows – which are happy to run the .Net 4.x DLL that the Unicorn sync uses. But the client was using their own private release agents, which were running on Linux with PowerShell Core. It seems that configuration could not run the .Net 4 code, hence the failure.

Fixing the immediate problem

That was pretty easy – the client also had some Windows agents available on their network, so we split the Unicorn sync task out into a separate job which could run on a Windows agent. The main bulk of the release needed to stay on the Linux agents for network security – they have access to the Kubernetes APIs. But moving the sync part of the release allowed that code to run, and the process to complete.
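For reference, that split looks roughly like this in an Azure DevOps multi-job pipeline. This is just a sketch – the job names, pool names and scripts here are made up for illustration, not the client's actual configuration:

```yaml
jobs:
  - job: DeployToKubernetes
    pool:
      name: PrivateLinuxAgents       # hypothetical pool - Linux agents with access to the Kubernetes APIs
    steps:
      - script: kubectl apply -f ./deploy/      # the main deployment work stays on Linux

  - job: UnicornSync
    dependsOn: DeployToKubernetes    # run the sync only after the deployment succeeds
    pool:
      name: PrivateWindowsAgents     # hypothetical pool - Windows agents that can load the .Net 4.x DLL
    steps:
      - powershell: ./Sync-Unicorn.ps1          # the "remote sync" script runs here instead
```

The `dependsOn` is the important bit – it keeps the sync sequenced after the deployment even though it runs on a different agent.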

But it also piqued my interest...

Fixing things via a different route

What could I do if there wasn't a Windows agent available? Well the obvious answer is that the Unicorn Sync would need to be triggered using some .Net Core code. What would that involve?

Well the sync is basically two things: A PowerShell script that controls the process, and a DLL that includes some code for authentication. The DLL is available in a Github repo, so I took a fork of that...

It includes a few classes and some tests. It was using xUnit for the tests though – which for some reason I could not make run on the laptop I was doing this with. (Everything built and ran, but all the tests were marked as "not run", and the test log had a warning about a missing test adapter NuGet package.) So I took a quick detour and changed it over to MSTest, just to make some progress. Having done that, a couple of the tests didn't pass:

Failed Tests

Tests seem important for this, so I figured I should fix them. So what's the fault?

Test Error

Why is the logger null? Well, it looks like it's created that way in the current code:

private IChapServer CreateTestServer()
{
	var responseService = Substitute.For<ISignatureService>();
	var signatureResult = new SignatureResult { SignatureHash = "RESPONSE" };
	responseService.CreateSignature(Arg.Any<string>(), Arg.Any<string>(), Arg.Any<IEnumerable<SignatureFactor>>()).Returns(signatureResult);

	return new ChapServer(responseService, new InMemoryChallengeStore(null));
}


So the simplest fix is to give it what it wants – a substitute object:

private IChapServer CreateTestServer()
{
	var logger = Substitute.For<IChallengeStoreLogger>();
	var responseService = Substitute.For<ISignatureService>();
	var signatureResult = new SignatureResult { SignatureHash = "RESPONSE" };
	responseService.CreateSignature(Arg.Any<string>(), Arg.Any<string>(), Arg.Any<IEnumerable<SignatureFactor>>()).Returns(signatureResult);

	return new ChapServer(responseService, new InMemoryChallengeStore(logger));
}


And doing the same thing for the other failed test brings everything back to green...

Next step, then, is to add a .Net Core project (plus a test project), and work out what files it needs to contain. Looking at the PowerShell code used for the sync, the important bit seems to be the use of the SignatureService class – so that's a good starting point. Compiling that points out that ISignatureService, SignatureFactor and SignatureResult are needed too. And pulling SignatureServiceTests into the test project (along with the NuGet package for Fluent Assertions) allows it all to compile and the tests to pass.
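Setting up those projects is the usual .Net Core CLI routine – something along these lines, with project names that are my own choice for the fork rather than anything official:

```shell
# Hypothetical project names - adjust to match your fork
dotnet new classlib -n MicroCHAP.Core -f netstandard2.0   # the library holding SignatureService etc.
dotnet new mstest -n MicroCHAP.Core.Tests                 # MSTest project for the ported tests
dotnet add MicroCHAP.Core.Tests reference MicroCHAP.Core  # tests reference the library
dotnet add MicroCHAP.Core.Tests package FluentAssertions  # needed by SignatureServiceTests
```

Targeting .Net Standard 2.0 means the resulting DLL can be loaded by both PowerShell Core and Windows PowerShell, which is handy for this scenario.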

So with that built, the PowerShell module for running the sync needs a small tweak, because I changed the DLL name above – it needs to call Add-Type on MicroCHAP.Core.dll and then construct the SignatureService from this new namespace with New-Object – but that should allow the code to run.
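The tweak is only a couple of lines. Something like this, assuming the fork keeps the SignatureService class under a MicroCHAP.Core namespace – the exact names depend on how you set the fork up:

```powershell
# Load the .Net Standard build instead of the original .Net 4.x assembly
Add-Type -Path "$PSScriptRoot\MicroCHAP.Core.dll"

# Construct the signature service from the new namespace
# (hypothetical namespace and constructor argument - match whatever your fork uses)
$signatureService = New-Object MicroCHAP.Core.SignatureService($sharedSecret)
```

The rest of the sync script can stay as it was, since it only interacts with the service object.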

Attempting a test...

This weekend's tinkering didn't involve the client's systems – so I tried testing this using local Docker containers...

Clearly there's something I don't understand about how PowerShell Core images are built, or what the client's release agent config was. I tried running up a copy of mcr.microsoft.com/powershell using Linux containers in Docker, assuming this would show similar behaviour. But it was able to run the original code happily (and ran the new code too).
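If you want to repeat the experiment, the container test was along these lines – a sketch only; the mount path and script name are examples, and you'd substitute your own sync script:

```shell
# Run the sync script inside a Linux PowerShell Core container,
# mounting the current folder (containing the script and DLLs) into it
docker run --rm -it \
  -v "$(pwd):/scripts" \
  mcr.microsoft.com/powershell \
  pwsh -File /scripts/Sync-Unicorn.ps1
```

If the .Net 4.x DLL really can't load under that image, you'd expect the Add-Type call in the original script to fail here – which is exactly what didn't happen in my test, hence the open question.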

So I'll need to find some time to come back to this at some point, when I can do a more accurate test, and verify this actually fixes the issue. So for the moment I'll leave this here as a note to myself for later, along with the tweaked code on Github. And maybe it'll be of use to someone else too...
