Jeremy Davis
Sitecore, C# and web development
Article printed from: https://blog.jermdavis.dev/posts/2019/ooops-looks-like-wffm-broke-analytics

Ooops... Looks like WFfM broke analytics...

Published 18 March 2019

I got a fun support ticket recently, for a client on Sitecore v8.1 whose analytics processing had stopped working. Their outstanding queue of raw analytics data waiting to be aggregated was slowly growing, and some functionality which relied on recent analytics data being available had stopped working. Trying to fix this led me to two issues related to how Web Forms for Marketers data is processed in analytics – and they seemed like issues that other people who do support work might need info on...

First off, how do you know the analytics queue is growing anyway?

Good question! The queue lives in the "Processing Pool" collection in MongoDB's "Tracking Live" database. So you can spin up your favourite Mongo analysis tool and count the rows in that table. As a Robo 3T/RoboMongo user, I run the query `db.getCollection('ProcessingPool').count()` and look at the results:

(Screenshot: the ProcessingPool collection count in Robo 3T)

You can watch this number over time, and see what's happening to your queue...

While I was doing this I realised being able to get this data in a more automated way would be helpful, so I worked out how to do the same query in PowerShell:

# Load the legacy MongoDB C# driver that ships with Sitecore 8.1
Add-Type -Path 'C:\inetpub\wwwroot\yourWebsiteFolder\Website\bin\MongoDB.Driver.dll'

# Connect to the analytics ("tracking live") database
$client = New-Object -TypeName MongoDB.Driver.MongoClient -ArgumentList "mongodb://yourMongoServer:yourPort"
$server = $client.GetServer()
$database = $server.GetDatabase("tracking_live")

# Count the documents currently queued in the ProcessingPool collection
$val = $database.GetCollection('ProcessingPool').Count()

Write-Host "----"
Write-Host "At: $(Get-Date -Format 'dd/MM/yyyy HH:mm')"
Write-Host "Count: $val"


Note that Sitecore processes this data in batches, so the number in this collection will tend to climb for a while until the processing engine kicks in and works through a batch. That's why you need to watch it over a few minutes to see what the trend actually is.
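If you want to keep an eye on that trend without re-running the script by hand, you can wrap the same count in a simple polling loop. This is just a rough sketch along the lines of the script above – the driver path, server address and sample interval are placeholders you'd swap for your own values:

# Sketch: poll the ProcessingPool count once a minute so you can watch the trend.
# Uses the same legacy MongoDB driver that ships with Sitecore 8.1, as above.
Add-Type -Path 'C:\inetpub\wwwroot\yourWebsiteFolder\Website\bin\MongoDB.Driver.dll'

$client = New-Object -TypeName MongoDB.Driver.MongoClient -ArgumentList "mongodb://yourMongoServer:yourPort"
$collection = $client.GetServer().GetDatabase("tracking_live").GetCollection('ProcessingPool')

$previous = $null
while ($true)
{
    $current = $collection.Count()
    $change = if ($previous -ne $null) { $current - $previous } else { 0 }
    Write-Host "$(Get-Date -Format 'dd/MM/yyyy HH:mm') Count: $current (change since last sample: $change)"
    $previous = $current
    Start-Sleep -Seconds 60
}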

And on the server I was looking at, this queue was pretty huge 😉

So what problems was I seeing?

Issue One:

The first big pile of errors I found in the logs looked like this:
4544 13:07:23 ERROR Exception when storing an aggregation result into reporting database. Item will be postponed and retried later.
Exception: System.Data.SqlClient.SqlException
Message: Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Failed to insert or update rows in the [Fact_FormSummary] table.
Source: .Net SqlClient Data Provider
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Commit(SqlDataApi api, IReportingStorageItemBatch batch, Func`2 filter)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.<>c__DisplayClass6.<StoreBatch>b__1()
   at Sitecore.Data.DataProviders.NullRetryer.ExecuteNoResult(Action action, Action recover)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.StoreBatch(IReportingStorageItemBatch batch)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Store(IReportingStorageItemBatch batch)


Luckily for me, Google solved this one quickly, as Dmytro Shevchenko has done a really helpful post covering this issue.

Applying his patch got rid of these log errors, and the processing queue started to go down again...

Issue Two:

...but after a day or so of happy processing, a new error appeared:
6680 10:27:47 ERROR Exception when storing an aggregation result into reporting database. Item will be postponed and retried later.
Exception: System.Data.SqlClient.SqlException
Message: Column, parameter, or variable @p59. : Cannot find data type Fact_FormSummary_Type.
Source: .Net SqlClient Data Provider
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds, Boolean describeParameterEncryptionRequest)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Commit(SqlDataApi api, IReportingStorageItemBatch batch, Func`2 filter)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.<>c__DisplayClass6.<StoreBatch>b__1()
   at Sitecore.Data.DataProviders.NullRetryer.ExecuteNoResult(Action action, Action recover)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.StoreBatch(IReportingStorageItemBatch batch)
   at Sitecore.Analytics.Aggregation.SqlReportingStorageProvider.Store(IReportingStorageItemBatch batch)


Looking at the database, I couldn't find anything called "Fact_FormSummary_Type" in sys.objects, so the error was probably accurate, even if it wasn't very helpful.
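That message reads like SQL Server can't find a user-defined table type (the kind of type used for table-valued parameters), so another way to double-check is to query the sys.table_types catalog view directly. A minimal sketch – the connection string here is a placeholder, and you'd use the reporting database details from your ConnectionStrings.config:

# Sketch: check whether the user-defined table type exists in the reporting database.
# The connection string below is a placeholder for your "reporting" connection string.
$connectionString = "Server=yourSqlServer;Database=yourReportingDb;Integrated Security=true"
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()

$command = $connection.CreateCommand()
$command.CommandText = "SELECT COUNT(*) FROM sys.table_types WHERE name = 'Fact_FormSummary_Type'"
$exists = [int]$command.ExecuteScalar()

Write-Host "Fact_FormSummary_Type exists: $($exists -gt 0)"
$connection.Close()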

Sadly Google didn't help me here, as the only reference it turned up was a community forum post about a different message. The fix for that talked about a permissions issue with the database, so I checked that anyway – but the connection string user for the site I was working on appeared to have those permissions already.

Having done a bit more digging and made no progress, I tried speaking to Sitecore Support. They quickly pointed out that this is a known bug. Though sadly a poorly documented one – hence this post.

They say the fix is included in WFfM v8.2 Update 6 – so they recommend updating your Sitecore / WFfM instance to at least that version if possible. But if you can't do that, then speak to support about Bug #120569 to get access to their patch.

Having applied that patch to the instance of Sitecore I was looking at, I stopped getting this error in my log, and my processing queue started going down again.

And hopefully that's the last of these issues I need to resolve before the server gets its queue emptied...
