.net, Troubleshooting

The Entity Framework provider type ‘System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer’…

You receive the following error when running an app with an Entity Framework dependency:

The Entity Framework provider type ‘System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer’ registered in the application config file for the ADO.NET provider with invariant name ‘System.Data.SqlClient’ could not be loaded. Make sure that the assembly-qualified name is used and that the assembly is available to the running application

This issue can occur if you have the correct references to Entity Framework in your class library, but not in your main executable project in Visual Studio.

In my case the fix was simply to use NuGet to install the relevant Entity Framework packages in both the business logic project and the main executable project.
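For completeness, another workaround you sometimes see (a sketch only – installing the NuGet package, as above, was all I actually needed) is to force a compile-time reference to the SQL Server provider from the class library that owns your EF model, so that EntityFramework.SqlServer.dll becomes a hard build dependency and gets copied to the executable’s output folder:

using System.Data.Entity.SqlServer;

internal static class EntityFrameworkProviderPin
{
    // Never called at runtime; merely referencing SqlProviderServices here makes
    // EntityFramework.SqlServer.dll a real dependency of the project, so the build
    // copies it to the output folder alongside EntityFramework.dll.
    private static object UseSqlServerProvider()
    {
        return SqlProviderServices.Instance;
    }
}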

.net, Troubleshooting, Visual Studio

Stopping vshub Requests Spamming Fiddler

I’ve been having some issues lately when debugging ASP.NET web applications using Visual Studio and Fiddler. If you use Fiddler whilst running the app in debug mode, you’ll find it gets spammed with traffic sent to Visual Studio as your app runs.

The key symptom is that you have hundreds of requests that look something like:

http://localhost:49161/vshub/bb195f2e0d5c4765b9411f7bf479d012/DataWarehouseModule/dataWarehouse/getResult/


Aside from being slightly irritating, the volume of traffic is sufficient to make Fiddler borderline useless as all genuine traffic will be drowned out.

I didn’t have much luck applying standard Fiddler Filters for some reason. Most likely I was just wrong on the syntax for the exclusion.

Digging in a bit further, the traffic is being sent to the Diagnostic Tools window in Visual Studio and is used to populate those kinda cool/kinda horrendously distracting memory and processor charts.

Given that I don’t spend my entire Visual Studio day preoccupied with memory usage and CPU load it seems a little overkill to be sending that data by default. As such, to stop the spamming, open the Diagnostic Tools window (Ctrl + Alt + F2) and deselect the Memory Usage and CPU Usage tools as shown below:

Diagnostic Tools Window

This should stop the spam traffic, but you will need to remember to manually re-enable the Diagnostic Tools if you are doing some profiling that requires them.

 

Disaster Recovery, High Availability, Performance, Recovery, SQL Server, Troubleshooting

How to halve your database recovery time in 60 seconds

Introduction

Picture the scene. You’re sitting at your desk one morning when a user bursts in in a blind panic and tells you your company’s most critical system is down and your business is currently dead in the water.

Hundreds, maybe thousands of users are now sitting idle wondering what’s going on. Scores of customers are also getting increasingly irate with each passing second. Senior management are asking what the hell is going on.

After some frantic phone calls and emergency investigation work you realise that something catastrophic has happened to the single SQL Server database that underpins your entire operation. The damage to the database is so bad that the only way to recover it is a full database restore from last night’s backup.

Oh, and the database is 500 GB+

  • Let’s leave aside the time it takes you to figure out roughly what’s going on
  • Let’s leave aside the time it takes for you to decide that a restore is the only option
  • Let’s also leave aside the fact that you have no replacement hardware and assume you have a perfectly sized location ready and waiting for you (really?)
  • Let’s leave aside any time that may be required to find and transfer backups to where they need to be
  • Let’s assume we don’t have the added complexity of needing to figure out how to do a transaction log restore because you’ve never actually needed to do one

All we’re interested in is this: how long would it take, from the moment you run the RESTORE command, for the database to be fully recovered?

Or put another way, how long would your business be well and truly up shit creek?

Instant File Initialisation

If you would be the poor hapless schmuck who has to paddle the business out of said creek using only your hands, you owe it to yourself to take a look at whether your databases could take advantage of Instant File Initialisation.

If you deal in “large” databases, you should check this out right now because this feature may turn out to be the difference between a shit day and a very shit day.

Instant File Initialisation is a feature introduced in SQL Server 2005 that allows SQL Server to allocate MDF space instantly without having to zero initialise it first.

There is a range of scenarios this improves (creating and autogrowing databases amongst them), but by far the most important is database recovery.

The concept of zero initialisation probably doesn’t mean that much to you if you are dealing with small-ish databases. However, if you deal in databases in the hundreds-of-gigabytes range or larger then you really need to understand the impact it will have.

In the scenario I described above, as part of recovering a 500 GB database SQL Server would have to write ~500 GB of zeros to the recovered data file before even starting to restore actual data.

I’m not going to explain how to switch Instant File Initialisation on or provide much background material on the feature. Far smarter people than I have already done this.

Check out the following couple of articles for the necessary details:

  • Kimberly Tripp did a great article on this back in the day. She also gave the most understandable description I’ve seen of why some hyper security-conscious setups might not want to do this (but in fact probably still should)
  • You can also look here for the basic details on how to enable this feature.

In the rest of this post I’m going to show a quick test I did on a 260 GB database to demonstrate the scale of improvement this simple change can have.

The Test

Equipment

  • HP EliteBook
  • Quad-core i7
  • 32 GB RAM
  • C Drive: 7200 RPM disk with 20 GB SSD cache
  • E Drive: 7200 RPM external disk over USB 3.0

Database to Recover:

  • 123 GB compressed backup
  • 267 GB Data file
  • 65 GB Log file

To avoid disk contention as much as possible, the backup file is located on the E drive and the database will be recovered to the C drive. Not exactly server hardware, but it’s the best I could do!
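For anyone wanting to repeat the experiment, each test is just a timed RESTORE of the same backup (reading from the E drive, writing to the C drive). The sketch below shows one way of capturing the elapsed time from code; the database name, logical file names and paths are all made up for illustration:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class TimedRestore
{
    static void Main()
    {
        // The database name, logical file names and paths here are illustrative only.
        const string restoreSql = @"
RESTORE DATABASE [BigDb]
FROM DISK = N'E:\Backups\BigDb.bak'
WITH MOVE N'BigDb_Data' TO N'C:\SQLData\BigDb.mdf',
     MOVE N'BigDb_Log'  TO N'C:\SQLData\BigDb_log.ldf',
     REPLACE, STATS = 5;";

        using (var connection = new SqlConnection(@"Server=.;Database=master;Integrated Security=true"))
        using (var command = new SqlCommand(restoreSql, connection))
        {
            command.CommandTimeout = 0; // a multi-hour restore will blow the default 30 second timeout
            connection.Open();

            var stopwatch = Stopwatch.StartNew();
            command.ExecuteNonQuery();
            stopwatch.Stop();

            Console.WriteLine("Restore completed in " + stopwatch.Elapsed);
        }
    }
}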

Test 1: Instant File Initialisation Off

  • Total Time to Recover: 2 hours 15 minutes

Test 2: Instant File Initialisation On

  • Total Time to Recover: 1 hour 19 minutes

Conclusions

In a recovery crisis, having Instant File Initialisation enabled could cut your restore time by close to half.

In the scenario we have above, we could improve the situation even further by trying to minimise the size of the log file. At 65 GB we still need to zero initialise a substantial amount of data, because, unlike data files, the log file must always be zero initialised before use. If we could control the size of the log file (perhaps through changing bulk import or indexing strategies) we could potentially minimise this cost and reduce the time a bit further.

I say a bit because as far as I can tell SQL Server seems to recover both the data file and log file in parallel rather than sequentially. I’ve never looked in detail at what exactly SQL Server is doing whilst writing to both the data file and log file at the same time, but at a guess it could be recovering the data file whilst simultaneously zeroing out the log.

This means it’s not as though the recovery is completely stalled whilst zero initialisation takes place on the log.

If we extrapolate the numbers up a bit, a 500 GB database might be expected to restore in about 4 – 5 hours. With Instant File Initialisation enabled that number might be around the 2 – 2.5 hour mark.

This probably won’t be much comfort the day this happens to you for real, but at least you’ll know that you’ve made the best you could out of a pretty bad hand.

I hope this helps someone!

Dynamics CRM, Troubleshooting

Fix: Table alias XYZ is not unique amongst all top-level table and join aliases

In CRM 2011 you can sometimes get the following sort of error message out of the blue:

Table alias a_9707c8e6f503e311835f00155d027e01 is not unique amongst all top-level table and join aliases

This will happen when you try to switch to a System View.

In this post I’m going to offer you my best guess as to when/why this happens and show you how to fix it.

When does this issue occur?

It could be coincidence, but the issue has happened to me when both of the following conditions are true:

  1. You have a view that shows a column from a related entity. For example, you have a System View showing Opportunities and that view pulls the geographic Region from the Account entity.
  2. You use the “Save As” feature to create a copy of a System View that you then go on to customise. You might do this if you have to create a lot of nearly identical views and don’t want to start from scratch with each one.

The issue basically occurs because at some point whilst a save is taking place, CRM corrupts the FetchXml portion of the view definition. In particular the FetchXml will be created containing duplicate link-entity sections.

To use SQL as an analogy, it would be as though the tool you were using to create the SQL generated two identical JOINs by mistake and used the same alias in each. When you come to execute that – the system calls you out on it.

You can see a slightly more extreme example of this where the FetchXml has been corrupted not once but twice:

Error in link-entity FetchXml
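If you want to confirm that this is what you are dealing with, a quick scan of the view’s FetchXml for repeated aliases does the job. This is only a sketch – the FetchXml string below is a made-up illustration of the kind of duplication described above:

using System;
using System.Linq;
using System.Xml.Linq;

class FetchXmlAliasCheck
{
    static void Main()
    {
        // Paste the suspect view's FetchXml here; this example string is invented.
        const string fetchXml = @"<fetch><entity name='opportunity'>
  <link-entity name='account' from='accountid' to='customerid' alias='a_9707c8e6f503e311835f00155d027e01' />
  <link-entity name='account' from='accountid' to='customerid' alias='a_9707c8e6f503e311835f00155d027e01' />
</entity></fetch>";

        var duplicates = XDocument.Parse(fetchXml)
            .Descendants("link-entity")
            .Where(le => le.Attribute("alias") != null)
            .GroupBy(le => (string)le.Attribute("alias"))
            .Where(g => g.Count() > 1);

        foreach (var group in duplicates)
            Console.WriteLine("Alias '{0}' appears {1} times", group.Key, group.Count());
    }
}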

How do I fix it?

There are two possible fixes for this: one that sometimes works but that I have reservations about, and another that is more time consuming but definitely works in all scenarios.

Option 1 – Just delete the related entity column from the view

I have two reservations about this so on balance I probably wouldn’t do it this way.

The first issue is that it won’t work in the scenario I showed above – where you have three errant “link-entity” sections. If you remove the problem field and republish, it will remove one of the three problem clauses, which still leaves you with duplicates.

The second issue is that even if you remove the problem column and it removes one of the duplicate link-entity sections – I have a nasty suspicion that it will leave a pointless link-entity block in place, which will probably store up issues for later. I haven’t checked this point, however, so your mileage may vary if you use this approach.

Option 2 – Manually fix the XML

The “nuclear” option is just to export the primary entity via a solution and edit the relevant XML directly. This is a bit of a pain, but if you go in and surgically remove only the duplicate clauses and reimport, the issue should be fixed.
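Alternatively, if you would rather script the repair than round-trip a solution file, the view definition lives in the savedquery entity and its fetchxml attribute can be updated through the SDK. The sketch below strips repeated aliases and republishes the owning entity – the view id and the entity name are placeholders, and you should eyeball the resulting FetchXml yourself rather than trust a blind de-dupe:

using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

class FixCorruptSystemView
{
    // 'service' is an already authenticated IOrganizationService;
    // the view id below is a placeholder for the corrupted System View.
    static void Repair(IOrganizationService service)
    {
        var viewId = new Guid("00000000-0000-0000-0000-000000000000");

        var view = service.Retrieve("savedquery", viewId, new ColumnSet("fetchxml"));
        var fetchXml = XDocument.Parse(view.GetAttributeValue<string>("fetchxml"));

        // Keep the first link-entity for each alias and drop the duplicates.
        fetchXml.Descendants("link-entity")
            .Where(le => le.Attribute("alias") != null)
            .GroupBy(le => (string)le.Attribute("alias"))
            .SelectMany(g => g.Skip(1))
            .ToList()
            .ForEach(duplicate => duplicate.Remove());

        view["fetchxml"] = fetchXml.ToString(SaveOptions.DisableFormatting);
        service.Update(view);

        // Publish the owning entity so the repaired view is picked up.
        service.Execute(new PublishXmlRequest
        {
            ParameterXml = "<importexportxml><entities><entity>opportunity</entity></entities></importexportxml>"
        });
    }
}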

HTH

Update: I’ve just found this MS article, which indicates the issue is more to do with modifying filters on views than my theory about the Save As button. The article also indicates that the problem was fixed in UR15, but this will not retrospectively repair pre-existing views that are already corrupt.

 

Dynamics CRM, Outlook, Troubleshooting

Tracing Network Traffic Between the Outlook Plugin and Dynamics CRM

Recently we had an issue where appointments deleted in CRM via the web application were not being deleted from certain users’ Outlook calendars.

Now, I enjoy a fruitless game of email ping pong with MS tech support as much as any man but this time I thought I’d take a look at how syncing works directly. I have to confess – the sync process has always been a bit of a black box to me so I was curious to see how it worked.

One of the key areas of interest is going to be what information is sent from CRM to the Outlook add-in when a sync operation takes place. This would at least give us a fighting chance of understanding whether we are dealing with a server-side issue or something in the way the client add-in processes the change instruction.

Given that we know we want to take a look at the web service calls between the client and the CRM Organization.svc endpoint, my first port of call was the awesomeness that is Fiddler: http://www.telerik.com/download/fiddler

The problem we immediately face is that although we can see the back and forth easily enough – the message payload is encrypted, even in the absence of HTTPS:

I don’t fancy our chances of decrypting that any time soon…

Unlike HTTPS encryption, which we could circumvent with Fiddler acting as a decrypting proxy, the payload-based encryption employed here requires a different approach.

In this instance we are going to rely on enabling WCF tracing within the Outlook add-in to output our traffic in the clear. To achieve this, do the following:

1. Create a new folder for the resulting WCF trace files e.g. C:\WCF Trace\

2. Navigate to the install folder of the CRM Outlook Client – in my case C:\Program Files\Microsoft Dynamics CRM\Client

3. Locate the configuration file named CrmClient<GUID>.config. Add the following section of code to the file, just within the <configuration> element:


  <system.diagnostics>
    <sources>
      <source name ="System.ServiceModel.MessageLogging"
              switchValue="Verbose, ActivityTracing">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add name="xml" type="System.Diagnostics.XmlWriterTraceListener"
           traceOutputOptions="LogicalOperationStack"
           initializeData="C:\WCF Trace\CRMTrace.svclog" />
    </sharedListeners>
    <trace autoflush="true" />
  </system.diagnostics>

  <system.serviceModel>
    <diagnostics>
      <messageLogging
           logEntireMessage="true"
           logMalformedMessages="false"
           logMessagesAtServiceLevel="true"
           logMessagesAtTransportLevel="false"/>
    </diagnostics>
  </system.serviceModel>

In my case this leaves the overall file looking like:


<configuration>
<system.net>
	<defaultProxy useDefaultCredentials="true" />
	<settings>
		<servicePointManager expect100Continue="false" />
	</settings>
</system.net>

  <system.diagnostics>
    <sources>
      <source name ="System.ServiceModel.MessageLogging"
              switchValue="Verbose, ActivityTracing">
        <listeners>
          <add name="xml" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add name="xml" type="System.Diagnostics.XmlWriterTraceListener"
           traceOutputOptions="LogicalOperationStack"
           initializeData="C:\WCF Trace\CRMTrace.svclog" />
    </sharedListeners>
    <trace autoflush="true" />
  </system.diagnostics>

  <system.serviceModel>
    <diagnostics>
      <messageLogging
           logEntireMessage="true"
           logMalformedMessages="false"
           logMessagesAtServiceLevel="true"
           logMessagesAtTransportLevel="false"/>
    </diagnostics>
  </system.serviceModel>

</configuration>

At this point I would use the CRM Diagnostics tool to disable automatic synchronisation by the add-in, for the simple reason that the WCF traces can fill up fast and can become pretty difficult to decipher even with minimal traffic.

Either way, when you start Outlook and a sync is started you should see a file appear in your designated trace folder. Make sure you have initiated at least one (and ideally only one) sync operation with CRM and then close Outlook.

Once we have our WCF trace we need a tool to read it. I’ve been using the WCF Service Trace Viewer (SvcTraceViewer.exe) that comes as part of the Windows SDK.

Once the viewer is installed, open the trace file. Move to the Message tab and you should get a view similar to the following:

WCF Tracing between Outlook and CRM

Now – I am not going to describe the full intricacies of the Outlook CRM Sync process – Microsoft have already done a pretty good job of that in whitepapers published on MSDN. However I am going to give you a couple of pointers about how to use the trace viewer tool as it can be a little tricky finding what we actually want.

In the above screenshot, notice the various entries ending /Execute and /ExecuteResponse. These pairs detail the outgoing request from the client and the subsequent response from the server shortly after. The messages that don’t end in /Execute or /ExecuteResponse can usually be ignored.

One of the first tips I have for you is to use a Filter to eliminate the junk rows as shown below:

Filtering the WCF Trace

This will give a much clearer view of the back and forth involved in doing a sync.

The next step is to find the method calls that we are actually interested in. The easiest way I have found to do this is to just jump through the messages in sequence whilst keeping an eye on the Parameters section shown below:

Spotting the right message in the trace

In the example above we can see that a request is going out to the Execute method and it is being passed “IsPrimaryClientSubscriptionClients” as the RequestName parameter and my Outlook client’s id as the ClientID parameter. Shortly afterwards the ExecuteResponse will come back with the server’s response.

Once you have focused in on the Execute/ExecuteResponse pair that you are interested in, you will then switch over to the XML tab to get the full details of the exchange:

WCF Tracing XML

And that’s it! Once you’ve managed to focus in on the messages of interest you at least stand a fighting chance of being able to figure out what’s going on!

One last note is that once you have done your tracing you should be sure to switch your config file back or comment out the section we added earlier. Your users won’t thank you for the performance or disk space hit you get from leaving Outlook spewing trace data 😉

Happy bug hunting!

Dynamics CRM, Security, Troubleshooting

Dynamics CRM – You do not have permission to access these records. AKA Sometimes CRM will lie right to your face

Sometimes CRM will give you the following message when you are attempting to assign a record to another user:

Insufficient Permissions
*You* do not have permissions to access these records. Contact your Microsoft Dynamics CRM administrator

Insufficient Permissions

Be aware that sometimes this message is flat-out misleading. If you are getting this error whilst logged in as a System Administrator, there is a very good chance that the permission issue isn’t with your account; it’s an issue with the user you are attempting to assign the record to.

If you are struggling to infer what the missing privilege is, the nuclear option is to run a CRM trace whilst attempting to assign the record. Run the trace and you should eventually find something like the following:

Missing read privilege on the Invoice entity

In our case we were attempting to assign an account to a new user. In our environment we don’t use the Invoice feature of CRM, meaning most users don’t even have the ability to Read Invoices. By an unhappy coincidence, a user who did have the ability to play with invoices had created a test record under the account in question. At some point much later we attempted to assign the account to one of the many users who don’t have the Read privilege – then BANG – unhelpful error message…
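If you want to confirm that sort of thing without pulling a full trace, you can also ask CRM directly what access the prospective owner has to the suspect related record. The sketch below uses RetrievePrincipalAccessRequest from the SDK; the service connection and both ids are placeholders:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

class CheckAssigneeAccess
{
    // 'service' is an already authenticated IOrganizationService; the ids are
    // placeholders for the related record (the rogue test invoice in our case)
    // and the user you are trying to assign the parent account to.
    static void Check(IOrganizationService service)
    {
        var request = new RetrievePrincipalAccessRequest
        {
            Target = new EntityReference("invoice",
                new Guid("00000000-0000-0000-0000-000000000000")),
            Principal = new EntityReference("systemuser",
                new Guid("00000000-0000-0000-0000-000000000000"))
        };

        var response = (RetrievePrincipalAccessResponse)service.Execute(request);

        // AccessRights is a flags enum (ReadAccess, WriteAccess, AssignAccess and so on).
        // If ReadAccess is missing here, the assignment is going to fail.
        Console.WriteLine(response.AccessRights);
    }
}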

Lesson: If you get the Insufficient Permissions error message as a System Administrator whilst assigning a record, it’s probably not your permissions that are the problem.

Start up a trace and it should hopefully lead you straight to the missing privilege

Good luck!