You receive the following when running an app with an Entity Framework dependency:
The Entity Framework provider type ‘System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer’ registered in the application config file for the ADO.NET provider with invariant name ‘System.Data.SqlClient’ could not be loaded. Make sure that the assembly-qualified name is used and that the assembly is available to the running application
This issue can occur if you have the correct references to Entity Framework in your class library, but not in your main executable project in Visual Studio.
In my case it was simply a matter of using NuGet to install the relevant packages in both the business logic project and the main executable project.
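For reference, the registration the error message is complaining about is this section of the executable’s App.config – installing the EntityFramework NuGet package in the executable project both adds this section and ensures EntityFramework.SqlServer.dll is copied to the output folder:

```xml
<entityFramework>
  <providers>
    <provider invariantName="System.Data.SqlClient"
              type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
  </providers>
</entityFramework>
```

If you’d rather not install the package in every project, another common workaround is to force a hard reference from code in the class library, e.g. `var _ = System.Data.Entity.SqlServer.SqlProviderServices.Instance;`, which stops the build from pruning the EntityFramework.SqlServer assembly.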
I’ve been having some issues lately when debugging ASP.NET web applications using Visual Studio and Fiddler. If you use Fiddler whilst running the app in debug mode, you’ll find it’s spammed with traffic being sent to Visual Studio as your app runs.
The key symptom is you have hundreds of requests that look something like:
Aside from being slightly irritating, the volume of traffic is sufficient to make Fiddler borderline useless as all genuine traffic will be drowned out.
I didn’t have much luck applying standard Fiddler filters for some reason – most likely I just got the exclusion syntax wrong.
Digging in a bit further, the traffic is being sent to the Diagnostic Tools window in Visual Studio and is used to populate those kinda cool/kinda horrendously distracting memory and processor charts.
Given that I don’t spend my entire Visual Studio day preoccupied with memory usage and CPU load it seems a little overkill to be sending that data by default. As such, to stop the spamming, open the Diagnostic Tools window (Ctrl + Alt + F2) and deselect the Memory Usage and CPU Usage tools as shown below:
This should stop the spam traffic, but you will need to remember to manually re-enable the Diagnostic Tools if you are doing some profiling that requires them.
Picture the scene. You’re sitting at your desk one morning and a user bursts in in a blind panic and tells you your company’s most critical system is down and your business is currently dead in the water.
Hundreds, maybe thousands of users are now sitting idle wondering what’s going on. Scores of customers are also getting increasingly irate with each passing second. Senior management are asking what the hell is going on.
After some frantic phone calls and emergency investigation work you realise that something catastrophic has happened to the single SQL Server database that underpins your entire operation. The damage to the database is so bad that the only way to recover it is a full database restore from last night’s backup.
Oh, and the database is 500 GB+
Let’s leave aside the time it takes you to figure out roughly what’s going on.
Let’s leave aside the time it takes for you to decide that a restore is the only option.
Let’s also leave aside the fact that you have no replacement hardware and assume you have a perfectly sized location ready and waiting for you (really?).
Let’s leave aside any time that may be required to find and transfer backups to where they need to be.
Let’s assume we don’t have the added complexity of needing to figure out how to do a transaction log restore because you’ve never actually needed to do one.
All we’re interested in is this: how long would it take, from the moment you ran the RESTORE command, for the database to be fully recovered?
Or put another way, how long would your business be well and truly up shit creek?
Instant File Initialisation
If you would be the poor hapless schmuck who has to paddle the business out of said creek using only your hands, you owe it to yourself to take a look at whether your databases could take advantage of Instant File Initialisation.
If you deal in “large” databases, you should check this out right now because this feature may turn out to be the difference between a shit day and a very shit day.
Instant File Initialisation is a feature introduced in SQL Server 2005 that allows SQL Server to allocate data file (MDF) space instantly, without having to zero initialise it first.
There are a range of scenarios that this improves (creating and autogrowing databases amongst them) but by far the most important scenario it improves is during a database recovery.
The concept of zero initialisation probably doesn’t mean that much to you if you are dealing with small-ish databases. However, if you deal in databases in the hundreds of gigabytes or larger, then you really need to understand the impact it will have.
In the scenario I described above, as part of a 500 GB recovery SQL Server would have to write ~500 GB of zeros to the recovered data file before even starting to restore actual data.
I’m not going to explain how to switch Instant File Initialisation on or provide much background material on the feature. Far smarter people than I have already done this.
Check out the following couple of articles for the necessary details:
Kimberly Tripp wrote a great article on this back in the day. She also gave the most understandable description I’ve seen of why some hyper security-conscious setups might not want to do this (but in fact probably still should).
You can also look here for the basic details on how to enable this feature.
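As a quick sanity check (rather than a definitive guide), you can see for yourself whether Instant File Initialisation is actually working by turning on trace flags 3004 and 3605, which log file zeroing activity to the SQL Server error log. The database name below is just a scratch name:

```sql
-- Log zero-initialisation activity to the SQL Server error log
DBCC TRACEON(3004, 3605, -1);

-- Create a throwaway database, then inspect the error log
CREATE DATABASE IFI_Check;
EXEC sp_readerrorlog;

-- If the log shows "Zeroing" messages for the .mdf file, Instant File
-- Initialisation is NOT enabled. The .ldf file is always zeroed regardless.

-- Clean up
DROP DATABASE IFI_Check;
DBCC TRACEOFF(3004, 3605, -1);
```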
In the rest of this post I’m going to show a quick test I did on a 260 GB database to demonstrate the scale of improvement this simple change can have.
Test Hardware:
HP EliteBook
Quad-core i7
C Drive: 7200 RPM disk with 20 GB SSD cache
E Drive: 7200 RPM external disk over USB 3.0
Database to Recover:
123 GB compressed backup
267 GB Data file
65 GB Log file
To avoid disk contention as much as possible, the backup file is located on E and the database will be recovered to C. Not exactly server hardware, but it’s the best I could do!
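For completeness, both tests used a plain restore along the following lines. The database and logical file names here are made up – you can check yours with RESTORE FILELISTONLY:

```sql
-- Illustrative restore: backup read from E, database recovered to C
RESTORE DATABASE BigDb
FROM DISK = N'E:\Backups\BigDb.bak'
WITH
    MOVE N'BigDb_Data' TO N'C:\SQLData\BigDb.mdf',
    MOVE N'BigDb_Log'  TO N'C:\SQLData\BigDb_log.ldf',
    STATS = 5;          -- report progress every 5%
```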
Test 1: Instant File Initialisation Off
Total Time to Recover: 2 hours 15 Minutes
Test 2: Instant File Initialisation On
Total Time to Recover: 1 hour 19 minutes
In a recovery crisis, having Instant File Initialisation enabled could come close to halving your restore time.
In the scenario we have above, we could improve the situation even further by attempting to minimise the size of the log file. At 65 GB we still need to zero initialise a substantial amount of data, because unlike data files the log file must always be zero initialised before use. If we could control the size of the log file (perhaps through changing bulk import or indexing strategies) we could potentially minimise this cost and reduce the time a bit further.
I say a bit because as far as I can tell SQL Server seems to recover both the data file and log file in parallel rather than sequentially. I’ve never looked in detail at what exactly SQL Server is doing whilst writing to both the data file and log file at the same time, but at a guess it could be recovering the data file whilst simultaneously zeroing out the log.
This means it’s not as though the recovery is completely stalled whilst zero initialisation takes place on the log.
If we extrapolate the numbers up a bit, if we were dealing with a 500 GB database we might expect that to restore in about 4 – 5 hours. With Instant File Initialisation enabled that number might be around the 2 – 2.5 hour mark.
This probably won’t be much comfort the day this happens to you for real, but at least you’ll know that you’ve made the best you could have out of a pretty bad hand.
In CRM 2011 you can sometimes get the following sort of error message out of the blue:
Table alias a_9707c8e6f503e311835f00155d027e01 is not unique amongst all top-level table and join aliases
This will happen when you try to switch to a System View.
In this post I’m going to offer you my best guess as to when/why this happens and show you how to fix it.
When does this issue occur?
It could be coincidence but the issue has happened to me when both the following conditions are true:
You have a view that shows a column from a related entity. For example, you have a System View showing Opportunities and that view pulls the geographic Region from the Account entity.
You use the “Save As” feature to create a copy of a System View that you then go on to customise. You might do this if you have to create a lot of nearly identical views and don’t want to start from scratch with each one.
The issue basically occurs because at some point whilst a save is taking place, CRM corrupts the FetchXml portion of the view definition. In particular the FetchXml will be created containing duplicate link-entity sections.
To use SQL as an analogy, it would be as though the tool you were using to generate the SQL created two identical JOINs by mistake and used the same alias in each. When you come to execute it, the system calls you out on it.
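To illustrate, the corrupted FetchXml ends up looking something like the mock-up below, with the same link-entity repeated under an identical alias. This is not the exact XML from my system – the attribute names are invented, and the alias is borrowed from the error message above:

```xml
<fetch version="1.0" output-format="xml-platform" mapping="logical">
  <entity name="opportunity">
    <attribute name="name" />
    <!-- the same join, duplicated with an identical alias -->
    <link-entity name="account" from="accountid" to="customerid"
                 alias="a_9707c8e6f503e311835f00155d027e01">
      <attribute name="new_region" />
    </link-entity>
    <link-entity name="account" from="accountid" to="customerid"
                 alias="a_9707c8e6f503e311835f00155d027e01">
      <attribute name="new_region" />
    </link-entity>
  </entity>
</fetch>
```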
You can see a slightly more extreme example of this where the FetchXml has been corrupted not once but twice:
How do I fix it?
There are two possible fixes for this: one that sometimes works but that I have reservations about, and another that is more time consuming but definitely works in all scenarios.
Option 1 – Just delete the related entity column from the view
I have two reservations about this so on balance I probably wouldn’t do it this way.
The first issue is that it won’t work in the scenario I showed above – where you have three errant “link-entity” sections. If you remove the problem field and republish, it will remove one of the three problem clauses, which still leaves you with duplicates.
The second issue is that even if you remove the problem column and it removes one of the duplicate link-entity sections, I have a nasty suspicion that it will leave a pointless link-entity block in place, which will probably store up issues for later. I haven’t checked this point however, so your mileage may vary if you use this approach.
Option 2 – Manually fix the XML
The “nuclear” option is just to export the primary entity via a solution and edit the relevant XML directly. This is a bit of a pain, but if you go in, surgically remove only the duplicate clauses and reimport, the issue should be fixed.
Update: I’ve just found this MS article, which indicates the issue has more to do with modifying filters on views than my theory about using the Save As button. The article also indicates that the problem was solved in UR15, but this will not retrospectively fix pre-existing views that are corrupt.
Every so often you might find that emails from users sit forever in a Pending Send state:
If you open the message you will see the message “This message has not yet been submitted for delivery” with no real guidance on why.
This issue comes up every so often so I’m going to highlight a few key things to check. Most of them you might be aware of, but the last two are a bit of a kicker and can cause you to waste a lot of time needlessly.
Step 1: Approve the Sending User’s Email Address
The user will only be able to send emails via CRM if they have an email address listed on their user account record and that address has been Approved. Many people forget this step.
Go to Settings -> Administration -> Users -> Open the user record and hit Approve Email on the record
Step 2: Ensure the User’s Email Access Type – Outgoing is set to “Email Router”
In order for mail to be picked up from CRM and dispatched by the email router, each user who needs to send email must have “Email Router” set as their Email Access Type, at least for the Outgoing option.
Step 3: Restart the Email Router Service
I know this seems stupid, but trust me. I have had numerous cases where settings seem to have no effect until the service is restarted. For better or worse, the CRM Email Router seems to like caching information like I like cake.
If nothing seems to be working, give it a try – you may save yourself a lot of time.
Step 4: If all else fails – enable logging on the router
I’m not a huge fan of the email router, but it does do some pretty decent logging when it’s playing up, and I do like that. If a restart doesn’t fix it, you can normally see what the issue is by examining the log file generated when you do the following:
Find the “Microsoft.Crm.Tools.EmailAgent.xml” configuration file in the installation directory of the router.
Find or add the <LogLevel> element and change it to 3 for advanced message level logging. See here for more details of the options
Set the following registry key to 3 as well: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSCRMEmail\LogLevel
Find or add the <LogFile> element and set it to something sensible e.g. <LogFile>C:\CRMRouterLog.txt</LogFile>
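Put together, the relevant elements of Microsoft.Crm.Tools.EmailAgent.xml should end up looking something like the fragment below. The exact position of these elements within the file may differ in your version, so edit in place rather than pasting blindly:

```xml
<!-- Fragment of Microsoft.Crm.Tools.EmailAgent.xml -->
<LogLevel>3</LogLevel>
<LogFile>C:\CRMRouterLog.txt</LogFile>
```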
Restart the service and you should start getting some decent information back on what the router is doing.
Remember to disable the logging once you’ve run your tests.
Update: There is a reference from one of the Microsoft CRM Support engineers here that seems to suggest that the act of enabling logging in itself causes certain issues to disappear. That would certainly tally with some behaviour I’ve seen where information seems to be cached somewhere and the email router is a bit erratic about when it updates its cache.
Recently we had an issue where appointments deleted in CRM via the web application were not being deleted from certain users’ Outlook calendars.
Now, I enjoy a fruitless game of email ping-pong with MS tech support as much as the next man, but this time I thought I’d take a look at how syncing works directly. I have to confess – the sync process has always been a bit of a black box to me, so I was curious to see how it worked.
One of the key areas of interest is going to be what information is getting sent from CRM to the Outlook add-in when a sync operation takes place. This would at least give us a fighting chance of understanding whether we are dealing with a server-side issue or something in the way the client add-in processes the change instruction.
Given that we know we want to take a look at the web service calls between the client and the CRM Organization.svc endpoint, my first port of call was the awesomeness that is Fiddler: http://www.telerik.com/download/fiddler
The problem we immediately face is that although we can see the back and forth easily enough – the message payload is encrypted, even in the absence of HTTPS:
Unlike HTTPS encryption, which we could circumvent using the Fiddler proxy, we have to take a different approach to the payload-based encryption employed here.
In this instance we are going to rely on enabling WCF tracing within the Outlook add-in to output our traffic in the clear. To achieve this, do the following:
1. Create a new folder for the resulting WCF trace files e.g. C:\WCF Trace\
2. Navigate to the install folder of the CRM Outlook Client – in my case C:\Program Files\Microsoft Dynamics CRM\Client
3. Locate the configuration file named CrmClient<GUID>.config. Add the following section of code to the file, just within the <configuration> element:
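The screenshot of the config section hasn’t survived here, but a standard WCF message-logging section along these lines should do the job. Note logMessagesAtServiceLevel="true", which is what captures messages after the payload encryption has been stripped off; the initializeData path just needs to match the folder created in step 1:

```xml
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="false"
                    maxMessagesToLog="5000" />
  </diagnostics>
</system.serviceModel>
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\WCF Trace\CrmClientMessages.svclog" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>
```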
At this point I would use the CRM Diagnostics tool to disable automatic synchronisation by the add-in, for the simple reason that the WCF traces fill up fast and can become pretty difficult to decipher even with minimal traffic.
Either way, when you start Outlook and a sync is started you should see a file appear in your designated trace folder. Make sure you have initiated at least one (and ideally only one) sync operation with CRM and then close Outlook.
Once we have our WCF trace we need a tool to read it. I’ve been using the WCF Service Trace Viewer that comes as part of the Windows SDK.
Once the viewer is installed open the trace file. Move to the Message tab and you should get a view similar to the following:
Now – I am not going to describe the full intricacies of the Outlook CRM sync process – Microsoft have already done a pretty good job of that in whitepapers published on MSDN. However, I am going to give you a couple of pointers on how to use the trace viewer tool, as it can be a little tricky finding what we actually want.
In the above screenshot, notice the various entries ending /Execute and /ExecuteResponse. These pairs detail the outgoing request from the client and the subsequent response from the server shortly after. The messages that don’t end in /Execute or /ExecuteResponse can usually be ignored.
One of the first tips I have for you is to use a Filter to eliminate the junk rows as shown below:
This will give a much clearer view of the back and forth involved in doing a sync.
The next step is to find the method calls that we are actually interested in. The easiest way I have found to do this is to just jump through the messages in sequence whilst keeping an eye on the Parameters section shown below:
In the example above we can see that a request is going out to the Execute method and it is being passed “IsPrimaryClientSubscriptionClients” as the RequestName parameter and my Outlook client’s id as the ClientID parameter. Shortly afterwards the ExecuteResponse will come back with the server’s response.
Once you have focused in on the Execute/ExecuteResponse pair that you are interested in, you will then switch over to the XML tab to get the full details of the exchange:
WCF Tracing XML
And that’s it! Once you’ve managed to focus in on the messages of interest you at least stand a fighting chance of being able to figure out what’s going on!
One last note: once you have done your tracing, be sure to switch your config file back or comment out the section we added earlier. Your users won’t thank you for the performance or disk space hit of leaving Outlook spewing trace data 😉
Sometimes CRM will give you the following message when you are attempting to assign a record to another user
Insufficient Permissions *You* do not have permissions to access these records. Contact your Microsoft Dynamics CRM administrator
Be aware that sometimes this message is flat out misleading. If you are getting this error message whilst logged in as a System Administrator, there is a very good chance the permission issue isn’t with your account – it’s an issue with the user you are attempting to assign the record to.
If you are struggling to infer what the missing privilege is, the nuclear option is to run a CRM trace whilst attempting to assign the record. Run the trace and you should eventually find something like the following:
In our case we were attempting to assign an account to a new user. In our environment we don’t use the Invoice feature of CRM meaning most users don’t even have the ability to Read Invoices. By an unhappy coincidence a user that did have the ability to play with invoices had created a test record under the account in question. At some point much later we then attempted to assign the account to one of the many users who don’t have the Read privilege – then BANG – unhelpful error message…
Lesson: if you get the Insufficient Permissions error message as a System Administrator whilst assigning a record – it’s probably not your permissions that are the problem.
Start up a trace and it should hopefully lead you straight to the missing privilege.
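For reference, on an on-premise deployment the platform trace itself is switched on via registry values on the CRM server under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM. These value names are from memory, so double-check them against Microsoft’s official tracing article before relying on them:

```
TraceEnabled   (DWORD)  = 1
TraceRefresh   (DWORD)  = change to any new value to force a re-read of the settings
TraceDirectory (String) = C:\CRMTrace   (where the trace files will land)
```

Remember to set TraceEnabled back to 0 when you are done – the trace files grow quickly.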