Tuesday, November 29, 2011

How to create a null destination node in a BizTalk Map

This is the second time recently that I've come across this requirement, so I guess others need it too. This post describes how to set a destination node to null based upon a condition.

My situation was that the source node contained a DateTime which was either valid or set to a zero-length string. As some of you will know, setting a date to a zero-length string will actually set the date to 0001-01-01. That is the date when the world began, allegedly. My destination node is a date field too, so I want to ignore the invalid date. But here's the thing - I need to set the target node to null because I need the node present in the output.

The solution is to use a logical operator (= or <>) and a Value Mapping functoid. In my case I am using a Scripting functoid to convert the source node value from DateTime to Date, so the Scripting functoid's output is a date string. By the way, if you are wondering why I don't just test for this in the script and set the return parameter to null, it's because this doesn't work!
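For illustration, the inline script in my Scripting functoid looks something like this (the method name and the exact date format are mine; the point is that an empty or unparseable input ends up as 0001-01-01):

```csharp
// Inline C# for the Scripting functoid: convert the source
// DateTime string to a date-only string.
public string ToDateString(string dateTimeValue)
{
    DateTime parsed;
    if (DateTime.TryParse(dateTimeValue, out parsed))
    {
        return parsed.ToString("yyyy-MM-dd");
    }
    // empty or invalid input - return DateTime.MinValue's date
    return "0001-01-01";
}
```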
The <> functoid checks whether the output is 0001-01-01 and sends true or false to the Value Mapping functoid. As you can see, I added 0001-01-01 as a constant for the second parameter. Remember this is a Not Equal operator, so it will return true if it's a valid date and false if it is 0001-01-01.

The magic happens in the Value Mapping functoid. The description explains what it does: if the value of the first input parameter is true then the value of the second input parameter is returned. What the description should add is that if the first input parameter is false, the destination node is set to null.
So all you have to do is select the correct logical operator (either = or <>) so that its output is true when you want to pass the value to the destination and false when you want to set it to null. But be sure you have the parameters the right way round: the logical operator needs to be at the top, and the value of the source node below it.

Update 24/04/2012: I also ran into a problem where the source node was a Record that did not contain data itself but held other elements. I found that the trick above worked well for the child elements but the parent element always appeared in the output as <ParentNode />.

I struggled for a long while to get rid of the offending <ParentNode /> but in the end found that simply using the Logical Existence (?) functoid on the ParentNode was enough. The result was that the ParentNode was completely absent from the output except when any of the child elements contained data.

Saturday, November 19, 2011

Dynamics CRM 2011 Date Picker Displays US format

This post will only help if you are using the On Premise version - I don't have a solution for the Online version. BTW, I was using SQL Server 2008 R2.

If you add date parameters to a report that you want to run in Dynamics CRM 2011 then you get prompted to enter the dates. If you set the parameters as type DateTime then a date picker is also displayed.

But when you select a date, the picker puts the date in as US format.  It will still filter the data correctly but I wanted UK format displayed. 

The clue came from this blog. In fact, when I ran my report in Report Server (i.e. outside CRM) the date format displayed correctly.
So I finally found the solution. You need to add Culture="en-GB" to the reportviewer.aspx page located at C:\Program Files\Microsoft Dynamics CRM\CRMWeb\CRMReports\rsviewer. Note it's the rsviewer directory, not the viewer directory. I added it to the end of the first line - the <%@ Page declaration.
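For example, if the first line of the page originally reads something like this (the other attributes vary with your installation, so the directive below is only illustrative):

```aspx
<%@ Page Language="C#" ... %>
```

it becomes:

```aspx
<%@ Page Language="C#" ... Culture="en-GB" %>
```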

Then the date displayed in UK format in CRM.

Wednesday, November 16, 2011

Filtered Fetch XML Reports for CRM 2011 On-line

I found very few blogs that explained how to create a Filtered Report for CRM 2011 using Fetch XML. My approach had been to use Advanced Find to create the Fetch XML and then copy and paste that into a report in Business Intelligence Development Studio. Now that works fine, but when you publish the report to CRM it will only run on all records of the entity; it does not work on selected rows or on a single entity record. The report just does not show up as selectable on the entity view or on the entity form.

The missing attributes are on the entity element of the Fetch XML. Mine is a custom entity called new_identifiedneed and so I need to change

<entity name="new_identifiedneed">

to

<entity name="new_identifiedneed" enableprefiltering="1" prefilterparametername="CRM_Filterednew_identifiedneed">

Now what is not made clear is that setting the correct prefilterparametername is crucial. It needs to be the name of the Filtered View for the entity prefixed by CRM_  (not CRMAF_ as for SQL based reports).

If you do this on the query you will find that it will automatically add a parameter to your dataset. When you run the query you will get a prompt for the parameter.

Don't bother with entering a value, just click OK and you should get unfiltered results. 
Check out the parameters on the DataSet which should look like this:

If that is present you're home free. Make the report look pretty, build it to create the RDL and then load it into CRM. When you add the report be sure to set the Related Record Types (Identified Needs in my case) and set the Display In property to include both the Form and the List.
Now go to the entity view and select one or more records. Click on the Run Report icon on the ribbon and you should see your report. When you run it on Selected Records you should see what you wanted. The same applies when you open a form on a selected entity record.

Happy Reporting!
BTW I found this out by using the Report Wizard for ad hoc reports in CRM. It will not only create the appropriate Fetch XML but gives you an RDL file that you can open up and play with in Business Intelligence Development Studio. 

Monday, November 14, 2011

Problems Configuring BizTalk 2010

I had two problems configuring BizTalk 2010 in a distributed environment I thought I would share.  The first was a problem on assigning the SSO Administrators group.  Since I was using a distributed environment I had to replace the local group name with the domain group name.  When I did so I got the warning icon with the detail.

Failed to add the user 'Domain\btsadmin' to the domain group 'Domain\SSO Administrators'

Now this user was already in the group and I was logged on as a Domain Admin so the message could not be correct.

There are blogs that say you need to re-register SSOSQL.DLL, which is located in C:\Program Files\Common Files\Enterprise Single Sign-On (and also in the sub-directory \Win32), but I had done that. The answer was: delete the existing entry and manually type in the account name - DON'T click on the ellipsis (...). Stupid error I know, but that fixes it.
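For completeness, the re-registration those blogs describe is done from an elevated command prompt with regasm, along these lines (the .NET framework path below is an assumption and depends on your machine):

```shell
cd "C:\Program Files\Common Files\Enterprise Single Sign-On"
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\regasm.exe ssosql.dll
```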

The second problem arose when configuring BAM.  I had installed the pre-requisites including SQLServer2005_NS_x64.MSI as well as installing SQL Server 2008 R2 Integration Services locally which is a requirement.  But I got the following error when applying the configuration

Could not load file or assembly 'Microsoft.SqlServer.Instapi, Version=, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

Sure enough, this DLL was not in the GAC, although the 10.0 one was there.
Turns out you need to install the correct versions of the MSIs - I had somehow managed to download the wrong ones. The correct ones are:

sqlncli_x64.msi                                            6430KB
SQLServer2005_NS_x64.msi                     3337KB
SQLServer2005_XMO_x64.msi               15066KB
sqlxml.msi 06                                             12683KB

They can be downloaded from here

After that, the configuration went smoothly.

Sunday, October 9, 2011

There is a problem communicating with the Microsoft Dynamics CRM Server

I was trying to configure the CRM Client for Outlook by adding an organization and I ran into this error.

Now I had just deployed my customizations and I was able to access CRM using IE from the same client and could see all my customizations.  I didn't get any errors when importing or publishing the customizations. 

What I tried was to create a brand new organization with no customizations and I was able to add that organization to the Outlook client without problems.  So the issue was that there was something in the customizations that was causing an error only in the Outlook client.  After several days of effort we tracked it down to a site map customization. 

We had added a custom section under Workplace that had some links to a particular view of the Account entity.  BAD IDEA.  That won't work in the Outlook client. After removing the link I was able to connect to my organization without this error. 

Some things you find out the hard way.

Saturday, September 10, 2011

Consume a WCF service that uses Federated Security

This post is not about Active Directory Federated Security, but it is about using a custom Security Token Service (STS) to create a token. I needed to connect to a third party web service that used Federated Security. Now they supported ADFS, but I only needed to access it using a single account and I really didn't want to set up ADFS just for that. But I could use a custom STS service to create the token and avoid all the infrastructure overhead of ADFS.

Setup begins with the third party sending me the url of their STS and a copy of the public key certificate they will use to sign their token. I also need the url of the web service I want to call.

I'll say right away I'm no expert on this. There are examples in the Windows Identity SDK and that is a good starting point. The way it works is my client application (a Windows Service) calls a local STS and passes it a username and password. My STS checks the credentials and, if OK, issues a token. Now the token is signed and encrypted with my X509 certificate, so I have to get one of those to begin with. I need to install my X509 certificate in the Personal store of the computer I run the code on.
Start MMC, add the Certificates snap-in and select the Local Computer. Navigate to the Personal node, then Certificates; right-click and import my X509 certificate. Since I am running my Windows Service under the Network Service account I need to assign permissions. On the certificate, right-click, select All Tasks then Manage Private Keys. On the Security tab, add Network Service and give it full permissions. I can also add the third party's certificate here too - it is imported into the Personal certificate store in the same way, but since they gave me the public key, I don't need to bother with the permissions step (it's not available anyway).

I also need to send the public key of my certificate to the third party.  Right click on the certificate, select Export and choose the option to export the public key only. 

Now the token issued by my STS contains a claim, in my case it is simply the role of Reader.  I then pass my token to the third party's STS. These guys will want to check the token is from me and they use the certificate I sent them to do so.  They check the claim, and if all is well issue me a token from their STS. 

I suppose I should validate their token with the certificate they sent me, but I'm not going to bother as I'm going to send it straight back to them when I call their web service. 
When I look at the WSDL of their STS service I can see it is different from a typical WCF service because it has a section at the bottom which looks like this.

<Identity xmlns="http://schemas.xmlsoap.org/ws/2006/02/addressingidentity">
  <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
    <X509Data>
      <X509Certificate>MIIE8zCCA9ugAwIBAgILAQAAAAABLl3i0lAwDQYwSQY2ZNWvOf2k</X509Certificate>
    </X509Data>
  </KeyInfo>
</Identity>

The trick is to find a way to send my token to their WCF web service. Add a Service Reference to their WCF web service in the usual way. My example uses MyWS.Service with the url of http://company.co.uk/service.svc. Note that when I set up the binding to this WCF service I have to include a reference to the issuer of the token (the url of their STS). I also needed to include a reference to the DNS identity. You can usually assume the DNS identity from the Subject name on the third party certificate, so if that is CN=company.co.uk then the DNS identity will be company.co.uk.

// add constants for their STS service and DNS identity
public const string ServiceAddress = "http://company.co.uk/service.svc";

public const string DNSIdentity = "company.co.uk";
public const string STSAddress = "http://security.company.co.uk/security.svc";

public void RetrieveData(SecurityToken smToken)
{
   // Instantiate the ChannelFactory as usual,
   // but be sure to set the DNS identity on the endpoint.
   EndpointAddress endpointAddress = new EndpointAddress(
       new Uri(ServiceAddress),
       new DnsEndpointIdentity(DNSIdentity),
       new AddressHeaderCollection());

   ChannelFactory<MyWS.Service> clientFactory = new ChannelFactory<MyWS.Service>(
       GetServiceBinding(ServiceAddress), endpointAddress);
   clientFactory.Credentials.SupportInteractive = false;

   // Make sure to call this prior to using the extension methods
   // on the channel factory that Windows Identity Foundation provides.
   clientFactory.ConfigureChannelFactory();

   ICommunicationObject channel = null;
   bool succeeded = false;
   try
   {
      // create an instance of the service client with the issued token
      MyWS.Service client = clientFactory.CreateChannelWithIssuedToken(smToken);
      channel = (ICommunicationObject)client;
      // Now it's plain sailing - I can call the method on the WCF service.
      // In my case GetData returns an array of objects.
      MyWS.MyObject[] psArray = client.GetData();
      // TODO: do something with psArray
      succeeded = true;
   }
   catch (CommunicationException)
   {  // TODO: log error
   }
   catch (TimeoutException)
   {  // TODO: log error
   }
   finally
   {
      if (!succeeded && channel != null)
      {
         channel.Abort();
      }
   }
}

public Binding GetServiceBinding(string uri)
{
   // Use the standard WS2007FederationHttpBinding
   WS2007FederationHttpBinding binding = new WS2007FederationHttpBinding();
   binding.Security.Message.IssuerAddress = new EndpointAddress(STSAddress);
   binding.Security.Message.IssuerBinding = GetSMSecurityTokenServiceBinding(STSAddress);
   binding.Security.Message.IssuerMetadataAddress = new EndpointAddress(STSAddress + "/mex");
   return binding;
}

Good luck. In terms of difficulty on a scale of 1-10 this is a twelve. I wish you success.

Attach to Process is Greyed Out in Visual Studio

If you Google "Attach to Process is greyed out", as I have, all the responses will tell you to make sure you check the boxes "Show processes from all users" and "Show processes in all sessions".

What if you do that and it's still greyed out?

Answer: Use a different approach. Add the line

System.Diagnostics.Debugger.Launch();

to your code and it will launch a dialog box which allows you to connect to a new Visual Studio session or an existing one.

Once you do so, the debugger will stop at the line you just added and allow you to step through the code.

Tip: If you are using a timer in your Windows Service then disable the timer before you call the debugger. That way you stop the timer from kicking you back to the start every time it fires.

For example:
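A minimal sketch of that pattern inside a service's timer handler (the timer field and method names here are mine):

```csharp
using System.Timers;

private Timer timer;

private void OnTimerElapsed(object sender, ElapsedEventArgs e)
{
    // disable the timer so it doesn't fire again mid-debug
    timer.Enabled = false;

    // pops up the dialog asking which Visual Studio session to attach to
    System.Diagnostics.Debugger.Launch();

    // ... the service logic you want to step through ...

    timer.Enabled = true;
}
```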

Installing a Windows Service

There was a time when you could just create a new project in Visual Studio as a Windows Service and you could install it using the installutil command.

Somewhere along the way that changed so now you have to add a Project Installer class to the Windows Service.

So let's assume you've created your Windows Service and changed the name from Service1 to, say, MyService. Microsoft explain the steps for adding an installer and I repeat them here.

1. In Solution Explorer, access Design view for the service for which you want to add an installation component.

2. Click the background of the designer to select the service itself, rather than any of its contents.

3. With the designer in focus, right-click, and then click Add Installer.

4. A new class, ProjectInstaller, and two installation components, ServiceProcessInstaller and ServiceInstaller, are added to your project, and property values for the service are copied to the components.

5. Click the ServiceInstaller component and verify that the value of the ServiceName property is set to the same value as the ServiceName property on the service itself. In my example you would have to change Service1 to MyService.

6. Change the StartType to Manual, Automatic or Disabled.

7. Click on the ServiceProcessInstaller and set the Account property to LocalService, NetworkService, LocalSystem or User.
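For reference, the generated ProjectInstaller ends up looking roughly like this once steps 5-7 are applied (the service name and account are from my example; yours will differ):

```csharp
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        // step 7: the account the service runs under
        ServiceProcessInstaller processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.NetworkService
        };

        // steps 5 and 6: ServiceName must match the
        // ServiceName property on the service itself
        ServiceInstaller serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyService",
            StartType = ServiceStartMode.Automatic
        };

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}
```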

That's it. Open the Visual Studio command prompt using "Run as Administrator", change to the directory where your EXE file is and run

installutil MyWindowsService.exe

You might be interested in my next post about debugging a Windows Service. Frankly I find debugging Windows Services a pain, so I tend to write and test my code as a Console Application, and then I convert it to a Windows Service when it's all working.
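Since I mention converting between the two, here is one common pattern that lets the same executable run either way (MyWorker and MyService are hypothetical names standing in for your own classes):

```csharp
static void Main(string[] args)
{
    if (Environment.UserInteractive)
    {
        // started from a console - run the worker directly for easy testing
        MyWorker.Start();
        Console.WriteLine("Running. Press Enter to stop.");
        Console.ReadLine();
        MyWorker.Stop();
    }
    else
    {
        // started by the Service Control Manager - run as a service
        System.ServiceProcess.ServiceBase.Run(new MyService());
    }
}
```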

Sunday, August 21, 2011

Moving CRM Attachments to SharePoint

CRM 2011 allows you to store documents in SharePoint and view them from within CRM. Someone called Maria left me a comment asking if you could move attachments from CRM to SharePoint. My previous blogs about creating SharePoint document locations from CRM were actually prerequisites for doing exactly this.

I wanted to take a CRM attachment and move it to SharePoint. In my case it was an attachment on a letter activity as the result of a mail merge. To be accurate, it is an attachment on an annotation (or note) on a letter activity. I needed to move the attachment to SharePoint and then leave a link to the document on the letter activity.

To achieve this we created a plugin on the Pre-Create event of an annotation. Then in the Execute method of the plugin you can instantiate the annotation entity:

if (null != context && null != context.InputParameters)
{
      if (context.InputParameters.Contains("Target") &&
          context.InputParameters["Target"] is Entity)
      {
           // Obtain the target entity from the input parameters.
          Entity entity = (Entity)context.InputParameters["Target"];
      }
}
Once you have the annotation then you can get the contents of the attachment as a byte array.

// Retrieve the base64-encoded document body
// and convert it to a byte array
// (see below for the DecodeFrom64 function)
byte[] documentBody = DecodeFrom64(entity.Attributes["documentbody"].ToString());
// get the filename from the annotation
string fileName = entity.Attributes["filename"].ToString();
System.IO.FileInfo annotationFileInfo = new System.IO.FileInfo(fileName);
To remove the attachment you need to use this code

entity.Attributes["documentbody"] = null;
entity.Attributes["filename"] = null;
entity.Attributes["filesize"] = null;

The helper method for decoding the attachment to a byte array

internal static byte[] DecodeFrom64(string encodedData)
{
     byte[] encodedDataAsBytes = System.Convert.FromBase64String(encodedData);
     return encodedDataAsBytes;
}

To upload the document to SharePoint you should use the CopyIntoItems method of the Copy web service located at http://sharepointurl/_vti_bin/Copy.asmx. But before you do, you need to replace any illegal characters in the filename - in our case we replaced an illegal character with a hyphen.
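A sketch of the upload call, assuming a web reference named copyservice was added for Copy.asmx (documentBody comes from the code above; sharePointSiteUrl, documentLibraryUrl and safeFileName are stand-ins of mine):

```csharp
// upload the attachment bytes into the SharePoint document library
copyservice.Copy copy = new copyservice.Copy();
copy.Url = sharePointSiteUrl + "/_vti_bin/Copy.asmx";
copy.UseDefaultCredentials = true;

string destinationUrl = documentLibraryUrl + "/" + safeFileName;
copyservice.FieldInformation[] fields = new copyservice.FieldInformation[]
{
    new copyservice.FieldInformation()
};
copyservice.CopyResult[] results;

// for an upload the source url is not used, so repeat the destination
copy.CopyIntoItems(destinationUrl, new string[] { destinationUrl },
    fields, documentBody, out results);
```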

internal static string ReplaceSpecialCharacters(string input)
{
   Regex r = new Regex("(?:[^a-z0-9 ]|(?<=['\"])s)",
       RegexOptions.IgnoreCase | RegexOptions.CultureInvariant | RegexOptions.Compiled);
   return r.Replace(input, "-");
}
I may have posted this code too late to help Maria, but I hope someone else finds it useful.

Same blog, different theme

I've grown bored of the theme that I use for the blog and somebody posted a comment about changing the theme so I've done so.

I hope this makes the blogs easier to read.

Monday, July 4, 2011

Transforming XML using BizTalk Mapper generated XSLT

If I have to write XSLT I try and make use of the BizTalk Mapper because it can generate XSLT in a fraction of the time.  It also supplies a test harness so you can use an input file to test out your map.
You can add functoids to your map, and I make use of the Scripting functoid so that I can write inline C# script. Here is an example of a function that takes a string source element and will truncate it if it is longer than the string length the target element can handle. It writes a warning message to the event log if the maximum length is exceeded.
The function takes two extra parameters apart from the source element: the name of the source element and the maximum length. Add them by clicking on the ellipsis (...) next to Configure Functoid Inputs.

Here is the inline C# script.
public string Transform(string param1, string fieldname, int maxlength)
{
       if (param1.Length > maxlength)
       {
             System.Diagnostics.EventLog.WriteEntry("Transformation",
                 fieldname + " in excess of " + maxlength.ToString() + " chars, truncating",
                 System.Diagnostics.EventLogEntryType.Warning);
             return param1.Substring(0, maxlength);
       }
       return param1;
}

What BizTalk does is write this out as a function at the bottom of the XSLT. To produce the XSLT, use Validate Map and then navigate to where the Output window says the file has been written.

Now I struggled a bit to get this XSLT to work until I finally found the answer was to use an XPathDocument instead of an XmlDocument. That's it. A transformation using the output from the BizTalk Mapper.  Deep, deep joy. 

Add the following using statements
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;

// Transforms an XML document
// using an XSLT generated by BizTalk
// note use of XPathDocument
private XmlDocument Transform()
{
     XslCompiledTransform xslt = new XslCompiledTransform();
     // load the xslt - enable script support for the inline C#
     xslt.Load(@"C:\Projects\Import\MySchema.xsl", new XsltSettings(false, true), new XmlUrlResolver());
     string filePathName = @"C:\Projects\Import\MyXML.xml";
     // load the XML data file - note XPathDocument, not XmlDocument
     XPathDocument doc1 = new XPathDocument(filePathName);
     // create a memory stream
     MemoryStream ms = new MemoryStream();
     // create an XmlTextWriter to write to the memory stream
     XmlTextWriter writer = new XmlTextWriter(ms, Encoding.Unicode);
     writer.Formatting = Formatting.Indented;

     // transform the file
     xslt.Transform(doc1, null, writer);
     writer.Flush();
     ms.Seek(0, SeekOrigin.Begin);  // ** UPDATE changed from ms.Position = 0 ****//
     if (ms.Length == 0)
     {
           throw new Exception("Transform error, output is null");
     }
     // load the memory stream into an XML document
     XmlDocument output = new XmlDocument();
     output.Load(ms);
     return output;
}

Monday, April 4, 2011

Darwin Awards 2010

A non-technical blog this month, dear reader, forced upon me because of a misleading blog I read today. Imagine that! What is the Internet coming to when you can't trust a blog?

Anyway, I Googled for the "Darwin Awards 2010" and stumbled on a blog that claimed to announce the Darwin Awards results for 2010. While some of the alleged awards were amusing - I particularly liked the guy who tried to siphon off diesel from a camper van but mistakenly put the hose into the septic tank - they were clearly not true Darwin Awards. I need scarcely remind you, dear reader, that the Darwin Awards "commemorate those who improve our gene pool by removing themselves from it". In short, they are only ever awarded posthumously for unparalleled stupidity.

The real Darwin Awards web site can be found at this location so please don't confuse it with imitations.

For reasons of good taste, I won't reproduce the winning entry here.  However I do recount this runner up which unfortunately has not been confirmed as true.

In the late fall and early winter months, snow-covered mountains become infested with hunters. One ambitious pair climbed high up a mountain in search of their quarry. The trail crossed a small glacier that had crusted over. The lead hunter had to stomp a foot-hold in the snow, one step at a time, in order to cross the glacier.

Somewhere near the middle of the glacier, his next stomp hit not snow but a rock. The lead hunter lost his footing and fell. Down the crusty glacier he zipped, off the edge and out of sight.

Unable to help, his companion watched him slide away. After a while, he shouted out, "Are you OK?"

"Yes!" came the answer.

Reasoning that it was a quick way off the glacier, the second hunter plopped down and accelerated down the ice, following his friend. There, just over the edge of the glacier, was his friend...holding onto the top of a tree that barely protruded from the snow.

There were no other treetops nearby, nothing to grab, nothing but a hundred-foot drop onto the rocks below. As the second hunter shot past the first, he uttered his final epitaph: a single word, which we may not utter lest our mothers soap our mouths.

Monday, March 7, 2011

Windows Azure AppFabric and Integration with CRM 2011 Online

This post describes how to send messages in real time from CRM 2011 Online back to a company's premises using Windows Azure. If you were using CRM 2011 On Premise you would use a plug-in and attach it to the create or update event of an entity. The plug-in code might write out the message in XML format for integration with an internal system. So how do you do this with CRM 2011 Online? The answer is to use the Windows Azure AppFabric, because there is built-in support for it in CRM 2011 Online.

Basically what you need to do is create a Service Bus on Windows Azure and then create a namespace. So you need to go through the steps of creating a Windows Azure account and setting up a unique namespace. That will generate a "Current Management Key" which you will need when you register the endpoint in the Plug-in Registration tool. This post has a good description of how to do it.

You need to log into CRM Online and download the certificate that you need for the plugin. Go to Settings -> Customizations -> Developer Resources and select Download Certificate. Make a note of the Issuer name directly above the link (e.g. crm4.dynamics.com).

Back in your development environment you will need to ensure you have the Windows Azure SDK and the CRM 2011 SDK installed. You can use the sample code provided in the CRM SDK located at C:\CRMSDK\sdk\samplecode\cs\azure\onewaylistener. This is the application that will listen for messages that are sent to the Azure endpoint. The class implements IServiceEndpointPlugin and the class name serves as your "Contract" (you will use it as the "Path" when you configure the endpoint).

Open the Plug-in Registration tool and configure a Service Endpoint. This post describes how to configure the Service Endpoint. The Solution Namespace is the same as the namespace you created in Azure. The Path is where you enter the class name of your listener application. Click Save and Configure ACS. You will be prompted for the location of the certificate and the issuer name. You are also prompted for the Management Key: this is the Current Management Key from the Windows Azure namespace. Click on "Save and verify authentication", which should verify the connection with a success message. Click on "Close".

Beneath the Service Endpoint you can now add a Step and configure it as you would have done for an On Premise plugin, only now the event handler is the service endpoint you just created. You could, for example, set the plugin to fire on the Create event of the Contact entity.

That done, you can now configure the app.config with your (Azure) Service Namespace, Issuer Secret (your Azure Management Key) and the Service Path, which is the name of your listener class. Run your OneWayListener application; it should open a connection to your Azure endpoint and will print messages to the console window as they are received.

So now go to CRM 2011 Online and perform the action that your plugin is attached to (e.g. create a contact). After you save the contact, the asynchronous process runs and your listener will receive the message.

The OneWayListener code uses a RemoteExecutionContext object which has all the information you need, from the User GUID to the Organization GUID. The interesting stuff is in the context.InputParameters collection. It contains all the data you entered. It's a simple task to turn this into XML and then you are all set to integrate with whatever you want.
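For example, a minimal sketch of turning the received context into XML inside the listener (the helper name is mine; complex attribute types such as option sets would need extra handling):

```csharp
using System.Linq;
using System.Xml.Linq;
using Microsoft.Xrm.Sdk;

public static XElement ToXml(RemoteExecutionContext context)
{
    // the Target input parameter holds the entity the user created
    Entity target = (Entity)context.InputParameters["Target"];

    // one child element per attribute the user entered
    return new XElement(target.LogicalName,
        from attribute in target.Attributes
        select new XElement(attribute.Key, attribute.Value));
}
```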

Saturday, March 5, 2011

Connecting to CRM 2011 in the Cloud

Connecting to CRM 2011 in the Cloud is trickier than connecting to CRM 2011 On Premise.
There is a great article by Deepak Kumar but it is worth recapping his points.
Check C:\Program Files to see if you have Windows Identity Foundation. If not, you will need to install it.

You need to get some device credentials and this is done using a utility in the CRM SDK called deviceregistration. Go to the \sdk\tools\deviceregistration directory and load deviceregistration.csproj. Compile the application and navigate to the \bin\debug folder. Open a command window and type

deviceregistration.exe /operation:Register

The Device ID and Password that are generated are used in the GetDeviceCredentials method (that comes next).

Preparation work done, you can now create your VS 2010 project. In this example I am using a WCF Service (I left the default name of Service).

Copy the CRMServiceHelper.cs file from \sdk\samplecode\cs\helpercode into your project and modify three methods:


I used the code in Deepak Kumar's blog and set my Windows Live ID credentials in the GetUserLogonCredentials method. When you set the endpoints in GetServerConfiguration, be aware that endpoints vary depending where you are in the world, so see this blog.

Then if you want Intellisense on your custom entities and attributes you need to generate a class containing them all using CRMSVCUTIL.EXE. This tool is in the SDK too, in the \sdk\bin directory. Amend your Environment Path so you have a reference to its location. Open a command window, navigate to where you want the class to be generated, then use

CrmSvcUtil.exe /url:https://{organisation}.crm4.dynamics.com/XRMServices/2011/Organization.svc /out:GeneratedCode.cs /username:"windows live id" /password:"live id password" /deviceid:"deviceid" /devicepassword:"device password"

Add the class you just generated to your project. OK, this takes a while and you will have to keep regenerating it every time you add an attribute but think of the time you'll save by having fewer bugs.

You will need to add references to the CRM SDK dlls (microsoft.crm.sdk.proxy.dll, microsoft.xrm.sdk.dll and microsoft.crm.sdk.dll) and then add these using statements

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Crm.Sdk.Messages;

Add 3 declarations in the class Service.
private OrganizationServiceContext context;
private OrganizationServiceProxy proxy;
private IOrganizationService service;

In the class constructor add a call to ConsumeIOrganization().

public Service()
{
    ConsumeIOrganization();
}
Then add two methods

public void ConsumeIOrganization()
{
    ServerConnection serverConnection = new ServerConnection();
    ServerConnection.Configuration serverConfig = serverConnection.GetServerConfiguration();
    Connect(serverConfig);
}

public void Connect(ServerConnection.Configuration serverConfig)
{
    try
    {
        using (proxy = new OrganizationServiceProxy(serverConfig.OrganizationUri, serverConfig.HomeRealmUri,
            serverConfig.Credentials, serverConfig.DeviceCredentials))
        {
            proxy.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());
            context = new OrganizationServiceContext(proxy);
            service = (IOrganizationService)proxy;
        }
    }
    catch (FaultException<Microsoft.Xrm.Sdk.OrganizationServiceFault>)
    {
        // TODO: handle the CRM service fault
    }
}
Now you can add your code and reference service or context methods depending on which you prefer. Welcome to programming in the cloud!
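For instance, a first call once the constructor has run, using the standard contact entity (you will also need using System.Linq; for the query):

```csharp
// create a contact via the IOrganizationService reference
Entity contact = new Entity("contact");
contact["firstname"] = "Jane";
contact["lastname"] = "Doe";
Guid contactId = service.Create(contact);

// or query through the OrganizationServiceContext
var does = context.CreateQuery("contact")
    .Where(c => c.GetAttributeValue<string>("lastname") == "Doe")
    .ToList();
```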

Saturday, February 19, 2011

Create SharePoint Document Locations in CRM 2011 - Part 3

This is the last of 3 posts describing how to add SharePoint document locations into CRM.
Part 1
Part 2

This code should be added to that in the previous blog post. I had a devil of a job with a 401 error, but this finally cracked it.

Add a web reference to
http://server01/_vti_bin/Lists.asmx and name the reference listservice
http://server01/_vti_bin/Views.asmx and name the reference views

Add a using statement
using System.Xml;

Add this method to the code in the previous blog post.

private static void CreateSharePointFolder(string docfolderUrl)
{
    // nothing to do if the url is empty or has no path separators
    if (docfolderUrl == String.Empty || docfolderUrl.IndexOf("/") == -1)
        return;

    try
    {
        // last part is the folder name
        string folderName = docfolderUrl.Substring(docfolderUrl.LastIndexOf("/") + 1);
        // remove the folder name
        docfolderUrl = docfolderUrl.Replace("/" + folderName, "");
        // get the document library name
        string docLib = docfolderUrl.Substring(docfolderUrl.LastIndexOf("/") + 1);
        // now remove the doc lib to leave the sharepoint site url
        string sharePointSiteUrl = docfolderUrl.Replace("/" + docLib, "");

        listservice.Lists myLists = new listservice.Lists();
        views.Views myViews = new views.Views();

        myLists.Url = sharePointSiteUrl + "/_vti_bin/lists.asmx";
        myViews.Url = sharePointSiteUrl + "/_vti_bin/views.asmx";
        myLists.UseDefaultCredentials = true;
        myViews.UseDefaultCredentials = true;

        XmlNode viewCol = myViews.GetViewCollection(docLib);
        XmlNode viewNode = viewCol.SelectSingleNode("*[@DisplayName='All Documents']");
        string viewName = viewNode.Attributes["Name"].Value.ToString();

        /* Get Name attribute values (GUIDs) for list and view. */
        XmlNode ndListView = myLists.GetListAndView(docLib, viewName);
        string strListID = ndListView.ChildNodes[0].Attributes["Name"].Value;
        string strViewID = ndListView.ChildNodes[1].Attributes["Name"].Value;

        // build the CAML batch that creates the folder
        XmlDocument doc = new XmlDocument();
        string xmlCommand = "<Method ID='1' Cmd='New'><Field Name='FSObjType'>1</Field><Field Name='BaseName'>" + folderName + "</Field><Field Name='ID'>New</Field></Method>";
        XmlElement ele = doc.CreateElement("Batch");
        ele.SetAttribute("OnError", "Continue");
        ele.SetAttribute("ListVersion", "1");
        ele.SetAttribute("ViewName", strViewID);
        ele.InnerXml = xmlCommand;

        XmlNode resultNode = myLists.UpdateListItems(strListID, ele);

        // check for errors
        NameTable nt = new NameTable();
        XmlNamespaceManager nsmgr = new XmlNamespaceManager(nt);
        nsmgr.AddNamespace("tns", "http://schemas.microsoft.com/sharepoint/soap/");
        if (resultNode != null)
        {
            // look for error text in case of duplicate folder or invalid folder name
            XmlNode errNode = resultNode.SelectSingleNode("tns:Result/tns:ErrorText", nsmgr);
            if (errNode != null)
            {
                // Write error to log
            }
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
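To see what the string carving at the top of that method produces, here is a standalone sketch using the example url from Part 2 (the input value is illustrative):

```csharp
using System;

class UrlSplitDemo
{
    static void Main()
    {
        // hypothetical input matching the Part 2 example
        string docfolderUrl = "http://server01/contact/Charles Emes";

        // last part is the folder name
        string folderName = docfolderUrl.Substring(docfolderUrl.LastIndexOf("/") + 1);
        // remove the folder name (note: Replace removes ALL matches, so this
        // relies on the folder segment not appearing earlier in the url)
        docfolderUrl = docfolderUrl.Replace("/" + folderName, "");
        // the next segment up is the document library
        string docLib = docfolderUrl.Substring(docfolderUrl.LastIndexOf("/") + 1);
        // what is left is the SharePoint site url
        string sharePointSiteUrl = docfolderUrl.Replace("/" + docLib, "");

        Console.WriteLine(folderName);        // Charles Emes
        Console.WriteLine(docLib);            // contact
        Console.WriteLine(sharePointSiteUrl); // http://server01
    }
}
```

As the comments warn, this carving is fragile if a folder or library name repeats earlier in the url, so keep your library and folder names distinct.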

Create SharePoint Document Locations in CRM 2011 - Part 2

This is part 2 of three posts
Part 1
Part 3

This code expects the Guid for the Contact and then returns the SharePointDocumentLocation AbsoluteUrl. For example it should return http://server01/contact/Charles Emes. That folder should exist in the SharePoint document library 'Contact'. Note that the code for creating the SharePoint folder is in the following blog post.

Add references to Microsoft.Crm.Sdk.Proxy.dll and Microsoft.Xrm.Sdk.dll to your project. Add the file crmsdktypes.cs into your project (note the SharePointDocumentLocation object is defined within this class and the code won't work without it). All of these files you will find in the CRM SDK directory.

Add the following using statements
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Crm.Sdk.Messages;

public class IntegrationService
{
    // private static string ContactId = "3E339B88-C632-E011-BCDE-00155D110735";

    public static string GetSharePointLocation(string ContactId)
    {
        try
        {
            // Connect to the Organization service.
            System.ServiceModel.Description.ClientCredentials cred = new System.ServiceModel.Description.ClientCredentials();
            cred.Windows.ClientCredential = new System.Net.NetworkCredential("Username", "Password", "Domain");

            Uri organizationUri = new Uri("http://server01:5555/orgName/XRMServices/2011/Organization.svc");
            Uri homeRealmUri = null;

            OrganizationServiceProxy orgService = new OrganizationServiceProxy(organizationUri, homeRealmUri, cred, null);

            // This statement is required to enable early-bound type support.
            orgService.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());

            Guid guidSPDocLoc = RetrieveSharePointLocation(orgService, ContactId);

            string absouteUrl = GetAbslouteUrl(orgService, guidSPDocLoc);
            return absouteUrl;
        }
        // Catch any service fault exceptions that Microsoft Dynamics CRM throws.
        catch (FaultException<Microsoft.Xrm.Sdk.OrganizationServiceFault> ex)
        {
            // You can handle an exception here or pass it back to the calling method.
            return ex.Message;
        }
    }

    private static Guid RetrieveSharePointLocation(OrganizationServiceProxy orgService, string ContactId)
    {
        Guid _spDocLocId = Guid.Empty;
        // get the fullname from the contactid - that will be the folder name
        string FolderName = GetEntityNamefromGuid(orgService, ContactId);
        // replace any illegal chars with '-'
        // TO DO
        string fetch2 = @"
        <fetch mapping='logical'>
            <entity name='sharepointdocumentlocation'>
                <attribute name='sharepointdocumentlocationid'/>
                <filter type='and'>
                    <condition attribute='regardingobjectid' operator='eq' value='[GUID]' />
                </filter>
            </entity>
        </fetch>";
        fetch2 = fetch2.Replace("[GUID]", ContactId);

        EntityCollection result = orgService.RetrieveMultiple(new FetchExpression(fetch2));
        foreach (var c in result.Entities)
        {
            // TO DO there can be more than one so add a condition
            _spDocLocId = (Guid)c.Attributes["sharepointdocumentlocationid"];
        }
        if (_spDocLocId == Guid.Empty)
        {
            // there is no location so create one
            _spDocLocId = CreateSharePointDocLocation(orgService, FolderName, ContactId);
        }
        // get the absolute URL for the doc location just created
        string absouteUrl = GetAbslouteUrl(orgService, _spDocLocId);
        // We still need to create the actual SharePoint folder
        // (call the CreateSharePointFolder method from the next post here)
        return _spDocLocId;
    }
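One defensive addition, not in the original post: ContactId is spliced straight into the FetchXML by string replacement, so it is worth validating it as a GUID first to avoid building malformed (or injected) queries. A minimal sketch:

```csharp
using System;

class FetchGuidCheck
{
    // Validate the id before splicing it into the FetchXML string.
    static string BuildFetch(string contactId)
    {
        Guid parsed;
        if (!Guid.TryParse(contactId, out parsed))
            throw new ArgumentException("ContactId is not a valid GUID", "contactId");

        string fetch = @"<fetch mapping='logical'>
  <entity name='sharepointdocumentlocation'>
    <attribute name='sharepointdocumentlocationid'/>
    <filter type='and'>
      <condition attribute='regardingobjectid' operator='eq' value='[GUID]' />
    </filter>
  </entity>
</fetch>";
        // "D" format gives the familiar lowercase hyphenated form
        return fetch.Replace("[GUID]", parsed.ToString("D"));
    }

    static void Main()
    {
        // the commented-out sample ContactId from earlier in this post
        Console.WriteLine(BuildFetch("3E339B88-C632-E011-BCDE-00155D110735"));
    }
}
```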

    private static string GetEntityNamefromGuid(OrganizationServiceProxy orgService, string ContactId)
    {
        string fetch1 = @"
        <fetch mapping='logical'>
            <entity name='contact'>
                <attribute name='fullname'/>
                <filter type='and'>
                    <condition attribute='contactid' operator='eq' value='[GUID]' />
                </filter>
            </entity>
        </fetch>";
        fetch1 = fetch1.Replace("[GUID]", ContactId);
        string fullname = string.Empty;
        EntityCollection result = orgService.RetrieveMultiple(new FetchExpression(fetch1));
        foreach (var c in result.Entities)
        {
            // there can be more than one so add a condition
            fullname = c.Attributes["fullname"].ToString();
        }
        return fullname;
    }


    private static Guid CreateSharePointDocLocation(OrganizationServiceProxy _serviceProxy, string FolderName, string ContactId)
    {
        // use the Parent Location Id NOT the SharePointSiteId
        // Parent Location will create url http://sharepoint/contact/CharlesEmes
        // SharePointSiteID will create url http://sharepoint/CharlesEmes
        Guid _spParentLocId = new Guid("415FF5BA-CA39-E011-92D1-00155D110735");

        // Instantiate a SharePoint document location object.
        SharePointDocumentLocation spDocLoc = new SharePointDocumentLocation
        {
            Name = "Documents on Default Site 1",
            Description = null,
            // Set the Regarding Object id - in this case it's a contact
            RegardingObjectId = new EntityReference(Contact.EntityLogicalName, new Guid(ContactId)),
            // Set the Parent Location Id
            ParentSiteOrLocation = new EntityReference(SharePointDocumentLocation.EntityLogicalName, _spParentLocId),
            RelativeUrl = FolderName
        };

        // Create a SharePoint document location record named Documents on Default Site 1.
        Guid _spDocLocId = _serviceProxy.Create(spDocLoc);
        // Console.WriteLine("{0} created.", spDocLoc.Name);
        return _spDocLocId;
    }


    private static string GetAbslouteUrl(OrganizationServiceProxy orgService, Guid _spDocLocId)
    {
        IOrganizationService _service = (IOrganizationService)orgService;

        RetrieveAbsoluteAndSiteCollectionUrlRequest retrieveRequest = new RetrieveAbsoluteAndSiteCollectionUrlRequest
        {
            Target = new EntityReference(SharePointDocumentLocation.EntityLogicalName, _spDocLocId)
        };
        RetrieveAbsoluteAndSiteCollectionUrlResponse retrieveResponse = (RetrieveAbsoluteAndSiteCollectionUrlResponse)_service.Execute(retrieveRequest);

        return retrieveResponse.AbsoluteUrl.ToString();
    }
} // end of the IntegrationService class


Create SharePoint Document Locations in CRM 2011 - Part 1

This post is the first of three on how to programmatically create SharePoint 2010 Document Folders in CRM 2011 so that documents can be uploaded.
Part 2
Part 3

There is quite a lot of code for this solution which is why I've broken it up into 3 parts. This first part outlines the scenario and the assumptions.

So here is the scenario. I have CRM 2011 and I want to store documents related to the Contact entity in SharePoint 2010. In my development environment I have both of these on the same virtual image but I've designed the code to work as a web service so it can sit anywhere.

What CRM 2011 does is create a document library for each entity that is enabled for document storage, and then create a folder for each record. This example focuses on the Contact entity but it can easily be applied to other entities. When a new entity record is created in CRM 2011 and the Document menu item is clicked, it will a) create a SharePointDocumentLocation record in CRM that has the path to the SharePoint folder and b) warn you that it is about to create a document folder. The name of the folder for a Contact is based on the full name. So what happens if you have two John Smiths? If the first has already created a document folder it detects that and prompts with 'the folder already exists, do you want to use it?' If not, you can modify the name of the folder to ensure it is unique.

This is an important point. Assume you have a contact called Robin Wright. When you create a document folder it will be called 'Robin Wright'. When Robin gets married and changes her name to 'Robin Wright-Penn', the document folder keeps its original name. CRM 2011 uses the SharePoint document location record to point to the original folder. So be careful: you can't assume that the full name in CRM is the same as the document folder name.

For this example I am going to assume you have the Guid for the Contact in question. This unique identifier will be used to determine if there is already a SharePoint Document Location created for this record. If so, it returns the url for you. If not, it will create the SharePoint Document Location in CRM and then create the actual folder in SharePoint based on the full name and finally returns the url to the folder.

So this code either uses the existing SharePoint Document Location or will create a new one for you and create the folder in SharePoint.

I have made another assumption. I assume you have created the SharePoint site already with the document library for the entity "Contact". It will help if you have a few SharePoint document locations already created because then you will be able to see how the database field 'ParentSiteorLocation' is populated in the SharePointDocumentLocationBase table. You need to use the 'root' Guid for the Contact document library and it should be obvious what this is if you have a few records in the table.

You can recognise this row in the table as the relativeurl reads 'contact' and the RegardingObjectID is null. The Guid you need is the SharePointDocumentLocationId from this record. It will become clearer (I hope) when you see the code.

There is another issue you need to be aware of. SharePoint does not allow certain characters in folder names. If they exist in your CRM Contact record, e.g. the first name reads 'John & Jane', then CRM replaces the & with -. I have not checked that every illegal character is substituted with a hyphen, but that is my working assumption. So be warned: you will need to modify the contact's full name to replace illegal characters. Also please note this code does NOT handle duplicate names - you will need to check for a duplicate and create a unique folder name.
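A minimal sketch of that substitution follows. The illegal-character list below is the documented SharePoint 2010 set, but treat both the list and the blanket hyphen replacement as my working assumption rather than exactly what CRM does:

```csharp
using System;

class FolderNameSanitizer
{
    // Replace characters SharePoint folder names disallow with a hyphen,
    // mirroring what CRM appears to do with e.g. '&'.
    static string Sanitize(string fullName)
    {
        char[] illegal = { '~', '#', '%', '&', '*', '{', '}', '\\', ':', '<', '>', '?', '/', '|', '"' };
        foreach (char c in illegal)
            fullName = fullName.Replace(c, '-');
        return fullName;
    }

    static void Main()
    {
        Console.WriteLine(Sanitize("John & Jane")); // John - Jane
    }
}
```

Remember this does nothing about duplicate names - that check is still up to you.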

So the next blog has the CRM code for using an existing SharePoint Document Location or create a new location. There is a reference in the code to a function that will create the actual SharePoint folder but the code for that is in the last blog.

Friday, February 11, 2011

BizTalk Receive Location Error - Verify the schema deployed properly.

I was getting an error with Biztalk Server when receiving a file from MSMQ although you get a similar error on FILE receive locations. The message was

There was a failure executing the receive pipeline: "Microsoft.BizTalk.DefaultPipelines.XMLReceive, Microsoft.BizTalk.DefaultPipelines, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "XML disassembler" Receive Port: "Portname" URI: "FORMATNAME:DIRECT=OS:.\PRIVATE$\QNAME" Reason: Finding the document specification by message type "RootNode" failed. Verify the schema deployed properly.

It is caused because the message does not conform to a schema that BizTalk recognises - in my case because the RootNode is not recognised. The problem is BizTalk has not saved the original message. It's gone. The method described here shows how to keep a copy of the original file. There may be other solutions but this works well for me.

The solution I outline is for FILE ports but once you understand the principle you will see how to apply it for MSMQ. The principle is to use a Receive Location and a Send Port Group configured for the Pass Through pipeline.

The Receive Location becomes the new port which polls for inbound messages arriving in the directory c:\inbound. The Receive Location is bound to a Receive Port which I'll call ReceivePort1.

The Send Port Group has 2 Send Ports in it. Both Send Ports, for the sake of simplicity, are FILE ports that are configured to write out the file using %SourceFileName% to the following directories c:\filecopy and c:\biztalkinbound. Both are configured to use the Pass Through pipeline.

Go to the Filters tab on the Send Port Group and add the following filter
BTS.ReceivePortName == ReceivePort1

Enable the Receive Location and start both Send Ports and the Send Port Group. If you drop a file into c:\inbound it will appear in both directories c:\filecopy and c:\biztalkinbound and it will retain the original file name.

Your original Receive Location should now point to the location c:\biztalkinbound and it will be configured for whatever pipeline you were using (XML Receive or a custom Flat File pipeline).

The benefit of this is you always have a copy of the original file in its original format in c:\filecopy. As an added treat I got my orchestration to delete files that were successfully processed from this directory, so c:\filecopy became the place that stored files that failed to process - the classic being a file that fails to match the correct schema, as described by the error at the beginning of this post. Send an email alert to the System Administrator and you'll be a hero.

Access Denied message when dropping a file into the GAC (or it doesn't work)

I was using Windows Server 2008 R2 and trying to add a file to the GAC. As you know the easiest way to do this is to drag and drop the file from one window into a window with the location C:\windows\assembly.

I kept getting the Access Denied error message. I tried running the command prompt as the Administrator, navigating to C:\Windows\assembly and then typing "Start . " to open an explorer window as the Administrator. Repeated the file drop but got Access Denied again.  At the time this was a .NET 3.5 assembly.  Now with .NET 4.0 the location of the GAC is no longer C:\Windows\assembly but C:\Windows\Microsoft.Net\assembly and drag and drop is disabled.

OK, so the other way to install it is to use GACUTIL but guess what - it's not installed on the production server. And no, you can't use an old version of GACUTIL from .NET Framework 1.1; it has to match the .NET version your DLL is built against. Google said it was in the Microsoft SDK for Windows but I didn't have time to download it.

I managed to copy it from a Hyper-V image I had with Visual Studio 2010 installed. The file for .NET v3.5 is located in C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin. You just need to copy GACUTIL.EXE and GACUTIL.EXE.CONFIG.

For .Net 4.0 the location is C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools and you need a third file 1033\gacutlrc.dll for gacutil to work.  When you copy this to the production server all 3 files can be in the same directory. 

The syntax is
gacutil /i myassemblyname.dll /f

To check the file is in the GAC use
gacutil /l myassemblyname           (don't add the .dll)

Hope this gets you out of the same hole I was in.

Biztalk Server 2010 Receive Location won't Enable

I configured my Receive Location on Biztalk Server 2010 but when I tried to enable it, it remained stubbornly disabled. No error message, it just won't Enable.

In my case it was a FILE receive port and what I had forgotten to do was to make sure the service account the Biztalk Application Host is running under has FULL permissions on the file location.

Remember "Read" is not enough because BizTalk insists on deleting the inbound file. That was it. Simples!

BizTalk Server 2010 ConfigurationFailure

I had a failure configuring BizTalk Server 2010. The Enterprise Single Sign-On installed but I got an error with the Group and the BizTalk Runtime. When I tried to run the installation again it showed an error beside the BizTalk Management database which stated that it had been installed and I needed to choose another database name.

This told me that the account I was using to install BizTalk did have the correct permissions to create databases. The error was because MSDTC was not configured correctly. This needs to be done on BOTH the BizTalk Server and the SQL Server. The correct settings are described below.

First though, delete the Biztalk Management database from the SQL Server because you don't want to end up with two. To configure MSDTC on Windows Server 2008 R2 go to Server Manager, select Roles -> Application Server -> Component Services. Select Distributed Transaction Coordinator and then Local DTC. Right click, select Properties then click on the Security tab. Make sure you are using the Network Service account as well as checking the boxes shown.

Remember you have to do this on both the SQL Server box and the Biztalk Server.

After I had configured MSDTC I re-ran the Biztalk Server Configuration tool and the configuration completed successfully. Happy Days!

Friday, January 21, 2011

Hiding Menu Items on a SharePoint List form

This might be a bit basic but I thought I would give a step by step guide on how to customise a SharePoint list form. The first step is to create a web control.

In your Visual Studio project you need a TEMPLATES folder and a CONTROLTEMPLATES folder beneath it. This is where you add your user control. Let's not worry too much about the code in the user control just yet.

Next we want to make the web control available when the SharePoint list form loads.
To do this add the line

<%@ Register TagPrefix="xyz" TagName="MyWebControl" src="~/_controltemplates/WebControl.ascx" %>

into the web page.
Now there are two ways to do this depending on what you want to achieve.

Firstly you can add the line into the master page so that is available to every page on the site.

Secondly you can add it to the custom page associated to the list.

In either event, the result is that the code on the web control is executed when your page loads. Since your web control is injected into the HTML of the page you can do all kinds of things, from adding inline CSS styles that are applied to the page to injecting JavaScript to hide a menu or menu item.

One approach is to add an asp:panel onto the ASCX and embed your css styles between the start and end tags of the panel. You can also embed javascript to change the way the page is displayed.

Your code behind can then determine when to make the panel visible or hidden which will determine if the styles are applied to the page or not.

If you need to have more control over the javascript go to the code behind and use the RegisterClientScript() method or add a literal control and set the text property to the javascript function. When the literal control is rendered, the javascript is executed.
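As a sketch of the literal-control approach (the element id and the script body are illustrative assumptions - inspect the rendered page with your browser's developer tools to find the real id of the menu item you want to hide):

```csharp
// ASP.NET code-behind fragment (requires System.Web); illustrative only.
protected void Page_Load(object sender, EventArgs e)
{
    var script = new System.Web.UI.WebControls.Literal();
    // 'someMenuItemId' is a placeholder id, not a real SharePoint element.
    script.Text = "<script type=\"text/javascript\">" +
                  "var mi = document.getElementById('someMenuItemId');" +
                  "if (mi) { mi.style.display = 'none'; }" +
                  "</script>";
    // when the literal renders, the browser executes the script
    this.Controls.Add(script);
}
```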

SharePoint 2007 Creating Custom Forms for Custom Lists

I was recently wondering how I was going to create custom forms (EditForm.aspx, DispForm.aspx) for a custom list.

Now I want to deploy my custom list as a feature of course so I need to add these forms and the associated schema.xml file into my Visual Studio project.

But how was I to go about creating the form?

I thought about using SharePoint Designer but a cold shiver ran down my spine.

Then a friend of mine pointed out this wonderful utility: OCDExportList
It is an extension to the stsadm command and is available as a WSP. It's deployed globally to your SharePoint site.
After that you can use the command

stsadm -o ocdexportlist -url http://mysiteurl -name listname -dir c:\temp

and it will generate the schema.xml file and associated forms for this list in the c:\temp directory. Copy these across to the feature folder of the custom list and you should be good to start customizing.

That's a great time saver.

SharePoint 2007 Approval Workflow US Date format in Emails

I am using SharePoint 2007 with SP2 in a 64bit environment. The SharePoint Server regional settings are set to English (UK) and the SharePoint site I am using is also set English UK settings. I am also using the out-of-the-box approval workflow so you would expect that any dates are shown in UK format both in the workflow history and in emails sent by the workflow.

Wrong. They display in US format (MM/dd/yyyy).

Now when I Googled this I found this site which suggested it is fixed in the October 2008 Cumulative update. Indeed the Microsoft support page explicitly states as fixed "Sharepoint workflow notification e-mail messages do not use locale date and time formats".

Wrong again.

Maybe it was fixed back in October 2008 (pre SP2) but I can tell you it's not working post SP2. I raised a support call with Microsoft and proved that the dates are indeed in US format when they shouldn't be.

You require the August 2010 Cumulative Update from Microsoft to be applied to get this to work. Once the hotfix is applied it all works perfectly, with the due date correctly appearing in UK format both in the workflow history and in the emails. This even works on workflows that were in flight before you applied the hotfix.

While I am glad to have a solution, Microsoft should be ashamed that something as basic as this has had to be fixed twice. Zero points, Microsoft!

Saturday, January 15, 2011

BizTalk Server Evaluation Version Upgrade to Full Version

One of my customers had installed an evaluation version of BizTalk Server 2009 for their test environment, and sure enough the version expired and BizTalk stopped working.
So they needed me to upgrade it to the full version (in this case we used the Developer Edition but the same process is used for upgrading to Standard or Enterprise Editions).
But when I started the install process it immediately gave the message that you can't upgrade the evaluation version and you must uninstall and reinstall.
At first sight this looks disastrous but in fact you can recover everything because uninstall does not delete the BizTalk databases. But there are a number of steps you should take before uninstalling BizTalk.

1. Export the applications to MSI files and Export the Bindings just in case it all goes horribly wrong.

2. Make a screen shot of the host instances in BizTalk Administration because these will be deleted and you will need to recreate them.

3. The host instances use service accounts and you'll need the passwords for these accounts.

4. Ensure that the account you are going to use to install BizTalk is the same one you used for the evaluation version. If, like me, you don't know, use an account that is a local administrator and is sysadmin on the SQL Server box where the databases are installed (db_owner on all the BizTalk databases may be enough).

5. Make sure you have a copy of the BTSNTSVC.exe.config file as this may contain configuration settings.

6. Have a backup of the Enterprise Single Sign-On master secret - and you must know what the password is (ideally the password hint will help). This is vital: if you don't have the password for the backup you will be hosed. I don't know if you are able to take the backup once the evaluation has expired. If you can, open a command window, navigate to C:\Program Files\Common Files\Enterprise Single Sign-On and use
ssoconfig -backupSecret backupfilename

OK, now you can uninstall the evaluation version and install the full version. Next you need to run the BizTalk Configuration wizard. Be sure to select Custom configuration and enter the database server name and the account credentials for services.
The Enterprise SSO option is currently configured so select Join an existing SSO system. You will get a warning symbol indicating you may need to restore the SSO master secret (you will definitely need to).
On the Group option select Join an existing BizTalk Group. On the BizTalk Server Runtime do not select Create In-Process Host and Instance because the In-Process Host is still present. Likewise do not create the Isolated Host and Instance.
Configure the other options appropriately and Apply the Configuration.
If all goes well the Configuration will complete successfully. But you are not done.

7. First open a command window and navigate to C:\Program Files\Common Files\Enterprise Single Sign-On. Then use
ssoconfig -restoreSecret backupfilename
It will prompt you for the password, showing the password hint. This is why point 6 was so important: if you don't have a backup or you don't know the password you are stuffed.

8. Open BizTalk Administration, go to Host Instances and add the ones that were originally there (which is why we took the screen shot in point 2). Use the same accounts and passwords and start them up.

9. Edit the BTSNTSvc.exe.config file and add any custom entries that you saved from point 5. Stop and restart the BizTalk Service(s).

That should be it - you should be up and running again. It's a pity Microsoft don't offer a better upgrade path than this. If the full version is available on MSDN my advice is to avoid using the evaluation copy.