Tuesday, November 4, 2014

Dynamics CRM 2013 Microsoft Web Page Dialog error

If you use Dynamics CRM 2013 for any length of time you will have noticed the Microsoft Web Page Dialog error box that pops up intermittently. It seems to occur randomly and is hard to reproduce. As far as I can see it doesn't result in any data loss, so I just ignore it.

But the screen is irritating and doesn't give a good impression to first-time users.

It can be disabled by logging on to CRM as a System Administrator and going to Settings, Administration, Privacy Preferences. On the Privacy Preferences dialog click the Error Reporting tab, tick the "Specify the Web application error notification preferences on behalf of users" checkbox and then select the "Never send an error report to Microsoft" radio button.

The errors are still happening, but at least the screen isn't popping up.

Saturday, July 26, 2014

Unit Testing Dynamics CRM Plugins

There are lots of approaches to unit testing Dynamics CRM plugins, and some frameworks for creating mocks.

Recently an XRM Test Framework has also been made available.

But I like the simple approach(es) outlined in this blog. It doesn't use any tools because you write the code yourself, which can be an advantage or a drawback. If you don't have time to evaluate a tool and think you can build unit tests quickly, then this is probably a good approach.

The first approach described in this blog may involve refactoring your code, but it is the best approach if you want to run unit tests during an automated build process. It involves moving most of the code out of the Execute method of the plug-in and putting it into a "logic" class library. The unit tests then call the class library directly, so you avoid having to execute the plug-in itself. It may not test that the attribute filter you registered is working properly, but it does test your code prior to deployment and so will catch most errors. These approaches are not mutually exclusive; you would combine them to ensure the quality of the code.
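To illustrate the refactoring, here is a minimal sketch of what I mean; the entity, attribute and class names are hypothetical, not from any particular project. The plug-in's Execute method just gathers the CRM services and delegates, so the logic class can be unit tested without a CRM runtime.

using System;
using Microsoft.Xrm.Sdk;

// Thin plug-in: all it does is wire up CRM services and delegate.
public class AccountNumberPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // All the business rules live in the testable class library.
        new AccountLogic().SetAccountNumber(target);
    }
}

// The "logic" class library: no plug-in pipeline required to test this.
public class AccountLogic
{
    public void SetAccountNumber(Entity account)
    {
        if (!account.Contains("accountnumber"))
            account["accountnumber"] = Guid.NewGuid().ToString("N").Substring(0, 8);
    }
}

A unit test then just creates an AccountLogic and an Entity and asserts on the result; no mocks required.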

 

Wednesday, July 23, 2014

VS 2012 and TFS 2013 Build Process Templates - How to add DefaultTemplate.11.1.xaml

Today was one of those days that forced me to blog. It was a deeply frustrating day but ended on a high and I thought to myself why hasn't anyone blogged that before?

I have a VS 2012 solution that was held in TFS 2012 and I've now moved it to TFS 2013.  I did not want to upgrade the development team to VS 2013 just at this moment. Besides, I thought VS 2012 and TFS 2013 were compatible, right?  I was trying to set up automated builds so I created a Build Controller on my build server using the TFS 2013 DVD image.

I opened my solution, selected Create Build Definition and chose the default template (TfvcTemplate.12.xaml). I immediately saw lots of errors, and a few minutes of Googling revealed that you can't use this TFS 2013 template with VS 2012 projects.

The link just below the drop-down list of build process templates should direct you to where the templates are stored in Source Control Explorer. My link showed #/1/BuildProcessTemplates/TfvcTemplate.12.xaml and that navigated to nowhere. I mean, the link doesn't even start with $, so how was that ever going to work?

So I started trying to find the XAML files for the Build Process Template that the Build Controller was using. If they weren't in TFS then presumably they were on the hard drive. No. Then how do I add a Build Process template to the list? Hours of searching revealed nothing and this was the deeply frustrating part of the day.

Then I found this blog. The author very kindly provides the source code for a console application that lets you list the Build Process Templates you have and, crucially, add a new one. BTW, the two references that you need to add for this to work can be found in the GAC (C:\Windows\assembly\GAC_MSIL).
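The gist of that console application, as a hedged sketch rather than the author's exact code (the argument handling here is mine), is a few calls to the TFS build client API:

using System;
using Microsoft.TeamFoundation.Build.Client;   // reference from the GAC
using Microsoft.TeamFoundation.Client;         // reference from the GAC

class ManageBuildTemplates
{
    // Usage: ManageBuildTemplates.exe <collectionUrl> <project> [add <serverPath>]
    static void Main(string[] args)
    {
        var collection = new TfsTeamProjectCollection(new Uri(args[0]));
        var buildServer = collection.GetService<IBuildServer>();

        if (args.Length >= 4 && args[2] == "add")
        {
            // Register a template that is already checked in at the given $/... path.
            IProcessTemplate template = buildServer.CreateProcessTemplate(args[1], args[3]);
            template.Save();
        }

        // List the build process templates the team project knows about.
        foreach (IProcessTemplate t in buildServer.QueryProcessTemplates(args[1]))
        {
            Console.WriteLine(t.ServerPath);
        }
    }
}

Run it with just the collection URL and project name to list templates, or with add and a $/ server path to register a new one.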

Now I already had the DefaultTemplate.11.1.xaml file that I needed for my VS 2012 solution because it was sitting there in the BuildProcessTemplates directory beneath the root of my project in TFS. 

I used the command line similar to this:

ManageBuildTemplates.exe http://jpricket-test:8080/tfs/TestCollection0 TestProject add  $/TestProject/BuildProcessTemplates/MyTemplate.xaml

to add my Build Process Template.  As soon as I selected a new Build Definition I could see my newly added template and I was cooking.  A few minutes later I had my first successful automated build.  From the depths of despair to deep joy in just a few minutes. 

Which of course made me think - why hasn't anybody blogged this before? And Microsoft what the hell were you thinking? Why isn't this essential tool included with TFS?  Thanks again Jason Prickett for providing the source.

Friday, July 11, 2014

Automated Builds and Incrementing Version Numbers

Incrementing the version number when you do a build in Visual Studio seems an obvious requirement.

I found this great post on using a T4 template. I followed the instructions carefully by creating a common DLL library project, removing Class1.cs and adding an AssemblyVersion.tt file using the code that was provided.

As the author points out, you just need to save the T4 template file and it will create a .cs file with the assembly information in it, with the appropriate build number. In my case it created an AssemblyVersion.cs file.

As advised, I removed the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files of all the projects that I wanted to apply this version number to.

I then added the AssemblyVersion.cs file to the projects as a link, but I moved the file under the Properties folder so it appears directly under AssemblyInfo.cs. If you've not added a link before, select Add, Existing Item... and select the file; the Add button in the dialog has a little down arrow which reveals the "Add As Link" option.

Sure enough, when I built the solution, my DLLs had the correct version number.

But I realised I had to manually save the T4 file each time to refresh the AssemblyVersion.cs file. Since I want automated builds, I was looking for a way to process the T4 template before each build.

I finally found the answer in this blog entry. There is a TextTransform.exe that will take the T4 template and produce the .cs file.

TextTransform.exe is located here
\Program Files\Common Files\Microsoft Shared\TextTemplating\11.0

or

\Program Files (x86)\Common Files\Microsoft Shared\TextTemplating\11.0

In the Pre-Build event of the version project I added this (note the macro syntax is $(ProjectDir), and the quotes protect against spaces in the path):
"C:\Program Files (x86)\Common Files\Microsoft Shared\TextTemplating\11.0\TextTransform.exe" "$(ProjectDir)AssemblyVersion.tt" -out "$(ProjectDir)AssemblyVersion.cs"

When I build the solution the version number is incremented.  Brilliant.

You can of course use lots of different variations for incrementing the build or revision number. But remember the 4 parts are:

major.minor.build.revision

I changed the original template to modify only the AssemblyFileVersion and not the AssemblyVersion. I am still able to distinguish between the DLLs from different builds, but I keep control of the all-important AssemblyVersion. The first deployment will be version 1.0.0.0; I can then branch the code and change the T4 template for the next release, 1.1.0.0.

To increment from the previous revision number using a T4 template there is a good blog post here. If you want to change both the build and revision numbers, use methods defined in class feature blocks, i.e. the declarations starting with <#+, as in this template:
<#@ template debug="false" hospecific="false" language="C#" #>
<#@ output extension=".cs" #>
using System.Reflection;
[assembly: AssemblyCompany("C Hoare & Co")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.<#= BuildNumber() #>.<#= RevisionNumber() #>")]
<#+
// Days since an arbitrary epoch, so the build number increases daily.
private int BuildNumber()
{
    int buildNumber = (int)(DateTime.UtcNow - new DateTime(2014, 7, 1)).TotalDays;
    return buildNumber;
}
#>
<#+
private int RevisionNumber()
{
    int revisionNumber = 0;
    // other code to increment revision number
    return revisionNumber;
}
#>

A few further points.
1. Make sure there is no white space after the final #> or you will get an error.
2. If you are using TFS, then in the pre-build event get the latest version of AssemblyVersion.cs and check it out before running TextTransform; in the post-build event check the .cs file back in (see the sketch just after this list).
3. To pause incrementing the build number, just unload the project from the solution.
4. SharePoint 2010 only recognises the major and minor numbers of the AssemblyVersion. That's why controlling the AssemblyVersion is important; you can increment the AssemblyFileVersion instead.
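For point 2, the build events would look something like this sketch. I'm assuming tf.exe is reachable via the $(DevEnvDir) macro and that the build account's workspace maps the project folder, so treat it as a starting point rather than the exact commands.

Pre-build event:

"$(DevEnvDir)tf.exe" get "$(ProjectDir)AssemblyVersion.cs" /noprompt
"$(DevEnvDir)tf.exe" checkout "$(ProjectDir)AssemblyVersion.cs"
"C:\Program Files (x86)\Common Files\Microsoft Shared\TextTemplating\11.0\TextTransform.exe" "$(ProjectDir)AssemblyVersion.tt" -out "$(ProjectDir)AssemblyVersion.cs"

Post-build event:

"$(DevEnvDir)tf.exe" checkin "$(ProjectDir)AssemblyVersion.cs" /comment:"Increment version" /noprompt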

UPDATE
When I started using this method with automated builds, I realised that using pre- and post-build events to process the T4 template produced some errors. I moved the commands into a PowerShell script which I execute as a scheduled task an hour before my scheduled builds run. I still have the version project, but it becomes a simple container for the T4 template and the .cs file; I don't need to add it to any solutions. I still like this approach: I have one place where I control the Assembly and Assembly File Versions.

Wednesday, June 25, 2014

Activation error occurred while trying to get instance of type LogWriter, key ""

I have blogged about this error before and this post has become one of my most viewed. I hope it provides the answer you need and it makes sense. Please feel free to leave a comment!


Microsoft.Practices.ServiceLocation.ActivationException : Activation error occurred while trying to get instance of type LogWriter, key ""
 ----> Microsoft.Practices.Unity.ResolutionFailedException : Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter", name = "(none)".
Exception occurred while: while resolving.
Exception is: InvalidOperationException - The type LogWriter cannot be constructed. You must configure the container to supply this value.



It occurs when using Microsoft.Practices.EnterpriseLibrary for logging errors. There are lots of blogs that reference this error, and they mostly say it is caused by:

a) an error in your configuration (often the database connection string)
b) not all the DLLs are present

That was not my situation. I was logging just to the event log and I was certain I had all the DLLs in the bin directory of my web service. The error occurred when I deployed the web service to a new environment. 

I spent many hours tracking down the cause. In this case it was because Microsoft.Practices.EnterpriseLibrary was installed on the target environment and the DLLs were registered in the GAC. 

Now as you know, .NET will look for DLLs in a specific order and the GAC is at the top of the list. The problem is that when the Enterprise Library DLLs are installed in the GAC, the library is unable to determine the location of the configuration file. I guess that when they are just present in the bin directory, the location of the configuration file is presumed to be "up a level". Removing the DLLs from the GAC was not an option, so I needed to find a way that would work with the DLLs in either the bin directory or the GAC.

I have a DLL that I use to simplify the calls when logging an error. It has several static methods to create log entries of different severities. Below is the code, showing just the critical error method.

using System.Diagnostics;   // for TraceEventType
using Microsoft.Practices.EnterpriseLibrary.Logging;

namespace Ciber.Common.Logging
{
    public static class Logging
    {
        public static void LogCritical(string message, string title, string category)
        {
            // priority 3, event id 1
            Logger.Write(message, category, 3, 1, TraceEventType.Critical, title);
        }
    }
}
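A call site then looks like this; the title and category strings are just examples:

try
{
    // ... do some work ...
}
catch (Exception ex)
{
    // write a critical entry to the configured trace listeners (the event log in my case)
    Logging.LogCritical(ex.ToString(), "Order Service", "General");
    throw;
}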

Now I have many places in my code where I reference the LogCritical method and I didn't want to change them. What I needed was a way to link up to the loggingConfiguration section in my web.config.

Firstly, I added two additional references and using statements:

using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging.Configuration;


Then I added this method to my class:

public static LogWriter CreateLogger()
{
    // Build the container from a config source we construct ourselves, so it
    // works whether the Enterprise Library DLLs are in the bin directory or the GAC.
    EnterpriseLibraryContainer.Current = EnterpriseLibraryContainer.CreateDefaultContainer(ReadConfigSource());
    var logWriter = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
    return logWriter;
}


What that allows is to create a default container from an XML configuration source that I specify, and which in my case contains the loggingConfiguration section.

I can then get an instance of the LogWriter class by calling EnterpriseLibraryContainer.Current.GetInstance<LogWriter>().

Before I list the ReadConfigSource method, let me first show the modified LogCritical method from above, which now reads:

public static void LogCritical(string message, string title, string category)
{
   LogWriter logger = CreateLogger();
   logger.Write(message, category, 3, 1, TraceEventType.Critical, title);
}

Note the lower case l on the logger object: the calls now go through the instance rather than the static Logger facade. So great: my calls that write to the event log don't need to be changed, and I have the ability to specify where my loggingConfiguration section is located.

Below is the code for the ReadConfigSource() method.  Two very important points here.

1) I use OpenWebConfiguration() with HttpRuntime.AppDomainAppVirtualPath which will resolve correctly both in debug mode and in deployed mode.  Don't be tempted to use OpenWebConfiguration(null) or OpenWebConfiguration("/") which will work fine in Debug mode but will fail when deployed.

2) I added this to the start of my loggingConfiguration string: @"<?xml version=""1.0"" encoding=""utf-8"" ?>". Without it, the loggingSection.ReadXml(xmlReader) line will fail.

// Also requires: using System.Configuration; using System.IO;
// using System.Web; using System.Xml;
public static IConfigurationSource ReadConfigSource()
{
    var configSource = new DictionaryConfigurationSource();

    // Resolves correctly both when debugging and when deployed (see point 1).
    string virtualPath = HttpRuntime.AppDomainAppVirtualPath;
    Configuration configFile = System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration(virtualPath);

    ConfigurationSection section = configFile.GetSection("loggingConfiguration");
    if (section != null)
    {
        // The XML declaration is required or ReadXml will fail (see point 2).
        string loggingConfig = @"<?xml version=""1.0"" encoding=""utf-8"" ?>";
        loggingConfig = loggingConfig + section.SectionInformation.GetRawXml();

        LoggingSettings loggingSection = null;
        using (var stringReader = new StringReader(loggingConfig))
        {
            var xmlReader = XmlReader.Create(stringReader,
                new XmlReaderSettings() { IgnoreWhitespace = true });

            loggingSection = new LoggingSettings();
            loggingSection.ReadXml(xmlReader);
            xmlReader.Close();
        }

        configSource.Add(LoggingSettings.SectionName, loggingSection);
    }

    return configSource;
}

I am pleased with the result.  Hope it works for you.

Wednesday, May 28, 2014

BizTalk Rules Engine - Lessons Learned

If you are looking for a few hours to while away then the BizTalk Rules Engine will certainly occupy the time. Hopefully this post might give you some of those hours back. This post is about using the BizTalk Rules Engine within an orchestration. In my case I want to set Boolean values that can then be used in decision shapes within the orchestration; the sort of thing would be "if IsCreateCase is true then do this, else do that". A quick summary of the things I learnt:
  1. Write the results of the BRE rule back into the message
  2. Keep your rules really simple
  3. Find a workaround for null nodes
  4. Put the Call Rules shape in a scope and add an exception handler
The easiest thing is to write the BRE rule results back into the message. When you set up the Call Rules shape it asks for only two things: the name of the BRE policy and the input parameters.
In my case the input is an XML message, and if you create an Action that writes back to a node in the message, it will actually clone the message.

For me it was a benefit to have these results within the message as it provided a means of checking the logic was correct. I created some xs:boolean elements in my message and set the default value to false. That is key, because the target element must exist within the message. Now all I had to think about was writing the conditions that would evaluate to true.

During testing of the policy in the Business Rules Composer I soon realised that it is best to keep the rules really simple if you can, because the Composer displays the name of each rule that fired and that is far easier to check when the rules are simple.

My next problem was around the message instances.  I am comparing the value of two nodes (both called CreateCase) in the message but in some valid message instances one of the nodes can be completely absent.

In code you would of course check for the existence of the Before/CreateCase node before checking its value, because otherwise you would get an exception with the first example message. I thought that BRE would do this if I used a condition which checked for its existence, but I just could not get that to work. No combination of logical AND and OR would do the trick.
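For comparison, this is the sort of existence check I mean in code. A minimal sketch; the file name and XPath are hypothetical, based on my Before/CreateCase node.

using System;
using System.Xml;

class NullNodeCheck
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.Load("message.xml"); // a sample message instance

        // Check the node exists before reading it; in some valid
        // message instances the node is completely absent.
        XmlNode node = doc.SelectSingleNode("/Case/Before/CreateCase");
        bool createCase = node != null && XmlConvert.ToBoolean(node.InnerText);

        Console.WriteLine("CreateCase: " + createCase);
    }
}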

In frustration I gave up and used the map which transforms the external schema to my internal one to add the necessary elements as nulls. I blogged about creating a null node here. I used the Logical Existence functoid (?) with the Logical NOT functoid (!) along with the Value Mapping functoid. Having a null element for CreateCase made all the difference and simplified my rules even more.

Having tested my rules thoroughly in the Business Rules Composer, I published the policy and deployed it to BizTalk. In the orchestration I added the Call Rules shape within a scope and created an exception handler that traps errors of type PolicyExecutionException (you need to add a reference to Microsoft.RuleEngine.dll).

In the end I am very pleased with the result.  One simple shape determines the logic that my message will follow through the orchestration.  If I have to amend the rules I can do so without redeploying the orchestration. Sweet. 

Monday, April 28, 2014

BizTalk Deployment Framework 5.5 Errors

I was working with the BizTalk Deployment Framework version 5.5 and I kept hitting an error when deploying. The error arose in the XML pre-processing step as it tries to create the port bindings file from the master bindings and the environment settings file. The error was:

InitSettingsFilePath:
Invalid settings file path (OK on server undeploy)

I traced it to the SettingsFilePath parameter, which is created during the MSBuild process, being blank.

It took me a while to realise that this was because the Install Wizard in 5.5 does not prompt for the location of the settings file as it did in previous versions. It now prompts for the account name used for configuring FILE send and receive ports, because it will automatically create the file locations for you and set up the permissions (hooray).

To solve the "Invalid settings file path" problem I had to edit the InstallWizard.XML file and add a new SetEnvUIConfigItem section to prompt for the settings file.  You can find the XML in the BTDF documentation.  After that, my deployment worked without error.

Another point is that the sample .btdfproj file has changed: it specifies several PropertyGroup sections where the names of the BizTalk hosts are given. I found this didn't work, so I went back to adding the BizTalkHosts element in the ItemGroup section and included the host instance names I wanted to bounce.


Monday, March 10, 2014

Unit Tests and Microsoft Practices Logging

I created a WCF web service where I am logging errors using the Microsoft.Practices.EnterpriseLibrary tools for handling exceptions and logging.
When I called the web service using the WCF Test Client, all was well and the errors were recorded successfully.

But when I added some unit tests that would also generate an exception when calling the web service, I received the following error:

Microsoft.Practices.ServiceLocation.ActivationException : Activation error occured while trying to get instance of type LogWriter, key ""
 ----> Microsoft.Practices.Unity.ResolutionFailedException : Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter", name = "(none)".
Exception occurred while: while resolving.
Exception is: InvalidOperationException - The type LogWriter cannot be constructed. You must configure the container to supply this value.


I spent some time Googling and finally found this post, which explains all.
You need to copy the relevant sections of the web.config that refer to Enterprise Library into the app.config of the unit test project.

This also applies if you have an appSettings entry (e.g. a Dynamics CRM connection string), as this also needs to be copied into the app.config.

Tuesday, February 4, 2014

SharePoint 2013 - Word has encountered a problem trying to open the file

I experienced this error "Word has encountered a problem trying to open the file" when trying to open a Word template (dotx) from SharePoint 2013.  I should explain that I was using Word 2010 and could open the template from its location in the file system, but when I uploaded it as a content type into SharePoint 2013 I received this error when trying to use the template.

Some blogs suggested the following:
a) install Office 2010 32-bit instead of Office 2010 64-bit
b) upgrade to SP2
c) remove SharePoint Foundation Support as a component of Office, and then do a repair.

I tried all of that and it still would not open the template. Then I found a blog that advised switching off Protected View: go to File, Options, Trust Center Settings and disable the Protected View options, which are enabled by default. That did the trick.

My advice - try switching Protected View off first before trying anything else. Now I come to think of it, I had seen this before and had forgotten about it. Hence the blog.

Tuesday, January 28, 2014

Change Date Format in Dynamics CRM 2013

Surprisingly I could not find a blog that told me how to change the date format in Dynamics CRM 2013. 

There are plenty of blogs which say how to do it for CRM 2011, but I already knew that. So what has happened to the Personal Options that were accessible from the File menu?

You will find it in the top right-hand corner as the little settings wheel. That will give you access to the Options and the About box. Once you have the Personal Options dialog open, you change the date format as you did before: go to the Formats tab and change the current format to whatever you want.

Sunday, January 26, 2014

Load a Word document into Internet Explorer and set Response.ContentType

About two years ago I wrote some code to construct a PDF document on the fly and display it in the browser. Yesterday I was trying to do the same thing for a Word document. If you've seen the earlier posts this month you will know I am using Aspose Words for .Net to perform a mail merge. I wanted the ability to preview the result before saving the output.
 
Now when I did this previously for a PDF file, I had a web service that returned a memory stream and I was able to load that into the Response.OutputStream object without difficulty. But I immediately ran into a problem with my web service, which is built with .NET Framework 4.0: the memory stream I returned became a marshalled object which does not expose the WriteTo() or ToArray() methods, which meant I could not easily load it into Response.OutputStream.
 
I suppose I could have found a solution, but instead I thought I would return it as a string. Alas, that gave a new set of problems, which I suspect were down to encoding when converting between the stream and the string. This morning I tried converting to a base64 string and that did the trick.
 
So firstly here is the code in the web service that converts the memory stream to base64.


public string PreviewMerge(DataMergeRequest req)
{
    // code to do the mail merge goes here
    MemoryStream msRawData = merge.MergeDataSet(ds, templatelocation);

    // ENCODE TO BASE 64 so the document survives the service boundary as a string
    string base64 = Convert.ToBase64String(msRawData.GetBuffer(), 0, (int)msRawData.Length);

    return base64;
}

On the ASPX page you need to remove everything below the Page directive.
In the Page_Load event you need this code. Note that you should NOT use Response.End - there is a known issue with it raising a ThreadAbortException. Use HttpContext.Current.ApplicationInstance.CompleteRequest() instead.


Response.Clear();
Response.ClearContent();
Response.ClearHeaders();
Response.Charset = "";

// get the result of the merge as a base64 encoded string
string strBase64 = ds.PreviewMerge(req);

// DECODE into a memory stream
byte[] raw = Convert.FromBase64String(strBase64);

using (MemoryStream decoded = new MemoryStream(raw))
{
    // load the stream into the Response output stream
    decoded.WriteTo(Response.OutputStream);
}

// set the content type for docx; the response is buffered by default,
// so the header can still be set after writing the body
Response.ContentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
Response.Flush();
HttpContext.Current.ApplicationInstance.CompleteRequest();

Friday, January 24, 2014

Downloading a Document Template from SharePoint

Since I've wasted a couple of hours on this, I'll share my findings. I'm creating a web service that will do a mail merge for me using Aspose Words. The document template is stored in a SharePoint document library as a content type, and I want to store the merged document back into the document library.

I tackled the upload first. Since my web service and SharePoint will be installed on separate servers, I call the SharePoint Copy.asmx web service to upload the document. That works fine. My next task was to retrieve the document template from SharePoint as a memory stream.

The preferred method is to use the GetItem method of the Copy.asmx service. No problem, I thought, as I already had a reference to it.

But try as I might I could not manage to download the template.  When you install a content type as a template the path to the document is something like this

DocLib/Forms/Some Letter/SomeLetter.dotx

I suspect it was something to do with this location that stopped it working. If I tried the same code on a document in the library itself it worked fine. I gave up in frustration and started Googling for alternatives.

I came up with the incredibly simple approach which I share below. It simply uses the DownloadData method of the WebClient. Since I know the absolute path of the template, it works a treat.


public MemoryStream DownloadSharePointDocument(string sourceUrl)
{
    string sharePointSiteUrl = ConfigurationManager.AppSettings["sharepointsiteurl"];

    if (!sharePointSiteUrl.EndsWith("/"))
    {
        sharePointSiteUrl = sharePointSiteUrl + "/";
    }
    sourceUrl = sharePointSiteUrl + sourceUrl;

    using (WebClient wc = new WebClient())
    {
        // pass the identity of the caller through to SharePoint
        wc.UseDefaultCredentials = true;

        byte[] response = wc.DownloadData(sourceUrl);
        // my own helper that wraps the byte array in a MemoryStream
        MemoryStream ms = HelperMethods.MemoryStreamFromBytes(response);
        return ms;
    }
}

Thursday, January 23, 2014

Convert FetchXML results to XML.

With the earlier versions of Dynamics CRM you could construct FetchXML queries and the results would be returned as XML. I liked that feature, but it has been deprecated in later releases. I had a need for this again recently because I wanted to get data from CRM in a format that was easy to mail merge. I'm a big fan of Aspose Words for .Net as I have used it several times in the past with great success.

Aspose has the advantage that you can design normal mail merge templates in Word, so it is easy for a user to amend the template. Aspose uses mail merge fields just as a typical mail merge template does, but you add them manually to your template rather than from a data source.

Aspose uses data tables as the data source, and I wanted a way to convert the results of a FetchXML query into a data table ready to mail merge. In fact, because Aspose can merge multiple data tables, I create a dataset which I then convert to an XML string using GetXml().

A key factor was that I wanted the code to be entirely ignorant of the columns in the result set. I want to be able to select any number of columns from a CRM entity, use lookup values and pick lists, and include columns from related entities. Unfortunately, when querying pick lists you only get the value and not the text, so you will need another method to get the text equivalent.
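Something along these lines would do it. This is a hedged sketch using the SDK's metadata request (the entity and attribute names are only examples), and in production you would cache the metadata rather than fetch it per value.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

static string GetOptionSetText(IOrganizationService service,
    string entityLogicalName, string attributeLogicalName, int optionValue)
{
    // Ask CRM for the attribute's metadata, which carries the option set labels.
    var request = new RetrieveAttributeRequest
    {
        EntityLogicalName = entityLogicalName,      // e.g. "contact"
        LogicalName = attributeLogicalName,         // e.g. "new_contactmechanism"
        RetrieveAsIfPublished = true
    };
    var response = (RetrieveAttributeResponse)service.Execute(request);

    var picklist = (EnumAttributeMetadata)response.AttributeMetadata;
    foreach (OptionMetadata option in picklist.OptionSet.Options)
    {
        if (option.Value == optionValue)
            return option.Label.UserLocalizedLabel.Label;
    }
    return null;
}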

I am not detailing how I do the Aspose merge in this post; it should be easy enough if you follow their samples. Just remember that for this approach to work, the field names in the data table need to be unique.

To get started I used the CrmSvcUtil.exe utility to generate the classes for the custom entities.

crmsvcutil /url:http://localhost:5555/Org1/XRMServices/2011/Organization.svc
/out:Xrm.cs  /namespace:Xrm  /serviceContextName:XrmServiceContext

I added the Xrm.cs to my project, along with references to Microsoft.Xrm.Sdk.dll and Microsoft.Xrm.Client.dll, which can be found in the SDK\bin directory. I also added System.Runtime.Serialization and System.Data. The connection string for CRM I added to the app.config. I have tried this code on Dynamics CRM 2011 and CRM 2013 and it works fine.


using System;
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Text;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Client;
using Microsoft.Xrm.Client.Services;
using Xrm;
 
       static IOrganizationService _service;
        static OrganizationService _orgService;
        static string FetchExpression()
        {
           
            String connectionString = ConfigurationManager.AppSettings["xrmConnectionString"];

            // Establish a connection to the organization web service using CrmConnection.
            Microsoft.Xrm.Client.CrmConnection connection = CrmConnection.Parse(connectionString);

            // Obtain an organization service proxy.
            // The using statement assures that the service proxy will be properly disposed.
            using (_orgService = new OrganizationService(connection))
            {
                string fetchquery = @"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
                      <entity name='contact'>
                        <attribute name='fullname' />
                        <attribute name='new_referencenumber' />
                        <attribute name='new_locationid' />
                        <attribute name='contactid' />
                        <attribute name='new_contactmechanism' />
                        <order attribute='fullname' descending='false' />
                        <filter type='and'>
                          <condition attribute='statecode' operator='eq' value='0' />
                          <condition attribute='fullname' operator='not-null' />
                        </filter>
                        <link-entity name='systemuser' from='systemuserid' to='owninguser' visible='false' link-type='outer' alias='user'>
                          <attribute name='domainname' />
                        </link-entity>
                      </entity>
                    </fetch>";

                _service = (IOrganizationService)_orgService;
                RetrieveMultipleRequest req = new RetrieveMultipleRequest();
                
                FetchExpression fetch = new FetchExpression(fetchquery);
                req.Query = fetch;
                RetrieveMultipleResponse resp = (RetrieveMultipleResponse)_service.Execute(req);
               
                // Create a dataset and datatable
                DataSet results = new DataSet("Results");
                DataTable Table1 = new DataTable("SingleRowTable");
                DataColumn column;
                DataRow workRow;


                foreach (Entity entity in resp.EntityCollection.Entities)
                {
                    //create the columns in the data table
                    foreach (KeyValuePair<String, Object> attribute in entity.Attributes)
                    {
                        column = new DataColumn();
                        switch (attribute.Value.GetType().Name)
                        {
                            case "AliasedValue":
                                column.DataType = entity.GetAttributeValue<AliasedValue>(attribute.Key).Value.GetType();
                                break;
                            case  "EntityReference":
                                column.DataType = entity.GetEntityReferenceValue<EntityReference>(attribute.Key).Name.GetType();
                                break;
                            case "OptionSetValue":
                                column.DataType = entity.GetAttributeValue<OptionSetValue>(attribute.Key).Value.GetType();
                                break;
                            default :
                                column.DataType = attribute.Value.GetType();
                                break;
                        }

                        column.ColumnName = attribute.Key;
                        Table1.Columns.Add(column);

                    }
                    // add the values to the row
                    workRow = Table1.NewRow();
                    foreach (KeyValuePair<String, Object> attribute in entity.Attributes)
                    {
                        switch (attribute.Value.GetType().Name)
                        {
                            case "AliasedValue":
                                workRow[attribute.Key] = entity.GetAttributeValue<AliasedValue>(attribute.Key).Value;
                                break;
                            case "EntityReference":
                                workRow[attribute.Key] = entity.GetEntityReferenceValue<EntityReference>(attribute.Key).Name;
                                break;
                            case "OptionSetValue":
                                workRow[attribute.Key] = entity.GetAttributeValue<OptionSetValue>(attribute.Key).Value;
                                break;
                            default:
                                workRow[attribute.Key] = attribute.Value;
                                break;
                        }

                    }
                    Table1.Rows.Add(workRow);
                    // only one row expected so exit
                    break;
                }
                results.Tables.Add(Table1);
                return results.GetXml();
            }
        }

 
The results are in a very simple XML structure, ready to use for the mail merge.

<Results>
  <SingleRowTable>
    <fullname>Adam Ant</fullname>
    <new_referencenumber>PSN-00000286</new_referencenumber>
    <new_locationid>Brighton</new_locationid>
    <contactid>2d31f962-5f44-e211-ae45-00155d010a10</contactid>
    <new_contactmechanism>100000009</new_contactmechanism>
    <user.domainname>MSCRM2011\Administrator</user.domainname>
  </SingleRowTable>
</Results>


Saturday, January 4, 2014

Securing a Web Service with Azure ACS

The Identity and Access Add-In for Visual Studio does a great job of securing a web site with a variety of mechanisms including Azure ACS. I was bitterly disappointed to find it did not offer the same ability for a web service.

In the end my salvation came from using the Sentinet Service Repository, which does allow me to virtualize the web service and can include authentication with ACS. It does so using binding configuration, and the extract below does the job of providing ACS authentication. Note that you need to be using the HTTPS protocol.

I've not yet had a chance to add this directly to a web service to see if it works. My hope is that just adding it will be enough, and then all I need to do is pass in the user name and password when I call the web service. What ACS will do is produce a SAML token which will be encrypted within the SOAP message.

<bindings>
  <customBinding>
    <binding name="IssuedToken">
      <security authenticationMode="IssuedToken">
        <issuedTokenParameters>
          <issuerMetadata address="https://mynamespace.accesscontrol.windows.net/v2/wstrust/mex" />
          <issuer address="https://mynamespace.accesscontrol.windows.net/v2/wstrust/13/username"
                  binding="ws2007HttpBinding" bindingConfiguration="AcsBinding" />
        </issuedTokenParameters>
      </security>
      <httpTransport />
    </binding>
  </customBinding>
  <ws2007HttpBinding>
    <binding name="AcsBinding">
      <security mode="TransportWithMessageCredential">
        <message clientCredentialType="UserName" negotiateServiceCredential="true"
                 algorithmSuite="Default" establishSecurityContext="false" />
      </security>
    </binding>
  </ws2007HttpBinding>
</bindings>

Calling a WCF Web Service secured with Azure ACS


This is some sample code for a client console application calling a WCF web service that is secured with Windows Azure Access Control Service. In fact all I need to do is pass the username and password in the client credentials and my web service will do the authentication for me. If I supply an incorrect password then a MessageSecurityException is thrown, so I need to make sure I have caught and handled it in some way.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ACE.TestClient.ACS
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                VirtualService.VirtualInterfaceClient c = new VirtualService.VirtualInterfaceClient();
                c.ClientCredentials.UserName.UserName = "MyService";
                c.ClientCredentials.UserName.Password = "Hydrogen1";

                VirtualService.Event input = new VirtualService.Event() { Date = DateTime.Now, Id = "123", Name = "Peter Rabbit" };
                string result = c.PostEvent(input);
                Console.WriteLine("Service Returned: " + result);
            }
            catch (System.ServiceModel.Security.MessageSecurityException mex)
            {
                // thrown when ACS rejects the supplied credentials
                Console.WriteLine("Failed to authenticate: " + mex.Message);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}

Calling a REST Web Service with an X509 certificate

I was recently looking at what client code I needed in order to call a REST web service that was secured with an X509 certificate.

In the first case I assume that the client device (maybe a tablet) has an X509 certificate installed. So what code do I need to send some JSON to this REST web service and include the X509 certificate? The simple example below works. (For sending a user name and password instead, see the ACS client sample above.)

using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;

namespace TestClientBizTalkService
{
    class Program
    {
        const string JsonPayload = "{\"ns0:Event\":{\"@xmlns:ns0\":\"http://RESTDEMO.Event\",\"Id\":\"444\",\"Date\":\"1999-05-31\",\"Name\":\"A new event\"}}";
        static void Main(string[] args)
        {
            try
            {
                WebClientWithSslCertificate c = new WebClientWithSslCertificate();
               
                c.Headers[HttpRequestHeader.ContentType] = "application/json";
                string result = c.UploadString("https://acesentinetpot/SelfHostedNode/BizTalkRestMutualX509", "POST", JsonPayload);
                Console.WriteLine("Service Returned: " + result);
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }

            Console.WriteLine("Done");
            Console.ReadLine();
        }

        class WebClientWithSslCertificate : WebClient
        {
            protected override WebRequest GetWebRequest(Uri address)
            {
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(address);
                request.ClientCertificates.Add(GetMyCertificate());
                return request;
            }

            private X509Certificate2 GetMyCertificate()
            {
                X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);

                try
                {
                    store.Open(OpenFlags.OpenExistingOnly);
                    X509Certificate2Collection collection = (X509Certificate2Collection)store.Certificates;
                    X509Certificate2Collection fcollection = (X509Certificate2Collection)collection.Find(X509FindType.FindBySubjectName, "ClientTestCertificate", true);
                    if (fcollection.Count > 0)
                    {
                        return fcollection[0];
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.ToString());
                }
                finally
                {
                    if (store != null)
                    {
                        store.Close();
                    }
                }
               
                return null;
            }
        }
       
    }
}