briancaos – Brian Pedersen's Sitecore and .NET Blog

Sitecore Media Library integration with Azure CDN using origin pull


If your Sitecore website is heavy on media library content, you can offload your Sitecore instances by serving images from a Content Delivery Network (CDN). If you use Microsoft Azure, you do not need to upload images to the CDN, as Azure supports origin pull.

Origin pull is a mechanism where the CDN automatically retrieves any missing content item from an origin host the first time it is requested. Azure even supports query string parameters, so if you scale images with ?w=100, the Azure CDN will store the scaled image separately.

To set up origin pull in Azure CDN, you first go to your CDN profile:

Azure CDN Profile

Then you click the + sign to add an endpoint:

Azure CDN Add Endpoint

And add an endpoint with the type “Custom Origin”:

Azure CDN Endpoint with Custom Origin

The “Name” is the name of the endpoint. The “Origin hostname” is the URL of your public Sitecore website. Remember to specify the correct protocol: if your website runs HTTPS, the CDN should use HTTPS as well.

SETTING UP SITECORE:

The rest is configuration in Sitecore. You control the CDN properties using these settings, found in the Sitecore.config file:

<setting name="Media.MediaLinkServerUrl" value="https://myendpoint.azureedge.net" />
<setting name="Media.MediaLinkPrefix" value="-/media" />
<setting name="Media.AlwaysIncludeServerUrl" value="true" />
<setting name="MediaResponse.Cacheability" value="public" />
  • Media.MediaLinkServerUrl = The URL of the Azure CDN, as defined when creating the Azure endpoint
  • Media.MediaLinkPrefix = The media library link URL. Together with Media.MediaLinkServerUrl, the complete server URL is created. In the example, my URL is https://myendpoint.azureedge.net/-/media/[media library content]
  • Media.AlwaysIncludeServerUrl = Tells Sitecore to always include the server URL in media requests
  • MediaResponse.Cacheability = Sets the cache settings of any item to be publicly available, allowing the Azure CDN to use the MaxAge, SlidingExpiration and VaryHeader parameters.
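The resulting media URLs are simply the server URL, the prefix, and the media path concatenated. A minimal sketch of that composition (MediaCdnUrl and Compose are my own illustrative names, not Sitecore APIs, and the paths are placeholders):

```csharp
using System;

public static class MediaCdnUrl
{
    // Hypothetical helper: composes the kind of URL Sitecore renders when
    // Media.AlwaysIncludeServerUrl is true. Illustration only.
    public static string Compose(string serverUrl, string prefix, string mediaPath, string query = null)
    {
        string url = serverUrl.TrimEnd('/') + "/" + prefix.Trim('/') + "/" + mediaPath.TrimStart('/');
        return string.IsNullOrEmpty(query) ? url : url + "?" + query;
    }
}
```

Called as MediaCdnUrl.Compose("https://myendpoint.azureedge.net", "-/media", "images/logo.ashx", "w=100"), this yields https://myendpoint.azureedge.net/-/media/images/logo.ashx?w=100 — a scaled image request that the Azure CDN will pull from origin and cache.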

DRAWBACKS OF USING A CDN:

  • Your website needs to be public. When developing and testing you need to disable the CDN settings, as the Azure CDN cannot pull from a non-public website. Testing therefore happens against the running production website.
  • Security settings on media library items cannot be used. Once a media library item is on the CDN it is public to everyone.




Sitecore contact cannot be loaded, code never responds


In Sitecore, it is possible to encounter a situation where the calls that identify or lock a contact never respond, but no errors are returned.

A call to identify:

Tracker.Current.Session.Identify(contactName);

And a call to Load a contact:

Contact contact =
    contactRepository.LoadContactReadOnly(username);

Can both run forever without any errors or any timeout.

This situation can occur if the Contact Successor points to the original Contact in a loop. When merging a contact, Sitecore will create a new contact, the Surviving contact. The existing contact (called the Dying contact) still contains all the interaction data from before the merge, so instead of Sitecore having to update all data fields with a new ID, it creates a “Successor” pointer to the Dying Contact.

Surviving Contact

But in certain situations, the Dying Contact will also have a Successor, which points back to the Surviving Contact, creating an infinite loop:

The Dying Contact’s Successor points to the surviving contact.

The patterns for this situation are many, but usually involve merging and changing contact identifiers. It can be reproduced like this:

  • Create a contact “A”
  • Create a new contact “B”
  • Merge contact “B” with “A”
  • Merge contact “A” with “B”
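A loop like the one produced above can be detected with a simple visited-set walk over the Successor chain. This is only a sketch: ContactRecord is a hypothetical stand-in for the stored contact document, not a Sitecore or MongoDB type.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the stored contact document.
public class ContactRecord
{
    public Guid Id { get; set; }
    public Guid? Successor { get; set; }
}

public static class SuccessorChain
{
    // Walks the Successor pointers; returns true if the chain revisits a contact,
    // i.e. A -> B -> A, which is exactly the situation that hangs the contact calls.
    public static bool HasLoop(ContactRecord start, IDictionary<Guid, ContactRecord> contacts)
    {
        var visited = new HashSet<Guid>();
        var current = start;
        while (current != null && current.Successor.HasValue)
        {
            if (!visited.Add(current.Id))
                return true; // already seen this contact: infinite loop
            contacts.TryGetValue(current.Successor.Value, out current);
        }
        return false;
    }
}
```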

To avoid this situation, it is customary to rename the dying contact’s (“A”) identifier to an obscure name (a guid). But the renaming might fail if the dying contact is locked, leaving a contact with a reusable identifier. The “Extended Contact Repository” which I have described previously will unfortunately gladly create a new contact with an existing name.

HOW TO RESOLVE IT:

The situation needs to be resolved manually. Find the contact, open RoboMongo and search for the contact:

identifier = db.Identifiers.findOne({_id: /NAME_OF_IDENTIFIER/i});
contact = db.Contacts.find({_id: identifier.contact});

Copy the “Successor” value from the contact, and find the Successor:

successor = db.Contacts.find({_id: NUUID("b1e760d7-7c60-4b1d-818f-e357f303ebef")});

Right click the “Edit Document” button and delete the “Successor” field from the dying contact:

Delete Successor Field from the Dying Contact, breaking the infinite loop

This can be done directly in production, and the code reacts instantly once the loop has been broken.



Requesting Azure API Management URLs


Azure API Management is a scalable and secure API gateway/proxy/cache where you can expose your APIs externally and still have secure access.

In Azure API Management you create a “Product”, which is a collection of APIs that are protected using the same product key.

2 Azure API Management products, protected with a key

The 2 products above each contain a collection of APIs, and each product has its own key.

As a developer you can find the API Keys using the Azure API Management Service Developer Portal:

APIM Developer Portal

When clicking around you will eventually find the “Try it” button, which allows you to test your API endpoints:

Try it button

And here you can get the subscription key by clicking the icon shaped as an eye:

Find the key here

When calling any API, you simply need to add the subscription key to the request header in the field:

  • Ocp-Apim-Subscription-Key

This is an example of how to GET or POST to an API that is secured by Azure API Management. There are many ways to do it, and this is not the most elegant, but this code will work in production with most versions of .NET:

using System;
using System.IO;
using System.Net;
using System.Text;

namespace MyNamespace
{
  public class AzureApimService
  {
    private readonly string _domain;
    private readonly string _ocp_Apim_Subscription_Key;

    public AzureApimService(string domain, string subscriptionKey)
    {
      _domain = domain;
      _ocp_Apim_Subscription_Key = subscriptionKey;
    }

    public byte[] Get(string relativePath, out string contentType)
    {
      Uri fullyQualifiedUrl = GetFullyQualifiedURL(_domain, relativePath);
      try
      {
        byte[] bytes;
        HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create(fullyQualifiedUrl);
        webRequest.Headers.Add("Ocp-Apim-Trace", "true");
        webRequest.Headers.Add("Ocp-Apim-Subscription-Key", _ocp_Apim_Subscription_Key);
        webRequest.UserAgent = "YourUserAgent";
        webRequest.KeepAlive = false;
        webRequest.ProtocolVersion = HttpVersion.Version10;
        webRequest.ServicePoint.ConnectionLimit = 24;
        webRequest.Method = WebRequestMethods.Http.Get;
        using (WebResponse webResponse = webRequest.GetResponse())
        {
          contentType = webResponse.ContentType;
          using (Stream stream = webResponse.GetResponseStream())
          {
            using (MemoryStream memoryStream = new MemoryStream())
            {
              byte[] buffer = new byte[0x1000];
              int bytesRead;
              while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
              {
                memoryStream.Write(buffer, 0, bytesRead);
              }
              bytes = memoryStream.ToArray();
            }
          }
        }
        // For test/debug purposes (to see what is actually returned by the service)
        Console.WriteLine("Response data (relativePath: \"{0}\"):\n{1}\n\n", relativePath, Encoding.Default.GetString(bytes));
        return bytes;
      }
      catch (Exception ex)
      {
        throw new Exception("Failed to retrieve data from '" + fullyQualifiedUrl + "': " + ex.Message, ex);
      }
    }

    public byte[] Post(string relativePath, byte[] postData, out string contentType)
    {
      Uri fullyQualifiedUrl = GetFullyQualifiedURL(_domain, relativePath);
      try
      {
        byte[] bytes;
        HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(fullyQualifiedUrl);
        webRequest.Headers.Add("Ocp-Apim-Trace", "true");
        webRequest.Headers.Add("Ocp-Apim-Subscription-Key", _ocp_Apim_Subscription_Key);
        webRequest.KeepAlive = false;
        webRequest.ServicePoint.ConnectionLimit = 24;
        webRequest.UserAgent = "YourUserAgent";
        webRequest.ProtocolVersion = HttpVersion.Version10;
        webRequest.ContentType = "application/json";
        webRequest.Method = WebRequestMethods.Http.Post;
        webRequest.ContentLength = postData.Length;
        Stream dataStream = webRequest.GetRequestStream();
        dataStream.Write(postData, 0, postData.Length);
        dataStream.Close();
        using (WebResponse webResponse = webRequest.GetResponse())
        {
          contentType = webResponse.ContentType;
          using (Stream stream = webResponse.GetResponseStream())
          {
            using (MemoryStream memoryStream = new MemoryStream())
            {
              byte[] buffer = new byte[0x1000];
              int bytesRead;
              while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
              {
                memoryStream.Write(buffer, 0, bytesRead);
              }
              bytes = memoryStream.ToArray();
            }
          }
        }
        // For test/debug purposes (to see what is actually returned by the service)
        Console.WriteLine("Response data (relativePath: \"{0}\"):\n{1}\n\n", relativePath, Encoding.Default.GetString(bytes));
        return bytes;
      }
      catch (Exception ex)
      {
        throw new Exception("Failed to post data to '" + fullyQualifiedUrl + "': " + ex.Message, ex);
      }
    }

    private static Uri GetFullyQualifiedURL(string domain, string relativePath)
    {
      if (!domain.EndsWith("/"))
        domain = domain + "/";
      if (relativePath.StartsWith("/"))
        relativePath = relativePath.Remove(0, 1);
      return new Uri(domain + relativePath);
    }
  }
}

The service is simple to use:

AzureApimService service = new AzureApimService("https://yourapim.azure-api.net", "12a6aca3c5a242f181f3dec39b264ab5");
string contentType;
byte[] response = service.Get("/api/endpoint", out contentType);
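On newer .NET versions, the same call can be sketched with HttpClient and HttpRequestMessage instead of HttpWebRequest. This is my own alternative sketch, not the post's code; ApimRequest is an illustrative helper name and the URL and key are placeholders:

```csharp
using System;
using System.Net.Http;

public static class ApimRequest
{
    // Builds a GET request carrying the APIM subscription key in the
    // Ocp-Apim-Subscription-Key header, which is all the gateway checks.
    public static HttpRequestMessage Build(string domain, string relativePath, string subscriptionKey)
    {
        var uri = new Uri(new Uri(domain.TrimEnd('/') + "/"), relativePath.TrimStart('/'));
        var request = new HttpRequestMessage(HttpMethod.Get, uri);
        request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
        return request;
    }
}
```

The resulting request can then be sent with HttpClient.SendAsync, letting HttpClient handle connection pooling instead of the ServicePoint tuning in the code above.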



Webhook Event Receiver with Azure Functions


Microsoft Azure Functions is a solution for running small pieces of code in the cloud. If your code is small and has only one purpose, an Azure Function could be the cost-effective solution.

This is an example of a generic Webhook event receiver. A webhook is a way for other systems to make a callback to your system whenever an event is raised in the other system. This Webhook event receiver will simply receive the webhook event’s payload (the JSON that the other system POSTs to you), envelope the payload, and write it to a queue.

STEP 1: SET UP AN AZURE FUNCTION

Select a Function App and create a new function:

Create New Azure Function

 

STEP 2: CREATE A NEW FUNCTION

Select New Function and, from “API & Webhooks”, select “Generic Webhook – C#”:

Create Generic Webhook

Microsoft will now create a Webhook event receiver boilerplate code file, which we will modify slightly later.

STEP 3: ADD A ROUTE TEMPLATE

Because we would like to have more than one URL for our Azure Function (each webhook caller should have its own URL so we can differentiate between them), we need to add a route template.

Select the “Integrate” section and modify the “Route template”. Add {caller} to the field:

Add a Route Template

STEP 4: INTEGRATE WITH AZURE QUEUE STORAGE

We need to be able to write to an Azure Queue. In Azure Functions, the integration is almost out of the box.

Select the “Integrate” section and under “Outputs”, click “New Output”, and select the “Azure Queue Storage”:

Azure Queue Storage

Configure the Azure Queue Settings:

Azure Queue Settings

  • Message parameter name: The Azure Function knows about the queue through a parameter to the function. This is the name of that parameter.
  • Storage account connection: The connection string to the storage account where the Azure Queue is located.
  • Queue name: The name of the queue. If the queue does not exist (it does not by default), it will be created for you.

STEP 5: MODIFY THE BOILERPLATE CODE

We need to make small but simple modifications to the boilerplate code (I have marked the changes from the boilerplate code with comments):

#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json;

// The string caller was added to the function parameters to get the caller from the URL.
// The ICollector<string> outQueue was added to the function parameters to get access to the output queue.
public static async Task<object> Run(HttpRequestMessage req, string caller, ICollector<string> outQueue, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    // The JSON payload is found in the request
    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    // Create a dynamic JSON output, enveloping the payload with
    // the caller, a timestamp, and the payload itself
    dynamic outData = new Newtonsoft.Json.Linq.JObject();
    outData.caller = caller;
    outData.timeStamp = System.DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff");
    outData.payload = data;

    // Add the JSON as a string to the output queue
    outQueue.Add(JsonConvert.SerializeObject(outData));

    // Return status 200 OK to the calling system.
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        caller = $"{caller}",
        status = "OK"
    });
}

STEP 6: TEST IT

Azure Functions has a built-in tester. Run a test to ensure that you have pasted the correct code and entered the correct names in the “Integrate” fields:

Test

Use the Microsoft Azure Storage Explorer to check that the event was written to the queue:

Azure Storage Explorer

STEP 7: CREATE KEYS FOR THE WEBHOOK EVENT SENDERS

Azure Functions are not available unless you know the URL and the key. Select “Manage” and add a new Function Key.

Function Keys

The difference between Function Keys and Host Keys is that Function Keys are specific to that function, while Host Keys are global keys that can be used for any function.

To call your Azure Function, the caller needs to know the URL and the key. The key can be sent in more than one way:

  • In the URL, using the ?code=(key value) and &clientid=(key name) parameters
  • In the request header, using the x-functions-key HTTP header.
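Both options can be sketched like this (FunctionKeyExamples is my own illustrative helper; the function URL, key name, and key value are placeholders):

```csharp
using System;
using System.Net.Http;

public static class FunctionKeyExamples
{
    // Option 1: key in the query string (?code=<key value>&clientid=<key name>)
    public static Uri WithQueryString(string functionUrl, string keyName, string keyValue)
    {
        return new Uri(functionUrl
            + "?code=" + Uri.EscapeDataString(keyValue)
            + "&clientid=" + Uri.EscapeDataString(keyName));
    }

    // Option 2: key in the x-functions-key HTTP header
    public static HttpRequestMessage WithHeader(string functionUrl, string keyValue)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, functionUrl);
        request.Headers.Add("x-functions-key", keyValue);
        return request;
    }
}
```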

STEP 8: GIVE URL AND KEY TO CALLING SYSTEM

This is a Restlet Client example that calls my function. I use the QueryString to add the code and clientid parameters:



Sitecore Scheduled Task – Schedule time format and other quirks


The Sitecore task runner, usually called Scheduled Tasks, is a simple way of executing code with intervals. You configure scheduled tasks in Sitecore, at /sitecore/system/Tasks/Schedules:

Scheduled Task

The quirkiest configuration setting is the “Schedule” field, which is a pipe separated string determining when the task should run:

{start timestamp}|{end timestamp}|{days to run bit pattern}|{interval}

  • Start timestamp and End timestamp: Determines the start and end of the scheduled task.
    Format is the Sitecore ISO datetime, YearMonthDayTHoursMinutesSeconds.
    Example: 20000101T000000 = January 1st 2000 at 00:00:00.
    (the font Sitecore uses does not help reading the timestamp at all, I know).
    NOTE: If you do the format wrong, the task will run.
  • Days to run: A 7-bit pattern determining which days the task must run:
    1 = Sunday
    2 = Monday
    4 = Tuesday
    8 = Wednesday
    16 = Thursday
    32 = Friday
    64 = Saturday
    So, 127 means to run the task every day. To run the task on Saturday and Sunday, add the 2 values, 1+64 = 65.
  • Interval: The time between each run. 00:05:00 means that the task will run at 5 minute intervals.
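As a sanity check, the Schedule field value can be composed in code; the days value is just a bitwise OR of the numbers above. ScheduleDays and ScheduleField are my own illustrative names, not Sitecore types:

```csharp
using System;

// The 7-bit day pattern from the Schedule field, one bit per weekday.
[Flags]
public enum ScheduleDays
{
    Sunday = 1, Monday = 2, Tuesday = 4, Wednesday = 8,
    Thursday = 16, Friday = 32, Saturday = 64,
    All = 127
}

public static class ScheduleField
{
    // Builds the pipe separated Schedule value:
    // {start}|{end}|{days bit pattern}|{interval}
    public static string Build(DateTime from, DateTime to, ScheduleDays days, TimeSpan interval)
    {
        return string.Format("{0:yyyyMMddTHHmmss}|{1:yyyyMMddTHHmmss}|{2}|{3}",
            from, to, (int)days, interval);
    }
}
```

For example, Saturday | Sunday gives 65, matching the 1+64 arithmetic described above.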

WHY DOESN’T MY TASK RUN WITH MY SPECIFIED INTERVALS?

Sitecore uses no less than 2 sitecore.config settings to determine when the task runner should run:

<scheduling>
  <frequency>00:05:00</frequency>
  <agent type="Sitecore.Tasks.DatabaseAgent" method="Run" interval="00:05:00">
    <param desc="database">master</param>
    <param desc="schedule root">/sitecore/system/Tasks/Schedules</param>
    <LogActivity>true</LogActivity>
  </agent>
</scheduling>

The frequency setting determines how often the global Sitecore task runner runs at all.

The agent determines how often the tasks configured in the master database under the root /sitecore/system/Tasks/Schedules are run.

So, in the example above, the task runner wakes up every 5 minutes, checking the config file for agents to run. It will then run the agent at 5 minute intervals. If another task is running, it could block the task runner, delaying the agent from running. With the above settings, the best case scenario is that my agent runs every 5 minutes.

The tasks configured in Sitecore can also block. If a task should run every 5 minutes, but its execution time is 11 minutes, the agent would, in the best case scenario, run the task again after 15 minutes. To avoid this, you can mark your task as “async” in the configuration, but beware that long running (or never ending) tasks will then run simultaneously, slowing down Sitecore.

CAN I HAVE TASKS RUNNING ON MY CM SERVER ONLY?

Yes. You can add a new folder in Sitecore, and then add a new agent that points to the new folder as its root to the sitecore.config file of the CM server.

See more here: Sitecore Scheduled Tasks – Run on certain server instance.

CAN I RUN TASKS AT THE SAME TIME EVERY DAY?

Kind of. You can have your task running once a day within the same interval, using a little code.

See more here: Run Sitecore scheduled task at the same time every day.

IN WHAT CONTEXT DOES MY TASK RUN?

Sitecore has created a site called “scheduler” in which the context is defined:

<sites>
  <site name="scheduler" database="master" language="da" enableTracking="false" domain="sitecore" />
</sites>

To run the task in a different context, use a context switcher.

DO I HAVE A HTTP CONTEXT WHEN RUNNING SCHEDULED TASKS?

No.

DO I HAVE A USER WHEN RUNNING SCHEDULED TASKS?

Do not expect to have a user. Expect the context user to be NULL, unless you use a UserSwitcher.

CAN I RUN THE SAME CODE FROM DIFFERENT TASKS?

Yes. Sitecore has split the definition of the code to run from the definition of the schedule. The code is defined as a “command” where you specify the class and the method to run:

Task Commands

The schedule simply points to the command to run, and you can have as many schedules as you want:

Pointing to a command

WHAT IS THE “ITEMS” FIELD FOR?

Items Field

No one really knows what the items field is for, but according to old Sitecore folklore, you can add a pipe separated list of item GUIDs (or even item paths), and the “itemArray” parameter of the method you call will contain the list of items:

public void Execute(Item[] itemArray, CommandItem commandItem, ScheduleItem scheduleItem)
{
  foreach (Item item in itemArray)
  {
    // do something with the item
  }
}



Edit special field types in Sitecore Experience Editor – Custom Experience Editor Buttons replaces the Edit Frame


The Sitecore Experience Editor allows inline editing of simple field types like text and rich text (HTML) fields, and a few complex ones like links. But editing checkboxes, lookup values, multiselect boxes, or any custom field you might have developed yourself requires some custom setup.

Previously, the Edit Frame has been the weapon of choice. The Edit Frame opens a tiny shell with the fields of your choice when you click the control to edit.
Unfortunately, it hides the Experience Editor’s own buttons, so it is becoming deprecated and isn’t even available when using MVC to render the front end.

The Edit Frame will hide the standard Experience Editor Buttons

But fear not, as the Edit Frame functionality has simply been moved to the Experience Editor Buttons.

STEP 1: SET UP THE AVAILABLE BUTTONS

Go to the CORE database and find /sitecore/content/Applications/WebEdit/Custom Experience Buttons.

For your own pleasure, create a nice folder structure that matches your component structure, and add a “Field Editor Button” in the structure:

A Field Editor Button placed in a folder below Custom Experience Buttons.

In the “Fields” field of that button, add the fields that need to be editable, as a pipe separated list, like this:

  • FieldName1|FieldName2|FieldName3

STEP 2: CONFIGURE THE RENDERING

In the “Experience Editor Buttons”, add the button you created:

The button is added to the Experience Editor Buttons

STEP 3: TEST IT

Now, when clicking the rendering, the button you added is available:

Experience Editor Buttons

And when clicking it, the Edit Frame opens, and the fields are available for editing:

Edit Frame



.NET Session state is not thread safe


When working with the .NET session state, you should bear in mind that HttpContext.Current.Session cannot be transferred to another thread. Imagine that you, from Global.asax, would like to read the SessionID each time a session is started:

// This method inside Global.asax is called for every session start
protected void Session_Start(object sender, EventArgs e)
{
  MyClass.DoSomethingWithTheSession(HttpContext.Current);
}

To speed up performance you wish to use a thread inside DoSomethingWithTheSession. The thread will read the Session ID:

using System.Threading;
using System.Web;

public class MyClass
{
  public static void DoSomethingWithTheSession(HttpContext context)
  {
    if (context == null)
      return;

    // Here the context is not null
    ThreadPool.QueueUserWorkItem(DoSomethingWithTheSessionAsync, context);
  }

  private static void DoSomethingWithTheSessionAsync(object httpContext)
  {
    HttpContext context = (HttpContext)httpContext;

    // Oops! Here the session can no longer be read
    string sessionID = context.Session.SessionID;
  }
}

The code above will fail because the HttpContext is not thread safe: in DoSomethingWithTheSession() the session can be read, but in DoSomethingWithTheSessionAsync it can no longer be accessed.

THE SOLUTION: TRANSFER THE SESSION VALUES INSTEAD OF THE SESSION OBJECT:

To make it work, rewrite the DoSomethingWithTheSessionAsync() method to receive the values it needs, not the HttpContext object itself:

public class MyClass
{
  public static void DoSomethingWithTheSession(HttpContext context)
  {
    if (context == null)
      return;

    // Transfer the sessionID instead of the HttpContext and everything is fine
    ThreadPool.QueueUserWorkItem(DoSomethingWithTheSessionAsync,
      context.Session.SessionID);
  }

  private static void DoSomethingWithTheSessionAsync(object session)
  {
    // This works fine, as the string is thread safe.
    string sessionID = (string)session;

    // Do work on the sessionID
  }
}



C# Using Newtonsoft and dynamic ExpandoObject to convert one Json to another


The scenario where you convert one input JSON format to another output JSON format is not uncommon. Before C# dynamic and ExpandoObject, you would deserialize the input JSON to POCO model classes and use a factory class to convert them to another set of POCO model classes, which would then be serialized to JSON.

With the dynamic type and ExpandoObject you have another weapon of choice, as you can deserialize the input JSON to a dynamic object and convert the contents to another dynamic object that is then serialized. Imagine the following input and output JSON formats:

Input format:

{
	"username": "someuser@somewhere.com",
	"timeStamp": "2017-09-20 13:50:16.560",
	"attributes": {
		"attribute": [{
			"name": "Brian",
			"count": 400
		},
		{
			"name": "Pedersen",
			"count": 100
		}]
	}
}

Output format:

{
	"table": "USER_COUNT",
	"users": [{
		"uid": "someuser@somewhere.com",
		"rows": [{
			"NAME": "Brian",
			"READ_COUNT": 400
		},
		{
			"NAME": "Pedersen",
			"READ_COUNT": 100
		}]
	}]
}

Converting from the input format to the output format can be achieved with a few lines of code:

// Convert the input JSON string (here called myQueueItem) to a dynamic object
dynamic input = JsonConvert.DeserializeObject(myQueueItem);

// Create a dynamic output object
dynamic output = new ExpandoObject();
output.table = "USER_COUNT";
output.users = new dynamic[1];
output.users[0] = new ExpandoObject();
output.users[0].uid = input.username;
output.users[0].rows = new dynamic[input.attributes.attribute.Count];
int ac = 0;
foreach (var inputAttribute in input.attributes.attribute)
{
    var row = output.users[0].rows[ac] = new ExpandoObject();
    row.NAME = inputAttribute.name;
    row.READ_COUNT = inputAttribute.count;
    ac++;
}

// Serialize the dynamic output object to a string
string outputJson = JsonConvert.SerializeObject(output);

I’ll try to further explain what happens. The Newtonsoft.Json DeserializeObject() method takes a json string and converts it to a dynamic object.

The output Json is created by creating a new dynamic object of the type ExpandoObject(). With dynamic ExpandoObjects we can create properties on the fly, like so:

// Create a dynamic output object
dynamic output = new ExpandoObject();
// Create a new property called "table" with the value "USER_COUNT"
output.table = "USER_COUNT";

This would, when serialized to a Json, create the following output:

{
"table": "USER_COUNT"
}

To create an array of objects, you need to first create a new dynamic array and then assign an ExpandoObject to the position in the array:

// Create a dynamic output object
dynamic output = new ExpandoObject();
// Create a new array called "users"
output.users = new dynamic[1];
// Add an object to the "users" array
output.users[0] = new ExpandoObject();
// Create a new property "uid" in the "users" array
output.users[0].uid = input.username;

This generates the following Json output:

{
	"users": [{
		"uid": "someuser@somewhere.com"
		}]
}




Sitecore Rule – Personalize based on any field in any facet in your Contact


This Sitecore personalization rule was developed by my colleague Martin Rygaard, with the purpose of being able to personalize on any field in any facet of a contact.

Contact Facet Rule Set Editor

STEP 1: CREATE THE CONDITION

Create a new “Condition” below /sitecore/system/Settings/Rules/Definitions/Elements/???

The text of the Condition is:

where the [facetpath,,,facetpath] has [facetvalue,,,facetvalue]

STEP 2: CREATE A WHENCONDITION

This condition traverses the Contact path and returns true if the value matches the value described:

using System.Collections;
using Sitecore.Analytics;
using Sitecore.Analytics.Model.Framework;
using Sitecore.Analytics.Tracking;
using Sitecore.Diagnostics;
using Sitecore.Rules;
using Sitecore.Rules.Conditions;

namespace MyNamespace
{
  public class ContactFacetHasValue<T> : WhenCondition<T> where T : RuleContext
  {
    public string FacetValue { get; set; }

    public string FacetPath { get; set; }

    protected override bool Execute(T ruleContext)
    {
        Contact contact = Tracker.Current.Session.Contact;

        if (contact == null)
        {
          Log.Info(this.GetType() + ": contact is null", this);
          return false;
        }

        if (string.IsNullOrEmpty(FacetPath))
        {
          Log.Info(this.GetType() + ": facet path is empty", this);
          return false;
        }

        var inputPropertyToFind = FacetPath;

        string[] propertyPathArr = inputPropertyToFind.Split('.');
        if (propertyPathArr.Length == 0)
        {
          Log.Info(this.GetType() + ": facet path is empty", this);
          return false;
        }

        Queue propertyQueue = new Queue(propertyPathArr);
        string facetName = propertyQueue.Dequeue().ToString();
        IFacet facet = contact.Facets[facetName];
        if (facet == null)
        {
          Log.Info(string.Format("{0} : cannot find facet {1}", this.GetType(), facetName), this);
          return false;
        }

        var datalist = facet.Members[propertyQueue.Dequeue().ToString()];
        if (datalist == null)
        {
          Log.Info(string.Format("{0} : cannot find facet {1}", this.GetType(), facetName), this);
          return false;
        }

        if(typeof(IModelAttributeMember).IsInstanceOfType(datalist))
        {
          var propValue = ((IModelAttributeMember)datalist).Value;
          return (propValue != null ? propValue.Equals(FacetValue) : false);
        }
        if(typeof(IModelDictionaryMember).IsInstanceOfType(datalist))
        {
          var dictionaryMember = (IModelDictionaryMember) datalist;

          string elementName = propertyQueue.Dequeue().ToString();
          IElement element = dictionaryMember.Elements[elementName];
          if (element == null)
          {
            Log.Info(string.Format("{0} : cannot find element {1}", this.GetType(), elementName), this);
            return false;
          }

          string propertyToFind = propertyQueue.Dequeue().ToString();
          var prop = element.Members[propertyToFind];
          if (prop == null)
          {
            Log.Info(string.Format("{0} : cannot find property {1}", this.GetType(), propertyToFind), this);
            return false;
          }

          var propValue = ((IModelAttributeMember) prop).Value;
          return (propValue != null ? propValue.Equals(FacetValue) : false);
        }
        if (typeof(IModelCollectionMember).IsInstanceOfType(datalist))
        {
          var collectionMember = (IModelCollectionMember)datalist;
          var propertyToFind = propertyQueue.Dequeue().ToString();
          for (int i = 0; i < collectionMember.Elements.Count; i++)
          {
            IElement element = collectionMember.Elements[i];
            var prop = element.Members[propertyToFind];
            if (prop == null)
            {
              Log.Info(string.Format("{0} : cannot find property {1}", this.GetType(), propertyToFind), this);
              return false;
            }
            var propValue = ((IModelAttributeMember) prop).Value;
            if (propValue.Equals(FacetValue))
              return true;
          }
        }

      return false;
    }
  }
}

STEP 3: TEST IT

This is an example of a contact with facets, among them the “Personal” facet with the “FirstName” attribute:

Facet


When I create a personalization rule where “Personal.FirstName” is “Brian” and apply it to my page:

Contact Facet Rule Set Editor


Rule In Use


I should only be able to see this title when logged in as a user whose contact facet FirstName is “Brian”:

Yes, I am a Brian


MORE TO READ:

 

 


In Sitecore 9, the ProxyDisabler has been retired completely


Sitecore has finally retired the ProxyDisabler in Sitecore 9. Proxy items were the early version of item cloning and were deprecated in Sitecore 6. And now the ProxyDisabler has been removed.

There is no replacement. All you need to do is remove the line from your code.

// Old Sitecore 5,6,7,8 code:
public void Put(Item source)
{
  using (new ProxyDisabler())
  {
    // Do stuff with your item
  }
}

// New Sitecore 9 code:
public void Put(Item source)
{
  // Do stuff with your item
}

MORE TO READ:

So we are doing Sitecore MVP announcements now, are we?

Sitecore MVP 2018


Yes, once again Sitecore thought that my rants about contacts, the Experience Editor, SOLR, and other Sitecore related topics were good enough to be awarded the Sitecore MVP title.

My first award was given back in 2010. Back then, the Sitecore MVP title was a 2-year nomination, which only gave you a badge for your website. Today, the title brings more than just the glory. Access to early product releases, the MVP forums and the legendary MVP Summit adds value to the award.

This year, 5 of my colleagues in Pentia have also been awarded, making Pentia the only Danish company with 6 MVP awards:

  • Alan Coates, Technology MVP, for most of you known as one of the organizers behind SUGCON. Oh, and he also knows everything.
  • Christina Hauge Engel, the only Danish Digital Strategist MVP, and the go-to-girl when the personalization and analytics powers of Sitecore needs to be unleashed.
  • Jens Gustafsson, Technology MVP, my Swedish colleague, with the excellent blog on Sitecore issues.
  • Mads-Peter Jakobsen, former digital strategist, now holds the Ambassador MVP title. Where Christina solves the how’s behind analytics, Mads-Peter finds the why’s.
  • Thomas Stern, Technology MVP, is also organizing SUGCON’s, and like Alan Coates, knows everything.

Congratulations to all of the MVPs around the world. I hope to see many of you at the upcoming SUGCONs and the MVP Summit in Orlando.

Sitecore Object of type ‘System.Runtime.Serialization.TypeLoadExceptionHolder’ cannot be converted to type ‘Sitecore.Analytics.Model.Framework.IFacet’.


When deleting or refactoring Sitecore contact facets, and when using a shared session manager, this error can exhaust your solution to the point where the IIS application pool recycles:

ERROR Error executing the session end callback. Id: dd7b0466-af93-4130-a388-9e9eca9c0839 Exception: System.ArgumentException Message: Object of type ‘System.Runtime.Serialization.TypeLoadExceptionHolder’ cannot be converted to type ‘Sitecore.Analytics.Model.Framework.IFacet’. Source: mscorlib

The problem arises because the existing sessions in the Shared Session Manager contain the old facets, and when Sitecore tries to deserialize them, the objects have nowhere to go.

THE SOLUTION:

After you have deployed your solution, you need to delete all existing sessions in the shared session manager.

Please note that Sitecore stores the sessions in the TEMPDB database of your SQL Server. The table is called “dbo.SessionState”. Deleting is easy:

delete from SessionState

If you deploy to several front end servers in sequence, you will need to delete the rows in the SessionState table several times, as the not-yet-updated servers will continue to create new sessions containing facets unknown to the updated servers.

MORE TO READ:

COMPLETE ERROR MESSAGE:

ERROR Error executing the session end callback. Id: dd7b0466-af93-4130-a388-9e9eca9c0839 Exception: System.ArgumentException Message: Object of type ‘System.Runtime.Serialization.TypeLoadExceptionHolder’ cannot be converted to type ‘Sitecore.Analytics.Model.Framework.IFacet’. Source: mscorlib at System.RuntimeType.TryChangeType(Object value, Binder binder, CultureInfo culture, Boolean needsSpecialCast) at System.Reflection.RtFieldInfo.UnsafeSetValue(Object obj, Object value, BindingFlags invokeAttr, Binder binder, CultureInfo culture) at System.Runtime.Serialization.ObjectManager.DoValueTypeFixup(FieldInfo memberToFix, ObjectHolder holder, Object value) at System.Runtime.Serialization.ObjectManager.CompleteObject(ObjectHolder holder, Boolean bObjectFullyComplete) at System.Runtime.Serialization.ObjectManager.DoNewlyRegisteredObjectFixups(ObjectHolder holder) at System.Runtime.Serialization.ObjectManager.RegisterObject(Object obj, Int64 objectID, SerializationInfo info, Int64 idOfContainingObj, MemberInfo member, Int32[] arrayIndex) at System.Runtime.Serialization.Formatters.Binary.ObjectReader.RegisterObject(Object obj, ParseRecord pr, ParseRecord objectPr, Boolean bIsString) at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ParseObjectEnd(ParseRecord pr) at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run() at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, IMethodCallMessage methodCallMessage) at 
System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) at System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() at System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) at System.Web.SessionState.SessionStateItemCollection.get_Item(String name) at Sitecore.Analytics.Tracking.SharedSessionState.SharedSessionStateManager.OnItemExpired(String id, SessionStateStoreData item) at Sitecore.SessionProvider.SessionStateStoreProvider.ExecuteSessionEnd(String id, SessionStateStoreData item)

Sitecore 9 Configuration not available on Dependency Injection – LockRecursionException: Recursive upgradeable lock acquisitions not allowed in this mode


From Sitecore 8.2, Sitecore has implemented dependency injection for its own classes. Sitecore uses Microsoft’s Dependency Injection library.

Sitecore uses dependency injection to inject many things, including configurations. Therefore, you cannot access configuration until after your code has been injected.

Take the following example:

using Microsoft.Extensions.DependencyInjection;
using Sitecore.Configuration;
using Sitecore.DependencyInjection;

namespace MyCode
{
  public class ServicesConfigurator : IServicesConfigurator
  {
    public void Configure(IServiceCollection serviceCollection)
    {
      // This line will fail:
      var configuration = Factory.GetConfiguration();
      serviceCollection.AddTransient<MyClass>();
    }
  }
}

This code will throw an error:

[LockRecursionException: Recursive upgradeable lock acquisitions not allowed in this mode.]
System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(TimeoutTracker timeout) +3839391
System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLock(TimeoutTracker timeout) +45
Sitecore.Threading.Locks.UpgradeableReadScope..ctor(ReaderWriterLockSlim mutex) +107
Sitecore.DependencyInjection.ServiceLocator.get_ServiceProvider() +85 Sitecore.Configuration.Factory.<.cctor>b__0() +9
System.Lazy`1.CreateValue() +709 System.Lazy`1.LazyInitValue() +191 Sitecore.Configuration.Factory.GetConfiguration() +44

The implication is that none of your injected constructors can contain references to:

  • Databases
  • Site information
  • Settings

HOW TO WORK AROUND INJECTED CONSTRUCTORS THAT NEED SITECORE OBJECTS:

Imagine you would like to inject a repository that takes the Sitecore database as a constructor parameter, and you would like it to be the database of the current context.

First you create an interface:

using Sitecore.Data;

public interface IDatabaseFactory
{
  Database Get();
}

Then you create a concrete implementation of the interface:

public class ContextDatabaseFactory : IDatabaseFactory
{
  public Database Get()
  {
    return Sitecore.Context.Database;
  }
}
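A variation on the same idea (a sketch, not part of the original solution): if you need a fixed database rather than the context database, the same factory pattern defers the Factory.GetDatabase call until the factory is actually used, which happens after the container has been built. The database name “web” is an assumption for the example:

```csharp
using Sitecore.Configuration;
using Sitecore.Data;

public class WebDatabaseFactory : IDatabaseFactory
{
  public Database Get()
  {
    // Safe here: Get() is only called after injection,
    // when the service provider is fully constructed.
    return Factory.GetDatabase("web");
  }
}
```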

In the ServicesConfigurator you can now register the concrete implementation against the interface:

public class ServicesConfigurator : IServicesConfigurator
{
  public void Configure(IServiceCollection serviceCollection)
  {
    // The database factory to inject:
    serviceCollection.AddTransient<IDatabaseFactory, ContextDatabaseFactory>();
    // The class that needs the database in the constructor:
    serviceCollection.AddTransient<MyRepository>();
  }
}

And in MyRepository you reference the IDatabaseFactory in the constructor instead of the concrete Sitecore Database implementation:

public class MyRepository
{
  private readonly IDatabaseFactory _database;
  
  public MyRepository(IDatabaseFactory database)
  {
    _database = database;
  }
  
  public void DoTheActualCode()
  {
    _database.Get().GetItem("/sitecore/content/...");
  }
}

Many thanks to my cool colleagues who helped forge this solution.

MORE TO READ:

 

Azure API Management configure CORS in the policy


Cross-Origin Resource Sharing (CORS) allows a user agent to gain permission to access a web or REST service on a different domain than the site currently in use.

Modern browsers do not allow JavaScript on a website to call URLs on external domains. Unless you have CORS configured, you will experience a cross-origin error.

Microsoft Azure API Management also supports CORS; you configure it in the policy.

So in the Azure API Management publisher portal, go to Policies, select the Product and API to configure and select “Configure Policy”:

API Management Policies


Add CORS to the inbound rules and set the headers in the outbound rules:

<policies>
  <inbound>
    <base />
    <cors allow-credentials="true">
      <allowed-origins>
        <origin>http://website1.com</origin>
        <origin>http://website2.com</origin>
      </allowed-origins>
      <allowed-methods>
        <method>GET</method>
      </allowed-methods>
      <allowed-headers>
        <header>content-type</header>
        <header>accept</header>
      </allowed-headers>
    </cors>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <set-header name="Access-Control-Allow-Origin" exists-action="override">
      <value>@(context.Request.Headers.GetValueOrDefault("Origin",""))</value>
    </set-header>
    <set-header name="Access-Control-Allow-Credentials" exists-action="override">
      <value>true</value>
    </set-header>
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>

Configuration breakdown:

  • Inbound:
    • CORS allow-credentials=true allows API Management to accept credentials
    • The allowed-origins is a list of origins that have access to your service. You can add as many domains as you like.
    • Allowed-methods lists the methods you allow
    • Allowed-headers lists the headers you allow
  • Outbound:
    • The 2 headers Access-Control-Allow-Origin and Access-Control-Allow-Credentials are set on the response. This code automatically adds the calling domain to the Access-Control-Allow-Origin header.

The cool part of this configuration is that you not only allow cross-origin requests, you also control which domains have access to your service. If you call the API Management endpoint from a Restlet or Postman client without a valid key, you get the following error:

{
  "statusCode": 401,
  "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription."
}
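When calling the endpoint from server-side code (where CORS does not apply), the subscription key goes in the Ocp-Apim-Subscription-Key header. A minimal sketch; the endpoint URL and the key are placeholders:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiManagementClient
{
  public static async Task<string> GetAsync()
  {
    using (var client = new HttpClient())
    {
      // The subscription key identifies your API Management subscription
      client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "your-subscription-key");
      return await client.GetStringAsync("https://myendpoint.azure-api.net/myapi/resource");
    }
  }
}
```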

MORE TO READ:

Sitecore – what is the hash property in the image query string?


Have you also wondered why Sitecore adds a “hash=” property to the image query string?

https://yourwebsite.com/-/media/image.jpg?w=200&hash=A1FFA19B634EDF53A3AB3B757887E671F1C452A0

The hash key protects your images from being scaled by anyone other than your own server. The image above will only render if the hash key matches the width parameter:

The media request protection feature restricts media URLs that contain dynamic image-scaling parameters, so that only server-generated requests are processed. This ensures that the server only spends resources and disk space on valid image-scaling requests.

Sitecore, Protect media requests

This protects your server from using resources scaling images, if anyone tries to get an image from your server in another size. If the hash doesn’t match, the image is not scaled.
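If you construct scaled media URLs in your own code, Sitecore can append the hash for you. A sketch, assuming Sitecore 7.5 or later, where the HashingUtils class in the Sitecore.Resources.Media namespace is available:

```csharp
using Sitecore.Resources.Media;

public static class MediaUrlHelper
{
  public static string Protect(string url)
  {
    // Appends the hash parameter so the request passes
    // media request protection, e.g. for /-/media/image.jpg?w=200
    return HashingUtils.ProtectAssetUrl(url);
  }
}
```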

The feature can be disabled. In App_config/Include/Sitecore.Media.RequestProtection.config, set Media.RequestProtection.Enabled to false:

<setting name="Media.RequestProtection.Enabled" value="false" />

MORE TO READ:


Azure Functions – How to retry messages in the poison queue


When your Microsoft Azure Function reads data from a queue, the function is automatically triggered when an entry is added to the queue you wish to read from.

The SDK will call your function up to 5 times, and the message is only removed from the trigger queue if:

  • The function was successful
  • The function has failed 5 times

On the 5th failure, the message is moved to the poison queue, a separate queue named [queuename]-poison.

Queues and poison queues


HOW TO RETRY THE POISON QUEUE:

Microsoft has built-in methods for manual and automatic poison message handling, but there is no description of how to simply retry the messages. In many situations, the problem causing your issue is fixed elsewhere, and all you need to do is retry the messages.

The solution is easier than I thought: the poison queue is just another queue, so all you need to do is point the Azure Function to the poison queue, and the messages will be processed:

Read from Poison Queue

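Pointing a function at the poison queue can look like this sketch (the queue name, connection setting name and function body are assumptions, not from the original post):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class RetryPoisonMessages
{
  // Reads from the poison queue exactly as from any other queue.
  // "myqueue-poison" and "AzureWebJobsStorage" are example names.
  [FunctionName("RetryPoisonMessages")]
  public static void Run(
    [QueueTrigger("myqueue-poison", Connection = "AzureWebJobsStorage")] string message,
    TraceWriter log)
  {
    log.Info($"Retrying poison message: {message}");
    // ... same processing as the original queue-triggered function ...
  }
}
```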

MORE TO READ:

 

Sitecore install local SSL certificate for shared xConnect SOLR server


In Sitecore 9, the SOLR connection is secure by default. The effect is that if your development environment includes a shared SOLR server, your local IIS requires an SSL certificate issued for that server, even when your local site does not run HTTPS.

This makes installing a local dev environment a little bit more complicated, which is why my colleague Kristian Gansted made this guide for me:

STEP 1: GET THE SSL CERTIFICATE FOR THE SOLR SSL SERVER

DevOps should provide you with the appropriate .pfx file.
Double click the file and follow the guide:

STEP 2: USE THE CERTIFICATE IMPORT WIZARD TO INSTALL THE CERTIFICATE

Select the Local Machine and click next:

Certificate Import Wizard


On the “File to Import” page, the file name is already chosen, so just click Next and go to the “Private key protection” page. Do not select a password, just click “Next”:

No Password


In the Certificate Store, click “Browse…” and select the “Personal” store:

Certificate Store


Click “Next” and click “Finish”. The certificate is now installed.

The import was successful


STEP 3: ALLOW AppPool ACCESS TO THE CERTIFICATE

Find the “Manage computer certificates” control panel:

Manage Computer Certificates


Find the certificate under the “Personal” certificates, right click and find the “Manage Private Keys…” under “All Tasks“:

Certificates


Press “Add” (1) to add a new user.
Press “Locations” (2) and select the machine to search in the correct location.

Select the correct location


Then type “IIS APPPOOL\[name of IIS site]”:

Select User


Give the user full control.

Your IIS site is now ready to access the SOLR server.

MORE TO READ:

Sitecore open internal links in new window


Frequent users of Sitecore have already noticed that the “Insert Sitecore Link” dialog does not have a target selector:

Insert Internal Link


Yes, it’s true: if you wish to open an internal link in a new window, it’s a 2-step process. First you add the internal link. Then you select the link and click the “Hyperlink Manager”. From here you can choose the target of the link:

Hyperlink Manager


MORE TO READ: 

Creating dynamic arrays and lists using Dynamic and ExpandoObject in C#


In this previous post, C# Using Newtonsoft and dynamic ExpandoObject to convert one Json to another, I described how you can use the dynamic keyword and the ExpandoObject class to quickly transform JSON without the need for any concrete implementations of either the source or destination JSON.

This is an example of a dynamic list where you do not know the number of objects in the output array:

dynamic output = new List<dynamic>();

dynamic row = new ExpandoObject();
row.NAME = "My name";
row.Age = "42";
output.Add(row);
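The list above serializes directly with Newtonsoft.Json; each ExpandoObject member becomes a JSON property:

```csharp
using Newtonsoft.Json;

// Serializing the dynamic list from above:
string json = JsonConvert.SerializeObject(output);
// json is now: [{"NAME":"My name","Age":"42"}]
```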

USAGE IN REAL LIFE:

Imagine you need to convert the following JSON by taking only those rows where the age is above 18:

{
  "attributes": [
    { "name": "Arthur Dent", "age": 42 },
    { "name": "Ford Prefect", "age": 1088 },
    { "name": "Zaphod Beeblebrox", "age": 17 }
  ]
}

The code to transform the JSON would look something like this:

// Convert input JSON to a dynamic object
dynamic input = JsonConvert.DeserializeObject(myQueueItem);

// Create a list of dynamic object as output
dynamic output = new List<dynamic>();

foreach (var inputAttribute in input.attributes)
{
  // Note: dynamic JSON property access is case-sensitive,
  // so the property must be "age", not "Age"
  if (inputAttribute.age >= 18)
  {
    // Create a new dynamic ExpandoObject
    dynamic row = new ExpandoObject();
    row.name = inputAttribute.name;
    row.age = inputAttribute.age;
    // Add the object to the dynamic output list
    output.Add(row);
  }
}

// Finally serialize the output array
string outputJson = JsonConvert.SerializeObject(output);

The output is this:

[
  { "name": "Arthur Dent", "age": 42 },
  { "name": "Ford Prefect", "age": 1088 }
]

MORE TO READ:

Sitecore and WebApi


So you have some legacy WebApi code that needs to run in your Sitecore solution? Or are you just a WebApi expert who needs to use your favorite tool in the toolbox? Fear not, WebApi will run fine in your Sitecore solution.

You don’t need to use the native Sitecore 8.2 support for WebApi, you can use your own routes as well, and implement your nasty controller selectors, formatters and message handlers.

The API routes can be registered as a processor in the /sitecore/pipelines/initialize pipeline:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <sitecore>
    <pipelines>
      <initialize>
        <processor type="MyProject.RegisterApiRoutes, MyDll" />
      </initialize>
    </pipelines>
  </sitecore>
</configuration>

Please be aware that Sitecore has already taken the /api/sitecore and /api/rest/ routes for its own code, so use another route and you will avoid clashes with the Sitecore API.

This is my sample route registration, using the /myapi/ route instead of /api/:

using System.Web.Http;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;
using Sitecore.Pipelines;

namespace MyProject
{
  public class RegisterApiRoutes
  {
    public void Process(PipelineArgs args)
    {
      HttpConfiguration config = GlobalConfiguration.Configuration;

      SetRoutes(config);
      SetSerializerSettings(config);
    }

    private void SetRoutes(HttpConfiguration config)
    {
      config.Routes.MapHttpRoute("Features", "myapi/features", new { action = "Get", controller = "Feature" });
      config.Routes.MapHttpRoute("Default route", "myapi/{controller}", new { action = "Get" });
    }

    private void SetSerializerSettings(HttpConfiguration config)
    {
      JsonSerializerSettings settings = new JsonSerializerSettings { ContractResolver = new DefaultContractResolver() };
      config.Formatters.JsonFormatter.SerializerSettings = settings;
      config.Formatters.Remove(config.Formatters.XmlFormatter);
      config.EnsureInitialized();
    }
  }
}

And I can implement my “Features” controller:

using System.Collections.Generic;
using System.Web.Http;

namespace MyProject
{
  public class FeatureController : ApiController
  {
    public dynamic Get()
    {
      return "hello world";
    }
  }
}

MORE TO READ:

 
