I have a Custom Resolver configured for SDL Tridion 2011 which is designed to prevent Pages and Components that use a Multimedia Component from being published when a user publishes the linked Multimedia Component. This Custom Resolver replaces an old event handler which looked like this:
private void MMCmpPublishHandler(Component source, PublishEventArgs args, EventPhases phase)
{
    if (source.ComponentType == ComponentType.Multimedia)
    {
        args.PublishInstruction.ResolveInstruction.IncludeComponentLinks = false;
    }
}
The old event handler used to be called before the resolvers were invoked. I have configured my new resolver to fire after the default resolver by adding the following extract to my Tridion.ContentManager.config file:
<add itemType="Tridion.ContentManager.ContentManagement.Component">
  <resolvers>
    <add type="Tridion.ContentManager.Publishing.Resolving.ComponentResolver" assembly="Tridion.ContentManager.Publishing, Version=6.1.0.996, Culture=neutral, PublicKeyToken=360aac4d3354074b"/>
    <add type="UrbanCherry.Net.SDLTridion.CustomResolvers.DynamicBinaryLinkResolver" assembly="UrbanCherry.Net.SDLTridion.CustomResolvers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=e7729a00ff9574fb"/>
  </resolvers>
</add>
The code works fine, although it seems counterintuitive (from a performance perspective) to place the new resolver after the default resolver: the default resolver spends time finding all the resolved items only to have them all removed again.
I tried changing the order of the resolvers so that the new resolver is called first, but the new resolver is never called, and the following error appears in the event log:
Object reference not set to an instance of an object.
Component: Tridion.ContentManager.Publishing
Errorcode: 0
User: NT AUTHORITY\SYSTEM
StackTrace Information Details:
at Tridion.ContentManager.Publishing.Resolving.ResolveEngine.ResolveItems(IEnumerable`1 items, ResolveInstruction instruction, IEnumerable`1 contexts)
at Tridion.ContentManager.Publishing.Resolving.ResolveEngine.ResolveItem(IdentifiableObject item, ResolveInstruction instruction, PublishContext context)
at Tridion.ContentManager.Publishing.Handling.DefaultPublishTransactionHandler.HandlePublishRequest(PublishTransaction publishTransaction)
at Tridion.ContentManager.Publishing.Handling.DefaultPublishTransactionHandler.ProcessPublishTransaction(PublishTransaction publishTransaction)
at Tridion.ContentManager.Publishing.Publisher.QueueMessageHandler.HandleMessage()
Does anyone know if it is possible to call a Custom Resolver before a Default Resolver, and if not can you suggest an efficient way to achieve the same behavior as the old event handler?
We have opened an incident request with SDL Tridion support. Please find their answer below:
R&D department have confirmed that the issue you have found is a defect starting with the migration to SP1. It is now no longer possible to place a custom resolver before the default resolver. The issue is expected to be handled in a future release.
It is certainly possible to call your resolver first, however you will need it to create the initial list of resolved items. Since that is basically what the default resolver already does, it doesn't make much sense trying to add yours in front of it in my mind.
So yes performance wise it would make more sense to only have your resolver and have it replace the default one. But then you should really decompile the default one and rewrite it with your logic in there. Which is counter productive certainly considering hotfixes and upgrades would mean your resolver code might need to change in the future.
I found that the resolvers actually are so fast that I ignored the performance impact of removing the few items I needed removed in mine. It realistically is only adding items to a list and then you are removing a few of those items from the list again.
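For illustration, the shape of such a resolver might look roughly like the sketch below. This is only a sketch: the IResolver signature is as documented for Tridion 2011, but the ResolvedItem members and the filtering logic shown here are assumptions, not the actual UrbanCherry implementation.

using System.Linq;
using Tridion.Collections;
using Tridion.ContentManager;
using Tridion.ContentManager.ContentManagement;
using Tridion.ContentManager.Publishing;
using Tridion.ContentManager.Publishing.Resolving;

public class DynamicBinaryLinkResolver : IResolver
{
    public void Resolve(IdentifiableObject item, ResolveInstruction instruction,
                        PublishContext context, ISet<ResolvedItem> resolvedItems)
    {
        Component component = item as Component;
        if (component == null || component.ComponentType != ComponentType.Multimedia)
        {
            return;
        }

        // The default resolver has already run, so strip out everything it added via
        // component linking and keep only the Multimedia Component being published.
        foreach (ResolvedItem resolved in resolvedItems.Where(r => r.Item.Id != component.Id).ToList())
        {
            resolvedItems.Remove(resolved);
        }
    }
}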
I have an MVC website (v5, though I don't think that's related) where I have intentionally introduced an error when attempting to establish a database connection (wrong server IP in the connection string). When the user hits the HomeController, one of the constructor's dependencies is a UserRepository (used to get the current user's profile data), which depends on a database connection/session being available. When it's not, the Dependency Resolver can't inject the UserRepository, and when that happens it causes an error (as it does with any dependency of any controller), and I get a generic "No parameterless constructor defined for this object". Which is pretty useless.
So I'm trying to use a custom error page to retrieve the inner exception and display it in a friendly manner. (Because this error is happening when trying to acquire the HomeController, it never actually reaches the HandleErrorAttribute, hence the relying on CustomErrors).
So I have an ErrorsController with a series of actions...
Snippet from ErrorsController.cs
public ActionResult Error()
{
    return View("Error_500");
}

public ActionResult NotFound()
{
    return View("Error_404");
}
Snippet from web.config
<customErrors mode="On">
  <error statusCode="404" redirect="~/errors/notfound" />
  <error statusCode="500" redirect="~/errors/error" />
</customErrors>
The Error_500 page is pretty basic: it has a model type of HandleErrorInfo, but if that's not present it checks for exception details using Server.GetLastError(). Problem is, GetLastError() is always null, so I get my custom error page but no additional details beyond my generic "An unexpected error has occurred" message. After doing some digging I found that the method doesn't work after a redirect, which is the default way customErrors works. So I changed the web.config to use this line instead...
Snippet from web.config
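(The snippet itself is missing here; presumably it switched the redirect mode to ResponseRewrite, something along these lines:)

<customErrors mode="On" redirectMode="ResponseRewrite">
  <error statusCode="404" redirect="~/errors/notfound" />
  <error statusCode="500" redirect="~/errors/error" />
</customErrors>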
This way it won't cause a redirect and the GetLastError() should have my exception details about the database connection problem. Thing is, now I get the default ASP.NET error page with this message.
An exception occurred while processing your request. Additionally,
another exception occurred while executing the custom error page for
the first exception. The request has been terminated.
So I did some more digging using intellitrace, and I see the exception about the database connection. A little farther down I see the error about not having a parameterless constructor on HomeController and then one about encountering an error trying to create the controller of type 'HomeController'. But then I see one that says
Error executing child request for /errors/error
So I navigated directly to that path and the page works fine. But when it's used in customErrors WITH redirectMode="ResponseRewrite", it errors out. I put a breakpoint on the first (and only) line of the ErrorsController.Error() action, but it never gets hit. If I substitute the redirect path in the custom errors with a static file it works, but if I change it back to ~/errors/error it fails again.
Is there an issue with using MVC actions as URLs for customErrors when ResponseRewrite is specified?
"This happens because "ResponseRewrite" mode uses Server.Transfer under the covers, which looks for a file on the file system. As a result you need to change the redirect path to a static file, for example to an .aspx or .html file:"
<customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/Error.aspx"/>
See: https://dusted.codes/demystifying-aspnet-mvc-5-error-pages-and-error-logging
"Apparently, Server.Transfer is not compatible with MVC routes, therefore, if your error page is served by a controller action, Server.Transfer is going to look for /Error/Whatever, not find it on the file system, and return a generic 404 error page!"
See: CustomErrors does not work when setting redirectMode="ResponseRewrite"
In other words, you cannot use ResponseRewrite with views.
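As a rough sketch (the class and file names are illustrative, not from the question): if the error page is a plain Error.aspx, its code-behind can still reach the exception, because ResponseRewrite transfers rather than redirects:

using System;
using System.Web.UI;

public partial class ErrorPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Exception lastError = Server.GetLastError();
        if (lastError != null)
        {
            // Drill down to the root cause (e.g. the failed database connection)
            // and present it in a friendly way.
            Response.Write(Server.HtmlEncode(lastError.GetBaseException().Message));
        }
    }
}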
This is a well-known issue that has been problematic for developers because it does not lend itself to an easy or elegant solution. Bottom line, MVC does not play nice when using custom views for exception handling and custom user-friendly pages for HTTP errors. The stock Error.cshtml file (i.e., a View) in the Views\Shared folder is a great thing to have because it maintains the layout of the web page and provides exception details. But when you get HTTP errors you need to create a view to handle the status code errors (e.g., 404, 500, etc.). Note: if you go the route of sending HTTP errors to a view, the URL line will contain non-ideal info (see the weblinks below for further explanation).
You could route HTTP errors to the Error view, but I don't recommend it, because the Error view should be for application errors (i.e., exceptions) whereas a separate custom user-friendly page should be created for generic HTTP errors. The difference is that the former is an application problem the site developer needs to look at, whereas the latter is a user error (or at least should be) that does not require the developer to look at it (just my 2 cents).
An alternative is to bypass the views and use custom user-friendly pages for both application exceptions and HTTP errors. But, beware of two problems:
1.) The wrong status code is returned (usually 200), which can be a problem because it will be picked up and indexed by search engines (you do not want this!)
2.) The web browser shows a nonsensical URL in the address bar
These can be handled easily enough. See the following link (go down to the section on customErrors in web.config): https://dusted.codes/demystifying-aspnet-mvc-5-error-pages-and-error-logging
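For the first problem, a commonly suggested fix is to set the real status code from the custom error page itself, for example in its Page_Load (a sketch; the 500 value obviously depends on the page):

protected void Page_Load(object sender, EventArgs e)
{
    Response.StatusCode = 500;               // return the real error code instead of 200
    Response.TrySkipIisCustomErrors = true;  // stop IIS 7+ swapping in its own error page
}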
Below are other weblinks that I also found useful:
http://benfoster.io/blog/aspnet-mvc-custom-error-pages
https://msdn.microsoft.com/en-us/library/bb397417.aspx
https://www.youtube.com/watch?v=nNEjXCSnw6w
How do I display custom error pages in Asp.Net Mvc 3?
The last one appears to be yet another alternative: a custom hack to get around the problem of not being able to couple views with ResponseRewrite. This works by completely bypassing CustomErrors (i.e., customErrors mode="Off"). I have not tried this yet, but I am looking into it.
Final thought: keep an eye on the status codes your site returns when errors or exceptions are thrown, and make sure there are no 200 (i.e., OK) codes among them.
At my workplace we are in the process of upgrading our Time and Attendance (T&A) setup. Currently, we have physical terminals that employees use to check in and check out. These terminals communicate with a 3rd party T&A system via web service calls.
About the T&A web service:
Hosted on IIS 6
Communication is with WCF over HTTP
We're only interested in one of the exposed methods (let's call it Beep())
What I need to do:
Leave the original T&A system in place, untouched
Write a custom service that also reacts to calls to Beep()
So, essentially, I need to piggy-back on all the calls to Beep(), but I'm not sure what the best approach is.
What has been considered already:
Write a custom web service that implements the exact same contract as the T&A service and direct all the terminals to that custom service. The idea being that I can then invoke the original T&A service from my custom service, as well as apply any other logic required.
This seems overly invasive to me, and seems needlessly risky. We want to leave the original system as unmodified as possible.
Write a custom HTTP Handler to intercept calls to the original T&A service.
We've actually already done something like this in house, but our implementation takes the original HttpRequest, extracts the contents, invokes a custom service, and finally creates a new HttpRequest based on the original request so that the original web service call to Beep() is still made.
What I don't like about this approach is that the original HttpRequest is lost. Yes, a second, supposedly identical, request is created, but I don't know enough about HttpRequests to guarantee this is safe.
I prefer option 2, but it's still not perfect. Ideally we wouldn't need to destroy the original HttpRequest. Does anyone know if this is possible?
If not, can anyone suggest another way of doing this? Can IIS be configured to fork requests to two destinations?
Thanks
UPDATE #1
I have found a solution (documented here), but I'm still open to other options.
UPDATE #2
I like flup's solution (and justification). He gets the bounty :) Thanks flup!
You can configure the web service to use a custom operation invoker, an IOperationInvoker.
WCF deserializes the original HTTP request as always, but instead of calling Beep() on the existing web service class, it will call your invoker instead. The invoker does its special thing, and then calls Beep() on the original service.
The advantage over implementing an IHttpModule is that all things HTTP are still handled by the original web service's configuration, unchanged. You fork off at a higher level of abstraction, namely at the web service's interface, at the Beep() method.
The nitty gritty of setting up a custom operation invoker without changing the existing service class (which makes it harder):
Implement a custom IOperationBehavior which sets the custom IOperationInvoker on the service's Beep() method in its ApplyDispatchBehavior method.
Implement a custom IEndpointBehavior which sets the custom IOperationBehavior in its ApplyDispatchBehavior method.
Put these two behaviors, together with your IOperationInvoker, in a class library and add it to the existing service.
Then configure the service to use the IEndpointBehavior.
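A minimal sketch of the invoker itself (names like BeepInterceptingInvoker and DoCustomLogic are hypothetical; the interception logic is left as a placeholder):

using System;
using System.ServiceModel.Dispatcher;

public class BeepInterceptingInvoker : IOperationInvoker
{
    private readonly IOperationInvoker _inner; // the invoker WCF would otherwise have used

    public BeepInterceptingInvoker(IOperationInvoker inner)
    {
        _inner = inner;
    }

    public object[] AllocateInputs()
    {
        return _inner.AllocateInputs();
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        // Piggy-back on the call, then delegate to the original Beep() implementation.
        DoCustomLogic(inputs);
        return _inner.Invoke(instance, inputs, out outputs);
    }

    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    {
        return _inner.InvokeBegin(instance, inputs, callback, state);
    }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    {
        return _inner.InvokeEnd(instance, out outputs, result);
    }

    public bool IsSynchronous
    {
        get { return _inner.IsSynchronous; }
    }

    private static void DoCustomLogic(object[] inputs)
    {
        // Hypothetical placeholder for whatever needs to react to Beep() calls.
    }
}

In the IOperationBehavior's ApplyDispatchBehavior you would then wrap the existing invoker, e.g. dispatchOperation.Invoker = new BeepInterceptingInvoker(dispatchOperation.Invoker);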
See When and where to set a custom IOperationInvoker? and http://blogs.msdn.com/b/carlosfigueira/archive/2011/05/17/wcf-extensibility-ioperationinvoker.aspx for the invoker bit.
See Custom Endpoint Behavior using Standard webHttpEndpoint on how to configure a custom endpoint.
It actually sounds like you want to integrate your system into an ESB pattern. The MS solution to the ESB problem is BizTalk, but BizTalk is the thermonuclear-warhead nutcracker in this case. You don't want BizTalk.
Check out the results here for lightweight alternatives
I have found a solution using a custom IHttpModule. See sample below:
using System;
using System.Text;
using System.Web;

namespace ForkHandles
{
    public class ForkHandler : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.BeginRequest += new EventHandler(application_BeginRequest);
        }

        void application_BeginRequest(object sender, EventArgs e)
        {
            var request = ((HttpApplication)sender).Request;

            // Read the request body, then rewind the stream so the original
            // web service still receives the untouched request.
            var bytes = new byte[request.InputStream.Length];
            request.InputStream.Read(bytes, 0, bytes.Length);
            request.InputStream.Position = 0;

            var requestContent = Encoding.ASCII.GetString(bytes);

            // vvv
            // Apply my custom logic here, using the requestContent as input.
            // ^^^
        }

        public void Dispose()
        {
        }
    }
}
This will allow me to inspect the contents of a webservice request and react to it accordingly.
I'm open to other solutions that may be less invasive, as this one requires changing the deployed 3rd party web service's configuration.
If you want to intercept the message to the T&A WCF service, I would suggest using a custom listener, which can be plugged into the service call by making changes in the web.config.
This will be transparent.
Please look for WCF Extensibility – Message Inspectors.
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="ServiceModelMessageLoggingListener">
          <filter type="" />
        </add>
      </listeners>
    </source>
  </sources>
</system.diagnostics>
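For the message inspector route mentioned above, a minimal sketch (the class name and what you do with the copied message are assumptions):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class BeepMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
                                      InstanceContext instanceContext)
    {
        // A Message can only be read once, so work on a buffered copy and hand a
        // fresh copy back to WCF so the original Beep() call is unaffected.
        MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
        request = buffer.CreateMessage();

        Message copy = buffer.CreateMessage();
        // ... inspect 'copy' and forward its payload to the custom service here ...

        return null; // no correlation state needed
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Nothing to do on the way out.
    }
}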
I am reasonably experienced in BizTalk but new to the ESB Toolkit. We don't really need an ESB solution as such, but I would like to use the ESB Portal to display errors, modify messages and resubmit them.
I have successfully, as far as I can tell, installed and configured the ESB tool kit correctly on my dev machine.
I have managed to send errors to the portal by enabling routing for failed messages and from within an Orchestration by creating a message thus: FaultMessage = Microsoft.Practices.ESB.ExceptionHandling.ExceptionMgmt.CreateFaultMessage();
The messages display correctly in the portal and on selecting 'Edit' I am given the option to resubmit via WCF OnRamp, SOAP OnRamp and HTTPReceive. This is where my problem starts. I have been using the WCF OnRamp to resubmit and on doing so I get a message:
This message has been successfully resubmitted
However on returning to the home screen of the portal I now have a new error for the Microsoft.Practices.ESB application:
There was a failure executing the receive pipeline: "Microsoft.Practices.ESB.Itinerary.Pipelines.ItinerarySelectReceiveXml, Microsoft.Practices.ESB.Itinerary.Pipelines, Version=2.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "ESB Itinerary Selector" Receive Port: "OnRamp.Itinerary" URI: "/ESB.ItineraryServices.WCF/ProcessItinerary.svc" Reason: Error 135008: The itinerary was not found in the repository.
I presume I need to configure something here, perhaps a resolver for my message, but so far I have not been able to find a guide that will help me through this issue. Is there a walkthrough out there somewhere that shows the full end-to-end exception handling with the ESB Portal? I have managed to find plenty of help with getting messages into it, but not with configuring it for resubmit. Thanks.
Coincidentally I was trying to make this work today as well...
If you set the Itinerary resolver connection string on the WCF OnRamp's receive pipeline component configuration to use ITINERARY-STATIC:\headerRequired=true; (instead of ITINERARY-STATIC:\headerRequired=false;), then you'll get the following message in the event viewer:
The itinerary name is required and was not provided
Meaning the Itinerary isn't present in the custom SOAP header.
I also traced the message going from the ESB.Portal using Fiddler (after turning off the Message security in both the ESB.Portal and the BizTalk receive location). No Itinerary custom SOAP header.
After going through the ESB.Portal code, I found the cause in MessageResubmitter.cs:
[Serializable]
public static class MessageResubmitter
{
    /// <summary>
    /// Submits an XML message to the WCF OnRamp. The URL of the WCF OnRamp is defined in the
    /// portal web.config. Context properties are not resubmitted, they are expected to be
    /// applied by the receiving pipeline.
    /// </summary>
    /// <param name="doc">The XML document to submit.</param>
    /// <returns>True if the submission was successful, false if the submission failed.</returns>
    public static bool ResubmitWCF(XmlDocument doc)
    {
        try
        {
            ProcessRequestClient onRamp = new ProcessRequestClient();
            onRamp.SubmitRequest(null, doc.OuterXml); // <-- the itinerary argument is hard-coded to null
            return true;
        }
        catch (Exception)
        {
            return false;
        }
    }
}
The first argument of SubmitRequest is the Itinerary, which is set to null. This means the ESB.Portal does not resend the Itinerary as a custom SOAP header to BizTalk when you resubmit the message.
At the moment, I can think of the following options to make this work:
1) Create a (or modify the existing) generic WCF OnRamp to use the BRE to determine the Itinerary to be associated with the resubmitted message. This could however become complex, because you'll need to create your rules to be able to deal with any messages resubmitted from any step within your itineraries.
2) Modify the code of the ESB.Portal to be able to resend the Itinerary + current step as a Custom SOAP header.
I'm probably going for option 2.
The WCF OnRamp uses the ItinerarySelectReceiveXml pipeline; this can be configured to point to an Itinerary or a Business Rule, so the message can easily be routed depending on its message type and content.
My issue now is that a third party got there before me on our installation, so I am now looking into creating a new OnRamp and configuring the ESB Portal to pick that up in its resubmit list.
We had a similar issue recently. While we were exporting our itineraries to a local database, and deploying them, the ESB would not be able to find the itineraries.
It turned out a consultant we had on site had modified the esb.config file in ESB Toolkit to look for itineraries on a server instead of the local machine.
So, if, like me, you are sure the itineraries are being exported to the right place and that they are deployed, modify the esb.config connection string.
<connectionStrings>
  <add name="ItineraryDb" connectionString="Data Source=.;Initial Catalog=EsbItineraryDb;Integrated Security=True" providerName="System.Data.SqlClient" />
</connectionStrings>
We usually catch unhandled exceptions in Global.asax, and then we redirect to a nice friendly error page. This is fine for the Live environment, but in our development environment we would like to check if CustomErrors are Off, and if so, just throw the ugly error.
Is there an easy way to check if CustomErrors are Off through code?
I would suggest using the following property:
HttpContext.Current.IsCustomErrorEnabled
As mentioned here, IsCustomErrorEnabled takes additional things like RemoteOnly into consideration:
The IsCustomErrorEnabled property combines three values to tell you whether custom errors are enabled for a particular request. This isn't as simple as reading the web.config file to check the <customErrors> section. There's a bit more going on behind the scenes to truly determine whether custom errors are enabled.
The property looks at these three values:
The web.config's <deployment> section's retail property. This is a useful property to set when deploying your application to a production server. This overrides any other settings for custom errors.
The web.config's <customErrors> section's mode property. This setting indicates whether custom errors are enabled at all, and if so whether they are enabled only for remote requests.
The HttpRequest object's IsLocal property. If custom errors are enabled only for remote requests, you need to know whether the request is from a remote computer.
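A minimal sketch of how that could be used from Application_Error in Global.asax (the "~/FriendlyError" path is illustrative, not from the question):

protected void Application_Error(object sender, EventArgs e)
{
    // In development (custom errors off) do nothing, so the raw error page appears.
    if (!HttpContext.Current.IsCustomErrorEnabled)
    {
        return;
    }

    // On Live, swallow the exception and show the friendly page instead.
    Server.ClearError();
    Response.Redirect("~/FriendlyError");
}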
Yep, through the WebConfigurationManager:
System.Configuration.Configuration configuration =
    System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration("/");
System.Web.Configuration.CustomErrorsSection section =
    (CustomErrorsSection)configuration.GetSection("system.web/customErrors");
Once you have the section, you can check whether the mode is on or off as follows:
CustomErrorsMode mode = section.Mode;
if (mode == CustomErrorsMode.Off)
{
// Do something
}
This should do the trick...
using System.Web.Configuration;
using System.Configuration;
// pass application virtual directory name
Configuration configuration = WebConfigurationManager.OpenWebConfiguration("/TestWebsite");
CustomErrorsSection section = (CustomErrorsSection)configuration.GetSection("system.web/customErrors");
CustomErrorsMode mode=section.Mode;
I'm using a PortalSiteMapProvider object to create my navigation server control.
I've granted SharePoint object model access and impersonation rights in the control's CAS policy. Despite this, I can't retrieve the child nodes of the root node of the sitemap; they just return an error.
If I change the web app to run under full trust I can retrieve the child nodes.
So my question is: what CAS policies are required to fully access data in the sitemap provider object, and why can I currently access the root node but not its children?
Example code:
PortalSiteMapProvider siteProvider = PortalSiteMapProvider.WebSiteMapProvider;
PortalSiteMapNode rootNode = (PortalSiteMapNode)siteProvider.RootNode;

foreach (SiteMapNode node in rootNode.ChildNodes)
{
    // This loop returns 1 item with title "Error", with no exception thrown.
}
My Assembly has the following CAS requests:
[assembly: SharePointPermission(SecurityAction.RequestMinimum, ObjectModel=true, Impersonate=true)]
With appropriate IPermission entries in the deployment manifest. After deploying, the web app's web.config is updated to the WSS_Custom trust level as expected.
Any ideas?
Thanks
You could try using Reflector. This should show you the CAS permissions on that class.
Or use WSPBuilder, which will use reflection to generate the CAS file for you. I recommend this option as you shouldn't need to worry about editing your CAS files again!