CRM 2016 plugins firing inconsistently

I have an entity with 4 plugins registered on the update message.
Two of the plugins are registered on the pre-operation pipeline stage and two of the plugins are registered asynchronously on the post-operation pipeline stage. All four plugins have different filtering attributes.
Each of the plugins serializes its context to an xml file as soon as the Execute method is entered. This is functionality baked into a base class we have and I have no reason to believe this would fail.
When testing the same record for an update I get inconsistent results.
Sometimes only the pre-operation pipeline plugins fire and sometimes all four plugins fire.
What is odd is that I can tell, from the values on the record I'm testing, that all four plugins have fired. However, in many cases the serialized context file isn't generated.
I've experimented with changing both of the async plugins to synchronous and that seems to temporarily fix the issue. I've also experimented with disabling the pre-op plugins and only letting the async fire.
Has anyone dealt with a similar issue?

Welcome to the hell of debugging plugins. As a best practice, I always suggest having one plugin per event and entity. That way debugging is simpler and you can handle more complex scenarios. Inside the plugin I check the Target to know which fields were updated.
As for your inconsistent results, it might be caused by the async steps. Did you configure the order in which they are triggered? By default they are all at rank 1, which means the order is effectively random.
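For reference, a minimal sketch of the one-plugin-per-event approach with the Target check; the entity and attribute names are placeholders, not from the original question:

```csharp
// Minimal sketch of a single update plugin that branches on the Target.
// On an update message, Target contains only the attributes that changed.
using System;
using Microsoft.Xrm.Sdk;

public class AccountUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target"))
            return;

        var target = context.InputParameters["Target"] as Entity;
        if (target == null)
            return;

        if (target.Attributes.Contains("creditlimit"))
        {
            // handle the credit limit change here
        }

        if (target.Attributes.Contains("statuscode"))
        {
            // handle the status change here
        }
    }
}
```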

Related

Update clients after updating Firestore collection name

I have a Firestore collection that I need to rename.
To do that I'll have to do two things: one, rename the collection; two, update my app (only web right now) to use the new collection name.
My problem is that if I just go ahead and do that, any user that has not refreshed the app won't be able to find the renamed collection.
So, my question is: Is there any best practice to handle this scenario?
I can think of a couple of options:
Somehow forcing a reload of the web apps immediately after renaming the collection.
Set a feature flag so that the web apps enter maintenance mode while I update everything, then reload the web apps once the change is finished. Unfortunately the currently deployed web app doesn't have a maintenance mode to enable, so this doesn't seem to be a valid option.
However, I'd like to hear about other options. There might be some best practice that I'm missing. Moreover, I'm aware this is a problem that might be more general than just related to Firestore. For example when changing a REST API endpoint, so I guess there must be some tried and tested solutions out there.
I tried searching for best practices regarding this and couldn't find any.
Also, if I were consuming a REST API this would be easier to solve, because I could rename the collection on the server side and keep the API unchanged. But given that Firestore gets consumed directly from the web app, I don't have that layer of indirection.
Locking out outdated clients is a common practice, but it leads to a worse user experience. It also requires a mechanism for the clients to detect that they're outdated, which you don't seem to have.
The most common practice I know of is to perform dual writes to both the old and the new collection while clients are updating.
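As a sketch of the dual-write idea, here it is server-side in C# with the Google.Cloud.Firestore SDK; the collection names are placeholders, and the same batched-write pattern exists in the Firebase web SDK:

```csharp
// Sketch of dual writes to the old and new collection during migration.
// "projects" and "projects-v2" are placeholder collection names.
using System.Collections.Generic;
using System.Threading.Tasks;
using Google.Cloud.Firestore;

public static class DualWriter
{
    public static async Task SaveAsync(FirestoreDb db, string id,
        Dictionary<string, object> data)
    {
        // A batch makes the two writes atomic: either both collections
        // receive the document, or neither does.
        WriteBatch batch = db.StartBatch();
        batch.Set(db.Collection("projects").Document(id), data);
        batch.Set(db.Collection("projects-v2").Document(id), data);
        await batch.CommitAsync();
    }
}
```

Once all clients are on the new version, you stop writing to the old collection and backfill or delete it.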

Integration Workflow Error in CRM 2016 Online

When using DocuSign's custom workflow activities (the integration workflow) in CRM Online, I encountered an error about a null value.
I have followed the sequence correctly (CreateEnvelope, AddDocument, AddRecipients, then GetSignature) but I still get the error. I think the workflows are part of the solution, so we don't have access to the code unless we decompile it, which is something we don't want to do. We even suspect it might be illegal, since it's not our DLL.
How can we resolve this error?
What version of the DocuSign base solution are you currently running? The base solution is separate from the workflow solution, and it's generally recommended that you install the latest version of the base solution alongside the workflow solution.
A few things can cause this issue:
A refresh token that is set to null in your DocuSign configuration. This is the most likely problem. Go to "DocuSign Config" in the Settings dropdown on the sitemap, and open up the active DocuSign configuration. When it loads, you'll see a "Linked to O365" field: click the "refresh" link. This will ensure a token is created and that the workflow solution can get to it.
An invalid envelope ID. Make sure that the workflow is configured correctly and the envelope information is successfully getting passed to the AddDocument activity.
The entity or entity ID is not available to the workflow activity. The AddDocument activity needs to know this so it can connect to the entity in question and retrieve the note attachment(s). This could potentially happen if, instead of setting up a workflow, you set up a custom action and did not assign it to an entity (see the sketch below).
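To illustrate the last two points, here is a hypothetical custom workflow activity, not DocuSign's actual code, showing how a missing envelope ID or a missing entity association surfaces as a null-value error; all names are invented:

```csharp
// Hypothetical activity, NOT DocuSign's code; names are for illustration.
using System;
using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

public class AddDocumentLikeActivity : CodeActivity
{
    [Input("Envelope Id")]
    public InArgument<string> EnvelopeId { get; set; }

    protected override void Execute(CodeActivityContext executionContext)
    {
        var context = executionContext.GetExtension<IWorkflowContext>();

        // A global custom action has no primary entity, so anything the
        // activity tries to look up against the record will fail.
        if (context.PrimaryEntityId == Guid.Empty)
            throw new InvalidPluginExecutionException(
                "No entity is associated with this workflow execution.");

        // If the previous step didn't pass the envelope ID along,
        // the activity sees null here.
        var envelopeId = EnvelopeId.Get(executionContext);
        if (string.IsNullOrEmpty(envelopeId))
            throw new InvalidPluginExecutionException(
                "Envelope Id was not supplied by the previous step.");
    }
}
```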
If none of these things help resolve the issue, feel free to reach out to DocuSign tech support and we'll help you diagnose the problem.
Hope this helps!

What are the possible problems when updating a live ASP.NET application

I update a live ASP.NET application frequently. I have load balancing set up, so I update each server while no one is on it.
However, there are still problems from time to time, most commonly when someone loads a page on the old version and then submits it to the new version, at which point the viewstate cannot be decoded.
That's the type of generic problem I'm looking for a list of.
I am looking for a complete list of generic problems that can occur after an update, so I can become aware of when and where my update will cause problems for people using the system at the same time.
Of course problems can occur if there are errors etc. in the update or the code, but I'm obviously not talking about that.
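For the viewstate example above, one common mitigation is to pin the machineKey in web.config so every server in the farm, and both the old and the new deployment, validate viewstate with the same keys. The key values below are placeholders; generate your own:

```xml
<!-- web.config: an explicit machineKey keeps viewstate decodable across
     servers and deployments. Replace the placeholder values. -->
<system.web>
  <machineKey
    validationKey="YOUR-GENERATED-VALIDATION-KEY"
    decryptionKey="YOUR-GENERATED-DECRYPTION-KEY"
    validation="SHA1"
    decryption="AES" />
</system.web>
```

This does not help when the new version changed the page's control tree, but it removes the purely key-related decode failures.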

Performing bulk processing in ASP.NET page

We need the ability to send out automatic emails when certain dates occur or when some business conditions are met. We are setting up this system to work with an existing ASP.NET website. I've had a chat with one of the other devs here and discussed some of the issues.
Things to note:
All the information we need is already modelled in the ASP.NET website
There is some business-logic that is required for the email generation which is also in the website already
We decided that the ideal solution was to have a separate executable that is scheduled to run overnight and do the processing and emailing. This solution has 2 main problems:
If the website was updated (business logic or model) but the executable was accidentally missed then the executable could stop sending emails, or worse, be sending them based on outdated logic.
We are hoping to use something like this so we can use UserControls to template the emails, which I don't believe is possible outside of an ASP.NET website.
The first problem could have been avoided with build and deployment scripts (which we're looking into at the moment anyway), but I don't think we can get around the second problem.
So the solution we decided on is to have an ASP.NET page that is called regularly by SSIS and to have that do a set amount of processing (say 30 seconds) and then return. I know an ASP.NET page is not the ideal place to be doing this kind of processing but this seems to best meet our requirements. We considered spawning a new thread (not from the worker pool) to do the processing but decided that if we did that we couldn't use the page returned to signify a success or failure. By processing within the page's life-cycle we can use the page content to give an indication of how the processing went.
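Roughly, the page would do something like this (a hypothetical sketch; EmailQueue and its methods stand in for the model and business logic already in the site):

```csharp
// Time-boxed page: process items until the budget expires, then report
// the outcome in the response body so the caller can check it.
using System;
using System.Diagnostics;
using System.Web.UI;

public class ProcessEmailsPage : Page
{
    private static readonly TimeSpan Budget = TimeSpan.FromSeconds(30);

    protected void Page_Load(object sender, EventArgs e)
    {
        var timer = Stopwatch.StartNew();
        int processed = 0, failed = 0;

        while (timer.Elapsed < Budget)
        {
            var item = EmailQueue.GetNextPending();   // placeholder
            if (item == null)
                break;                                // nothing left to do

            if (EmailQueue.TrySend(item))             // placeholder
                processed++;
            else
                failed++;
        }

        // SSIS can parse the response body to decide success or failure.
        Response.ContentType = "text/plain";
        Response.Write(string.Format("processed={0} failed={1}", processed, failed));
    }
}
```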
So the question is:
Are there any technical problems we might have with this set-up?
Obviously if you have tried something like this any reports of success/failure will be appreciated. As will suggestions of alternative set-ups.
Cheers,
Don't use the ASP.NET thread to do this. If the site is generating information that you need in order to create or trigger the email send, then have the site write that information to a file or database.
Create a Windows service or scheduled process that collects the information it needs from that file or DB, and run the email-sending process on a completely separate process/thread.
What you want to avoid is crashing your site or crashing your emailer due to limitations within the process handler. Based on your use of the word "bulk" in the question title, the two need to be independent of each other.
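A minimal sketch of that decoupled setup, assuming a queue table the website writes to; the table, column, and server names are all placeholders:

```csharp
// Scheduled console app (or service body) that drains an email queue
// table. The site and the mailer only communicate through the table.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Mail;

class EmailWorker
{
    static void Main()
    {
        var pending = new List<Tuple<int, string, string, string>>();

        using (var conn = new SqlConnection("...connection string..."))
        {
            conn.Open();

            // Pull the pending work first...
            using (var cmd = new SqlCommand(
                "SELECT Id, Recipient, Subject, Body FROM EmailQueue WHERE SentOn IS NULL",
                conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1),
                        reader.GetString(2), reader.GetString(3)));
            }

            // ...then send and mark each row as done.
            using (var smtp = new SmtpClient("smtp.example.com"))
            {
                foreach (var mail in pending)
                {
                    smtp.Send(new MailMessage("noreply@example.com",
                        mail.Item2, mail.Item3, mail.Item4));

                    using (var mark = new SqlCommand(
                        "UPDATE EmailQueue SET SentOn = GETUTCDATE() WHERE Id = @id",
                        conn))
                    {
                        mark.Parameters.AddWithValue("@id", mail.Item1);
                        mark.ExecuteNonQuery();
                    }
                }
            }
        }
    }
}
```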
I think you should be fine. We have used a similar approach in our company for several years and haven't run into many problems. Sometimes it takes over an hour to finish the process. Recently we moved the second thread (as you said) to a separate server.
Having the emailer and the website coupled together can work, but it isn't really a good design and will be more maintenance for you in the long run. You can get around the problems you state by doing a few things.
Move the common business logic to a web service or common library. Both your website and your executable/WCF service can consume it, and it centralizes the logic. If you're copying and pasting code, you know there's something wrong ;)
If you need a template mailer, it is possible to invoke ASP.Net classes to create pages for you dynamically (see the BuildManager class, and blog posts like this one). If the mailer doesn't rely on Page events (which it doesn't seem to), there shouldn't be any problem for your executable to load a Page class from your website assembly, build it dynamically, and fill in the content.
This obviously represents a significant amount of work, but would lead to a more scalable solution for you.
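For example, a sketch of the BuildManager technique as it would look running inside the site (Server.Execute needs a live HttpContext); the template path is a placeholder:

```csharp
// Render an .aspx template to an HTML string for use as an email body.
using System.IO;
using System.Web;
using System.Web.Compilation;
using System.Web.UI;

public static class TemplateMailer
{
    public static string RenderTemplate(string virtualPath)
    {
        // e.g. virtualPath = "~/EmailTemplates/Welcome.aspx" (placeholder)
        var page = (Page)BuildManager.CreateInstanceFromVirtualPath(
            virtualPath, typeof(Page));

        using (var writer = new StringWriter())
        {
            // Execute runs the page life-cycle and captures its output
            // instead of sending it to the browser.
            HttpContext.Current.Server.Execute(page, writer, false);
            return writer.ToString();
        }
    }
}
```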
Sounds like you should be creating a worker thread to do that job.
Maybe you should look at something like https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
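The trick in that post, sketched below: an item is added to the cache, and its removal callback does the work and re-adds the item, giving a crude recurring task inside the ASP.NET process. Start() would be called once from Application_Start; SendPendingEmails is a placeholder for your own logic:

```csharp
// Cache-expiration background task, per the linked blog post.
using System;
using System.Web;
using System.Web.Caching;

public static class BackgroundTask
{
    private const string Key = "email-task";

    public static void Start()
    {
        HttpRuntime.Cache.Add(Key, DateTime.UtcNow, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, OnRemove);
    }

    private static void OnRemove(string key, object value,
        CacheItemRemovedReason reason)
    {
        // SendPendingEmails();  // placeholder for the actual work
        Start();                 // re-arm for the next interval
    }
}
```

Note that the callback dies with the app domain, which ties into the app-pool-restart caveat in the answer below.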
You can and should build your message body (the templated message body) within your domain logic (meaning your ASP.NET application) when the business conditions are met, and hand it off to an external service whose only job is to send your messages. That way every message is built with the proper information.
For the "when certain dates occur" scenario you can use a simple background-task solution (look at Craig's answer) and do the same as above: parse the template, build the message, and send it off to the specified service.
Of course you should do this safely, so that app pool restarts do not break your tasks.

Disabling asp-sessions. Any known issues?

I'm in the process of disabling ASP.NET sessions completely on a site. It's quite a large and complex site, but we're not using the Session object programmatically anywhere, so I'm just curious if anyone knows of any "hidden" issues that may occur if you disable sessions? Viewstate, AJAX, etc.? We're using Dundas components for charting and mapping, but they seem to work OK on our test servers.
Check the session events in global.asax. There's one for start and one for end. Make sure nothing is happening there, and you should be good to go, assuming the session state mode is InProc.
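That is, confirm these two handlers in global.asax are empty or absent:

```csharp
// The global.asax handlers to check. If both are empty (or absent),
// nothing session-related is wired up at this level.
void Session_Start(object sender, EventArgs e)
{
    // anything here runs once per new session
}

void Session_End(object sender, EventArgs e)
{
    // this only ever fires when session state mode is InProc
}
```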
