Raising a “warning” status during SDL Tridion 2011 publishing

We would like to implement some functionality so that when an error occurs during publishing or resolving and we skip over it using a try/catch block, we can still notify the user that something was skipped.
The SDL Tridion 2011 Publishing Queue can filter by status. One of these statuses is “Warning”. Is it possible to trigger a publish transaction to have a “Warning” status using the API in either template code or a custom resolver?

If all you need is a warning during publishing (such that the publish transaction has status Warning), then you will need to set PublishInstruction.MaximumNumberOfRenderFailures to something greater than 0. As long as the number of render errors is lower than the maximum you specified, the status of the publish transaction will be Warning.
If an error occurs outside rendering, then the transaction will show as Failed.
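A minimal sketch of what that could look like when triggering a publish from code, assuming the TOM.NET publishing API (the URIs, class name and the value 10 are placeholders, and the exact Publish overload may differ slightly between versions):

using Tridion.ContentManager;
using Tridion.ContentManager.CommunicationManagement;
using Tridion.ContentManager.Publishing;

public static class PublishWithWarningsExample
{
    public static void Publish(Session session)
    {
        // Placeholder URIs - replace with the real page and publication target.
        var page = new Page(new TcmUri("tcm:5-123-64"), session);
        var target = new PublicationTarget(new TcmUri("tcm:0-1-65537"), session);

        var instruction = new PublishInstruction(session)
        {
            // Allow up to 10 render failures; as long as fewer occur, the
            // publish transaction ends up with status Warning instead of Failed.
            MaximumNumberOfRenderFailures = 10
        };

        PublishEngine.Publish(new IdentifiableObject[] { page }, instruction, new[] { target });
    }
}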
On the other hand, if you want to show a message in the GUI (in the MessageCenter) with the 'Warning' that something went wrong, then you will need a more complex architecture. Frank worked on this a while back. The idea (IIRC) is to have a GUI piece, e.g. an iframe polling a service that returns statuses for that user's Publish actions. An event system would produce these statuses by monitoring the PublishInstruction, and it would write them into some kind of storage (file, db, memory) that the service would then poll.

I'm afraid this isn't possible, but the answers above might help you find an alternative solution to this.

Related

Symfony Messenger - checking if queue is empty

We are migrating our architecture to take advantage of the Symfony Messenger component. What I am dealing with at the moment is adjusting the deploy process of our application.
The Symfony documentation suggests that the workers should be restarted on deploy to pick up the new code. Makes sense. My problem is that this does not address the issue when upgrading the deployed code. Consider hypothetical versions 1 and 2.
Version 1 works with and understands a certain set of messages.
Version 2 adds more message types and changes the names/structure/whatever of some of the message types defined in version 1.
During deploy, in order to be sure that all messages were processed and there are no incompatibilities when the new version goes live, this is the process that makes intuitive sense to me:
Stop accepting new messages to the queue (put the site to a "maintenance mode")
Let the workers finish processing pending messages in the queue
Deploy new code
Restart workers
Start accepting new messages
The problem I am facing is that I can't see any way to check whether the queue is empty or not.
Is my deploy scenario correct? How is the deploy usually done in applications using the Symfony Messenger component (or any messaging queue, for that matter)? Is the only way to go ensuring backward compatibility for all the message types?
This is an interesting challenge.
Version 1 (new handlers for the same messages you sent out in the previous release)
For this you could use Middleware and Stamps to add a version header to the messages sent over a transport. Then on the consuming side your handler can watch for the version stamp and check whether it's responsible for this message or not. The upside of this approach is that you can change the handler logic without changing the message itself, just by having the new code add a new version to the same message types you sent out before.
This can easily be introduced into an existing application by having your existing handlers look for the stamp: if it's not there they assume they are responsible, otherwise they bail out. When a new version wants to introduce a new handler, it will only work with whatever version you specify and ignore any messages without this header.
Version 2 (Modifying data structure)
One approach to this problem would be to only have backwards compatible changes in your messages and handlers between each release. So for example assume your message looks something like this:
{
"foo": 123
}
and you want to change it to something like this:
{
"bar": "123"
}
In that case you would first release an intermediate version, containing both the old and new field and after a while you can release the version where you remove the old logic. The intermediate version of the message might look like this:
{
"foo": 123,
"bar": "123",
}
You would then have a handler that checks for bar first and falls back to using foo and the old logic if bar is missing. This way you can make sure that both new and old messages are processed by your new application, and by adding logging you can easily see when the old code is no longer called, making it safe to remove the old property and logic in an upcoming release.
The main drawback of this approach is that you will have to catch breaking changes in advance, which requires a thorough review and testing process. Luckily, failure transports can catch problems when your handler fails, but if the message cannot be properly decoded those messages might be thrown out instantly, so be careful.
I don't think the Messenger component gives any help with working out the queue length - at least none I found so far.
So the answer depends on what type of transport you are using. For example, with the Doctrine transport you can just count the number of rows in the DB table, etc.
The problem with that approach is that you make your code less portable/configurable - if your code expects to count rows in a DB table, it won't work with the Redis transport, or if the table name changes.
In our project we ended up with a queue counting service that looks into the Messenger configuration and decides how to count the items in the queue.
As for the rest of the question about the deployment, other answers here are good. I'll sum up what we learned when running a clustered Symfony application on AWS ECS with blue/green deployment:
You could treat your message handlers like you would do DB migrations: any two adjacent versions must work with the same schema - so any two message handler versions must be able to work with the same message format.
Turn the handlers off before running a deployment, deploy the new version and turn the handlers on again. If you have multiple versions, you will need to do multiple deployments, one version at a time.
You should know before each deployment whether you can just roll out multiple versions at once because there are no breaking changes, or not.
If your environment autoscales, you also need to ensure the handlers are not started on any additional nodes that appear during the deployment and are still serving the older version of the application.
We use a boolean flag in Redis to let nodes work out whether the handlers should be started or not - that flag is set to "false" just before we halt our current handlers at the beginning of the deployment.
--
If there are any better ways to do this, I'm all ears.
Good luck!

Application Insights removing telemetry after it has been logged

I've had Application Insights set up on my ASP.NET project for a couple months with no issues. I use Custom Events for logging certain events.
Recently, I tried to add a Custom Event after a user has authenticated in order to track the login behavior. My custom event DOES log to application insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect, and searching the place you expect (see the sketch after these checks).
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. It could be that there's something wrong with the data you're sending, or maybe you're getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200 and will generally tell you why any rejected items weren't accepted.
If the data is being sent successfully and is going where you expect it to go, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
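As a minimal sketch of the first check (assuming the classic Application Insights SDK for .NET; the key, event name, property and class are made up), the point is that events go to whichever resource owns the instrumentation key you configure, so that is the resource you have to search:

using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class LoginTelemetryExample
{
    public static void TrackLogin(string userName)
    {
        // Placeholder key: events end up in whichever resource owns this iKey.
        TelemetryConfiguration.Active.InstrumentationKey = "00000000-0000-0000-0000-000000000000";

        var telemetry = new TelemetryClient();
        telemetry.TrackEvent("UserLoggedIn",
            new Dictionary<string, string> { { "user", userName } });

        // Pushes buffered items out promptly, which helps when watching the
        // outbound HTTP traffic while debugging.
        telemetry.Flush();
    }
}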
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up the telemetry. It just disappears.
What do you mean "it just disappears"? If you see it in the output window, then the SDK saw it and it will get sent, precluding any of the above 3 items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the VS search tools that show data sent by the AI SDK during debug, that tool currently has a cap of the most recent 250 items that have occurred during the debug session.

Automatic Workflow activity not started

I have created a simple workflow: start --> createoredit --> automaticactivity --> End. The automatic activity doesn't do anything; I have the default code FinishActivity "Automatic Activity Finished" alone in its script area.
When I trigger the workflow, the automatic activity is not started or performed; it is assigned to "NT AUTHORITY\SYSTEM" with the state "Assigned" (in the Global Work List).
Whenever I restart the "Tridion Workflow Agent" service, or whenever some other automatic activity is assigned via workflow, the stuck automatic activity starts and is performed.
I don't see any error message in the event log either.
Could anyone help me on this? I am using SDL Tridion 2011 SP1.
The first activity must always be a manual activity. If you need to have the first activity automated then you need to rely on the event system for this.
EDIT: I see you updated the question so that the first activity is manual.
When an automatic activity gets stuck in the Assigned state, it usually means there is a script error. There could also be something wrong with the connectors between activities in your Visio design. Check that everything is properly connected. Try deleting the link between automaticactivity and End and recreating it.
Check that the "Tridion Content Manager Workflow Agent" windows service is running. This fixed it for us.

Tridion Event System Timeout

I am currently running Tridion 2011 SP1.
I am writing some code that runs whenever a page is published. It loops through each component template in the page, gets the component and writes out various fields to an XML document. For pages with many component templates or components with many fields this process can take a while to run. If the process takes more than 30 seconds I get an error
The operation performed by thread "EventSystem0" timed out.
Component: Tridion.ContentManager
Errorcode: 0
User: NT AUTHORITY\NETWORK SERVICE
followed by another
Thread was being aborted.
Component: Tridion.ContentManager
Errorcode: 0
User: NT AUTHORITY\NETWORK SERVICE
StackTrace Information Details:
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
at System.Delegate.DynamicInvokeImpl(Object[] args)
at Tridion.ContentManager.Extensibility.EventSubscription.DeliverEvent(IEnumerable`1 subjects, TcmEventArgs eventArgs, EventPhases phase)
I believe I have three options.
1. Increase the timeout
This seems like a lazy solution and only hides the problem. There is no guarantee that the timeout problem won't reoccur. I'm also not sure where the timeout value is stored (I've tried changing a few values in the Tridion Content Manager.msc snap-in but no luck).
2. Do less in the actual event handler routine and have a separate process do all the hard work
This doesn't seem like the correct solution either. I would really like to keep all my event handler code in one place. We have a solution like this in place for our live 5.3 installation and it is a nightmare to maintain (it is very old and poorly written).
3. Make my code more efficient
My components have many fields and my code must delve deeper into each field if they are ComponentLinks. I guess that because the properties of Tridion objects are lazy loaded, there is one call to the API/database for each property I access. It takes on average 0.2 seconds to retrieve a property, which soon stacks up when accessing multiple properties. If there were a way to retrieve all properties in one call, that would be useful.
Any ideas?
Have you considered running your event asynchronously? You can do this by changing the following line:
EventSystem.Subscribe<IdentifiableObject, TcmEventArgs>(....)
to
EventSystem.SubscribeAsync<IdentifiableObject, TcmEventArgs>(....)
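In a full event handler class that might look something like this (a sketch only, assuming the TOM.NET event system; the extension name and handler body are placeholders, and exact signatures may differ slightly between versions):

using Tridion.ContentManager;
using Tridion.ContentManager.Extensibility;
using Tridion.ContentManager.Extensibility.Events;

[TcmExtension("AsyncPublishHandlerExample")]
public class AsyncPublishHandlerExample : TcmExtension
{
    public AsyncPublishHandlerExample()
    {
        // The handler now runs outside the synchronous event pipeline,
        // after the main action has completed.
        EventSystem.SubscribeAsync<IdentifiableObject, TcmEventArgs>(OnEvent);
    }

    private static void OnEvent(IdentifiableObject subject, TcmEventArgs args, EventPhases phase)
    {
        // Do the expensive XML generation here.
    }
}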
One thing you might consider doing is using the Component's .ToXml() method and getting your values from the XML DOM instead of using the Tridion API. This is usually considerably faster, and you can use XSLT or Linq to "walk" through your fields.
If you are really only interested in fields, then just use the .Content (and .Metadata) properties and, again, use Linq or XSLT or whatever technology you want to parse the xml (except RegEx perhaps).
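As a rough illustration of that approach (a sketch only; the helper, the field name handling and the namespace assumptions are made up, so adapt it to your schema):

using System.Xml;
using System.Xml.Linq;
using Tridion.ContentManager.ContentManagement;

public static class ComponentXmlExample
{
    public static string GetFieldValue(Component component, string fieldName)
    {
        // One call fetches the whole content XML, instead of one lazy load per property.
        XmlElement content = component.Content;

        XDocument doc = XDocument.Parse(content.OuterXml);
        XNamespace ns = doc.Root.GetDefaultNamespace();

        XElement field = doc.Root.Element(ns + fieldName);
        return field == null ? null : field.Value;
    }
}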
You are simply doing a lot of processing and that takes time. Maybe there's a technical fix, but the first thing to do in this situation is to go back to Why and What? Publishing a page is fundamentally about rendering the HTML and binaries that you want to output for that page. How long should that take?
So please could you tell us why you are doing this? Perhaps part of the effort can be moved somewhere else without compromising on good design. If we know what the purpose is, perhaps we can help more.
SDL Customer Support have advised that I increase the timeout. While not a great solution, it's the only one available. To do this:
On the server where the Content Manager is installed, open Tridion.ContentManager.config, which should be located in the config/ subdirectory of the Content Manager root location; this defaults to C:\Program Files\Tridion\ or C:\Program Files (x86)\Tridion\
Find the <eventSystem> node
Increase the threadtimeout value (this is in seconds) to something higher (I set it to 120)
Save the Tridion.ContentManager.config and restart the Tridion Content Manager Service Host service
Further documentation is available http://sdllivecontent.sdl.com/LiveContent/web/pub.xql?action=home&pub=SDL_Tridion_2011_SPONE&lang=en-US#addHistory=true&filename=ConfiguringEventSystem.xml&docid=concept_48C53F76CBFD45A783A3975CA72ECC49&inner_id=&tid=&query=&scope=&resource=&eventType=lcContent.loadDocconcept_48C53F76CBFD45A783A3975CA72ECC49. It does require a username and password to access.
If you really need the processing time then I think you should write a web service that performs the actions you need, which you can call from the event handler. This would not influence user experience (in the case of a synchronous event handler) as much either.
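A sketch of what the handler side of that could look like (the service URL, payload format and helper name are all made up; error handling omitted):

using System;
using System.Net;

public static class OffloadExample
{
    // Called from the event handler: a quick request that only enqueues the work,
    // so the slow XML generation happens in the web service, outside the event thread.
    public static void EnqueuePageProcessing(string pageTcmUri)
    {
        using (var client = new WebClient())
        {
            client.UploadString(new Uri("http://localhost/pagexmlservice/enqueue"), pageTcmUri);
        }
    }
}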

Pattern for long running tasks invoked through ASP.NET

I need to invoke a long running task from an ASP.NET page and allow the user to view the task's progress as it executes.
In my current case I want to import data from a series of data files into a database, but this involves a fair amount of processing. I would like the user to see how far through the files the task is, and any problems encountered along the way.
Due to limited processing resources I would like to queue the requests for this service.
I have recently looked at Windows Workflow and wondered if it might offer a solution?
I am thinking of a solution that might look like:
ASP.NET AJAX page -> WCF Service -> MSMQ -> Workflow Service *or* Windows Service
Does anyone have any ideas, experience or have done this sort of thing before?
I've got a book that covers explicitly how to integrate WF (Workflow Foundation) and WCF. It's too much to post here, obviously. I think your question deserves a longer answer than can readily be given on this forum, but Microsoft offers some guidance.
And a Google search for "WCF and WF" turns up plenty of results.
I did have an app under development where we used a similar process using MSMQ. The idea was to deliver emergency messages to all of our stores in case of product recalls, or known issues that affect a large number of stores. It was developed and tested OK.
We ended up not using MSMQ because of a business requirement - we needed to know if a message was not received immediately so that we could call the store, rather than just letting the store get it when their PC was able to pick up the message from the queue. However, it did work very well.
The article I linked to above is a good place to start.
Our current design, the one we went live with, does exactly what you asked about with a Windows service.
We have a web page to enter messages and pick distribution lists; these are saved in a database.
We have a separate Windows service (we call it the AlertSender) that polls the database and checks for new messages.
The store level PCs have a Windows service that hosts a WCF client that listens for messages (the AlertListener)
When the AlertSender finds messages that need to go out, it sends them to the AlertListener, which is responsible for displaying the message to the stores and playing an alert sound.
As the messages are sent, the AlertSender updates the status of the message in the database.
As stores receive the message, a co-worker enters their employee # and clicks a button to acknowledge that they've received the message. (Critical business requirement for us because if all stores don't get the message we may need to physically call them to have them remove tainted product from shelves, etc.)
Finally, our administrative piece has a report (ASP.NET) tied to an AlertId that shows all of the pending messages, and their status.
You could have the back-end import process write status records to the database as it completes sections of the task, and the web app could simply poll the database at arbitrary intervals and update a progress bar or otherwise tick off tasks as they're completed, whatever is appropriate in the UI.
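A minimal sketch of that polling pattern (the ImportStatus table, its columns and the connection string are all assumptions):

using System;
using System.Data.SqlClient;

public static class ImportProgress
{
    // Called by the back-end import process as it finishes each file.
    public static void Report(string connectionString, Guid jobId, int filesDone, string message)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE ImportStatus SET FilesDone = @done, Message = @msg WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@done", filesDone);
            cmd.Parameters.AddWithValue("@msg", message);
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    // Called by the ASP.NET page (or an AJAX endpoint) at arbitrary intervals.
    public static int GetPercentComplete(string connectionString, Guid jobId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT FilesDone, FilesTotal FROM ImportStatus WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return 0;
                int done = reader.GetInt32(0);
                int total = reader.GetInt32(1);
                return total == 0 ? 0 : (100 * done) / total;
            }
        }
    }
}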
