I have a workflow that contains a Delay activity, which causes it to persist; after the delay expires, a notification is sent. The workflow is exposed via Workflow Services.
This works perfectly except when the server restarts or is brought down for maintenance for a day or two while timers have already expired. In that case, the notification is not sent until the first request related to that particular workflow arrives at the WCF endpoints.
I should mention that the application pool is already set to alwaysRunning.
Is there anything else that needs to be configured for IIS/AppFabric to check for pending timers that should already have fired?
I'm using Workflow Foundation 4.5.
The issue was caused by AppFabric not being configured properly.
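For context, the piece that resumes workflows with expired timers is runnable-instance detection on the instance store; under IIS, AppFabric's Workflow Management Service does this polling based on the instance store settings in web.config. Below is a minimal self-hosted sketch of the equivalent setup, assuming SqlWorkflowInstanceStore; the workflow body, URL, and connection string are placeholders, not the actual configuration from this deployment.

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;
using System.ServiceModel.Activities;

class Host
{
    static void Main()
    {
        // Stand-in for the real service: start on a message, persist on
        // the Delay, then "send" the notification.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new Receive
                {
                    ServiceContractName = "INotify",
                    OperationName = "Start",
                    CanCreateInstance = true
                },
                new Delay { Duration = TimeSpan.FromMinutes(1) },
                new WriteLine { Text = "timer expired - send notification" }
            }
        };

        var host = new WorkflowServiceHost(workflow,
            new Uri("http://localhost:8000/NotificationService"));

        host.DurableInstancingOptions.InstanceStore =
            new SqlWorkflowInstanceStore(
                "Server=.;Database=WFInstanceStore;Integrated Security=True")
            {
                // How often the store is polled for persisted instances
                // whose timers have already expired.
                RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5)
            };

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```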
When attaching a topic to an SNS application's "Endpoint updated" configurable topic, I'm experiencing some unexpected behavior. Per AWS's documentation on SNS Application Events, I should receive an event on my configured topic when a platform endpoint has been updated to disabled or its token has changed.
In my case I have a Lambda function subscribed to this topic that then retrieves the platform endpoint's attributes via a call to AWS's JavaScript SDK SNS.getEndpointAttributes, so that I can check which attributes have changed and either delete the endpoint or update the associated token in my persistent storage. However, this call returns the endpoint as Enabled = true, which prevents me from taking the corrective actions. Yet if I look in the AWS SNS console, I can see the endpoint has been disabled (Enabled = false).
Have others experienced similar inconsistencies, and if so, what's the best practice for getting around them? Thanks for any input!
I was facing a similar problem when Amazon notified me of SNS application events via HTTP. To work around it, I delayed the execution of the code that syncs these endpoint updates with my database: I scheduled a job on my background queue worker and delayed its execution until 30 seconds after Amazon's HTTP notification arrived. I don't know whether this is a best practice, but it works in my scenario.
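A rough sketch of that delay, assuming a .NET backend with Hangfire as the background queue worker and the AWS SDK for .NET (the original answer doesn't name its stack, so these libraries and the SyncEndpoint helper are illustrative):

```csharp
using System;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;
using Hangfire;

public class SnsEventHandler
{
    // Called from the HTTP endpoint that receives the SNS application event.
    public void OnEndpointUpdated(string endpointArn)
    {
        // Don't read the attributes immediately; give SNS ~30 seconds
        // to become consistent first.
        BackgroundJob.Schedule(() => SyncEndpoint(endpointArn),
                               TimeSpan.FromSeconds(30));
    }

    public static void SyncEndpoint(string endpointArn)
    {
        using (var sns = new AmazonSimpleNotificationServiceClient())
        {
            var response = sns.GetEndpointAttributes(
                new GetEndpointAttributesRequest { EndpointArn = endpointArn });

            // By now the attributes should reflect the real state.
            if (response.Attributes.TryGetValue("Enabled", out var enabled)
                && !bool.Parse(enabled))
            {
                // Delete the endpoint or update the token in persistent storage.
            }
        }
    }
}
```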
I have an ASP.NET website with a number of long-running (5 minutes to 2 hours) user-initiated tasks. I want each user to be able to see the progress of their own jobs, and to be able to close their browser and return at a later time.
My current plan is to store each job in the database when it is started and publish a message to a RabbitMQ queue, which a Windows service will receive and use to start processing the job.
However, I'm not sure of the best way to pass the progress information back to the web server from the service. I see two options:
Store the progress information in the database, and have the web-app poll for it
Have a RabbitMQ consumer in the webserver and have the windows service post progress messages to that queue
I'm leaning towards the second option, as I don't really want to add more overhead to the database through regular polling and writing of progress info. However, there are lots of warnings about hosting a RabbitMQ consumer in a web application. Since I am not sending vital messages (it doesn't matter if progress messages aren't processed), I'm wondering if this matters? It's not that (famous last words) difficult to restart the RabbitMQ consumer whenever the web app is restarted.
Does that option sound reasonable? Any better choices out there?
Store the progress information in the database, and have the web-app poll for it
Have a RabbitMQ consumer in the webserver and have the windows service post progress messages to that queue
The correct answer is C) All Of The Above!
A database is not an integration layer for applications.
RabbitMQ is not meant for end-user consumption of messages.
But when you combine RabbitMQ with a database, you get beautiful things...
Have your background service send progress updates through RabbitMQ. The web server will listen for these updates and write the new status to the database. Use WebSockets (SignalR) to push the progress update to the user immediately, but you still have the current status in the database in case the user does a full refresh of the page or comes back later.
I wrote about this basic setup in a blog post on using RabbitMQ to do user notifications.
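A minimal sketch of the web-server side, assuming RabbitMQ.Client 5.x (where the message body is a byte[]) and SignalR 2.x; the queue name, ProgressHub, and SaveProgress are illustrative, not from the post:

```csharp
using System.Text;
using Microsoft.AspNet.SignalR;
using Newtonsoft.Json;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class ProgressMessage
{
    public string JobId { get; set; }
    public int PercentComplete { get; set; }
}

public class ProgressHub : Hub { }

public class ProgressListener
{
    public void Start()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        var connection = factory.CreateConnection();
        var channel = connection.CreateModel();
        channel.QueueDeclare("job.progress", durable: false, exclusive: false,
                             autoDelete: false, arguments: null);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, args) =>
        {
            var progress = JsonConvert.DeserializeObject<ProgressMessage>(
                Encoding.UTF8.GetString(args.Body));

            // 1. Persist, so a full page refresh still shows current status.
            SaveProgress(progress);

            // 2. Push to browsers that joined the job's SignalR group.
            GlobalHost.ConnectionManager.GetHubContext<ProgressHub>()
                      .Clients.Group(progress.JobId)
                      .updateProgress(progress.PercentComplete);
        };
        channel.BasicConsume("job.progress", autoAck: true, consumer: consumer);
    }

    private void SaveProgress(ProgressMessage progress)
    {
        /* write the status row to the jobs table */
    }
}
```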
I am having a problem under the following circumstances:
1. Running a web app with connections initiated by both the Firebase SDK and AngularFire.
2. The app has been running in the background, meaning the user has launched other apps. During that time, internet connectivity could have come and gone.
3. The user brings the web app back into the foreground as the active app. Firebase connectivity via AngularFire has been lost, and updates have been lost as well. Hence "two-way" binds to existing views/controllers are no longer current and will not re-establish/sync. The only recovery is to actively kill the app and restart it.
This is obviously undesirable. I have currently attempted a brute-force method where I issue a goOnline() call at the beginning of each of my controllers in an attempt to always re-establish a connection, but for services that expect a two-way bind, I am not sure that everything will sync up.
Any thoughts or guidance on this would be very helpful, as this is a serious issue.
I have 2 roles in my Azure cloud service application: a web role (the SignalR connections live here) and a worker role.
The web role uses Azure Service Bus as its scaleout provider.
At certain points in time the worker role will emit events, and I'd like to send this data directly to clients connected to a Hub.
My current implementation has the worker role place a message on a Service Bus queue, which the web role subscribes to; the web role then forwards this message to clients via a HubContext call.
My question is: how can I send this message directly to connected clients from the worker role? So far I have considered 3 methods:
Configure SignalR in the worker role just as in the web role, so that both use the same Service Bus topic. This does not work as intended, because worker role instances "steal" messages from topic subscriptions intended for the web role. This would seem to be the cleanest way of doing it, but configuration is a problem.
Use the .NET client to send a hub message. This is not ideal, as it places unnecessary load on the web role and doubles the number of Service Bus messages compared to the method above.
Manually write a SignalR-compatible message to the topic. Very hacky and susceptible to breaking changes.
I know that the team are currently rewriting scaleout for the next release but will this be possible at some point?
Edit:
I have noticed that this is supported in the RabbitMQ implementation.
It seems an issue with my configuration was responsible for the first method not working.
However, that method seems slower end to end (by about 150 ms), even with one message fewer in the loop.
I will wait and see if the scaleout work brings any improvements to this method before making any changes.
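For reference, the first method amounts to registering the same Service Bus backplane in the worker role and broadcasting through a hub context. This is a rough sketch assuming SignalR 2.x with the Microsoft.AspNet.SignalR.ServiceBus package; EventHub, notify, and the connection string are placeholders, and the topic prefix must match the web role's scaleout configuration:

```csharp
using Microsoft.AspNet.SignalR;

public class EventHub : Hub { }

public class WorkerBroadcaster
{
    public void Configure()
    {
        // Register the same Service Bus backplane the web role uses
        // (same connection string and topic prefix).
        GlobalHost.DependencyResolver.UseServiceBus(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
            "signalrApp");
    }

    public void OnWorkerEvent(string payload)
    {
        // Broadcast through the backplane; the web role delivers the
        // message to its connected clients.
        GlobalHost.ConnectionManager.GetHubContext<EventHub>()
                  .Clients.All.notify(payload);
    }
}
```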
We have a Pub/Sub system that you subscribe to via a callback mechanism in C# to receive events from various things that happen within the database. The subscription has a callback signature attached to it that allows the Pub/Sub system to call back any subscribers it has and notify them of the change on that callback thread.
We are migrating our Windows application to a web application. In doing so, I need a way to update this web application (the clients) with information from this Pub/Sub. I want to use SignalR, but I'm not sure where to host it. I assume that if I host it in the same web application as the client, it won't be able to subscribe to the Pub/Sub, since it can't reliably run background threads.
Currently I have a console application hosting the SignalR server on a specific port. Obviously this is for testing and not ideal at a larger scale.
My question is: is it really safe to host SignalR outside of IIS? Should I put this in a Windows service? A web service somehow? Can it go in a web application somehow?
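For what it's worth, self-hosting SignalR in a Windows service is a supported scenario. Here is a minimal sketch, assuming SignalR 2.x with the Microsoft.AspNet.SignalR.SelfHost and Microsoft.Owin.Hosting packages; NotificationHub, the URL, and the pubSub subscription shown in comments are placeholders:

```csharp
using System;
using System.ServiceProcess;
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR(); // exposes hubs at /signalr
    }
}

public class NotificationHub : Hub { }

public class SignalRService : ServiceBase
{
    private IDisposable _webApp;

    protected override void OnStart(string[] args)
    {
        _webApp = WebApp.Start<Startup>("http://+:8080/");

        // Subscribe to the database Pub/Sub here. Unlike an IIS-hosted app,
        // a Windows service is not subject to app-pool recycles, so the
        // long-lived callback subscription is safe:
        // pubSub.Subscribe(change =>
        //     GlobalHost.ConnectionManager
        //               .GetHubContext<NotificationHub>()
        //               .Clients.All.notify(change));
    }

    protected override void OnStop()
    {
        _webApp?.Dispose();
    }
}
```

Running as a service also gives you the recovery options (automatic restart on failure) that a console host lacks.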