Symfony app stuck when getting EntityManager

I am working on an app which has a web component (visited via browser) and a background task-processing component, to which the web component delegates some long-running work.
I've just hit an issue: I refreshed my web browser only to find the page loading indefinitely (first spotted in AJAX requests, but later in normal ones).
The cause was not apparent at first, but as soon as I shut down the background Symfony command, which also uses the EntityManager, the browser gets unblocked and the request proceeds.
My app uses RabbitMQ to store job requests, which are published by the web component. The Symfony command uses the same "backbone" to create a RabbitMQ consumer and consume those jobs.
I tried, without any result:
Restarting Apache
Restarting RabbitMQ
Purging RabbitMQ queue
Using different EntityManagers for web and command
I use OldSoundRabbitMqBundle (link) to facilitate communication between the two.
The web component gets stuck regardless of the action being called (so it is not related to the RabbitMQ producer).
Has anyone stumbled upon similar issue?
This happens on my dev box; I haven't got around to trying it on a production server, nor would I until I find out more about this.

It would seem that I misused the locking mechanism in Postgres. The task-processing component is indeed long-running, but since it is a Symfony command, the Doctrine connection is established as early as possible and kept open.
Now comes the tricky part: I used the LOCK TABLE statement to lock some tables against concurrent access (EXCLUSIVE mode). Because I never closed the connection (as opposed to the entity manager), those locks were left intact until I restarted the command (every 10th task).
This was the root cause.
I am still investigating some edge cases, but since I moved to advisory locking, I have had no more lock-ups.
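For anyone hitting the same thing, here is a minimal sketch of session-level advisory locking. It is illustrative only: the app above uses Doctrine/PHP, while this sketch uses TypeScript with the node-postgres client, and the lock key is an arbitrary made-up value.

```typescript
// Minimal sketch of Postgres advisory locking (illustrative only; the
// original app uses Doctrine/PHP). Requires: npm install pg
import { Client } from "pg";

const LOCK_KEY = 42; // arbitrary application-defined lock id (assumption)

async function withAdvisoryLock(client: Client, work: () => Promise<void>) {
  // Blocks until the session-level lock is granted; unlike LOCK TABLE,
  // it does not block other readers/writers of any table.
  await client.query("SELECT pg_advisory_lock($1)", [LOCK_KEY]);
  try {
    await work();
  } finally {
    // Released explicitly; also released automatically if the session ends,
    // which is what makes it safer than LOCK TABLE for long-lived workers.
    await client.query("SELECT pg_advisory_unlock($1)", [LOCK_KEY]);
  }
}

async function main() {
  const client = new Client(); // connection settings via PG* env vars
  await client.connect();
  await withAdvisoryLock(client, async () => {
    // ... process one job ...
  });
  await client.end();
}

main().catch(console.error);
```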

Related

Make a .NET Core service run on multiple machines for high availability, but have only one node do the work

I have a .NET Core application that consists of some background tasks (hosted services) and web APIs (which control and get the statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these web API endpoints. We want this service to be highly available, i.e. if one instance crashes, another instance should start doing the work automatically. Also, the client applications should be able to switch to the next instance automatically (clients should call the APIs of the new instance, instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at any given time.
What I have done up to now: I run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. If an instance acquires the lock, it goes and does the operation while the other node waits to acquire the lock. If one instance crashes, the next node is able to acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node until it finds the working one.
But this design has an issue: if the node which acquired the lock loses connectivity to the SQL server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I have done it wrongly). Can anyone help me with a better solution for this kind of problem?
This problem is not specific to .NET or any other framework, so consider making your question more general to make it more accessible. Generally, the solution to this problem lies in the domain of Enterprise Integration Patterns, so consult those references, as the state of the art may change.
At first sight and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing.
Either is fine; I use both in my own designs. A sketch of the second, queue-based approach follows.
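To make the second suggestion concrete, here is a sketch of the competing-consumers shape, using RabbitMQ via amqplib purely for illustration; the queue name, URL, and job handling are assumptions, not part of the original question:

```typescript
// Sketch of the shared-queue approach (competing consumers). Each service
// instance runs this worker; the broker guarantees a given job is delivered
// to only one unacknowledged consumer at a time.
// Requires: npm install amqplib
import amqp from "amqplib";

async function startWorker() {
  const conn = await amqp.connect("amqp://localhost"); // assumed broker URL
  const channel = await conn.createChannel();
  await channel.assertQueue("jobs", { durable: true }); // assumed queue name

  // prefetch(1): the broker hands each instance one job at a time, so two
  // instances never process the same job concurrently.
  await channel.prefetch(1);

  await channel.consume("jobs", async (msg) => {
    if (msg === null) return;
    try {
      await handleJob(JSON.parse(msg.content.toString()));
      channel.ack(msg); // removed from the queue only after success
    } catch {
      channel.nack(msg, false, true); // requeue so another instance can try
    }
  });
}

async function handleJob(job: unknown): Promise<void> {
  // ... the actual background computation ...
}

startWorker().catch(console.error);
```

If an instance crashes mid-job, its unacknowledged message is redelivered to a surviving instance, which addresses the lost-lock scenario in the question without a separate leader-election step.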

Updating service workers on single page webapps

I work on a single page webapp and am implementing service workers. I've learned that I need to manually update the service worker because a single page app doesn't have traditional navigation events. However, I'm unclear about the roles of clients.claim() and self.skipWaiting().
Do I need to call these functions after manually updating the service worker? I've seen skipWaiting() called in the install event, and claim() called in the activate event handler. If I need either or both of them, are these the correct places to make these calls?
clients.claim() is usually called in the activate event handler, to take control of uncontrolled clients once the service worker is activated. It only really matters on the very first load, and due to progressive enhancement the page is usually working happily without the service worker anyway.
skipWaiting() causes your service worker to kick out the current active worker and activate itself as soon as it enters the waiting phase (or immediately, if it's already in the waiting phase). It doesn't cause your worker to skip installing, just waiting. It's pretty common to call it in the install event.
This must be called when updating the service worker.
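Putting the two together, a minimal service worker sketch (the file name and handler bodies are illustrative):

```typescript
// sw.ts - minimal service worker sketch; compile with the "webworker" lib.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("install", (event: ExtendableEvent) => {
  // Activate this worker as soon as install finishes, instead of waiting
  // for all tabs controlled by the old worker to close.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener("activate", (event: ExtendableEvent) => {
  // Take control of all open clients immediately; without this, pages
  // loaded before activation stay uncontrolled until the next navigation.
  event.waitUntil(self.clients.claim());
});
```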
For more information, you can check the service worker lifecycle documentation.

Hosting WF as Windows Service

I am trying to construct a simple Windows workflow to monitor a directory for inbound files and do some DB updates, using Windows WF 4.0. Currently I am planning to build a 'WCF Workflow Service' and host it as a Windows service running 24/7 (with a daily service shutdown and startup).
Further down the road, I am planning to consume this service from an ASP.NET/WPF application to create a basic dashboard.
Considering the idea of polling a directory for files with WF hosted in a Windows service, does it seem like a good idea? What are the cons of this?
Please advise if there are any drawbacks to this, or whether it can be achieved by better means.
I'm actually doing this, but it is a bit more complex than you think, and should be avoided if possible.
You should not be blocking from within an Activity; if it is expected to be a long-running Activity that is waiting for input from the outside (a FileSystemWatcher event, for instance), the workflow should idle itself and wait to be woken from the outside.
How I did this: I created a workflow extension that hosted the FileSystemWatcher. Once the Activity was ready to watch for a file, it created a bookmark and passed it to the extension.
The extension then started the FSW, holding onto the bookmark.
When an FSW event was fired, the extension resumed the bookmark, passing in an object that contained details about the event. The Activity did what was needed with the event, then re-scheduled itself.
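This is WF4-specific, but the general idle-and-resume shape is easy to see in a sketch. The following TypeScript/Node transliteration (using fs.watch; every name here is made up, and none of this is WF4 API) shows the extension holding a bookmark and resuming it per file event:

```typescript
// Illustrative sketch only: the bookmark/extension pattern from the answer,
// transliterated to Node's fs.watch. All names are hypothetical.
import { watch } from "node:fs";

type Bookmark = (event: { eventType: string; filename: string }) => void;

// Plays the role of the workflow extension hosting the FileSystemWatcher.
class FileWatchExtension {
  private bookmark?: Bookmark;

  constructor(dir: string) {
    watch(dir, (eventType, filename) => {
      const resume = this.bookmark;
      this.bookmark = undefined; // a bookmark is consumed once
      if (resume && filename) {
        resume({ eventType, filename: filename.toString() });
      }
    });
  }

  // The "activity" registers a bookmark and goes idle until an event fires.
  setBookmark(bookmark: Bookmark): void {
    this.bookmark = bookmark;
  }
}

// The "activity": handle one event, then re-schedule itself.
function watchForFiles(ext: FileWatchExtension): void {
  ext.setBookmark((event) => {
    console.log(`file event: ${event.eventType} ${event.filename}`);
    watchForFiles(ext); // re-register, mirroring the re-scheduled Activity
  });
}

watchForFiles(new FileWatchExtension("./inbox"));
```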
Normally I wouldn't have done this, but I had some requirements that forced me to use WF4 to accomplish this goal. If I didn't have to use WF4, I would have just spun up the FSW within the service and consumed the events.
Unless you expect to have to be very flexible with your configuration detailing what you do with the FSW event, and expect this to change relatively often during deployment of the service, I'd skip WF4.

EJB or Servlet - how to add a 'kill switch' to force a process/thread to stop

Kind of an open question that I run into once in a while: if you have an EJB stateful or stateless bean, or possibly a direct servlet process, that may with the wrong parameters start running long on a production system, how could you effectively add a manual 'kill switch' for an administrator to kill that specific thread/process?
You can't, or at least you shouldn't, interfere with application server threads directly, so a "kill switch" looks decidedly inappropriate to me in a Java EE environment.
I do, however, understand the problem you have, but would rather suggest taking an asynchronous approach where you split your job into smaller work units.
I did that using EJB Timers and was happy with the result: an initial timer is created for the first work unit. When the app server executes the timer, it registers a second one corresponding to the 2nd work unit, and so on. Information can be passed from one work unit to the next because EJB Timers support storing custom information. Also, timer execution and registration are transactional, which works well with a database. You can even shut down and restart the application server with this approach. Before each work unit ran, we checked in the database whether the job had been canceled in the meantime.
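EJB Timers are a Java EE API; purely to illustrate the chained work-unit idea (minus the persistence and transactional guarantees the timer service provides), here is a sketch in TypeScript. The isCanceled check stands in for the database cancellation lookup, and all names are assumptions:

```typescript
// Sketch of the chained work-unit pattern. The original uses EJB Timers,
// which add persistence and transactions that this sketch does not have.

interface JobState {
  jobId: string;
  unit: number;       // which work unit to run next
  totalUnits: number;
  payload: unknown;   // info passed from one work unit to the next
}

async function isCanceled(jobId: string): Promise<boolean> {
  // Assumption: in the real system this is a database lookup.
  return false;
}

async function runWorkUnit(state: JobState): Promise<void> {
  // The kill switch: check for cancellation before every unit, so an
  // administrator can stop the job between units by flipping a flag.
  if (await isCanceled(state.jobId)) {
    console.log(`job ${state.jobId} canceled before unit ${state.unit}`);
    return;
  }

  console.log(`running unit ${state.unit}/${state.totalUnits}`);
  // ... do one small slice of the work, updating state.payload ...

  if (state.unit < state.totalUnits) {
    // Schedule the next unit, like registering the next EJB timer.
    setTimeout(() => runWorkUnit({ ...state, unit: state.unit + 1 }), 0);
  }
}

runWorkUnit({ jobId: "job-1", unit: 1, totalUnits: 10, payload: null });
```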

Architecture Queuing ASP.NET - MSMQ

Problem: Some 300 candidates take a test using Flex. A test consists of some 100 exercises. After each exercise, a .NET service is called to store the result. When a candidate finishes a test, all the data of his/her test is denormalized by ASP.NET. This denormalization can take some CPU and 5 to 10 seconds. Now, most of the time, some of the candidates finish their test earlier than the rest, but some 200 of them wait until their time is up. At that moment, 200 candidates finish their test and 200 sessions are denormalized at the same time. At this point, the server load (CPU) is too high and causes calls to the web server to fail. Instead of all these sessions being denormalized concurrently, I would like to add them to a queue using MSMQ.
Question:
How do you process the Queue?
Do you start a separate thread in the Application_Start of global.asax that listens to the queue? If there are messages, they are dealt with one at a time.
Is it necessary to do this in a separate thread? What if in the global.asax you just call a singleton for instance that starts listening to the queue? In what thread will this singleton run? (what's the thread that calls global.asax)
What are best practices to implement this? Links? Resources? Tutorials? Examples?
I don't like the idea, but could you put an exe in the root of your website, an exe that starts a process listening to the queue...
If you get a message out of the queue, do you remove it when you pull it out or do you remove it if denormalization for this session was successful? If you remove it when you pull it out and something goes wrong...
I could also create my own queue in memory, but restarting the web server would empty the queue and a lot of sessions would end up not being denormalized, so I guess this is really a bad idea.
Is MSMQ a good choice or are there better alternatives?
You could consider using a WCF service with MSMQ transport. I used this approach in an application that calculates commissions:
User completes an ASP.NET wizard configuring calculation parameters
Calculation job is sent to the WCF service using MSMQ transport
The service transaction is completed as soon as the job enters MSMQ
A new transaction scope is created for processing job instances
One drawback is that the transaction will require MSDTC which will add some overhead when targeting MS SQL Server and even more when dealing with Oracle.
IDesign provides a lot of useful samples and best practices on WCF queueing.
Personally, I use a service bus for scenarios like that. I know this sounds like overkill, but I think the .NET service buses are so good that they require the least amount of code written by you, because it's not easy to create a good scheduler for background processes without disturbing the threads of the application pool the web app is running in. NServiceBus and MassTransit are both good and well-documented service buses for your scenario. With a service bus, you have a framework that writes to MSMQ and listens to MSMQ in several apps connected by the message queue. The bus makes it easy for you to create a separate app that runs as a background service and is connected to your web app by the message queue. When you use Topshelf (included with NServiceBus and MassTransit), an installer/uninstaller for the separate apps is automatically generated by the service bus.
Question: Why don't you like the idea of having a separate exe?
How do you process the Queue?
Do you start a separate thread in the Application_Start of global.asax that listens to the queue? If there are messages, they are dealt with one at a time.
Is it necessary to do this in a separate thread? What if in the global.asax you just call a singleton for instance that starts listening to the queue? In what thread will this singleton run? (what's the thread that calls global.asax)
[skip]
I don't like the idea, but could you put an exe in the root of your website, an exe that starts a process listening to the queue...
Normally another program processes the queue, not ASP.NET: either a Windows service or an executable that you run under a scheduler (and there's no reason to put it in the root of your website).
If you get a message out of the queue, do you remove it when you pull it out or do you remove it if denormalization for this session was successful? If you remove it when you pull it out and something goes wrong...
For critical work, you perform a transactional read. Items aren't removed from the queue until you commit your read operation, but while the transaction is open, no other process can get the item.
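MSMQ gives you this through transactional receives. The same commit-to-remove semantics can be sketched with a database-backed queue; the following is an assumption-laden illustration (a hypothetical jobs table and Postgres's SELECT ... FOR UPDATE SKIP LOCKED), not MSMQ's API:

```typescript
// Sketch of a transactional dequeue. This is not MSMQ: it illustrates the
// same commit-to-remove semantics with a Postgres-backed job table
// (assumed schema: jobs(id, payload, done)). Requires: npm install pg
import { Client } from "pg";

async function processOneJob(client: Client): Promise<void> {
  await client.query("BEGIN");
  try {
    // Lock one pending job; SKIP LOCKED keeps other workers from blocking
    // on it or seeing it while our transaction is open.
    const { rows } = await client.query(
      `SELECT id, payload FROM jobs
       WHERE NOT done
       ORDER BY id
       LIMIT 1
       FOR UPDATE SKIP LOCKED`
    );
    if (rows.length === 0) {
      await client.query("COMMIT"); // queue is empty
      return;
    }

    await denormalize(rows[0].payload); // the actual work

    // Only now is the item "removed"; a crash before COMMIT releases the
    // lock and the job becomes visible to other workers again.
    await client.query("UPDATE jobs SET done = true WHERE id = $1", [
      rows[0].id,
    ]);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK"); // the job goes back to the queue
    throw err;
  }
}

async function denormalize(payload: unknown): Promise<void> {
  // ... the CPU-heavy denormalization from the question ...
}
```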
What are best practices to implement this? Links? Resources? Tutorials? Examples?
This tutorial is a good introduction and John Breakwell's blog is excellent and offers a lot of good links (including the ones in his easy-to-find sidebar "MSMQ Documentation").
