We have a multi-user application that works on tasks initiated by its users.
These tasks run asynchronously, so we use RabbitMQ for task distribution.
In the first version of the program, every user's tasks were sent immediately to a single worker queue, which caused the following problem: if "User A" submits his tasks faster than "User B", then "User B" has to wait until all of "User A"'s tasks are completed.
In the next version of the program we want fair distribution, so we introduced per-user queues: in the first phase, every user's tasks are sent to that user's own queue, and we created a consumer that reads messages from all of the user queues and forwards them to the worker queue we used in the first version. Our expectation was that, by limiting the length (message count) of the worker queue, we could read messages from the user queues evenly. But it is not working: all of a user's tasks are forwarded to the worker queue immediately, as if the length limit on the worker queue did not exist.
I think it should work, but we must have missed some configuration...
Thank you in advance if somebody can help us.
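For illustration, the setup described above (per-user queues, a forwarding consumer, and a length-limited worker queue) might look something like this with the RabbitMQ Java client; the queue names, the length limit, and the prefetch value below are assumptions, not our actual configuration.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.util.HashMap;
import java.util.Map;

public class UserQueueForwarder {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                       // assumed broker host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // The single worker queue from the first version, declared with a
        // length limit (the "limit the message count of the worker queue" idea).
        Map<String, Object> workerArgs = new HashMap<>();
        workerArgs.put("x-max-length", 10);                 // assumed limit
        channel.queueDeclare("worker-queue", true, false, false, workerArgs);

        // One queue per user; the user ids here are made up.
        String[] userQueues = {"tasks.userA", "tasks.userB"};

        // Hand the forwarder only one unacknowledged message at a time,
        // so it pulls from the user queues instead of buffering everything.
        channel.basicQos(1);

        DeliverCallback forward = (consumerTag, delivery) -> {
            // Republish the user's task to the shared worker queue, then ack.
            channel.basicPublish("", "worker-queue",
                    delivery.getProperties(), delivery.getBody());
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };

        for (String queue : userQueues) {
            channel.queueDeclare(queue, true, false, false, null);
            channel.basicConsume(queue, false, forward, consumerTag -> { });
        }
    }
}

Note that a length limit on a queue does not, by itself, push back on publishers; limiting unacknowledged deliveries on the forwarding consumer (basicQos) is what throttles how quickly messages leave the user queues.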
Is it okay to make HTTP requests to a counterparty's external service from within a responder flow?
My use case is that a Party invokes a "request-token" flow with an exchange node. That exchange node makes an HTTP request (in the responder flow) to move cash from that party's account to an exchange account in the external payment system. The event of the funds actually hitting the account, and hence the issuance of the tokens, would happen in another flow.
If it is not okay, what may be an alternative design to achieve the task?
It is not always a good idea to make HTTP requests that way.
Not unless you think very carefully about what happens when the previous checkpoint is replayed, so dedupe and idempotence are key considerations. Plus, what happens if the target is down? And this may exhaust the thread pool upon which the fibers operate.
Flows run on fibers, while CordaServices can spawn their own threads.
Threads can block on I/O; fibers can only do so for short periods, and we make no guarantees about freeing resources or about ordering unless it is the DB. Also, threads can register observables.
The real challenge is restartability, and for that they need to test the hell out of their code with random kills.
You need to be aware that steps can be replayed in the event of a crash. This is true of any server-side, work-based system that restarts work.
Effectively, you should:
Step 1) Execute an on-ledger Corda transaction to move one or more assets into a locked state (analogous to XA 'prepare'). When it is successfully notarised,
Step 2) execute the off-ledger transaction with an idempotent call that succeeds or fails. When we know whether it succeeded or failed,
Step 3) execute a second Corda transaction that either reverts the status of the asset or moves it to its intended final state.
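As a rough illustration only, the three steps might be structured as below. The Ledger and PaymentApi interfaces and every method name here are hypothetical placeholders, not real Corda or payment-system APIs.

import java.util.UUID;

public class LockCallFinalisePattern {

    interface Ledger {
        UUID lockAsset(UUID assetId);                               // Step 1: on-ledger "prepare", returns a lock reference
        void finaliseAsset(UUID assetId, UUID lockId, boolean ok);  // Step 3: move to final state or revert
    }

    interface PaymentApi {
        // Step 2: off-ledger call; sending the same idempotency key twice must be a no-op.
        boolean transferIdempotent(UUID idempotencyKey, long amount);
    }

    public static void requestToken(Ledger ledger, PaymentApi payments,
                                    UUID assetId, long amount) {
        // Step 1: lock the asset on-ledger and wait for notarisation.
        // The notarised lock doubles as the idempotency key, so a replayed
        // checkpoint reuses the same key instead of minting a new one.
        UUID lockId = ledger.lockAsset(assetId);

        // Step 2: idempotent off-ledger transaction that succeeds or fails.
        boolean succeeded = payments.transferIdempotent(lockId, amount);

        // Step 3: second on-ledger transaction, either to the intended final
        // state or back to the original state.
        ledger.finaliseAsset(assetId, lockId, succeeded);
    }
}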
I have a Google Cloud Tasks queue (rate: 10/s, bucket: 200, concurrent: 1) that dispatches tasks to a worker in an App Engine service (Python 2.7 runtime). Tasks are normally added to the queue at about 3-4/s. Each task is processed one at a time (no concurrency).
In general, each task is processed very fast (less than 1 s). Surprisingly, the queue sometimes randomly "pauses" a small subset of 5-20 tasks. New incoming tasks are processed as usual, but those ones are blocked and stay in the queue for several minutes, even when the worker is idle and could process them. After 7-9 minutes they are processed automatically without any other interaction. The issue is that this delay is far too long and not acceptable :(
While they are "paused", I can manually execute those tasks by clicking the "Run" button, and they are immediately processed. So I would rule out some kind of limitation on the worker side.
I tried redeploying the queue.yaml. I also tried pausing and resuming the queue. Both with no effect.
No errors are notified. Tasks are not retried, just ignored for some minutes.
Has anybody experienced this behavior? Any help will be appreciated. Thanks.
Cloud Tasks now uses gcloud (Cloud SDK) to manage the queue configuration. queue.yaml is a part of the legacy App Engine SDK for App Engine Task Queues. Uploading a queue.yaml when using Cloud Tasks may cause your queue to be disabled or paused.
To learn more about queue management, see Using Queue Management versus queue.yaml.
To learn more about migrating from Task Queues to Cloud Tasks, see Migrating from Task Queues to Cloud Tasks.
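For reference, managing the queue with the Cloud SDK looks roughly like this; the queue name is a placeholder and the values simply mirror the configuration given in the question:

gcloud tasks queues describe my-queue
gcloud tasks queues update my-queue --max-dispatches-per-second=10 --max-concurrent-dispatches=1
gcloud tasks queues resume my-queue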
Imagine the following setup:
A set of n independent tasks in a task list must be completed in Siebel
Tasks a, b etc can be worked on by separate threads
When a thread starts, the workflow records the states of all n tasks
The threads continue to completion and eventually end up sending a JMS message to a queue
We have the following problem:
Thread 1 that works on task a completes its work and marks task a as closed
At the same time thread 2 that works on task b also completes its work and marks task b as closed
Two JMS messages are placed on the queue and sent to another back end system
Here's the problem: The first JMS message says that the state of the task list is a=closed b=open and the second JMS message says a=open b=closed
Tasks can legitimately be re-opened by a user of Siebel (let's say for fraud checking purposes)
The back end system receives the two JMS messages in any order since the middleware does not guarantee ordering
The back end system receives one JMS message saying closed,open and another shortly afterwards saying open,closed. This could happen in either order, but the result is the same: it appears to the back end system that the entire task list has not been closed, whilst in Siebel all tasks (a and b in this example) have been closed
I am told that there is no way around this in Siebel: the database commit that modifies the state of the tasks being acted upon in the workflow thread can only happen at the very end of the workflow, and crucially that is after the JMS messages have been sent out with the misleading state. This is apparently because of the need to roll back a workflow upon error.
Questions: Is the above statement true, meaning that this problem can never be solved in Siebel? If not, can someone tell me whether it is possible to fix this in Siebel so that a JMS message is sent with the correct state of the tasks? I naively think this could be solved using semaphores, but truth be told I've been spoiled in the sense that I've never had to deal with semaphores, and I don't know whether that concept even exists in Siebel. Any help?
You can't read data before it is committed to the database; you can only control the timing.
Use a business service to call the workflow(s) synchronously, or use a business service instead of a workflow, and send the JMS message after the database commit. See the instructions for calling a workflow process from a business service.
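Siebel specifics aside, the ordering this answer describes, commit the task states first and only then send the message, is the usual pattern. A bare-bones JMS/JDBC illustration (the broker, connection strings, table names, and message content are all assumptions, and this is not Siebel code):

import java.sql.DriverManager;
import java.sql.PreparedStatement;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CommitThenNotify {
    public static void main(String[] args) throws Exception {
        // 1) Commit the task states to the database first.
        try (java.sql.Connection db = DriverManager.getConnection(
                "jdbc:postgresql://localhost/tasks")) {              // assumed database
            db.setAutoCommit(false);
            try (PreparedStatement ps = db.prepareStatement(
                    "UPDATE task SET status = 'closed' WHERE id = ?")) {
                ps.setString(1, "a");
                ps.executeUpdate();
            }
            db.commit();   // only now is the new state visible to any reader
        }

        // 2) Only after the commit, send the JMS message describing the task
        //    list. In a real system you would re-read the committed states
        //    here and build the message from them.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker
        Connection jms = factory.createConnection();
        try {
            Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("task.status"));
            producer.send(session.createTextMessage("a=closed,b=closed"));
        } finally {
            jms.close();
        }
    }
}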
Scenario
I have a web application which needs to do some calculations and processing on data. This job is a long-running job (a few hours) and is initiated by the user.
Requirement.
User Clicks on Process Data.
Some functions are called to start data processing.
Data Processing runs for hours.
User is given feedback of percentage completed etc.
Even if the user logs off and then logs on again, he should get this feedback.
The requirement is somewhat similar to Spiceworks, which runs in the background to detect the devices/computers in the network and notifies the user of the progress on his page. But Spiceworks uses a Windows service, and we don't want to use a Windows service.
Now the questions are:
If the user closes the page, will the task still run in the background?
The task has to be completed fully; if it is terminated partway through, the output will not have any meaning.
How should this long-running process actually be designed in an ASP.NET environment?
Also, is there a way to show the status of processing to any/the same user who logs in?
There are multiple ways to schedule a job in the background. You can use a SQL job, a Windows service, or scheduled tasks.
I would design it like this:
From my ASP.NET page I would store an indication in the database that the job should start, which would then be picked up by the scheduled task. This task is nothing but a console application which pulls data from the database to see which tasks the user initiated and then takes the next action there. For the percentage complete, you can store those values from your job in the DB, and your page will read the DB to show them to the user whenever they come back to the page.
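The question is about ASP.NET, but the worker described here (a scheduled console program that polls the database for flagged jobs and writes progress back) has the same shape in any language. A rough JDBC sketch, with made-up connection string, table, and column names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Console worker run by the scheduler: picks up jobs flagged by the web page,
// processes them, and writes progress back so the page can show it on next login.
public class JobWorker {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=app")) {    // assumed connection string

            // Collect jobs the web page has flagged as pending.
            List<Long> pending = new ArrayList<>();
            try (PreparedStatement pick = db.prepareStatement(
                         "SELECT id FROM jobs WHERE status = 'pending'");
                 ResultSet rs = pick.executeQuery()) {
                while (rs.next()) {
                    pending.add(rs.getLong("id"));
                }
            }

            // Process each job and record the percentage complete as we go.
            for (long jobId : pending) {
                for (int pct = 10; pct <= 100; pct += 10) {
                    doSliceOfWork(jobId);                            // placeholder for the real work
                    try (PreparedStatement upd = db.prepareStatement(
                            "UPDATE jobs SET percent_complete = ?, status = ? WHERE id = ?")) {
                        upd.setInt(1, pct);
                        upd.setString(2, pct < 100 ? "running" : "done");
                        upd.setLong(3, jobId);
                        upd.executeUpdate();
                    }
                }
            }
        }
    }

    private static void doSliceOfWork(long jobId) throws InterruptedException {
        Thread.sleep(100);   // stands in for the hours-long processing
    }
}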
Here is another thread where long running tasks in IIS are discussed:
Can I use threads to carry out long-running jobs on IIS?
I have an aspx web application that updates or adds files in a database. The clients access it through the browser, and one of the requirements is that they can start the update and be able to close the browser while the update continues. It appears to run for a little while after I close the browser, but then it stops. How can you keep the application running in ASP.NET?
That's something you could very well solve with WF (Workflow Foundation). Create a workflow for the task that should survive closing the browser. Workflows have their own threads and lifecycles, separate from ASP.NET.
The web application will keep running in the application pool, but this will eventually be recycled. As long as the user's session runs, the application should be kept alive, so increasing the session timeout may fix the problem.
A better approach though would be to move the long-running task into a service instead, but that may require a rewrite of your application.
Usually for long-running or asynchronous processing, you want to dispatch the request to a back-end service to handle. Trying to keep the web-app alive to finish processing can lead to problems, especially with HTTP and session timeouts.
A common pattern for this is to put the request on a message queue and let a back-end service process it when it can.
I would create a separate Windows service that you can push jobs onto from your web application, then check the status of the job(s) when the user logs in again.
The Windows service won't be tied to the ASP.NET app domain, so it will continue to run regardless of what's happening in your web application.
I've run into this pattern, and you have to decouple the work from the HTTP request. The way we've solved it is to abstract the computation to be done as an event to be scheduled. So, say a user at a browser takes an action that requires a (relatively) long-lived computation on the back end; this computation is given a name like 'doXYZForUser' and a parameter vector like (userId, params...) and sent off to the work queue. Some time in the future the user logs in again and can see the status of their job.
I'm running a Java stack and a Java Message Service (JMS) queue, but the principle is the same. The request from the browser queues up an event, and the browser gets an ACK back saying the event is on the work queue. The queue is managed by an entirely separate process, which in .NET I believe is just called the Message Queue. The job comes up on the queue, gets processed, and the results can be placed in a separate table containing a reference to the user that kicked off the job, so the next time they log in the job status/results can be returned.
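A stripped-down sketch of that flow in plain JMS (ActiveMQ is assumed as the broker; the queue name, event name, and result handling are made up):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WorkQueueExample {

    // Called from the request handler: enqueue the event and return immediately,
    // so the browser just gets an ACK that the job is on the work queue.
    public static void enqueue(String userId) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("work.queue");
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("doXYZForUser:" + userId));
        } finally {
            connection.close();
        }
    }

    // Runs in the separately managed worker process: pull events, do the work,
    // and store the result keyed by the user who kicked off the job.
    public static void workerLoop() throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("work.queue"));
        while (true) {
            TextMessage event = (TextMessage) consumer.receive();
            String userId = event.getText().substring("doXYZForUser:".length());
            String result = "done";                      // placeholder for the long computation
            saveResultForUser(userId, result);           // hypothetical: write to a results table
        }
    }

    private static void saveResultForUser(String userId, String result) {
        // In the pattern described above this would insert into a results table
        // referencing the user, so the next login can show status/results.
        System.out.println(userId + " -> " + result);
    }
}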