Retry strategy in Mule: alternative for until-successful - HTTP

I am trying to implement a retry strategy for an HTTP outbound endpoint. After Googling, I found that until-successful has a good retry capability. However, the maximum number of threads available for this API is reportedly 32, so messages could be lost once the thread count reaches 32, which might also cause performance issues. Could someone clarify whether this issue has been fixed in Mule?
What other alternative strategies are available? Any suggestions, links, samples, or pseudocode would be much appreciated.

There is no limit to the number of retries you can attempt with until-successful.
I have no idea where you got this "32" from...
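On the second part of the question: if you ever need tighter control than until-successful gives you, a manual retry loop with exponential backoff is a common alternative. Below is a minimal, Mule-agnostic sketch in Java; the endpoint URL, attempt limit, and delays are made-up placeholders, not anything prescribed by Mule.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RetryDemo {
        static final int MAX_ATTEMPTS = 5;        // hypothetical limit
        static final long BASE_DELAY_MS = 500;    // hypothetical base delay

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://example.com/service"))  // placeholder endpoint
                    .GET()
                    .build();

            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    HttpResponse<String> resp =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    if (resp.statusCode() < 500) {  // treat non-5xx as success here
                        System.out.println("Succeeded on attempt " + attempt);
                        return;
                    }
                } catch (java.io.IOException e) {
                    System.err.println("Attempt " + attempt + " failed: " + e);
                }
                if (attempt < MAX_ATTEMPTS) {
                    // Exponential backoff: 500 ms, 1 s, 2 s, ...
                    Thread.sleep(BASE_DELAY_MS << (attempt - 1));
                }
            }
            // All attempts failed: hand the message to a dead-letter store
            // instead of losing it.
            System.err.println("Giving up after " + MAX_ATTEMPTS + " attempts");
        }
    }

The point of the backoff is that failed attempts don't hold a thread hot in a tight loop; each message's retries are spread out over time rather than piling up.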

Related

Corda: Possible Latency issue with encumbered tokens

We are running into a possible latency issue with Corda Token SDK and encumbrances. The encumbrance is not immediately recognized by DatabaseTokenSelection but after a sufficient wait it does. And this happens sporadically.
This is a little hard to diagnose but if you're able to pinpoint what's causing the latency please do make an issue on here: https://github.com/corda/token-sdk/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc
If you don't hear back on the issue, message me (#david) on slack.corda.net and I can help kick the can along for you.
Some tips for how to debug this:
Test the latency between the nodes with ping.
Make sure any CNAMEs resolve with nslookup from some of your Corda nodes (see the sketch after this list).
Check whether any flows are ending up in the flow hospital along the way: https://docs.corda.net/docs/corda-os/4.7/node-flow-hospital.html
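For the first two tips, here is a rough Java sketch of the same checks done programmatically from a node host. The hostname is a hypothetical peer-node CNAME, and note that InetAddress.isReachable is only an approximation of ping (it may fall back from ICMP to a TCP echo depending on privileges).

    import java.net.InetAddress;

    public class NodeNetCheck {
        public static void main(String[] args) throws Exception {
            String host = "nodeb.example.com";  // hypothetical peer-node CNAME
            long t0 = System.nanoTime();
            InetAddress addr = InetAddress.getByName(host);  // nslookup equivalent
            long dnsMs = (System.nanoTime() - t0) / 1_000_000;
            System.out.println(host + " -> " + addr.getHostAddress()
                    + " (DNS " + dnsMs + " ms)");

            long t1 = System.nanoTime();
            boolean up = addr.isReachable(2000);  // rough ping equivalent
            long pingMs = (System.nanoTime() - t1) / 1_000_000;
            System.out.println("reachable=" + up + " in " + pingMs + " ms");
        }
    }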

Will using request_pipeline increase performance in resty (lua)?

In lua-resty-http (https://github.com/pintsized/lua-resty-http), I saw that we can use request_pipeline for requests. I am wondering whether this will increase performance. However, after reading the source code, I found that the request_pipeline method is implemented with the regular send_request, and a loop is used to send each request one at a time.
It seems it cannot help improve performance; if that is the case, why bother having this method?
Thanks
Someone answered on GitHub: https://github.com/pintsized/lua-resty-http/issues/130
https://en.wikipedia.org/wiki/HTTP_pipelining
Pipelining is intended to reduce latency when sending many requests. It's not commonly used though, as many intermediaries do not support it well. It's part of the HTTP spec, which is why it's included.
Problem solved
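To make the answer concrete: pipelining means writing several requests onto one connection without waiting for each response, with the responses then arriving back in order. Here is a bare-bones Java socket sketch of the idea, independent of lua-resty-http; it assumes example.com will tolerate pipelined requests, which, as the answer notes, many servers and intermediaries handle poorly.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class PipeliningDemo {
        public static void main(String[] args) throws Exception {
            try (Socket sock = new Socket("example.com", 80)) {
                OutputStream out = sock.getOutputStream();
                // Write both requests back-to-back without waiting for the
                // first response -- this is the essence of pipelining.
                String reqs =
                    "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n" +
                    "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
                out.write(reqs.getBytes(StandardCharsets.US_ASCII));
                out.flush();

                // Responses arrive in order on the same connection; the second
                // request asked the server to close, so read until EOF.
                InputStream in = sock.getInputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    System.out.write(buf, 0, n);
                }
                System.out.flush();
            }
        }
    }

The latency saving is that the second request doesn't wait a full round trip for the first response; the loop in request_pipeline still sends sequentially over the socket, which is all pipelining requires.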

Max messages allowed in a queue in RabbitMQ

I am looking for the maximum number of messages allowed in a queue in RabbitMQ. I understand from some of the links that there is no limit unless we specify one.
I have been searching for authoritative information from RabbitMQ on this for some time, but have found nothing precise other than the link below.
http://rabbitmq.1065348.n5.nabble.com/Max-messages-allowed-in-a-queue-in-RabbitMQ-td26063.html
For my scenario, I have not specified a max limit. If the number of messages allowed in a queue is unlimited, what is it actually bounded by? Does it depend on system attributes such as memory? That is, is the number of messages proportional to the memory?
Is this mentioned in any of the RabbitMQ documents?
Kindly share if anyone has found it. Any answers much appreciated.
Note: I am looking for this information for the worst-case scenario.
Memory and hard drive space... that's where the limit sits.
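If you want an explicit worst-case bound rather than relying on the broker's memory and disk limits, you can cap the queue yourself with the x-max-length queue argument. A small sketch using the RabbitMQ Java client, where the host, queue name, and the cap itself are made-up placeholders; by default, once the cap is reached the oldest messages are dropped from the head of the queue.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.util.HashMap;
    import java.util.Map;

    public class BoundedQueueDemo {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");            // assumed broker location
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                Map<String, Object> qargs = new HashMap<>();
                qargs.put("x-max-length", 100_000);  // hypothetical cap
                // Without x-max-length the queue is bounded only by the
                // broker's available memory and disk.
                channel.queueDeclare("work.queue", true, false, false, qargs);
            }
        }
    }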

Ordered delivery BizTalk performance

Is there an alternative to "ordered delivery" on a send port in BizTalk? The sequence of the messages is very important to me, so I created an orchestration that suspends a message when it is out of sequence and resumes it when it is back in sequence. I use a long-running orchestration and direct port binding.
Some messages are processed faster in the send pipeline, so it happens that the messages sometimes aren't in sequence (I use the file adapter...).
When I check "ordered delivery", the messages stay in sequence no matter what, but the performance is really bad (messages pile up in the send ports), so I need to find an alternative to ordered delivery on the send port.
Any suggestions?
thx
Ordered delivery obviously adds a lot of overhead because of the FIFO pattern. Take a look at this article, and at the FIFO article in the first issue. Also look at BizTalk performance in general to help speed up the other areas of your solution. I've seen a few people try their own custom ordering solutions in .NET and SQL, and performance wasn't much better, because the ordering pattern simply takes time to process. Also take a look at these resources around performance in general:
Considerations when planning a perf test - http://msdn2.microsoft.com/en-us/library/aa972201.aspx
BizTalk 2006 adapter performance numbers - http://msdn2.microsoft.com/en-us/library/aa972200.aspx
If your transport in or out is SOAP, read this scalability study - http://msdn2.microsoft.com/en-us/library/aa972198.aspx
Good proof points for BizTalk performance in relation to infrastructure setup - http://msdn2.microsoft.com/en-us/library/ms864801.aspx
Do you have multiple locations the data is being sent to? That is, does it have to be in sequence globally, or could it be partitioned? If so, you can use correlation together with ordered delivery and have several pipes that deliver in parallel, which speeds the process up (see the sketch below).
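This is not BizTalk code, but a minimal Java sketch of the partitioning idea itself: each correlation key is hashed to a single-threaded lane, so order is preserved per key while different keys flow in parallel. The lane count and the key scheme are made-up placeholders.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PartitionedOrderingDemo {
        // One single-threaded lane per partition: messages with the same key
        // stay in order; messages with different keys proceed in parallel.
        static final int LANES = 4;  // hypothetical partition count
        static final ExecutorService[] lanes = new ExecutorService[LANES];
        static {
            for (int i = 0; i < LANES; i++) {
                lanes[i] = Executors.newSingleThreadExecutor();
            }
        }

        static void send(String correlationKey, String message) {
            int lane = Math.floorMod(correlationKey.hashCode(), LANES);
            lanes[lane].submit(() ->
                System.out.println("lane " + lane + " -> " + message));
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10; i++) {
                send("dest-" + (i % 3), "msg " + i);  // 3 hypothetical destinations
            }
            for (ExecutorService lane : lanes) lane.shutdown();
        }
    }

The trade-off is that you only regain throughput if the data really can be partitioned; a single hot key still serializes onto one lane.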

Detecting Blocked Threads

I have a theory regarding troubleshooting an asynchronous application (I'm using the CCR), and I wonder if someone can confirm my logic.
If a CCR-based multi-threaded application using the default number of threads (i.e. one per core) is slower than the same application with double the threads specified, does this mean that threads are being blocked somewhere in the code?
What do you think? Is this a quick and valid way to detect whether threads are inadvertently being blocked?
What do you mean by "slower"?
If you want to automatically detect blocked threads, perhaps those threads should send a heartbeat, which is then observed by a monitor of some sort, but your options are limited. A sketch of this idea follows.
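The CCR is .NET, so treat the following Java sketch as pseudocode for the heartbeat pattern: the worker bumps a shared timestamp on each iteration, and a monitor flags it when the heartbeat goes stale. The 2-second threshold and the simulated work are made up.

    import java.util.concurrent.atomic.AtomicLong;

    public class HeartbeatDemo {
        static final AtomicLong lastBeat = new AtomicLong(System.nanoTime());
        static final long STALE_NANOS = 2_000_000_000L;  // 2 s threshold (made up)

        public static void main(String[] args) throws Exception {
            Thread worker = new Thread(() -> {
                while (true) {
                    lastBeat.set(System.nanoTime());  // heartbeat
                    doUnitOfWork();
                }
            });
            worker.setDaemon(true);
            worker.start();

            // Monitor: if the heartbeat goes stale, the worker is likely blocked.
            for (int i = 0; i < 20; i++) {
                Thread.sleep(500);
                long silentNanos = System.nanoTime() - lastBeat.get();
                if (silentNanos > STALE_NANOS) {
                    System.err.println("Worker silent for "
                            + silentNanos / 1_000_000 + " ms");
                }
            }
        }

        static void doUnitOfWork() {
            // Simulated work that occasionally blocks for a long time.
            try { Thread.sleep(Math.random() < 0.1 ? 3000 : 100); }
            catch (InterruptedException ignored) {}
        }
    }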
A cheap way to tell whether threads are being blocked is to take the current system time before any potentially blocking operation, take it again after the operation, and see how much time has elapsed. For example, while waiting for a message to arrive, measure how long the thread was blocked waiting for one (see the sketch after this answer).
Unless there are always more than enough messages to be processed, threads will block waiting for a message. If you have more threads, then you have more potential message generators (depending on your design), and thus threads waiting to receive messages will be more likely to have one ready.
Exactly one thread per CPU is too few unless you can guarantee that there will always be enough messages that no thread ever has to wait.
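A minimal Java rendition of that timing trick, using a BlockingQueue as a stand-in for a CCR port; the queue type and the 750 ms delay are illustrative only.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class BlockTimingDemo {
        public static void main(String[] args) throws Exception {
            BlockingQueue<String> port = new LinkedBlockingQueue<>();

            Thread consumer = new Thread(() -> {
                try {
                    long before = System.nanoTime();
                    String msg = port.take();  // potentially blocking operation
                    long waitedMs = (System.nanoTime() - before) / 1_000_000;
                    System.out.println("Blocked " + waitedMs + " ms for: " + msg);
                } catch (InterruptedException ignored) {}
            });
            consumer.start();

            Thread.sleep(750);  // make the consumer wait, so the timing shows up
            port.put("hello");
            consumer.join();
        }
    }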
If this is the case, it means that your thread pool is being exhausted (i.e. you have 2 threads but you've async-pended 4 IOs or something); if your work is heavily IO-bound, the rule of "one thread per core" doesn't really apply.
I've found that to keep the system fluid with minimal threads, I keep the tasks dealing with I/O as concise as possible: they simply post the data from the I/O into another Port and do no further processing. The data is therefore queued elsewhere for processing in a controlled manner, without interfering with the task of grabbing data as fast as possible (see the sketch below). This processing might happen in the ExclusiveGroup of an Interleave if there's shared state to think about... and a handy side effect is that exclusive tasks will never tie up all the threads in a Dispatcher (however, I suspect there are neater ways of managing this in the CCR API).
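Again in Java rather than the CCR, a rough sketch of that handoff: the reader thread only grabs data and posts it to a queue (the analogue of a CCR Port), while a separate thread drains the queue at its own pace. The counts and sleeps are illustrative.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class IoHandoffDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> port = new LinkedBlockingQueue<>();

            // I/O task: grab data and hand it off immediately; no processing here.
            Thread reader = new Thread(() -> {
                for (int i = 0; i < 5; i++) {
                    port.add("chunk " + i);  // post to the "port"
                }
            });

            // Processing task: drains the queue in a controlled manner, possibly
            // under an exclusive lock if shared state is involved.
            Thread processor = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        String chunk = port.take();
                        Thread.sleep(200);  // simulate slow processing
                        System.out.println("processed " + chunk);
                    }
                } catch (InterruptedException ignored) {}
            });

            reader.start();
            processor.start();
            reader.join();
            processor.join();
        }
    }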
