Max messages allowed in a queue in RabbitMQ - asynchronous

I am looking for the maximum number of messages allowed in a queue in RabbitMQ. I understand from some links that there is no limit unless you specify one.
I have been searching for authoritative information from RabbitMQ on this for some time, but have not found anything more precise than the link below.
http://rabbitmq.1065348.n5.nabble.com/Max-messages-allowed-in-a-queue-in-RabbitMQ-td26063.html
For my scenario I have not specified a maximum limit. If the number of messages allowed in a queue is unlimited, what does that actually depend on? Does it depend on system attributes such as memory, i.e. is the number of messages proportional to the available memory?
Is this mentioned anywhere in the RabbitMQ documentation?
Please share if anyone has found it; any answers are much appreciated.
Note: I am researching this for a worst-case scenario.

Memory and hard drive space... that's where the limit sits. If you don't configure an explicit maximum, a queue can keep growing until the broker runs out of RAM and disk space.
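If you do want to cap a queue rather than let it grow until resources run out, one option is the x-max-length queue argument (a policy works too). A minimal sketch using the Python pika client; the queue name and the limit are just placeholders:

```python
import pika

# Sketch: cap a queue at 100,000 messages via the x-max-length queue
# argument. Connection details and the limit are placeholders; the same
# cap can also be applied with a policy instead of a queue argument.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(
    queue="task_queue",
    durable=True,
    arguments={"x-max-length": 100_000},  # by default, oldest messages are dropped once the cap is hit
)

connection.close()
```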

Related

DynamoDB: High SuccessfulRequestLatency

We had a period of latency in our application that was directly correlated with latency in DynamoDB and we are trying to figure out what caused that latency.
During that time, the consumed reads and consumed writes for the table were normal (much below the provisioned capacity) and the number of throttled requests was also 0 or 1. The only thing that increased was the SuccessfulRequestLatency.
The high latency occurred during a period where we were doing a lot of automatic writes. In our use case, writing to DynamoDB also includes some reading (to get any existing records). However, we often write the same quantity of data in the same period of time without causing any increased latency.
Is there any way to understand what contributes to an increase in SuccessfulRequestLatency when it seems that we have provisioned enough read capacity? Is there any way to diagnose the latency caused by this set of writes to DynamoDB?
You can dig deeper by checking the Get and Put latency metrics in CloudWatch.
Since, as you have already mentioned, there was no throttling, your writes involve some reading, and the same volume of writes at other times does not cause any latency, you should check what exactly in the read operations is causing this.
Check the SuccessfulRequestLatency metric with the Operation dimension included. Start with GetItem and BatchGetItem; if that doesn't help, include Scan and Query as well.
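A hedged sketch of pulling that metric per operation with boto3; the table name, time window, period, and operation list are placeholders:

```python
import boto3
from datetime import datetime, timedelta

# Pull SuccessfulRequestLatency broken down by operation for the last hour.
cloudwatch = boto3.client("cloudwatch")

for operation in ["GetItem", "BatchGetItem", "Query", "Scan"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="SuccessfulRequestLatency",
        Dimensions=[
            {"Name": "TableName", "Value": "my-table"},   # placeholder table name
            {"Name": "Operation", "Value": operation},
        ],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=60,
        Statistics=["Average", "Maximum"],
    )
    print(operation, sorted((p["Timestamp"], p["Average"]) for p in stats["Datapoints"]))
```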
High request latency can sometimes happen when DynamoDB is doing an internal failover of one of its storage nodes.
Internally within Dynamo each storage partition has to be replicated across multiple nodes to provide a high level of fault tolerance. Occasionally one of those nodes will fail and a replacement node has to be introduced, and this can result in elevated latency for a subset of affected requests.
The advice I've had from AWS is to use a short timeout and a fast retry (e.g. 100ms) if your use-case is latency-sensitive. It's my understanding that only requests that hit the affected node experience increased latency, so within one or two retries you'll hit a different node and get a successful response, with minimal impact on your overall latency. Obviously it's hard to verify this, because it's not a scenario you can reproduce!
If you've got a support contract with AWS, it's well worth submitting a support ticket from the AWS console when events like this happen. They are usually able to provide an insight into what actually happened.
Note: If you're doing retries, remember to use exponential backoff to reduce the risk of throttling.
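A minimal sketch of the short-timeout, quick-retry idea using boto3's client config; the timeout and retry values here are illustrative, not a recommendation:

```python
import boto3
from botocore.config import Config

# Short connect/read timeouts plus a few retries, so a request that lands
# on a slow or failing storage node gives up quickly and is retried.
# The "standard" retry mode applies exponential backoff between attempts.
config = Config(
    connect_timeout=0.25,  # seconds; illustrative value
    read_timeout=0.25,     # seconds; illustrative value
    retries={"max_attempts": 3, "mode": "standard"},
)

dynamodb = boto3.client("dynamodb", config=config)
# Example call (placeholder table/key):
# dynamodb.get_item(TableName="my-table", Key={"id": {"S": "123"}})
```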

Is it realistic for a dedicated server to send out many requests per second?

TL;DR
Is it appropriate for a (dedicated) web server to be sending many requests out to other servers every second (naturally with permission from said server)?
I'm asking this purely to save myself spending a long time implementing an idea that won't work, as I hope that people will have some more insight into this than me.
I'm developing a solution which will allow clients to monitor the status of their servers. I need to constantly (24/7) obtain recent logs from these servers. Unfortunately, I am limited to getting the last 150 entries of their logs. This means that for busy clients I will need to poll their servers more often.
I'm trying to make my solution scalable so that if it gets a number of customers, I won't need to concern myself with rewriting it, so my benchmark is 1000 clients, as I think this is a realistic upper limit.
If I have 1000 clients and I need to poll each of their servers every, let's say, two minutes, I'm going to be sending out more than 8 requests every second (1000 clients / 120 seconds ≈ 8.3 requests per second). The returned result will be about 15,000 characters on average, though it could be more or less.
Bear in mind that this server will also need to cope with clients visiting it to view their server information, and so it must stay responsive.
Some optimisations I've been considering, which I would probably need to implement relatively early on:
Only asking for 50 log items. If we find one already stored (they are returned in chronological order), we can terminate; if not, we throw out another request for the other 100. This should cut down traffic by around 3/5ths. (See the sketch after this list.)
Detecting how much log traffic each server generates and polling the quieter ones less often (i.e. if a server only gets 10 logged events every hour, we don't want to keep asking it for 150 every few minutes).
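A rough sketch of the first optimisation, assuming hypothetical helpers fetch_logs(server, count) for the remote call and is_already_stored(entry) for the local lookup:

```python
# Sketch only: fetch_logs(server, count) and is_already_stored(entry) are
# hypothetical stand-ins for the real remote API call and the local storage
# check. Entries are assumed to be returned newest-first here.

def poll_server(server):
    new_entries = []

    # First request only 50 entries.
    for entry in fetch_logs(server, count=50):
        if is_already_stored(entry):
            return new_entries            # caught up, no second request needed
        new_entries.append(entry)

    # None of the first 50 were known, so request the full 150-entry
    # window and keep whatever we have not seen yet.
    for entry in fetch_logs(server, count=150)[50:]:
        if is_already_stored(entry):
            break
        new_entries.append(entry)
    return new_entries
```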
I'm basically asking if sending out this many requests per second is considered a bad thing and whether my future host might start asking questions or trying to throttle my server. I'm aiming to go shared for the first few customers, then if it gets popular enough, move to a dedicated server.
I know this question involves a slight degree of opinion, so I fear it might be a candidate for closure, but I do feel that the answer requires enough factual grounding to make it an acceptable question.
I'm not sure if there's a networking SE or if this might be more appropriate on SuperUser or something, but it feels right on SO. Drop me a comment ASAP if it's not appropriate here and I'll delete it and post to a suggested new location instead.
You might want to read about the C10K problem; the article compares several I/O strategies. In my opinion, the best approach is a fixed number of threads, each handling several connections using nonblocking I/O.
Regarding your specific project I think it is a bad idea to poll for a limited number of log items. When there is a peak in log activity you will miss potentially critical data, especially when you apply your optimizations.
It would be way better if the clients you are monitoring pushed their new log items to your server. That way you won't miss something important.
I am not familiar with the performance characteristics of ASP.NET, so I can't say whether a single dedicated server is enough, especially since I don't know your server's specs. With a reasonably powerful server it should be possible; if it turns out not to be enough, you should distribute the work across multiple servers.
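To make the nonblocking, many-connections idea concrete, here is a hedged sketch of a concurrent polling loop. It uses Python with aiohttp purely for illustration; the same pattern applies in ASP.NET with async/await, and the URLs, interval, and concurrency cap are placeholders:

```python
import asyncio

import aiohttp  # assumption: an async HTTP client library

POLL_INTERVAL = 120   # seconds between polls of the same server (placeholder)
CONCURRENCY = 50      # cap on simultaneous outbound requests (placeholder)


async def poll(session, semaphore, url):
    # Each poll holds one semaphore slot, so at most CONCURRENCY requests
    # are in flight at a time.
    async with semaphore:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as resp:
            return await resp.text()  # roughly 15,000 characters per response


async def poll_all(urls):
    semaphore = asyncio.Semaphore(CONCURRENCY)
    async with aiohttp.ClientSession() as session:
        while True:
            results = await asyncio.gather(
                *(poll(session, semaphore, u) for u in urls),
                return_exceptions=True,  # one slow or broken server should not stop the rest
            )
            # ... parse and store results here ...
            await asyncio.sleep(POLL_INTERVAL)

# Usage (placeholder URLs):
# asyncio.run(poll_all(["https://client1.example/logs", "https://client2.example/logs"]))
```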

retry strategy in mule alternative for until-successful

I am trying to implement a retry strategy for an HTTP outbound endpoint. After some searching I found that until-successful has good retry capabilities, but the maximum number of threads available for it is 32. Hence messages could be lost once the thread count reaches 32, and it might cause performance issues. Could someone clarify whether this issue has been fixed in Mule?
What other alternative strategies are available? Any suggestions/links/samples/pseudo code are much appreciated.
There is no limit to the number of retries you can attempt with until-successful.
I have no idea where you got this "32" from...
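For reference, the pattern that until-successful (or any hand-rolled alternative) implements is a retry loop with exponential backoff. In Mule itself this is expressed as XML configuration; the sketch below is only a language-agnostic illustration in Python, and send_request is a hypothetical stand-in for the HTTP outbound call:

```python
import random
import time

# Generic retry with exponential backoff and jitter.
# send_request is hypothetical; replace with the real outbound call.

def send_with_retry(payload, max_attempts=5, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return send_request(payload)
        except Exception:
            if attempt == max_attempts:
                raise  # give up; e.g. route the message to a dead-letter queue
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            time.sleep(delay)  # back off before the next attempt
```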

Getting current transferred MPI network communication volume

I have a question related to MPI.
To keep track of the communication volume used by my implementation, I would like to obtain the amount of data transferred from the start of the MPI process up to the current measurement point.
I checked the specification as well as the mpi.h header of MPICH and did not find a matching function to call or a variable that keeps track of the network transfer volume. It would of course be possible to implement a small traffic registry or define a macro for tracking communication sizes, but perhaps this information can be read out from somewhere.
Do you know a way to obtain the current transfer size? Perhaps it is also possible to get this number via a system call that reports the process's network traffic.
Is it perhaps possible to access the proc information of the current process? Is /proc/net maintained per process as well, e.g. as /proc/self/net?
Thank you in advance,
Martin
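In case it helps, here is a hedged sketch of the "small traffic registry" idea mentioned in the question, using mpi4py purely for illustration. The wrapper names are made up, it only counts payload bytes (not wire-level overhead), and the same interception could be done in C via the PMPI profiling interface:

```python
from mpi4py import MPI
import numpy as np

# Minimal "traffic registry": a thin wrapper that accumulates the bytes
# passed to each send call. Names are illustrative, not part of MPI.

_bytes_sent = 0


def tracked_send(comm, buf, dest, tag=0):
    global _bytes_sent
    _bytes_sent += buf.nbytes        # buf is assumed to be a NumPy array
    comm.Send(buf, dest=dest, tag=tag)


def bytes_sent_so_far():
    return _bytes_sent


if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0 and comm.Get_size() > 1:
        tracked_send(comm, np.zeros(1024, dtype=np.float64), dest=1)
        print("bytes sent so far:", bytes_sent_so_far())
    elif comm.Get_rank() == 1:
        buf = np.empty(1024, dtype=np.float64)
        comm.Recv(buf, source=0)
```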

How to avoid the max connection limit?

Dear StackOverflow members,
I was wondering: take WhatsApp, for example, where your phone stays continuously connected to their servers over TCP.
Assuming there is a maximum of 65535 connections per port, how do they avoid that limit?
It would seem that once a server hits 65535 connections it would stay at that level and never go down, since everyone's phone simply stays connected.
I'm not sure if you guys understand my question, but if you have any questions feel free to ask.
Kind regards,
Rene Roosen
No large website would rely on a single server. They usually put a load-balancing proxy in front (commercial, or open-source ones like ATS or HAProxy) and run several servers behind it. Those proxies have mechanisms to scale to much higher connection counts.
As long as the 4-tuple (source IP, source port, destination IP, destination port) is unique, a proxy can handle the connection, provided other resources (memory, CPU, etc.) are available. Traffic is not restricted to 64k connections per port.
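A small illustration of why the listening port is not the bottleneck: every accepted connection is distinguished by its full 4-tuple, so one server port can hold many simultaneous connections. The port number below is arbitrary:

```python
import socket
import threading

# One listening port, many simultaneous connections: each accepted
# connection is identified by (source IP, source port, dest IP, dest port),
# so the destination port being fixed does not limit the server to one peer.


def serve(listener, n):
    conns = []
    for _ in range(n):
        conn, peer = listener.accept()
        # Each peer has a distinct (IP, port) pair even though the
        # destination port is always the same.
        print("server sees peer:", peer)
        conns.append(conn)
    for c in conns:
        c.close()


listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 9000))   # arbitrary local port
listener.listen(16)

t = threading.Thread(target=serve, args=(listener, 5))
t.start()

clients = [socket.create_connection(("127.0.0.1", 9000)) for _ in range(5)]
t.join()
for c in clients:
    c.close()
listener.close()
```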
