Telegram webhook is not responding fast - telegram

My bot is in more than 50K groups and receives every message using Webhook.
The problem is that during busy hours Telegram delivers updates to my webhook with a long delay (e.g. an hour late!).
Is there any reference documenting these limits — how many messages per second Telegram passes to a webhook — and, in general, how can I speed it up?

You can use the max_connections parameter in setWebhook:
Maximum allowed number of simultaneous HTTPS connections to the webhook for update delivery, 1-100. Defaults to 40.
Use lower values to limit the load on your bot's server, and higher values to increase your bot's throughput.
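As a sketch, raising the cap might look like this (the token and webhook URL are placeholders; the helper only builds the request, which you would then POST with any HTTP client):

```python
def set_webhook_params(token, webhook_url, max_connections=100):
    """Build the Bot API setWebhook call that raises the connection cap."""
    if not 1 <= max_connections <= 100:
        raise ValueError("max_connections must be between 1 and 100")
    api_url = "https://api.telegram.org/bot{}/setWebhook".format(token)
    payload = {"url": webhook_url, "max_connections": max_connections}
    return api_url, payload

api_url, payload = set_webhook_params("<your-bot-token>",
                                      "https://example.com/telegram/webhook")
# POST api_url with payload using any HTTP client,
# e.g. requests.post(api_url, data=payload)
```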

Related

Adafruit IO data rate limit

I'm trying to send data from multiple ESP-8266 to feeds on my Adafruit IO account.
The problem is that when I try to send new values, I'm faced with a ban from publishing because the 2 seconds time limit is violated when two or more of my MCUs happen to send data at the same time (I can't synchronize them to avoid this).
Is there any possible solution to this problem?
I suggest considering these three options:
A sending token that is passed from one ESP to the next. Basically, no ESP is allowed to send by default; once an ESP receives the token, it is allowed to send, waits the appropriate time limit, and then hands the token to the next ESP. This solution has all the boards connected via an AP/router and uses client-to-client communication. It can be set up fail-safe: if the next ESP is unavailable (reset, out of battery, etc.), you take the next one on the list and issue an additional warning to the server.
The second option (more flexible and dynamic, but an SPO — single point of failure) is to set up one ESP as a data collector that does all the sending.
If the ESPs are in different locations, you have to set them up so that they meet the following requirement:
If you have a free Adafruit IO account, the rate limit is 30 data points per minute. If you exceed this limit, a notice will be sent to the {username}/throttle MQTT topic. You can subscribe to the topic if you wish to know when the Adafruit IO rate limit has been exceeded for your user account. This limit applies to all data record modification actions over the HTTP and MQTT APIs, so if you have multiple devices or clients publishing data, be sure to delay their updates enough that the total rate is below your account limit.
So it's not a 2-second limit but 30/min (60/min on the Pro plan), and you limit each ESP's sending with the formula:
30 / number of ESPs sending to IO -> 30 / 5 = 6 ==> 5 incl. safety margin
This means each ESP is only allowed to send 5 times per minute. Importantly, once the 5-send budget is used up, the ESP HAS to wait out the rest of the minute before the next send.
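The arithmetic above can be put into a small pacing sketch to run on each board (the sensor-read and publish callbacks are placeholders, and the 5-board count is the example from the answer):

```python
import time

ACCOUNT_LIMIT_PER_MIN = 30                  # free Adafruit IO tier
NUM_ESPS = 5                                # boards sharing one account
PER_ESP_BUDGET = ACCOUNT_LIMIT_PER_MIN // NUM_ESPS - 1  # 6 -> 5, safety margin
SEND_INTERVAL = 60.0 / PER_ESP_BUDGET       # 12 s between sends per board

def publish_loop(read_sensor, publish):
    """Run on each ESP: pace sends so the shared account limit holds."""
    while True:
        publish(read_sensor())              # stand-in for the MQTT/HTTP send
        time.sleep(SEND_INTERVAL)           # wait out the per-board budget
```

Spreading the budget evenly over the minute, instead of sending 5 times back-to-back, also avoids the simultaneous-send collisions described in the question.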
The answer is simple: just don't send that frequently.
In the IoT world:
If data needs frequent updates (such as motor/servo position or accelerometer readings), you often want to keep it local and won't want or need to send it to the cloud.
If the data does need to be in the cloud, it often doesn't need to be updated that frequently (temperature/humidity, for example).
Alternatively, if you still think your data is so critical that it needs such frequent updates, dedicate one ESP as your Edge Gateway to collect the data from the sensor nodes and send it to the cloud in one go; that is actually the proper way to design an IoT network with multiple sensor nodes.
If that still doesn't work for you, you still have the choice of paying for the premium service to raise the rate limit, or building your own cloud service and integrating it with your Edge Gateway.
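A minimal sketch of that Edge Gateway idea (the upload callback is a stand-in for whatever cloud API you use; names and intervals are illustrative):

```python
import time

class EdgeGateway:
    """Collect readings from many sensor nodes locally and forward
    them to the cloud in one batched upload per interval."""

    def __init__(self, upload, flush_interval=60):
        self.upload = upload               # cloud-send callback (placeholder)
        self.flush_interval = flush_interval
        self.buffer = []
        self.last_flush = time.monotonic()

    def on_reading(self, node_id, value):
        """Called whenever a sensor node reports in."""
        self.buffer.append((node_id, value))
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        """One cloud request for the whole batch, however many nodes fed it."""
        if self.buffer:
            self.upload(self.buffer)
            self.buffer = []
        self.last_flush = time.monotonic()
```

However many nodes report in, the account only sees one upload per flush interval, which keeps the total rate under the limit by construction.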

BizTalk 2013 R2 - Rate based Throttling on flooding messages

We have a solution that takes a message, and sends it to a web API.
Every day, an automatic procedure run by another department passes thousands of records into the MessageBox, which seems to cause errors on the API solicit-response port (strangely, these errors don't allude to a timeout, but they only trigger when such a massive quantity of data is sent downstream).
I've contacted the service supplier to determine the capacity of their API calls, so I'll be able to tailor our flow once I have a better idea.
I've been reading up on Rate Based Throttling this morning, and have a few questions I can't find an answer to:
If throttling is enabled, does it only process the minimum number of samples/messages? If so, what happens to the remaining messages? I read somewhere they're queued in memory, but only up to a maximum of 100, so where do all the others go?
If I have 2,350 messages flood through in the space of 2 seconds and I want to control the flow, would changing my Sampling Window duration down to 1 second and setting the Throttling override to initiate throttling make a difference?
If you are talking about the Host Throttling settings, the remaining messages stay in the MessageBox database and show as being in a Dehydrated state.
You would have to test the throttling settings under load; if you get them wrong it can be very bad. I've come across one server where the settings were configured incorrectly and it was constantly throttling.
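BizTalk's actual behaviour is configuration-driven, but the sampling-window idea in the question can be illustrated with a small sliding-window sketch (the class, method names, and thresholds are made-up illustrations, not BizTalk internals or defaults):

```python
from collections import deque

class RateThrottle:
    """Count arrivals in a sliding sampling window; signal throttling once
    both the minimum sample count and the override threshold are exceeded."""

    def __init__(self, window_secs=1.0, min_samples=100, override=1000):
        self.window = window_secs
        self.min_samples = min_samples
        self.override = override
        self.stamps = deque()              # arrival times inside the window

    def on_message(self, now):
        """Record one arrival at time `now`; return True if it should be held."""
        self.stamps.append(now)
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()          # drop arrivals older than the window
        n = len(self.stamps)
        return n >= self.min_samples and n > self.override
```

Shrinking the window makes the counter react faster to a short burst, which is the intuition behind lowering the Sampling Window duration in the question; messages the window rejects would then back up upstream rather than hit the API all at once.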

HTTP Send port timing out in BizTalk

I have a static one-way port configured for HTTP which sends XML documents to an external website. It had been working fine for over a year, but lately it has been throwing errors:
The HTTP send adapter cannot complete the transmission within the specified time. Destination: https://xyz.example.com
I've tried extending the timeout on the send port, but the errors keep happening. The vendor says there have been no changes on their end, I have made no changes to the server, and the network team says no changes have been made either.
I've tested the interface with Postman and it works every time I try it.
Resuming the messages does nothing; I get the same error. What I've noticed is that if I restart the host instance, the messages start flowing.
Any clues?
Maybe you have a lot of HTTP requests and Outbound Throttling is being activated? Check the Message delivery throttling state performance counter under the BizTalk:MessageAgent performance object category to measure the current throttling state and see if it is different from 0.
Host Throttling Performance Counters
How BizTalk Server Implements Host Throttling
When the counter's value is 1, the state is "Throttling due to imbalanced message delivery rate". This means that the "Message delivery incoming rate for the host instance exceeds the Message delivery outgoing rate * the specified Rate overdrive factor (percent)". In other words, the send port can't send messages as fast as it receives new ones.
You can check both rates with their own performance counters.
You can increase the "Rate overdrive factor (percent)" in the Host properties to allow more load; by default it is 125 (the input rate can be up to 25% above the output rate before throttling begins).
Or adjust the Sampling Window Duration or the Minimum Number of Samples, depending on the behaviour of your load.

Are messages sent to groups delivered to every user at the same time?

I was wondering whether every user of a group receives messages at the same time (disregarding network latency), or whether Telegram delivers messages in chunks, which would mean some members see a message earlier than others. Are messages delivered to the Web client and the phone clients at the same time?
Telegram transfers messages at low latency, so all accounts/devices should receive messages at roughly the same time.
You can verify this yourself by using two accounts on different networks and comparing when messages arrive.

How does parse.com push messages count towards the request limit?

I have tried reading the definition of requests/sec on parse.com, but I still couldn't understand whether push messages are considered a "request".
I was wondering if I could use parse.com's push service for free, even at 10 million push messages a month, as long as I don't pass the 1 million unique devices threshold.
How is it counted against the free 30 requests/second limit, if at all?
Thanks :)
A request to send a push does use an API request, and does count towards your burst limit.
However, that request could be to send a push to a single device, or a million devices, and it still uses just one request.
So yeah, you can get by for free with the limited scenario you described.
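For illustration, this is roughly what a single push call against the classic parse.com REST API looked like (endpoint and header names are from the now-retired hosted service, so treat them as historical; the helper only builds the request):

```python
import json

def build_push_request(app_id, rest_key, alert, channels):
    """One push API call - and therefore one request against the burst
    limit - regardless of how many devices the channels reach."""
    return {
        "url": "https://api.parse.com/1/push",
        "headers": {
            "X-Parse-Application-Id": app_id,
            "X-Parse-REST-API-Key": rest_key,
            "Content-Type": "application/json",
        },
        "body": json.dumps({"channels": channels,
                            "data": {"alert": alert}}),
    }
```

Whether `channels` resolves to one device or a million, it is still a single request, which is why the 10M-pushes-a-month scenario fits inside the request limit.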
