Telegram Bot API - Rate limits for queries (not sending messages)

I am using the Telegram Bot API getChatMemberCount to query the number of people in various Telegram groups. I am not trying to send messages with the Bot API.
https://api.telegram.org/bot<YOUR_TOKEN>/getChatMemberCount?chat_id=<chat_id>
Some of the requests receive a 429 error.
I'm aware of the rate limits for sending messages, detailed here, and have tried waiting 5 and 10 seconds between requests, but I am still getting 429s.
I've also tried waiting 10 minutes, and then 15 minutes, after receiving a 429 before re-sending the requests that errored, with 30 seconds between each query, and still got a 429 on every request that previously returned one. If I run the request against groups that were previously successful, they work, at the same time that groups that previously received a 429 keep receiving a 429. So it almost seems like the group itself is the issue, not the length of time between requests.
For example:
INITIAL REQUEST: Group A (success), (wait 10 seconds), Group B (429), (wait 10 seconds), Group C (success), (wait 10 seconds), Group D (429)
WAIT 15 MINUTES
2nd REQUEST: Group B (429), Group D (429), Group A (success), Group C (success)
If anyone has insight into the Telegram Bot API rate limits for requests that are not sending messages, please let me know what worked for you.
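For what it's worth, Telegram's 429 error responses usually carry a `parameters.retry_after` hint (seconds to wait). A minimal polling sketch that honors that hint, falling back to capped exponential backoff, might look like this; the function names and the backoff cap are my own choices, not anything from the Bot API docs:

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://api.telegram.org/bot{token}/getChatMemberCount?chat_id={chat_id}"

def next_delay(attempt, retry_after=None):
    """Prefer the server-supplied retry_after; otherwise back off
    exponentially (1, 2, 4, ... seconds), capped at 60 s."""
    if retry_after is not None:
        return retry_after
    return min(2 ** attempt, 60)

def get_member_count(token, chat_id, max_attempts=5):
    """Query getChatMemberCount, retrying on 429 with the hinted delay."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(API_URL.format(token=token, chat_id=chat_id)) as resp:
                return json.load(resp)["result"]
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            body = json.load(err)  # HTTPError is file-like; the body is JSON
            hint = body.get("parameters", {}).get("retry_after")
            time.sleep(next_delay(attempt, hint))
    raise RuntimeError(f"still rate-limited after {max_attempts} attempts")
```

Note that if the per-group 429s persist regardless of delay, as described above, the `retry_after` value itself would be the thing to inspect: a very large hint would confirm the limit is scoped to something other than request frequency.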

Related

BizTalk send port retry interval and retry count

There is one dynamic send port (Req/response) in my orchestration.
The request is sent to an external system and the response is received in the orchestration. There is a chance the external system has a monthly maintenance window of 2 days. To handle that scenario:
If I set the retry interval to 2 days, will it impact performance? Is it a good idea?
I wouldn't think it is a good idea, as even a transitory error of another type would then mean that the message would be delayed by two days.
As maintenance is usually scheduled, either stop the send port (but don't unenlist) or stop the receive port that picks up the messages to send (preferable, especially if it is high volume), and start them again after the maintenance period.
The other option would be to build that logic into the orchestration, so that if it catches an exception it increases the retry interval on each retry. However, as above, if it is high volume you might be better off switching off the receive location, as otherwise you will have a high number of running instances.
Set a service window on the send port if you know when the receiving system will be down. If the schedule is unknown, I would rather set:
retry count = 290
retry interval = 10 minutes
so that the messages will keep being retried for a little over two days.
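A quick sanity check on the arithmetic behind those two numbers:

```python
# Does retry_count x retry_interval actually cover a two-day
# maintenance window?
retry_count = 290
retry_interval_minutes = 10

coverage_hours = retry_count * retry_interval_minutes / 60
print(coverage_hours)  # ~48.3 hours, i.e. just over two days
```

290 retries at 10-minute intervals give roughly 48.3 hours of coverage, which is why these particular values bridge a 2-day outage.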

Response time for a search with more than 1000 records is high

A search transaction with more than 1000 records has a response time 4 to 5 times higher for the Singapore client than for the USA client during a 125-user load test. Please suggest.
As per JMeter Glossary
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So the formula is:
Response time = Connect Time + Latency + Actual Server Response time
So the reasons could be in:
Due to the long distance from your load generators to Singapore, you get worse results because of the time required for the network packets to travel back and forth, i.e. high latency.
Your Singapore instance is slower than the USA one due to e.g. worse hardware specifications, bandwidth, etc.
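To make the decomposition concrete, here is the formula applied to invented example metrics (the numbers are purely illustrative, not measured):

```python
# Illustrative per-client metrics in milliseconds, split per the formula:
# response time = connect time + latency + server response time
samples = {
    "USA":       {"connect": 20, "latency": 60,  "server": 400},
    "Singapore": {"connect": 90, "latency": 350, "server": 1600},
}

response_times = {
    client: m["connect"] + m["latency"] + m["server"]
    for client, m in samples.items()
}
ratio = response_times["Singapore"] / response_times["USA"]
print(response_times, round(ratio, 2))
```

With numbers like these, the Singapore client would show roughly a 4x slowdown, matching the reported 4-5x gap; comparing the connect/latency components across the two clients tells you how much of that gap is network distance rather than server speed.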

Telegram webhook is not responding fast

My bot is in more than 50K groups and receives every message via a webhook.
The problem is that in busy hours, Telegram delivers updates to my webhook with a long delay (e.g. an hour later!).
Is there any reference on the limits: how many messages per second does Telegram pass to a webhook, and how can I speed it up?
You can use max_connections parameter in setWebhook.
Maximum allowed number of simultaneous HTTPS connections to the webhook for update delivery, 1-100. Defaults to 40.
Use lower values to limit the load on your bot's server, and higher values to increase your bot's throughput.
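A sketch of raising that limit via `setWebhook`; the token and webhook URL below are placeholders, and the helper function is my own wrapper rather than anything from the Bot API:

```python
import urllib.parse

def build_set_webhook_request(token, webhook_url, max_connections=100):
    """Build the setWebhook URL that raises the parallel-delivery limit.
    max_connections must be in 1-100 (Telegram's default is 40)."""
    if not 1 <= max_connections <= 100:
        raise ValueError("max_connections must be between 1 and 100")
    params = urllib.parse.urlencode({
        "url": webhook_url,
        "max_connections": max_connections,
    })
    return f"https://api.telegram.org/bot{token}/setWebhook?{params}"

# Then issue the request, e.g.:
# urllib.request.urlopen(build_set_webhook_request("<TOKEN>", "https://example.com/hook"))
```

Raising `max_connections` only helps if your webhook endpoint can actually serve that many concurrent HTTPS connections and responds quickly; a slow handler will still cause Telegram's delivery queue to back up.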

Response not received back to client from Apigee Cloud

POSTMAN Client --> Apigee On Cloud --> Apigee On-Premise --> Backend
The backend takes 67 seconds to respond, and I can see the response in Apigee Cloud, but the same response is not sent back to the client; a timeout is received instead.
I have also increased the timeouts in the HTTPTargetConnection properties, but the issue still persists.
Please let us know where to investigate.
There are two levels of timeout in Apigee -- first at the load balancer which has a 60 second timeout, then at the Apigee layer which I believe was 30 seconds but looks like it was increased to 60.
My guess is that the timeout response is coming from the load balancer, and the timing is just such that Apigee is able to get the response but the load balancer has already dropped the connection.
If this is a paid instance you should be able to get Apigee to adjust the timeouts to make this work (but, man... 67,000ms response times are pretty long...)
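For reference, the Apigee-layer timeout is the one you can reach yourself: it lives in the TargetEndpoint's `HTTPTargetConnection` properties. A sketch, assuming the standard `io.timeout.millis` / `connect.timeout.millis` property names (verify against your Apigee version's endpoint-properties reference):

```xml
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <Properties>
      <!-- Wait up to 70 s for the backend before Apigee gives up -->
      <Property name="io.timeout.millis">70000</Property>
      <Property name="connect.timeout.millis">3000</Property>
    </Properties>
    <URL>https://backend.example.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```

Even with this in place, the load balancer's 60-second limit sits in front of Apigee, so a 67-second backend will still time out at the client until that limit is raised by support.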

JMeter Graph Results for multiple http requests

I created a simple JMeter Http test.
I specified 50 users, each user will do 30 HTTP requests (one after the other), and the users' ramp-up time is 1 second.
Then I added a Graph Result Listener, then recorded the performance of my application for 10minutes.
Question : What is Graph Results Listener measuring - per http request of each user? or all 30 http requests of each user?
I mean, if I have an Average of 5 seconds, does that mean that each HTTP request gets a response in 5 seconds on average? ...or does it mean that all 30 HTTP requests (totaling their response times) take 5 seconds on average?
I mean, if I have an Average of 5seconds, does that mean that the each http requests gets a response 5 seconds on average? ...or does that mean that all 30 http requests (totaling their response times) gets 5 seconds on average?
This depends on how you created your test plan. But if you have one action (the HTTP request) and you specified 30 iterations with 50 users, then 5 seconds is the average response time of that single action, averaged over all 50 * 30 samples.
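In other words, the Graph Results "Average" is a per-sample mean, not a per-user total. A tiny illustration with invented timings:

```python
# 50 users x 30 iterations = 1500 individual samples; the listener's
# "Average" is the mean over those samples, not over per-user sums.
users, iterations = 50, 30

# Invented: suppose every request takes exactly 5000 ms
sample_times_ms = [5000] * (users * iterations)

average_ms = sum(sample_times_ms) / len(sample_times_ms)
print(len(sample_times_ms), average_ms)  # 1500 samples, 5000.0 ms average
```

So an Average of 5 seconds means a single request takes about 5 seconds, not that a user's whole 30-request sequence does.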
