Why am I getting so many "Flood Wait" errors? - telegram

I have a very simple application for sending bulk messages.
It sends a single message to 20 groups.
The delay I set between messages is 8 seconds.
That works out to roughly 7-8 send-message requests per minute.
The documentation says "the Telegram API has a limit of 20 requests per minute".
So I am using less than half of the limit.
But still, I am getting lots of flood wait errors.
And those errors come with wait times of around 84,000 seconds.
I am facing two errors when the flood wait occurs:
1-
Security error while unpacking a received message: Server replied with a wrong session ID
2-
Floodwaiterror invoked while sending a message; forcing 70792 second wait interval for ....
I really don't know why this is happening.
The phone number I am sending from is brand new: clean, never banned, never flagged for spam.
As I said, the wait interval between two messages is 8 seconds, which is well below the per-minute limit.
The sessions are valid, because it sends a couple of messages fine, but after those it instantly gets tons of FloodWaitErrors with wait times anywhere from 0 to 70,000 seconds.
Could you help me understand what is causing this?

I had the same problem, and the solution is to pass the entity object instead of the string name of the receiving party.
was:
results = await client.send_message("mybot", message="hello")
now:
with TelegramClientSync(StringSession(session_id), api_id, api_hash) as client:
    bot_entity = client.get_input_entity(peer="mybot")
    results = await client.send_message(entity=bot_entity, message=message)
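For completeness, a fuller sketch of the fixed sending loop, resolving each peer once up front and honouring FloodWaitError's reported wait time, could look like this (the group_usernames list and send_bulk helper are made-up names for illustration, not from the original code):
import asyncio
from telethon import TelegramClient
from telethon.errors import FloodWaitError

async def send_bulk(client: TelegramClient, group_usernames, message):
    # Resolve each peer to an input entity once; sending by username string
    # forces an extra resolve request every time, which has its own flood limit.
    entities = [await client.get_input_entity(u) for u in group_usernames]
    for entity in entities:
        try:
            await client.send_message(entity=entity, message=message)
        except FloodWaitError as e:
            # The error tells you exactly how long the server wants you to wait.
            await asyncio.sleep(e.seconds)
            await client.send_message(entity=entity, message=message)
        await asyncio.sleep(8)  # the 8-second delay from the question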

Related

Firebase verifyPassword returning TOO_MANY_ATTEMPTS_TRY_LATER

Using Firebase for authentication, I am seeing behavior that seems like extremely aggressive progressive lockout (after 1 or 2 failed attempts, having to wait 15 to 30 seconds to retry), yet I cannot find any documentation of such behavior or whether it is possible to modify it.
The specific behavior I am seeing is that if I submit a valid username but an invalid password (using Firebase.auth.signInWithEmailAndPassword()), I will get auth/user-not-found results anywhere from 1 to 5 times, then I will start to receive auth/too-many-requests (TOO_MANY_ATTEMPTS_TRY_LATER).
Once I receive this result, I can submit the valid password for the same account and will continue receiving TOO_MANY_ATTEMPTS_TRY_LATER until I wait a yet-undetermined amount of time (sometimes 15 seconds, other times up to a minute).
If instead, once I receive the TOO_MANY_ATTEMPTS_TRY_LATER result, I immediately submit a signInWithEmailAndPassword request for another account, that request fails or succeeds on its own merits - suggesting that the "lockout" is scoped to the account, not to my source IP or client.
Is this known and expected behavior? If so, is there any way to modify the aggressiveness of the lockout trigger? My users are getting confused because they are seemingly not being allowed into the app when they are certain they have entered the correct password - many times after only a single failed attempt.
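A quick way to reproduce and observe this lockout outside the SDK is to hit the legacy verifyPassword REST endpoint named in the title directly; a rough Python sketch (API_KEY, the email, and the wrong password are placeholders) would be:
import requests

API_KEY = "YOUR_WEB_API_KEY"  # placeholder
URL = ("https://www.googleapis.com/identitytoolkit/v3/"
       "relyingparty/verifyPassword?key=" + API_KEY)

def attempt(email, password):
    resp = requests.post(URL, json={
        "email": email,
        "password": password,
        "returnSecureToken": True,
    })
    if resp.ok:
        return "OK"
    # Error messages look like "INVALID_PASSWORD", then
    # "TOO_MANY_ATTEMPTS_TRY_LATER : ..." once the lockout kicks in.
    return resp.json().get("error", {}).get("message", "UNKNOWN")

# Hammer one account with a wrong password until the lockout appears.
for i in range(10):
    print(i, attempt("user@example.com", "wrong-password"))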

Queue of Futures in Dart

I want to implement a chat system.
I am stuck at the point where the user sends multiple messages really fast. All the messages do reach the server, but in no particular order.
So I thought of implementing a queue where each message shall:
1. First be placed in the queue
2. Wait for its turn
3. Make the POST request on its turn
4. Wait for around 5 seconds for the response from the server
5. If the response arrives within that time frame and the status is OK, the message was sent; otherwise sending failed
In either case of point 5, the message shall be dequeued and the next message shall be given its chance.
Now, the major problem is that there could be multiple queues, one for each chat head (the user we are talking to). How will I implement this? I am really new to Dart and Flutter. Please help. Thanks!
It sounds like you are describing a Stream - a series of asynchronous events which are ordered.
https://www.dartlang.org/guides/language/language-tour#handling-streams
https://www.dartlang.org/guides/libraries/library-tour#stream
Create a StreamController, and add messages to it as they come in:
var controller = StreamController<String>();
// whenever you have a message
controller.add(message);
Listen on that stream and upload the messages:
await for (var message in controller.stream) {
  await uploadMessage(message);
}
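The per-chat part of the question is then just a map from chat id to its own queue and worker. Sketched here in Python's asyncio purely to show the shape (send_to_server and the other names are made up); the same structure translates directly to one Dart StreamController per chat:
import asyncio

async def send_to_server(message):
    # Stand-in for the real POST request.
    await asyncio.sleep(0.1)

# One queue and one worker task per chat, created lazily.
chat_queues = {}

async def chat_worker(queue):
    while True:
        message = await queue.get()
        try:
            # The 5-second timeout mirrors step 4 of the question.
            await asyncio.wait_for(send_to_server(message), timeout=5)
        except asyncio.TimeoutError:
            print("message sending failed:", message)
        finally:
            queue.task_done()  # dequeue and give the next message its chance

def enqueue(chat_id, message):
    # Must be called from inside the running event loop.
    if chat_id not in chat_queues:
        chat_queues[chat_id] = asyncio.Queue()
        asyncio.create_task(chat_worker(chat_queues[chat_id]))
    chat_queues[chat_id].put_nowait(message)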

How long does Firebase throttle you?

Even with debug enabled for RemoteConfig, I still managed to get the following:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To immediately get around this issue during testing, use an alternative device. The throttling occurs against a particular device. e.g. move from your simulator to a device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
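If you would rather not use a web converter, the same conversion of error_throttled_end_time_seconds is a one-liner; for example, in Python (the timestamp below is the one from the question):
from datetime import datetime, timezone

throttled_end = 1483110267.054194  # error_throttled_end_time_seconds from the error
print(datetime.fromtimestamp(throttled_end, tz=timezone.utc))
# Prints the UTC time at which Remote Config will accept fetches again.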
The quick and easy hack to get your app running is to delete the app and reinstall it. Firebase identifies your device as a new device when you reinstall.
Hope this helps and saves you time.

Google Cloud Bigtable: repeated grpc error code 13, then suddenly success

In short, we are sometimes seeing that a small number of Cloud Bigtable queries fail repeatedly (for 10s or even 100s of times in a row) with the error rpc error: code = 13 desc = "server closed the stream without sending trailers" until (usually) the query finally works.
In detail, our setup is as follows:
We are running a collection (< 10) of Go services on Google Compute Engine. Each service leases tasks from a pair of PULL task queues. Each task contains an ID of a bigtable row. The task handler executes the following query:
row, err := tbl.ReadRow(ctx, <my-row-id>,
    bigtable.RowFilter(bigtable.ChainFilters(
        bigtable.FamilyFilter(<my-column-family>),
        bigtable.LatestNFilter(1))))
If the query fails, the task handler simply returns. Since we lease tasks with a lease time of between 10 and 15 minutes, a little while later the lease on that task will expire, it will be leased again, and we'll retry. The tasks have a max retry count of 1000, so they can be retried many times over a long period. In a small number of cases, a particular task will fail with the grpc error above. The task will typically fail with this same error every time it runs, for hours or days on end, before (seemingly out of the blue) eventually succeeding (or the task runs out of retries and dies).
Since this often takes so long, it seems unrelated to server load. For example right now on a Sunday morning, these servers are very lightly loaded, and yet I see plenty of these errors when I tail the logs. From this answer, I had originally thought that this might be due to trying to query for a large amount of data, perhaps near the max limit that cloud bigtable will support. However I now see that this is not the case; I can find many examples where tasks that have failed many times finally succeed and report only a small amount of data (e.g. <1 MB) was retrieved.
What else should I be looking at here?
edit: From further testing I now know that this is completely machine (client) independent. If I tail the log on one of the task leasing machines, wait for a "server closed the stream without sending trailers" error, and then try a one-off ReadRow query to the same rowId from another, unrelated, totally unused machine, I get the same error repeatedly.
This error is typically caused by having more than 256MB of data in your reply.
However, there is currently a bug in our server side error handling code that allows some invalid characters in HTTP/2 trailers which is not allowed by the spec. This means that some error messages that have invalid characters will be seen as this kind of error. This should be fixed early next year.

ActiveMQ Async sending waiting on all message to be sent

I am using ActiveMQ to send messages. Due to the number of messages being sent and the amount of other work the application is doing, I've enabled async sending.
When the application ends I want to be sure the remaining messages are actually delivered, as we sometimes miss one of the last messages that go on the connection.
Googling around, I found only warnings stating that you might miss some messages. That would be acceptable, but we miss some of the last ones about 40% of the time, and I wish to improve this situation.
Q: Is there any method to check whether the ActiveMQ producer/sender has sent all of its async messages?
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory( brokerUrl );
connectionFactory.setUseAsyncSend( true );
RedeliveryPolicy policy = connectionFactory.getRedeliveryPolicy();
policy.setInitialRedeliveryDelay( 500 );
policy.setBackOffMultiplier((short)2);
policy.setUseExponentialBackOff(true);
policy.setMaximumRedeliveries(10);
You could set the async send mode back to off for the last message, which would cause the client to wait on the send until the broker acknowledges that the message has been received. The Connection object would need to be cast to ActiveMQConnection, and then you can call setUseAsyncSend(false).
