Amazon SNS rate limiting on HTTPS endpoint - amazon-sns

I have an issue with the rate at which Amazon SNS calls our HTTPS endpoint. Our server can't handle that many calls at once and eventually crashes.
The situation
We are sending newsletters with Amazon SES (Simple Email Service). The notifications about bounces / complaints / deliveries are all sent to the same SNS topic.
We are sending the newsletter at a rate of 2,000 emails per minute, which also means we receive SNS notifications at a rate of 2,000 per minute. Sending the newsletters and receiving the SNS notifications are both handled by the same server.
The server is already busy sending those newsletters, and in the meantime it must also handle the SNS notifications, and that is too much.
So I actually want to limit the rate of the SNS notifications, so that they are sent at a rate of e.g. 500 per minute. I can't find anything like that in the policy.

Can you create an SQS queue and subscribe it to the SNS topic? Your service can then process messages from the queue later, when it is able to.
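A minimal sketch of that pattern, assuming a hypothetical queue URL and the boto3 SDK: the consumer drains the SQS queue in batches and sleeps between batches so the processing rate stays near a chosen target (e.g. 500 per minute) instead of being driven by SNS.

```python
import time

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ses-notifications"  # hypothetical
RATE_PER_MINUTE = 500
BATCH_SIZE = 10  # SQS maximum messages per ReceiveMessage call

def batch_delay(rate_per_minute, batch_size):
    """Seconds to sleep between batches so throughput stays at or below the target rate."""
    return 60.0 * batch_size / rate_per_minute

def consume(handle):
    # boto3 is imported lazily so the rate math above is usable without the AWS SDK
    import boto3
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=BATCH_SIZE,
            WaitTimeSeconds=20,  # long polling: avoids hammering an empty queue
        )
        for msg in resp.get("Messages", []):
            handle(msg["Body"])  # your bounce/complaint/delivery processing
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        time.sleep(batch_delay(RATE_PER_MINUTE, BATCH_SIZE))
```

Messages that arrive faster than you drain them simply wait in the queue, so the sending server is never pushed past its capacity.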

Related

FCM - What is the limitation of sending messages/minute using API?

I have a few questions:
What is the limit on the number of messages sent per minute using the API?
If there is a limit, what happens when it is exceeded?
The arrival rate is 30%~40%; I would like to know how to increase this rate.
For Android, you can send up to 240 messages/minute and 5,000 messages/hour to a single device. Link
But what if I send 100 messages to each of up to 10,000 devices in one minute; would some of the messages be lost?
Thank you !!!
firebaser here
I would recommend following the rates stated in the docs. A general rule is to not send too many requests at a time. Usually, you'll get a 5XX or 429 error when you're sending too many notifications. When this happens, you need to implement exponential back-off in your retry mechanism.
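The retry pattern above can be sketched generically; `send_once` below is a hypothetical callable that performs one FCM request and returns the HTTP status code.

```python
import random
import time

def send_with_backoff(send_once, max_retries=5, base_delay=1.0):
    """Call send_once() and retry with exponential back-off plus jitter
    whenever it reports throttling (429) or a server error (5xx)."""
    for attempt in range(max_retries + 1):
        status = send_once()
        if status != 429 and status < 500:
            return status  # success or a non-retryable client error
        if attempt == max_retries:
            return status  # out of retries; surface the last status
        # base_delay, 2x, 4x, ... plus jitter so clients don't retry in lockstep
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

The jitter matters when many workers hit the limit at the same moment; without it they would all retry simultaneously and be throttled again.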
As for the arrival rate, I'm assuming you're talking about delivery rate.
This can be caused by several factors like battery-saving features, message priority, message lifespan, or an issue with the FCM backend. It'll be challenging to pinpoint or even narrow down the specific cause of low delivery on a public forum without going into project-specific details. I would recommend reaching out to Firebase support, as they can offer personalized help.

Adafruit IO data rate limit

I'm trying to send data from multiple ESP-8266 to feeds on my Adafruit IO account.
The problem is that when I try to send new values, I get banned from publishing because the 2-second time limit is violated when two or more of my MCUs happen to send data at the same time (I can't synchronize them to avoid this).
is there any possible solution to this problem?
I suggest considering these options:
A sending token that is passed from one ESP to the next. Basically, no ESP is allowed to send by default; the ESP holding the token may send, waits the appropriate time limit, then hands the token to the next ESP. This solution has all the ESPs connected via an AP/router and uses client-to-client communication. It can be set up fail-safe: if the next ESP is not available (reset, out of battery, etc.), you pass the token to the next one on the list and issue an additional warning to the server.
The second solution could be (more flexible and dynamic, BUT an SPO - single point of failure) to set up one ESP as a data collector that does all the sending.
If the ESPs are in different locations, you have to set them up to meet the following requirement:
If you have a free Adafruit IO Account, the rate limit is 30 data points per minute. If you exceed this limit, a notice will be sent to the {username}/throttle MQTT topic. You can subscribe to the topic if you wish to know when the Adafruit IO rate limit has been exceeded for your user account. This limit applies to all Data record modification actions over the HTTP and MQTT APIs, so if you have multiple devices or clients publishing data, be sure to delay their updates enough that the total rate is below your account limit.
So it's not a 2-second limit but 30/min (60/min on the Pro plan), and you limit each ESP's sending according to the formula:
30 / number of ESPs sending to IO -> 30 / 5 = 6 ==> 5 incl. safety margin
This means each ESP is only allowed to send 5 times per minute. Important: once its 5 sends are used up, it HAS to wait out the rest of the minute before the next send.
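The budget arithmetic above can be expressed as a small helper (a sketch; the numbers match the free-plan limit of 30 data points per minute):

```python
def per_device_sends_per_minute(account_limit, n_devices, safety_margin=1):
    """Sends each device may make per minute: the account limit split
    evenly across devices, minus a safety margin."""
    return max(0, account_limit // n_devices - safety_margin)

def min_interval_seconds(account_limit, n_devices, safety_margin=1):
    """Minimum spacing between one device's sends to stay inside its budget."""
    budget = per_device_sends_per_minute(account_limit, n_devices, safety_margin)
    return 60.0 / budget if budget else float("inf")
```

For 5 ESPs on the free plan this gives each device 5 sends per minute, i.e. at least 12 seconds between sends, so even unsynchronized devices stay under the 30/min account limit.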
The answer is simple: just don't send that frequently.
In the IoT world
If data needs frequent updates (such as motor/servo positions, accelerometer readings, etc.), you often want to keep it local and don't want or need to send it to the cloud.
If the data needs to be in the cloud, it often does not need to be updated so frequently (such as temperature/humidity).
Alternatively, if you still think your data is so critical that it needs such frequent updates, dedicate one ESP as your edge gateway to collect the data from the sensor nodes and send it to the cloud in one batch; that is actually the proper way to design an IoT network with multiple sensor nodes.
If that still doesn't work for you, you still have the choice of paying for the premium service to raise the rate limit, or building your own cloud service and integrating it with your edge gateway.
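A minimal sketch of the edge-gateway idea: sensor nodes report to one collector, which publishes a single aggregated payload per interval instead of one request per node. The names here are illustrative, not an Adafruit IO API.

```python
import json

class EdgeGateway:
    """Buffer the latest reading per node and publish them all in one request."""

    def __init__(self, publish):
        self.publish = publish  # e.g. a function performing one MQTT/HTTP send
        self.readings = {}

    def record(self, node_id, value):
        self.readings[node_id] = value  # keep only the newest value per node

    def flush(self):
        if self.readings:
            self.publish(json.dumps(self.readings))  # one send for all nodes
            self.readings = {}
```

Calling `flush()` on a timer turns N devices' traffic into one request per interval, which is what keeps the account under the shared rate limit.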

GCM and WNS banning policy for sending push notifications to inactive devices

I know that APNS will ban a client who keeps sending push notifications to inactive devices. What is the policy for continuing to send push notifications to 'NotRegistered' devices in GCM and 'response code 410' channels in WNS? Would the client be banned or blocked?
Thanks in advance.
For the GCM part, it seems there is no defined policy when it comes to sending to Unregistered Devices. I did find this answer to a post that discusses the limitations of GCM. Some significant quotes from the answer:
The only limit you run into in the GCM documentation is this: http://developer.android.com/google/gcm/adv.html#lifetime
Quote from the above link:
Note: There is a limit on how many messages can be stored without collapsing. That limit is currently 100. If the limit is reached, all stored messages are discarded. Then when the device is back online, it receives a special message indicating that the limit was reached. The application can then handle the situation properly, typically by requesting a full sync.
I also found this post where the GCM blocks a server, and as per the answer:
There is a usage limit for GCM, and if you automate GCM requests they may consider it a threat, like a DoS attack

Getting around SNS API call limits to publish

I need to publish a unique message to potentially thousands of device endpoints simultaneously.
Each message is unique, so I can't group the endpoints into topics...
Although I can't find any documentation on it, it seems that SNS limits me to only 10 concurrent publish API requests.
More than 10 concurrent requests returns
RequestError: send request failed
caused by: Post https://sns.us-east-1.amazonaws.com/: dial tcp 54.239.24.226:443: i/o timeout
And then seems to block my IP from further requests for a short time...
I was planning to keep the whole app backend "serverless", which would mean a scheduled task in Lambda makes the calls to SNS publish...
1000 SNS publish requests / 10 concurrent = 100 batches... This would mean it takes 100 * x seconds to process all the messages, which would hit the API Gateway and Lambda timeout limits (and would also add to the costs).
Is there a good way around these limits? An increase in allowed concurrent API calls would be nice...
Amazon SNS does not enforce a rate limit for Publish calls. Occasionally, SNS will throttle requests but the service will respond with HTTP 400 and an AWS SNS request ID.
The error message you posted looks like something upstream between you and the SNS endpoint is rate limiting your calls. Check if there is a proxy or firewall between you and the SNS endpoint, or talk to your network administrator.
You can request additional limit increases here:
https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=service-code-sns
I encountered something like this before, and the solution was launching tens of t2.micro or t2.nano instances, because there is also a limit on the requests you can make from EC2 to AWS.
AWS SNS quotas for publishing messages range from 300 to 30K transactions/sec depending on the region and whether the topic is FIFO. FIFO topics also have a 10 MB/s throughput limit, so that may be what is limiting the publishes.
https://docs.aws.amazon.com/general/latest/gr/sns.html
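If the bottleneck really is client-side concurrency rather than an SNS quota, you can cap in-flight publishes with a bounded worker pool. A sketch using only the standard library, where `publish_one` is a hypothetical function wrapping a single `sns.publish` call:

```python
from concurrent.futures import ThreadPoolExecutor

def publish_all(publish_one, messages, max_concurrency=10):
    """Fan the publishes out over a bounded pool so that at most
    max_concurrency requests are in flight at any moment."""
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(publish_one, messages))
```

Keeping concurrency bounded also plays well with a retry/back-off wrapper around `publish_one`, since failures don't multiply the number of simultaneous connections.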

OpenStack Notification Service: Marconi

I have implemented a notification service based on RabbitMQ before.
Recently, I have become interested in the OpenStack notification service, Marconi.
But I am not sure how a client can listen to a queue.
I mean, a client should be notified when a message is pushed into the queue.
Is there any example or tutorial that goes through the Publisher/Subscriber pattern?
Thanks.
The Marconi project (API v1) does not currently support Push technology, including long-polling. Depending on how your subscriber processes messages that appear in the queue, you will need to poll the service at an appropriate interval using either the List Messages or Claim Messages requests.
Keep in mind that polling requests may count towards the rate limits for the service, even when no messages are in the queue. Rate limits are provider-specific.
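Since API v1 offers no push, a consumer has to poll. A minimal generic sketch, where `list_messages` stands in for a Marconi List Messages (or Claim Messages) request:

```python
import time

def poll_queue(list_messages, handle, interval_s=2.0, stop=lambda: False):
    """Poll the queue at a fixed interval until stop() says otherwise.
    Every poll may count toward the provider's rate limit, even when the
    queue is empty, so choose interval_s accordingly."""
    while not stop():
        for msg in list_messages():
            handle(msg)
        time.sleep(interval_s)
```

The interval is the trade-off knob: shorter polling lowers notification latency but burns more of the rate-limit budget on empty responses.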
