Amazon SNS Text Messaging - amazon-sns

I'm using Amazon SNS for sending Text Message blasts to subscribers on my site. I would like to be able to have a different short code to send from, assigned if possible to one of my User accounts which have already been created and which have their own Key and Secret for use with the SNS API.
Where do I go in the AWS Console to set this up? Is this possible or not?

It is possible. Take a look at the AWS docs. You ultimately need to create an SNS Limit Increase case with Amazon support. But make sure you follow the SMS Pricing doc because it's not cheap or quick:
Amazon SNS now supports dedicated short codes for US destinations.
Each dedicated short code is $995 per month. You are billed for your
dedicated short code addresses at the end of each month along with any
other Amazon SNS text message sending charges you incur.
The United States Short Codes will incur a one-time setup fee of $650.
Carrier approval may take 8-12 weeks (or longer), and your short code
may not be fully live with all US carriers during this period.
To request a dedicated short code, open a dedicated short code limit
increase case in the Support Center.
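Once the short code is approved and provisioned on the account, sending still goes through a normal SNS Publish call with whichever IAM user's Key and Secret you choose. Below is a minimal sketch using the AWS SDK for JavaScript (v2); the region, environment variable names, destination number, and message content are placeholders, not anything prescribed by the docs quoted above.

// Minimal sketch: publishing an SMS with SNS using a specific IAM user's
// Key and Secret (AWS SDK for JavaScript v2). All values are placeholders.
const AWS = require('aws-sdk');

const sns = new AWS.SNS({
  region: 'us-east-1',
  accessKeyId: process.env.USER_AWS_KEY,        // the user account's Key
  secretAccessKey: process.env.USER_AWS_SECRET  // and Secret
});

sns.publish({
  PhoneNumber: '+15555550123',                  // destination in E.164 format
  Message: 'Your text blast from the site',
  MessageAttributes: {
    // Transactional routing; MaxPrice and SenderID are other documented attributes
    'AWS.SNS.SMS.SMSType': { DataType: 'String', StringValue: 'Transactional' }
  }
}).promise()
  .then(res => console.log('Sent, MessageId:', res.MessageId))
  .catch(err => console.error('Publish failed:', err));

As far as I can tell, whether the message actually goes out over the dedicated short code depends on what AWS provisions on the account side rather than on this call, so confirm the routing details with AWS support when the limit increase case is handled.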

Related

How will a popular project handle the per-project rate limit for topic subscriptions?

Introduction
The official Firebase docs state that:
The frequency of new subscriptions is rate-limited per project. If you send too many subscription requests in a short period of time, FCM servers will respond with a 429 RESOURCE_EXHAUSTED ("quota exceeded") response. Retry with exponential backoff.
The topic subscription add/remove rate is limited to 3,000 QPS per project.
QPS = queries per second.
Now here's the thing. It says "per project". What if an app gets very popular and thousands of users open the app at the same time, then tap a button that subscribes to or unsubscribes from a topic?
Wouldn't hitting the 3,000 queries-per-second rate limit be highly likely?
Here's my real question
Do firebase_messaging's subscribeToTopic(..) and unsubscribeFromTopic(..) methods automatically handle retries (referring to "Retry with exponential backoff" above)?
If not, then how would a very popular app handle topic subscriptions from a massive number of users?
Because the plugin does not provide any documentation about this.
I'm coming from a different platform (Android) for your question about subscribeToTopic() retries, but both likely behave similarly. On Android it returns a Task, while in Flutter it returns a Future (a Task is roughly Flutter's Future, if I understood that correctly), which I take as a hint that Google left the responsibility for retries to the developer.
I'm not sure what you're looking for as an answer. The correct way is to implement exponential backoff, as mentioned in the docs. Depending on your use case, different strategies can be implemented.
For example, a forced subscription (like a general topic) that runs on every app start is easy. For one-off subscriptions you might need some verification, i.e. a way to check whether the user actually finished subscribing, and retry if not. A rough sketch of such a retry loop is below.
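For what it's worth, the backoff itself is straightforward to write yourself. Here is a minimal sketch of a generic retry wrapper in Node.js, using firebase-admin's subscribeToTopic() purely as a stand-in for whichever subscribe call you use (the Flutter plugin call would be wrapped the same way); the attempt count, delays, and token value are arbitrary placeholders.

// Minimal sketch: retry an async call with exponential backoff plus jitter.
// firebase-admin's subscribeToTopic() stands in for whichever SDK call you use.
const admin = require('firebase-admin');
admin.initializeApp(); // assumes Application Default Credentials

async function withBackoff(fn, maxAttempts = 5, baseDelayMs = 500) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      const delayMs = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100;
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}

const deviceToken = 'REGISTRATION_TOKEN'; // placeholder device registration token
withBackoff(() => admin.messaging().subscribeToTopic(deviceToken, 'news'))
  .then(() => console.log('Subscribed to topic'))
  .catch(err => console.error('Giving up:', err));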

Firebase storage cost explosion (or how to prevent it) [duplicate]

Some functions in the Google Developers Console, like the Analytics API, are free until you reach a quota. Other functions, like Google Cloud Storage, create costs from the first click.
When I upload a file under https://console.developers.google.com/ > Storage > Cloud Storage > Storage Browser and I make this file publicly available, I pay about $0.12 per GB traffic.
But theoretically the traffic to this link could explode, e.g. because of sudden popularity. Therefore I would like to set something like a daily or monthly cost limit.
Q: How do I protect myself from overly high costs in the Google Developers Console?
You cannot. I asked Google about this; here's their response, from May 7, 2016:
(GCE = Google Compute Engine: no spending limits.
GAE = Google App Engine: yes, it has spending limits.)
... you are eligible for support on ... only ...
... [various helpful links] ...
That being said, at the moment there is no feature that allows you to
configure a limited budget on GCE. This feature is certainly available
for GAE [1]. As you mentioned in your comments, you can either totally
shut down your VMs (this will depend on your use case) or set the VMs to
send you alerts if they reach a certain traffic limit [2].
Sincerely,
Someone's first name
Technical Solutions Representative
Google Cloud Platform
[1] https://cloud.google.com/appengine/docs/quotas
[2] https://cloud.google.com/monitoring/support/notification-options
@wmdry, you wrote: "traffic to this link could explode". I'm afraid of this too; that's why I asked Google about it. I'm planning to avoid Google's CDN because of this and use another CDN provider instead, one that has spending limits, because, unlike with Nginx, I don't see any way for me to rate limit or throttle Google's CDN.
I do plan to use GCE (Google Compute Engine) though. Therefore, right now I'm reading about how to rate limit my Nginx server, because if I just configure Nginx correctly, the $0.12/GB you mentioned cannot possibly explode to, say, $10k in a month. What if Google sends a $10k bill when I'm back from a few weeks' vacation, just because of my hobby project and a few people downloading a 1 MB movie over and over again forever (because: evil)? Hmm, and the bigger and faster my servers, the higher the risk.
I hope Google will add spending limits, because I did want to use Google's CDN.
Update 2020: Apparently this does bite people from time to time; see:
"Burnt $72k testing Firebase and Cloud Run and almost went bankrupt", Dec 08, 2020, https://news.ycombinator.com/item?id=25372336
In that case, they contacted Google and in the end didn't need to pay.
As of July 2017 you can set budgets that send notifications via email but do not cap spending:
To set an alert-only budget, which will not cap spending:
Go to the Cloud Platform Console.
Open the console's left-side menu and click Billing.
If you have more than one billing account, click the billing account name.
On the left, click Budgets & alerts.
Official help page: https://support.google.com/cloud/answer/6293540?hl=en
I found that Google's documentation now provides two methods to actually limit the cost of a GCP project. It involves the following setup:
Create a Cloud Function that checks the cost against the budget and carries out a certain action if the cost exceeds the budget. Google's documentation provides a sample code snippet that can either shut down all VM instances in a project or disable billing for a project. Shutting down all VMs stops all VM-related cost, but you get to keep your data (and still pay for the storage). Disabling billing for a project effectively zaps all cost-related activities, and you could lose data. You can name the Cloud Function "budget-enforcer"; a sketch of the VM-shutdown variant is included below, after these steps.
The Google code snippet as provided above has a hard-coded ZONE variable. Remember to change it to match your zone!
Create a Service Account to run the Cloud Function "budget-enforcer". For shutting down VMs, the Service Account would need role "Compute Instance Admin (v1)". For disabling billing on a project, the Service Account would need role "Project Billing Manager".
Set up a Pub/Sub topic for the Cloud Function (I call mine "proj-name-stop-vm" and "proj-name-disable-bill").
Set up a budget alert as usual, and connect it to one of the Pub/Sub topics above.
Please note that Google's documentation mentions there can be a delay between the cost exceeding a budget and the function being triggered, so you should build in a buffer if you have an absolute hard cost limit. I use 90% of the budget as the trigger line for shutting down my instances.
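To make the first method concrete, here is a hedged sketch of what the VM-shutdown "budget-enforcer" could look like as a Pub/Sub-triggered Node.js Cloud Function. It is not Google's exact sample: PROJECT_ID and ZONE are placeholders you must change (see the note about the hard-coded ZONE above), and it assumes the budget notification JSON carries the documented costAmount and budgetAmount fields.

// Hedged sketch of the VM-shutdown "budget-enforcer" (Node.js Cloud Function,
// Pub/Sub trigger). PROJECT_ID and ZONE are placeholders; change them.
const { google } = require('googleapis');
const compute = google.compute('v1');

const PROJECT_ID = 'my-project-id'; // placeholder
const ZONE = 'us-central1-a';       // placeholder: must match your zone

exports.budgetEnforcer = async message => {
  // Budget notifications arrive as base64-encoded JSON containing
  // costAmount and budgetAmount.
  const data = JSON.parse(Buffer.from(message.data, 'base64').toString());
  if (data.costAmount < data.budgetAmount * 0.9) {
    console.log('Cost is below 90% of the budget, nothing to do');
    return;
  }

  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const res = await compute.instances.list({ auth, project: PROJECT_ID, zone: ZONE });
  const running = (res.data.items || []).filter(vm => vm.status === 'RUNNING');

  // Stop (not delete) the instances, so persistent-disk data is kept.
  await Promise.all(running.map(vm =>
    compute.instances.stop({ auth, project: PROJECT_ID, zone: ZONE, instance: vm.name })
  ));
  console.log(`Stopped ${running.length} instance(s)`);
};

The function stops instances rather than deleting them, so persistent-disk data (and its storage cost) remains, matching the trade-off described above.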
The API usage can be limited with a hard limit:
Depending on the API, you can explicitly cap requests in a variety of
ways, including: requests per day, requests per 100 seconds, and
requests per 100 seconds per user. You might want to limit the
billable usage by setting caps. For example, to prevent getting billed
for usage beyond the free courtesy usage limits, you can set requests
per day caps
Source
You can combine budget pub/sub alerts with a cloud function that can disable billing on your entire account if a threshold is met.
Full Tutorial Here:
https://www.youtube.com/watch?v=KiTg8RPpGG4
GitHub Repo Here: https://github.com/aioverlords/Google-Cloud-Platform-Killswitch
To Disable Billing
// Assumes the googleapis Node.js client and Application Default Credentials.
const { google } = require('googleapis');

const auth = new google.auth.GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/cloud-platform'],
});
google.options({ auth });
const billing = google.cloudbilling('v1').projects;

const _disableBillingForProject = async projectName => {
  // Detaching the billing account (empty billingAccountName) disables
  // billing for the project, e.g. projectName = 'projects/my-project-id'.
  const res = await billing.updateBillingInfo({
    name: projectName,
    resource: { billingAccountName: '' }, // disable billing
  });
  console.log(res);
  console.log('Billing Disabled');
  return `Billing disabled: ${JSON.stringify(res.data)}`;
};
Simply go to the developer console:
https://console.developers.google.com/project
Select your project.
Select "billings & settings"
Enable billing.
Then go to Compute/AppEngine/Settings and set a daily budget.
Go to Google Cloud console, and then to Billing / Budgets and Alerts and create a new budget for one or all your projects. You can select which services should be included in the limit and set a monthly amount that should not be exceeded.

How can you limit the billing in firebase? They used to have this possibility, it looks like they removed it [duplicate]

I'm currently working on a social network app and I need to build a search feature. Firestore does not support this kind of query, so I need to use an external service like Algolia.
The problem is that the free plan does not support connecting to external websites/APIs other than Google's own ones, so I can't connect to Algolia to get my search system working.
I have read multiple stories about devs paying high bills because of loops or errors in their code, and as the Blaze plan is a pay-as-you-go plan, they get charged for what they used. If a loop generates 10 TB of files, they will get charged for that.
I also know that the Blaze plan's features are free as long as each of them (individually) stays below the limits of the free Spark plan.
So, as my question says, is there a way to set limits? For example, I would like to tell Firebase to limit my Cloud Functions invocations to 100k per month. That way it would stay free and I would never be able to go over 100k because it's limited, which means I'd never get billed for that.
Take into account that the only thing I need right now from a paid plan is the connection to external networks. I don't need anything else as we're just starting and the app is not in production, so there's no need for huge limits.
Every Firebase project is also a Google Cloud Platform project. This means that many of the advanced features of Google Cloud Platform are also available for your Firebase project.
For example, you can set up billing alert for your Firebase project, so that you are alerted when the usage reaches a certain level. While you can't configure it to switch off the project at some point, the alert should typically be quite good for alerting you to unusual usage patterns.
For more on this see:
The recent blog post Tracking your spending with budgets.
The GCP documentation on how to set budget alerts, which is what Firebase uses under the hood.
The GCP documentation now also has a section on capping (disabling) billing to stop usage. This is a brute force approach though and may lead to data being lost, so I'd recommend investigating all other options first.
Update (December 2020): Firebase's Todd Kerpelman just released a series of videos where he disables billing using the process from the documentation mentioned above.
You cannot set spending limits to your app now.
As of December 12, 2019, you can no longer create spending limits, but
you can change or remove existing spending limits.
https://cloud.google.com/appengine/pricing#spending_limit
You can create budgets, which will alert you when you reach the budget amount, but they won't stop usage when the budget is hit.
https://cloud.google.com/billing/docs/how-to/budgets#add-new-budget
The screenshot here seems to show a Spending Limit setting for Firebase projects: Firebase: Budget and Daily Spending Limit
That settings page is located here (the Spending Limit setting apparently only shows up once you set up billing for the project): https://console.cloud.google.com/appengine/settings
It's disabled in the poster's case, but I think that's only because he connected it up to a "NodeJS App Engine app", which isn't the case for many Firebase developers.
I haven't tried it yet myself, but will do so once I start a paid plan.
EDIT: Yep, the setting shows up once you switch to a paid plan (in my case, Blaze). I don't have enough traffic yet to confirm that it works as expected, but if I find later that it doesn't, I'll give an update here.
"This example shows you how to cap costs and stops usage for a project by disabling Cloud Billing. This will cause all Google Cloud services to terminate non-free tier services for the project."
Google Cloud Source

Sabre air search and book flow

Hoping for a bit of guidance / reassurance on air search and book flow in Sabre (SOAP API) which I'm integrating with for a client website project.
My client is planning to take payment separately via a 3rd party payment gateway and also have a 3rd party ticketing robot.
The details I have been given by the ticketing robot company are that we should create the PNR and then queue transfer to "International/Domestic Agent Q50" (with their PCC).
I have access to and have been reading the Sabre Dev Studio, have access to the Sabre SOAP API (I have my client's credentials and PCC), and have followed the "Low Fare Search and Book" workflow here (https://developer.sabre.com/docs/read/workflows/Low_Fare_Search_and_Book), exchanging EnhancedAirBookRQ and PassengerDetailsRQ for CreatePassengerNameRecordRQ as advised on that page and inserting payment before it. My proposed workflow is:
Create a token with TokenCreateRQ
Use token to perform a search with BargainFinderMaxRQ
Display results to customer, customer picks an itinerary / flight segments
Collect customer details from customer
External payment gateway takes payment for the amount returned by BargainFinderMaxRQ
Book the desired flight segments using the orchestrated API CreatePassengerNameRecordRQ, including:
Adding passenger details and flight segments
Specifying that the payment was in cash
Performing the queue transfer?
I've got BargainFinderMaxRQ coded up and working.
I'm starting the integration with CreatePassengerNameRecordRQ and have noticed that the price returned can differ from the price returned by BargainFinderMaxRQ, which makes me question the above workflow. I selected it because of the easier integration (I can use tokens rather than managing a session, and it's just one API call).
So, my questions:
Is my understanding correct: is this the right workflow for the project, given that my client is taking payment via an external payment gateway and wants to display the final figure to the customer before they pay?
I'm struggling to understand how the ticketing robot fits into the process. Hoping for a steer on how that affects the PNR call(s). Do I still set the ticket type to "7TAW" and queue place onto their PCC + queue number?
Thank you for any help, greatly appreciated.
1) Yes, the process is correct, but there are scenarios in which airlines change fares or where the airline does not confirm availability immediately, so when you price you are actually pricing an IATA fare, which is usually more expensive. For particular scenarios, I recommend you contact API support.
2) The "7TAW" entry is the ticketing time limit: the deadline, set by the airline, by which you must issue the ticket without risking losing the quoted price. Some airlines require ticketing on the same day as the booking (which is what you are setting with 7TAW), some give you a few days, and others may give you just 30 minutes after booking. It is almost impossible for us to say how the robot requires this to be provided, so to be sure I recommend checking with the owners of that robot and asking them how they want it; maybe they don't even care.

Delegating Tasks for Mission Critical Application

I'm working on a mission critical application.
The application fetches Stock Market data from different stock markets like NYSE, NASDAQ, etc. using third party service.
Customers can come to the application and add their Portfolio (which company's shares they have).
And then set Alerts, e.g. "Notify me when AAPL price goes above $xxx on NASDAQ" or "when MSFT price goes below $zzz on NYSE".
I've a cron job that fetches market data from third party service for all the tickers users have added (AAPL, GOOG, MSFT, etc...) every 1 min.
After I get the data, I fetch all the alerts that users have created and then send them notifications via Email, SMS, Pushover, Twitter, Facebook Message, etc. I also add each notification to the app's database so the user can see it in the app when they log in.
Now, since this is a time-sensitive application, failure to fetch data may result in a big loss, since customers are paying for time-critical data.
Currently, I'm pushing all the notification-sending work to a queue, and a worker (on my server) sends the notifications.
Are there any other better ways to delegate as much work as possible to third party servers?
Would you recommend using an Iron.io worker so it does the job of sending the notifications as well, and maybe also fetching the data from the market?
Thanks!
Architecturally there are a number of approaches, but it sounds as if you're making the right choices. Using a queue to decouple the producer from the notification process makes sense. This enables a proper SOA architecture where you can change/update/evolve various parts of the app independently without worrying too much about tightly coupled code.
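To illustrate that decoupling, here is a minimal producer/worker sketch. It uses RabbitMQ via the amqplib Node package purely as an illustrative stand-in for whatever queue you actually run (IronMQ, SQS, etc.); the queue name, connection URL, and the sendEmail/sendSms helpers are hypothetical.

// Minimal sketch of queue-based decoupling using amqplib (RabbitMQ) as a
// stand-in for whichever queue service you use. Queue name, connection URL,
// and the sendEmail/sendSms helpers are hypothetical.
const amqp = require('amqplib');

const QUEUE = 'alert-notifications';

// Producer side: the cron job enqueues one message per triggered alert.
async function enqueueNotification(alert) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, Buffer.from(JSON.stringify(alert)), { persistent: true });
  await ch.close();
  await conn.close();
}

// Worker side: consumes messages and fans out to the notification channels.
async function runWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  await ch.consume(QUEUE, async msg => {
    const alert = JSON.parse(msg.content.toString());
    await sendEmail(alert); // hypothetical helpers for each channel
    await sendSms(alert);
    ch.ack(msg);            // acknowledge only after successful delivery
  });
}

The key property is that the cron job only has to enqueue quickly; delivery work happens in the worker and can be scaled or retried independently of the one-minute fetch cycle.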
That said, your question is specifically around offloading to third parties. There are third parties that can abstract the notification part out of your code. I'm not super familiar with them but there are many options: PubNub, Pusher, Twilio, SendGrid, Mailgun, AWS SNS, etc.
I work for Iron.io. We have many customers doing exactly what you're trying to accomplish: creating workers that become little mini-services and calling them from either push events, scheduled tasks, or on-demand. This frees you up from having to deal with the queuing, routing, scheduling, and worker/background server capacity.
We're happy to help you architect things right from the beginning, just reach out to support#iron.io.

Resources