I have never used the Vision API before, but I have recently found it very useful for a project of mine. However, I have two concerns about limiting its cost so that I don't get an unexpected bill:
Is it possible to set a monthly cost limit? I am used to Compute Engine, which gives me an almost exact cost for the month, but that doesn't seem possible here. Since I will be using the API for labelling, I have set the label-detection requests per minute and per user to a specific amount; to be safe I have also set the global requests per minute and per user to the same amount, and all the other quotas to 0. If I have understood correctly, setting the max-calls quota to 4 per minute, for example, should allow a maximum of 178,560 calls per month, right? Should this limit my spending? Am I safe?
The API will be used with an API key in a mobile app. I have followed the code examples for iOS and Android, and I have seen that the key is written in the code. Is this safe? For better security I have restricted the key to the iOS/Android app bundles and to the Cloud Vision API only. Would that be a safe enough option?
Thanks everyone for any help!
Yes, it’s possible to set a monthly cost limit. Refer to this doc for more information about creating a budget and setting the budget scope, amount and thresholds. Your understanding is also correct: setting the max-calls quota to 4 per minute gives a maximum of 178,560 calls per month. The budget itself shouldn’t limit the maximum quotas.
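For reference, the 178,560 figure is just the per-minute cap multiplied out over a 31-day month. A quick sketch of the arithmetic (Node.js, purely illustrative):

const callsPerMinute = 4;              // the per-minute quota you set
const minutesPerMonth = 60 * 24 * 31;  // 44,640 minutes in a 31-day month
console.log(callsPerMinute * minutesPerMonth); // 178560

Multiplying that by the per-request price for label detection gives the worst-case monthly bill under that quota.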
API keys that are embedded in code are neither safe nor secure.
Do not embed API keys directly in code. API keys that are embedded in code can be accidentally exposed to the public. For example, you may forget to remove the keys from code that you share. Instead of embedding your API keys in your applications, store them in environment variables or in files outside of your application's source tree.
Refer to this doc for more information about best practices for securing an API key.
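As a minimal sketch of that advice (Node.js; the variable name is illustrative): read the key from an environment variable instead of hard-coding it in the source.

// Hypothetical server-side helper; VISION_API_KEY is set in the environment
// (e.g. via your deployment config) and never committed to source control.
const apiKey = process.env.VISION_API_KEY;
if (!apiKey) {
  throw new Error('VISION_API_KEY is not set');
}

// The key is appended to requests server-side, so it never ships inside the app binary.
const url = `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`;

For a mobile app, a common pattern is to keep the key on a backend you control and have the app call your backend, rather than shipping the key in the client even with bundle restrictions.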
Edit based on a question in the comment:
Can the quotas be seen as a hard limit?
The quotas can be treated as a hard limit only if you don't have any resources running in your GCP project other than the Vision API requests. Refer to this doc for more information about capping API usage.
If you want to set a hard limit and disable billing, configure a Cloud Function to call the Cloud Billing API that disables billing for the project as described in the GCP doc.
Note: Use this feature only if you want to stop spending and are willing to shut down all your Google Cloud services and usage when your budget limit is reached.
Some services in the Google Developers Console, like the Analytics API, are free until you reach a quota. Others, like Google Cloud Storage, incur costs from the first click.
When I upload a file under https://console.developers.google.com/ > Storage > Cloud Storage > Storage Browser and make the file publicly available, I pay about $0.12 per GB of traffic.
But theoretically the traffic to this link could explode, e.g. because of sudden popularity. Therefore I would like to set something like a daily or monthly cost limit.
Q: How do I protect myself from overly high costs in the Google Developers Console?
You cannot. I asked Google about this; here's their response, from May 7, 2016:
(GCE = Google Compute Engine: no spending limits.
GAE = Google App Engine: yes, it has spending limits.)
... you are eligible for support on ... only ...
... [various helpful links] ...
That being said, at the moment there is no feature that allows you to configure a limited budget on GCE. This feature is certainly available for GAE [1]. As you mentioned in your comments, you can either totally shut down your VMs (this will depend on your use case) or set the VMs to send you alerts if they reach a certain traffic limit [2].
Sincerely,
Someone's first name
Technical Solutions Representative
Google Cloud Platform
[1] https://cloud.google.com/appengine/docs/quotas
[2] https://cloud.google.com/monitoring/support/notification-options
#wmdry, you wrote: "traffic to this link could explode" — I'm afraid of this too. That's why I asked Google about this. And I'm planning to avoid Google's CDN because of this, and use another CDN provider instead, which has spending limits. Because, unlike Nginx, I don't see any way for me to rate limit / throttle Google's CDN.
I do plan to use GCE (Google Compute Engine) though. Therefore, right now I'm reading about how to rate limit my Nginx server. Because if I just configure Nginx correctly, then those $0.12/GB you mentioned cannot possibly explode to, say, $10k in a month? What if Google sends a $10k bill when I'm back from a few weeks' vacation, just because of my hobby project and a few people downloading a 1 MB movie over and over again forever (because: evil). Hmm, & the bigger & faster my servers, the higher the risk.
I hope Google will add spending limits, because I did want to use Google's CDN.
Update 2020: Apparently this does bite people from time to time — look here:
"Burnt $72k testing Firebase and Cloud Run and almost went bankrupt", Dec 08, 2020, https://news.ycombinator.com/item?id=25372336,
In that case, they were able to contact Google and in the end didn't need to pay.
As of July 2017 you can set budgets that send notifications via email but do not cap spending:
To set an alert-only budget, which will not cap spending:
Go to the Cloud Platform Console.
Open the console left side menu and click Billing.
If you have more than one billing account, click the billing account name.
On the left, click Budgets & alerts.
Official help page: https://support.google.com/cloud/answer/6293540?hl=en
I found that Google's documentation now provides two methods to actually limit the cost of a GCP project. Both involve the following setup:
Create a Cloud Function that checks the cost against the budget and carries out a certain action if the cost exceeds the budget. Google's documentation provides a sample code snippet that can either shut down all VM instances in a project or disable billing for a project. Shutting down all VMs stops all VM-related costs, but you keep your data (and still pay for the storage). Disabling billing for a project effectively zaps all cost-incurring activities, and you could lose data. You can name the Cloud Function "budget-enforcer".
The Google code snippet provided above has a hard-coded ZONE variable. Remember to change it to match your zone!
Create a Service Account to run the Cloud Function "budget-enforcer". For shutting down VMs, the Service Account needs the role "Compute Instance Admin (v1)". For disabling billing on a project, the Service Account needs the role "Project Billing Manager".
Set a Pub/Sub topic for the Cloud Function (I call mine "proj-name-stop-vm" and "proj-name-disable-bill").
Set up a budget alert as usual, and connect it to one of the Pub/Sub topics above.
Please note that Google's documentation mentions there can be a delay between when the cost exceeds the budget and when the function is triggered, so you should build in a buffer if you have an absolute hard cost limit. I use 90% of the budget as the trigger line for shutting down my instances.
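As an illustration of that buffer: the budget notification delivered over Pub/Sub carries the current cost and the budget amount, so the function can act at 90% rather than waiting for 100%. A rough sketch (Node.js; the costAmount/budgetAmount fields follow Google's documented budget-notification format, while stopAllInstances is a hypothetical helper standing in for Google's sample code):

// Background Cloud Function triggered by the budget alert's Pub/Sub topic.
exports.budgetEnforcer = async pubsubEvent => {
  // The budget notification arrives base64-encoded in the Pub/Sub message data.
  const data = JSON.parse(Buffer.from(pubsubEvent.data, 'base64').toString());

  const ratio = data.costAmount / data.budgetAmount;
  if (ratio < 0.9) {
    return `No action taken, at ${(ratio * 100).toFixed(0)}% of budget`;
  }

  // Past the 90% buffer: shut down the VMs (or disable billing) here.
  return stopAllInstances(); // hypothetical helper, based on Google's sample
};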
The API usage can be limited with a hard limit:
Depending on the API, you can explicitly cap requests in a variety of ways, including: requests per day, requests per 100 seconds, and requests per 100 seconds per user. You might want to limit the billable usage by setting caps. For example, to prevent getting billed for usage beyond the free courtesy usage limits, you can set requests per day caps.
Source
You can combine budget Pub/Sub alerts with a Cloud Function that disables billing for your project if a threshold is met.
Full Tutorial Here:
https://www.youtube.com/watch?v=KiTg8RPpGG4
GitHub Repo Here: https://github.com/aioverlords/Google-Cloud-Platform-Killswitch
To Disable Billing
// Requires the googleapis package and credentials with permission to manage billing
// (in Google's full sample, auth is configured via google-auth-library before this call).
const {google} = require('googleapis');
const billing = google.cloudbilling('v1').projects;

const _disableBillingForProject = async projectName => {
  // Detaching the billing account (empty billingAccountName) disables billing for the project.
  const res = await billing.updateBillingInfo({
    name: projectName, // e.g. 'projects/my-project-id'
    resource: {billingAccountName: ''}, // Disable billing
  });
  console.log(res);
  console.log('Billing Disabled');
  return `Billing disabled: ${JSON.stringify(res.data)}`;
};
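For reference, projectName is the project resource name, and the identity running the function needs the Project Billing Manager role for the detach call to succeed, as noted above. An illustrative call (the project ID is a placeholder):

// Illustrative only; replace 'my-project-id' with your real project ID.
await _disableBillingForProject('projects/my-project-id');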
Simply go to the developer console:
https://console.developers.google.com/project
Select your project.
Select "billings & settings"
Enable billing.
Then go to Compute/AppEngine/Settings and set a daily budget.
Go to Google Cloud console, and then to Billing / Budgets and Alerts and create a new budget for one or all your projects. You can select which services should be included in the limit and set a monthly amount that should not be exceeded.
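If you prefer to create the same budget programmatically rather than through the console, the Cloud Billing Budgets API can do it. A rough sketch (Node.js, assuming the @google-cloud/billing-budgets client library; the billing account, project number and amount are placeholders):

const {BudgetServiceClient} = require('@google-cloud/billing-budgets');

async function createMonthlyBudget() {
  const client = new BudgetServiceClient();
  const [budget] = await client.createBudget({
    parent: 'billingAccounts/000000-000000-000000', // placeholder billing account ID
    budget: {
      displayName: 'my-monthly-budget',
      budgetFilter: {projects: ['projects/123456789012']}, // placeholder project number
      amount: {specifiedAmount: {currencyCode: 'USD', units: 100}},
      // Alert at 50%, 90% and 100% of the budgeted amount.
      thresholdRules: [
        {thresholdPercent: 0.5},
        {thresholdPercent: 0.9},
        {thresholdPercent: 1.0},
      ],
    },
  });
  console.log(`Created budget: ${budget.name}`);
}

Keep in mind, as other answers here note, that a budget only alerts; it does not cap spending by itself.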
I want to validate that my ARM template was deployed OK and to get an understanding of the telemetry options...
Under what circumstances do the following get logged to Log Analytics?
DataPlaneRequests
MongoRequests
QueryRuntimeStatistics
Metrics
From what I can tell, after arduously connecting in different ways over the last few days:
DataPlaneRequests are logged for:
SQL API calls
Table API calls even when the account was setup for SQL API
Graph API calls against an account setup for Graph API
Table API calls against an account setup for Table API
MongoRequests are logged for:
Mongo requests even when the account was setup for SQL API
However, I haven't been able to see anything for QueryRuntimeStatistics (even when turning on PopulateQueryMetrics), nor have I seen any AzureMetrics appear.
Thanks, Alex, for spending the time to try out the different logging options for Azure Cosmos DB.
There are primarily two monitoring paths for Azure Cosmos DB.
Metrics: These are low-latency (<5 min), aggregated metrics which are exposed on the Azure Monitor API for consumption. These metrics are primarily used to diagnose the app for any live-site issues.
Logs: These are raw request logs arriving with 2+ hours of latency, used by customers primarily for audit scenarios, to understand who accessed the data.
Depending on your need, you can choose either of these approaches.
DataPlaneRequests by default shows all the requests across all the APIs, and MongoRequests only shows Mongo-specific calls. Please note that Mongo requests will also be seen in DataPlaneRequests.
Metrics would not be seen in Log Analytics due to a known issue which our partner team is fixing.
Let me know if you have any further questions here.
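To double-check which categories are actually arriving in your Log Analytics workspace, a quick Kusto query against the AzureDiagnostics table can help. A rough sketch (Node.js, assuming the @azure/monitor-query and @azure/identity packages, and assuming Cosmos DB diagnostic rows land in AzureDiagnostics under the MICROSOFT.DOCUMENTDB resource provider; the workspace ID is a placeholder):

const {DefaultAzureCredential} = require('@azure/identity');
const {LogsQueryClient} = require('@azure/monitor-query');

async function countCosmosLogCategories() {
  const client = new LogsQueryClient(new DefaultAzureCredential());
  // Count Cosmos DB diagnostic rows per category over the last day.
  const query =
    'AzureDiagnostics | where ResourceProvider == "MICROSOFT.DOCUMENTDB" ' +
    '| summarize count() by Category';
  const result = await client.queryWorkspace(
    '<workspace-id>',   // placeholder Log Analytics workspace ID
    query,
    {duration: 'P1D'}   // last 24 hours (ISO 8601 duration)
  );
  console.log(JSON.stringify(result.tables, null, 2));
}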
My squad is building a Grafana dashboard that uses a custom datasource plugin https://github.com/vistaprint/Application-Insights-Datasource-Plugin which queries Azure Application Insights APIs.
We have discovered that there is a limit on the maximum number of API requests (https://dev.applicationinsights.io/documentation/Authorization/Rate-limits):
* Throttling limit: no more than 15 requests can be made in a minute across all API paths (/metrics, /events and /query), and
* Daily cap: no more than 1500 requests per day (UTC day) per API key can be made across all API paths.
We're worried that 1500 requests per day may not be enough, especially if the dashboard is refreshed frequently and accessed by many users. We wonder if there is a way to increase that limit.
The next section of the article you linked states:
Limits when using Azure Active Directory authentication
If using the Azure API and Azure Active Directory for per-user authentication, the throttling limit is 60 requests per minute per user and there is no daily cap. [emphasis added]
If you need more than what the API key auth allows, you'll need to look at using AAD auth instead.
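As a rough sketch of the AAD path (Node.js 18+, assuming the @azure/identity package and the documented api.applicationinsights.io REST endpoint; the application ID and query are placeholders):

const {DefaultAzureCredential} = require('@azure/identity');

async function queryAppInsights() {
  // Acquire an AAD token for the Application Insights API instead of using an API key.
  const credential = new DefaultAzureCredential();
  const token = await credential.getToken('https://api.applicationinsights.io/.default');

  const appId = '<app-insights-application-id>'; // placeholder
  const query = encodeURIComponent('requests | summarize count() by bin(timestamp, 1h)');

  const res = await fetch(
    `https://api.applicationinsights.io/v1/apps/${appId}/query?query=${query}`,
    {headers: {Authorization: `Bearer ${token.token}`}}
  );
  console.log(await res.json());
}

With per-user AAD tokens, the 60 requests per minute per user limit quoted above applies instead of the shared API-key daily cap.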
Recently I was developing an application using the LinkedIn People Search API. The documentation says that a developer registration gets 100,000 (1 lakh) API calls per day, but when I registered for this API and ran a Python script, after some 300 calls it said the throttle limit was exceeded.
Did anyone face this kind of issue using the LinkedIn API? Comments are appreciated.
Thanks in advance.
It's been a while, but the stats suggest people still look at this; I'm experimenting with the LinkedIn API and can provide some more detail.
The typical throttles are stated as both a maximum (e.g. 100K) and a per-user-token number (e.g. 500). Those numbers together mean you can get up to a maximum of 100,000 calls per day to the API, but even as a developer a single user token is limited to 500 per day.
I ran into this, and after setting up a barebones app and getting some users, I can confirm a daily throttle of several thousand API calls. [Deleted discussion of what was probably, upon further consideration, an accidental back door in the LinkedIn API.]
As per the Throttle Limits published by LinkedIn:
LinkedIn API keys are throttled by default. The throttles are designed to ensure maximum performance for all developers and to protect the user experience of all users on LinkedIn.
There are three types of throttles applied to all API keys:
Application throttles: These throttles limit the number of each API call your application can make using its API key.
User throttles: These throttles limit the number of calls for any individual user of your application. User-level throttles serve several purposes, but in general are implemented where there is a significant potential impact to the user experience for LinkedIn users.
Developer throttles: For people listed as developers on their API keys, they will see user throttles that are approximately four times higher than the user throttles for most calls. This gives you extra capacity to build and test your application. Be aware that the developer throttles give you higher throttle limits as a developer of your application, but your users will experience the User throttle limits, which are lower. Take care to make sure that your application functions correctly with the User throttle limits, not just the throttle limits for your usage as a developer.
Note: To view current API usage of your application and to ensure you haven't hit any throttle limits, visit https://www.linkedin.com/developer/apps and click on "Usage & Limits".
The throttle limit for individual users of People Search is 100, with 400 being the limit for the person that is associated with the Application as the developer:
https://developer.linkedin.com/documents/throttle-limits
When you run into a limit, view the API usage for the application on the application page to see which throttle you are hitting.
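Not LinkedIn-specific, but when a script does hit a throttle, the usual pattern is to back off and retry rather than keep hammering the key. A generic sketch (Node.js 18+; the status codes and delays are illustrative assumptions, since different APIs signal throttling differently):

// Retry a fetch with exponential backoff when the API signals throttling.
async function fetchWithBackoff(url, options = {}, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, options);
    // Many APIs use 429 for rate limiting; some (LinkedIn historically) use 403.
    if (res.status !== 429 && res.status !== 403) {
      return res;
    }
    const delayMs = Math.min(60000, 1000 * 2 ** attempt); // 1s, 2s, 4s, ... capped at 60s
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  throw new Error(`Still throttled after ${maxRetries} retries: ${url}`);
}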