I am trying to get the email (using Get Lead by Id) of all leads retrieved from a Get Lead Activities call for a particular activity type, using Marketo REST API calls. In some cases, I am getting a "Max rate limit '100' exceeded with in '20' secs" error message. In that case, can any data loss occur?
The Marketo API is rate-limited to 100 requests per 20 seconds, so any calls made beyond that limit will not be executed. Whether data can be lost depends on whether you can retry the call after you've dropped back below the 100-per-20-seconds threshold. If you're only retrieving activity records and not updating any Marketo-side records, then there should be no risk to you.
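If you do want to retry, here is a minimal Python sketch of a Get Lead by Id call that waits out the window and tries again; it assumes Marketo's usual behavior of reporting the rate-limit failure as error code 606 inside an otherwise HTTP 200 response, and the function and parameter names are illustrative:

import time
import requests

def get_lead(base_url, token, lead_id, max_retries=5):
    # Fetch a lead by id, retrying when Marketo reports the
    # "Max rate limit exceeded" error (code 606) in the response body.
    url = f"{base_url}/rest/v1/lead/{lead_id}.json"
    for attempt in range(max_retries):
        body = requests.get(url, params={"access_token": token}).json()
        if body.get("success"):
            return body["result"]
        codes = {str(e.get("code")) for e in body.get("errors", [])}
        if "606" in codes:
            time.sleep(20)  # wait out the 20-second window, then retry
            continue
        raise RuntimeError(f"Marketo error: {body.get('errors')}")
    raise RuntimeError("still rate-limited after retries")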
We are building a tool to off-board existing employees, including clearing their calendars of all existing events. When querying the /calendars/{calendarID}/events/ endpoint, we are occasionally getting a 500 - Stack limit exceeded error. We're only generating a few dozen to a few hundred requests, so we don't seem to be hitting any rate limits, which appear to be 10k per day; additionally, it's only intermittent, rather than failing continuously, as a rate limit would generally cause. Anyone familiar with this error?
You can find all the Calendar API related errors in the Calendar API Errors documentation (first reference below).
As for the 500 - Stack limit exceeded error message you are receiving, it looks like the issue might in fact be coming from somewhere else.
You can also test the Calendar API by using the Calendar API Reference (second reference below).
Reference
Calendar API Errors
Calendar API Events: get
Using Google Analytics and its Measurement Protocol, I am trying to track eCommerce transactions based on my customers (who aren't end consumers, so I don't have sparse unique user ids, locations, etc.) which have a semantic idea of a "sale" with revenue.
The problem is that not all of my logged requests to the GA MP API result in "rows" of transactions when looking at Conversions -> Ecommerce -> Transactions, and the reported revenue is correspondingly missing too. As an example of the discrepancy: listing all my API calls with non-zero transaction revenue, I should see 321 transactions in the analytics dashboard. However, I see only 106, about 30%! The ratio is about the same every day, even after tweaking attributes which I would think would force uniqueness of a session or transaction.
A semantic difference is that a unique consumer (cid or uid) can send a "t=transaction" with a unique "ti" (transaction id), and these overlap and are not serial. I say this to suggest that maybe there is some session-related deduplication happening, even though my "ti" attribute is definitely unique across my notion of a "transaction". In other words, a particular cid/uid may have many different ti's in the same minute.
I have no Google Analytics JavaScript or client-side components in use; they are simply not applicable to how I need to use Google Analytics, which takes me to the Measurement Protocol.
Using the hit-builder, /debug/collect, and logging of any non-200 HTTP responses, I see absolutely no indication that any of my "t=transaction" messages were not received and processed. I think the typical debugging points are eliminated by this list of what I have tried:
sent message via /collect
sent multiple messages via /batch (t=transaction and t=item)
sent my UUID of my consumer as cid=, uid= and both
tried with and without "sc=start" to ensure there was no session deduplication of a transaction
tried with and without ua (user-agent) and uip (ip override), since it's server side but hits from consumers do sometimes come from different origins
took into consideration my timezone (UTC-8) and how my server logs these requests (UTC)
waited 24 to 48 hours to ensure data
ecommerce is turned on for my view
the number of calls to the measurement protocol is < 10,000 per day, so I don't think I am hitting any limits
I have t=event messages too, although I am taking a step back from using them for now, until I can see that data is represented to at least 90%+.
Here is an example t=transaction call.
curl \
--verbose \
--request POST \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data 'ta=customer1&t=transaction&sc=start&v=1&cid=4b014cff-ccce-44c2-bfb8-e0f05fc7827c&tr=0.0&uid=4b014cff-ccce-44c2-bfb8-e0f05fc7827c&tid=UA-xxxxxxxxx-1&ti=5ef618370b01009807f780c5' \
'https://www.google-analytics.com/collect'
You've done a very good job debugging, so unfortunately there isn't much left to do. A few things left to check:
View bot/spider filter: disable this option to be on the safe side.
500 hits / session: if you're sending lots of hits for the same cid/uid within 30 minutes (or whatever your session timeout is), these would be recorded as part of the same session, and thus you could reach the quota limit.
10M hits / property / month: you didn't mention overall property volume, so I'm mentioning this just in case.
Payload limit of 8KB (8192 bytes): I've seen people run into that issue when tracking transactions with a huge number of products attached.
Other view filters: same thing, you didn't mention them, so I'm mentioning this just in case.
Further debugging could include:
Using events instead of transactions: the problem with transactions is that they're a black box; if they don't show up, you don't have anything to debug. I personally always track my transactions via events, and I set a copy of the Ecommerce payload as the event label (a JSON string) for debugging. If the event is present, I know it's not a data ingestion issue but most likely a malformed ecommerce payload (and I have the event label to debug it); if the event is missing, then it's a data ingestion problem. See the example below (a Python sketch of building such a hit follows this list), and replace UA-XXXXXXX-1 with your own:
v=1&t=event&tid=UA-XXXXXXX-1&cid=1373730658.1593409595&ec=Ecommerce&ea=Purchase&ti=T12345&ta=Online%20Store&tr=15.25&tt=0.00&ts=0.00&tcc=SUMMER_SALE&pa=purchase&pr1nm=Triblend%20Android%20T-Shirt&pr1id=12345&pr1pr=15.25&pr1br=Google&pr1ca=Apparel&pr1va=Gray&pr1qt=1&pr1cc=&el=%7B%22purchase%22%3A%7B%22actionField%22%3A%7B%22id%22%3A%22T12345%22%2C%22affiliation%22%3A%22Online%20Store%22%2C%22revenue%22%3A%2215.25%22%2C%22tax%22%3A%220.00%22%2C%22shipping%22%3A%220.00%22%2C%22coupon%22%3A%22SUMMER_SALE%22%7D%2C%22products%22%3A%5B%7B%22name%22%3A%22Triblend%20Android%20T-Shirt%22%2C%22id%22%3A%2212345%22%2C%22price%22%3A%2215.25%22%2C%22brand%22%3A%22Google%22%2C%22category%22%3A%22Apparel%22%2C%22variant%22%3A%22Gray%22%2C%22quantity%22%3A1%2C%22coupon%22%3A%22%22%7D%5D%7D%7D
Using a data collection platform like Segment: this will give you an extra level of debugging via their debugger (although their payload syntax differs from GA's, which introduces another level of complexity; it does help me spot issues with the underlying data from time to time, as I'm familiar with its syntax).
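To make the event-label approach above concrete, here is a minimal Python sketch that builds that kind of hit, validates it against /debug/collect first, and then sends it live; the function name and dict shapes are illustrative, not a fixed API:

import json
import requests

GA_COLLECT = "https://www.google-analytics.com/collect"
GA_DEBUG = "https://www.google-analytics.com/debug/collect"

def send_purchase_event(tid, cid, transaction, product):
    # Send the transaction as a t=event hit, mirroring the Ecommerce
    # payload into the event label (el) so delivery can be verified later.
    payload = {
        "v": "1", "t": "event", "tid": tid, "cid": cid,
        "ec": "Ecommerce", "ea": "Purchase",
        "ti": transaction["id"], "tr": transaction["revenue"],
        "pa": "purchase",
        "pr1id": product["id"], "pr1nm": product["name"],
        "pr1pr": product["price"], "pr1qt": product["quantity"],
        "el": json.dumps({"purchase": {"actionField": transaction,
                                       "products": [product]}}),
    }
    # /debug/collect validates the hit and reports parser warnings
    # without recording it; check hitParsingResult in the response
    debug = requests.post(GA_DEBUG, data=payload).json()
    live = requests.post(GA_COLLECT, data=payload)
    return debug, live.status_code

If the event then shows up in GA while the transaction report stays empty, the event label gives you the exact payload to inspect.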
I am using the Google Calendar API to preprocess events that are being added (adjusting their content depending on certain values they may contain). This means that theoretically I need to update any number of events at any given time, depending on how many are created.
The Google Calendar API has usage quotas, especially one stating a maximum of 500 operations per 100 seconds.
To tackle this I am using a time-based trigger (every 2 minutes) that does up to 500 operations (and only updates sync tokens when all events are processed). The downside of this approach is that I have to run a check every 2 minutes, whether or not anything has actually changed.
I would like to replace the time-based trigger with a watch. I'm not sure, though, if there is any way to limit the number of watch calls so that I can ensure the 100-seconds quota is not exceeded.
My research so far shows me that it cannot be done. I'm hoping I'm wrong. Any ideas on how this can be solved?
AFAIK, that is one of the best practices suggested by Google. Using watch and push notifications allows you to eliminate the extra network and compute costs involved with polling resources to determine whether they have changed. Here are some tips for managing work within the quota, from this blog:
Use push notifications instead of polling.
If you cannot avoid polling, make sure you only poll when necessary (for example, poll very seldom at night).
Use incremental synchronization with sync tokens for all collections instead of repeatedly retrieving all the entries.
Increase page size to retrieve more data at once by using the maxResults parameter.
Update events when they change, avoid re-creating all the events on every sync.
Use exponential backoff for error retries.
Also, if you cannot avoid exceeding your current limit, you can always request additional quota.
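For the exponential backoff item in the list above, a minimal Python sketch; QuotaError is a stand-in for whatever rate-limit/quota exception your Calendar client library actually raises:

import random
import time

class QuotaError(Exception):
    # Stand-in for the quota/rate-limit exception your client raises.
    pass

def with_backoff(call, max_retries=5):
    for attempt in range(max_retries):
        try:
            return call()
        except QuotaError:
            # wait 1s, 2s, 4s, ... plus jitter before retrying
            time.sleep((2 ** attempt) + random.random())
    return call()  # final attempt; any error now propagates to the caller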
Firestore offers 50,000 document read operations per day as part of its free tier.
However, in my application, the client is fetching a collection containing price data. The price data is created over time. Hence, starting from a specific timestamp, the client can read up to 1000 documents. Each document represents one timestamp with the price information.
This means that if the client refreshes his/her web browser 50 times, it will exhaust my quota immediately. And that is just for a single client.
That is what happened, and I got this error:
Error: 8 RESOURCE_EXHAUSTED: Quota exceeded
The price data are static: once they have been written, they are not supposed to change.
Is there a solution for this issue, or should I consider a database other than Firestore?
The error message indicates that you've exhausted the quota that is available. On the free plan the quota is 50,000 document reads per day, so you've read that number of documents already.
Possible solutions:
Upgrade to a paid plan, which has a much higher quota.
Wait until tomorrow to continue, since the quota resets every day.
Try in another free project, since each project has its own quota.
If you have a dataset that will never (or rarely) change, why not ship it as a JSON object in the app itself? You could make it a separate .js file and then import it for reading to build your table.
Alternatively: is there a reason your users would ever navigate through all 1,000 records? You can simulate a full table while limiting calls to, say, 10 records, and then paginate to get more results if needed (see the sketch below).
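A minimal pagination sketch using the Python client (google-cloud-firestore); the "prices" collection and "timestamp" field are hypothetical stand-ins for your schema:

from google.cloud import firestore

db = firestore.Client()
PAGE_SIZE = 10  # each refresh now costs 10 reads instead of 1000

def first_page():
    query = (db.collection("prices")      # hypothetical collection
               .order_by("timestamp")
               .limit(PAGE_SIZE))
    return list(query.stream())

def next_page(last_doc):
    # Resume the ordered query after the previous page's last document.
    query = (db.collection("prices")
               .order_by("timestamp")
               .start_after(last_doc)
               .limit(PAGE_SIZE))
    return list(query.stream())

docs = first_page()
if docs:
    more = next_page(docs[-1])  # fetch the next page only on demand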
As far as I know, Marketo limits the number of REST API requests to 10,000 per day. Is there a way to overcome this limit? Can I pay and get more of those?
I found out that REST API requests and SOAP API requests are counted separately, but I'm trying to find a solution limited to the REST API.
Moreover, in order to get an access token I need to sacrifice a request. I need to know how long this access token stays alive in order to save as many requests as possible.
You can increase your limit just by asking your account manager. It costs about 15K per year to increase your limit by 10K API calls.
Here are the default limits in case you don't have them yet:
Default Daily API Quota: 10,000 API calls (counter resets daily at 12:00 AM CST)
Rate Limit: 100 API calls in a 20 second window
Documentation: REST API
You'll want to ask your Marketo account manager about this.
I thought I would update this with some more information since I get this question a lot:
http://developers.marketo.com/rest-api/
Daily Quota: Most subscriptions are allocated 10,000 API calls per day (which resets daily at 12:00AM CST). You can increase your daily quota through your account manager.
Rate Limit: API access per instance limited to 100 calls per 20 seconds.
Concurrency Limit: Maximum of 10 concurrent API calls.
For the Daily limit:
Option 1: Call your account manager. This will cost you $'s. For a client I work for, we have negotiated a much higher limit.
Option 2: Store and batch your records. For example, you can send a batch of 300 leads in a single lead insert/update call, which means you can insert/update up to 3,000,000 leads per day (300 records × 10,000 calls).
For the Rate limit:
Option 1 will probably not work: your account manager will be reluctant to change this unless you are a very large company.
Option 2: You need to add some governance to your code. There are several ways to do this, including queues, or timers with a counter (see the throttle sketch after this list). If you make multi-threaded calls, you will need to take concurrency into account.
Concurrent call limit:
You have to limit your concurrent threads to 10.
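As a concrete example of the timer-with-a-counter approach from Option 2, a minimal sliding-window throttle sketch in Python (names illustrative): call acquire() before every Marketo request and it blocks until a slot inside the 100-calls/20-seconds window frees up.

import threading
import time
from collections import deque

class SlidingWindowThrottle:
    # Allow at most `limit` calls per `window` seconds; acquire() blocks
    # until a slot is free, so it is safe to call from multiple threads.
    def __init__(self, limit=100, window=20.0):
        self.limit, self.window = limit, window
        self.calls = deque()  # timestamps of recent calls
        self.lock = threading.Lock()

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                while self.calls and now - self.calls[0] > self.window:
                    self.calls.popleft()  # drop calls outside the window
                if len(self.calls) < self.limit:
                    self.calls.append(now)
                    return
                wait = self.window - (now - self.calls[0])
            time.sleep(max(wait, 0.01))

throttle = SlidingWindowThrottle()
# throttle.acquire()  # call before every Marketo request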
There are multiple ways to handle API Quota limits.
If you want to avoid hitting the API limit altogether, try to achieve your functionality through Marketo webhooks. Marketo webhooks do not have API limits, but they have their own cons; please research this.
You may use the REST API, but design your strategy to batch the maximum number of records into a single payload instead of smaller chunks: e.g. instead of sending 10 different API calls with 20 records each, accumulate the maximum allowed payload and call the Marketo API once (see the sketch below).
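A minimal accumulate-and-flush sketch in Python; the /rest/v1/leads.json endpoint and createOrUpdate action follow Marketo's documented lead sync call, but the class and field names here are illustrative:

import requests

BATCH_SIZE = 300  # Marketo accepts up to 300 records per sync call

class LeadBatcher:
    # Accumulate leads and push them with one API call per 300 records.
    def __init__(self, base_url, token):
        self.url = f"{base_url}/rest/v1/leads.json"
        self.token = token
        self.buffer = []

    def add(self, lead):
        self.buffer.append(lead)
        if len(self.buffer) >= BATCH_SIZE:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        payload = {"action": "createOrUpdate",
                   "lookupField": "email",
                   "input": self.buffer}
        resp = requests.post(self.url,
                             params={"access_token": self.token},
                             json=payload)
        resp.raise_for_status()  # per-record errors still arrive in the body
        self.buffer = []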
The access token is valid for 1 hour after authenticating.
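Given that roughly one-hour lifetime, you can cache the token and only re-authenticate when it is close to expiry, rather than spending a request per call. A minimal sketch against Marketo's identity endpoint (its expires_in is reported in seconds); the class name and 60-second safety margin are illustrative:

import time
import requests

class TokenCache:
    # Re-use one access token until shortly before it expires.
    def __init__(self, base_url, client_id, client_secret):
        self.base_url = base_url
        self.client_id, self.client_secret = client_id, client_secret
        self.token, self.expires_at = None, 0.0

    def get(self):
        if self.token is None or time.time() > self.expires_at - 60:
            resp = requests.get(
                f"{self.base_url}/identity/oauth/token",
                params={"grant_type": "client_credentials",
                        "client_id": self.client_id,
                        "client_secret": self.client_secret})
            body = resp.json()
            self.token = body["access_token"]
            self.expires_at = time.time() + body["expires_in"]
        return self.token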
Marketo's Bulk API can be helpful in regard to rate limiting: once you have the raw activities, the updates etc. on the lead object can be done without pinging Marketo for each lead: http://developers.marketo.com/rest-api/bulk-extract/. However, be aware of export limits that you may run into when bulk exporting leads + activities. Currently, Marketo only counts the size of an export against the limit once the job has completed, which means that, as a workaround, you can launch a maximum of 2 concurrent export jobs (which sum to more than the limit) at the same time. Marketo will not kill a running job if a limit has been reached, so long as the job was launched prior to the limit being reached.
Marketo has recently upgraded the maximum limits:
Daily Quota: Subscriptions are allocated 50,000 API calls per day (which resets daily at 12:00AM CST). You can increase your daily quota through your account manager.
Rate Limit: API access per instance limited to 100 calls per 20 seconds.
Concurrency Limit: Maximum of 10 concurrent API calls.
https://developers.marketo.com/rest-api/