If a webhook call fails for any reason, will it be triggered again? How do I know that it failed?
Do I need to pull in any case to make sure I get all the notes?
If I am pulling, am I restricted to 1 call every 15 minutes per account, or 1 call every 15 minutes globally? And if I need to retrieve 2000 notes and have to make several calls, will each chunk also be restricted?
When you're done processing the webhook, you should return a status code of 200 to let the Evernote service know that you successfully received it. If it sends a webhook and does not get a 200 back, it should keep re-sending the webhook until it does, although there may be a limit to the number of retries; I'm not sure.
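For illustration, a minimal handler might look like this (a sketch in Python with Flask, which is not what Evernote prescribes; as far as I know the notification arrives as an HTTP GET with query parameters such as userId, guid and reason, and the route path here is just a placeholder):

from flask import Flask, request

app = Flask(__name__)

@app.route("/evernote-webhook", methods=["GET"])
def evernote_webhook():
    # The notification arrives as query parameters (userId, guid, reason, ...).
    params = request.args.to_dict()
    # Hand the notification off to your own processing (queue it, log it, etc.).
    print("webhook received:", params)
    # Returning 200 tells the Evernote service the delivery succeeded;
    # anything else and it may retry the same notification.
    return "", 200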
When you say "pull", I assume you mean to use the Evernote API to query the Evernote service. In that case, you're not limited to 1 call every 15 minutes - you can make many more calls than that. It's not unlimited, though; there is a documented rate limit in place.
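If you do exceed the limit, the API call fails with a rate-limit error that includes how long to wait before retrying. A rough sketch of handling that with the Evernote Python SDK (the wrapper function is my own, not part of the SDK):

import time
from evernote.edam.error.ttypes import EDAMErrorCode, EDAMSystemException

def call_with_rate_limit_retry(fn, *args):
    # Call an Evernote API method; if the rate limit is hit, wait the
    # duration the service reports and retry once.
    try:
        return fn(*args)
    except EDAMSystemException as e:
        if e.errorCode == EDAMErrorCode.RATE_LIMIT_REACHED:
            time.sleep(e.rateLimitDuration)  # seconds until the limit resets
            return fn(*args)
        raise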
I have a simple PHP script (using Botman) called by my Telegram bot via a webhook. It's supposed to respond on receiving a keyword. This works, sending response messages to my phone.
Randomly, however, it repeats the messages without any input. Though random, it's so frequent that I would estimate it does this at least 10 times per minute.
Thinking this could be due to some web bots calling my URL, I modified the script to check the presence (and value) of a query parameter, and all random messages stopped. The web bots wouldn't know this secret parameter, right?
As expected, once I updated the PHP script (without updating Telegram with the modified webhook), the messages stopped. So far so good.
Next, I updated Telegram with the webhook containing the secret query parameter, then waited 5 minutes. No messages: still looking good.
Alas, once I send my keyword, it gives the expected response but then keeps repeating endlessly again.
Where do I look to fix this?
P.S. To test, the script also returns general info about the user. I can see it keeps returning my info in the repeated messages, as if I had made each request. Could this be a bug in Telegram?
According to your description, it seems that your webhook setup is still being handed the most recent updates, much as the Telegram getUpdates method would return them: if your script keeps answering the same exact message, it means it is receiving the same exact update object more than once.
A good way to approach the problem is to look at how the webhook works and how it communicates with the Telegram servers, in order to understand how it handles the updates it receives from the chatbot itself.
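One concrete thing to check: as far as I know, Telegram keeps re-delivering an update until your endpoint answers with a 2xx status, so a handler that responds slowly or throws an error will see the same update_id over and over. A rough sketch of the idea, in Python rather than your PHP/Botman setup, and with an in-memory set that a real bot would persist somewhere durable:

from flask import Flask, request

app = Flask(__name__)
seen_update_ids = set()  # for illustration only; persist this in a real bot

@app.route("/telegram-webhook", methods=["POST"])
def telegram_webhook():
    update = request.get_json(force=True)
    update_id = update.get("update_id")
    if update_id in seen_update_ids:
        # Duplicate delivery: acknowledge it again but do nothing.
        return "", 200
    seen_update_ids.add(update_id)
    # ... handle the keyword and send the reply here ...
    # Respond quickly with 200 so Telegram stops re-delivering this update.
    return "", 200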
Having spent long hours trying to find documentation and help around this resulting in nothing, I have decided to reach out to the community.
I would like to read messages from a topic subscription. Using the message, a UI is populated for a human to work on it. Each message takes approximately 15 minutes to process, and each client can work on only one message at a time. At the end of processing a message, the client can either decide to stop processing messages or request a new one.
With the max lock time set at 5 minutes on the subscription, I need to be able to automatically renew my lock for up to 15 minutes.
The first attempted approach was to use CreateReceiver, fetch the message, read it, and Complete the message when done. The issue with this is that I have not been able to figure out how to automatically renew the lock for 15 minutes. I see the RenewLockAsync function, but I would like this to be automatic rather than having to run a background timer to keep track of the expiring lock.
The second attempted approach was to try using ServiceBusClient.CreateProcessor() with options to set the AutoLockRenewal timespan. The issue faced here is that the processor itself runs based on events in the background. Since I need to populate a UI, I need to be able to stop the processor after the message has been read, return the callback, and once the human interaction is done, complete the message. I have been unable to find a way to do this.
What would be a good approach to achieve this? The subscription acts as a work queue that multiple people pull items from and individually work on. Any help in proposing an approach is appreciated.
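For reference, here is roughly the flow I am after, sketched with the Python azure-servicebus SDK, whose AutoLockRenewer does the renewal in the background (my actual code is .NET; the connection string, topic and subscription names below are placeholders):

from azure.servicebus import ServiceBusClient, AutoLockRenewer

CONN_STR = "<connection string>"          # placeholder
TOPIC, SUBSCRIPTION = "<topic>", "<sub>"  # placeholders

# Renew the lock in the background for up to 15 minutes per message.
renewer = AutoLockRenewer(max_lock_renewal_duration=15 * 60)

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_subscription_receiver(TOPIC, SUBSCRIPTION)
    with receiver:
        messages = receiver.receive_messages(max_message_count=1, max_wait_time=5)
        for message in messages:
            renewer.register(receiver, message)
            # ... hand the message body to the UI and wait for the human ...
            receiver.complete_message(message)

renewer.close()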
Using Google Analytics and its Measurement Protocol, I am trying to track eCommerce transactions for my customers (who aren't end consumers, meaning there are no sparse unique user IDs, locations, etc.) and who have a semantic idea of a "sale" with revenue.
The problem is that not all of my logged requests to the GA MP API are resulting in "rows" of transactions when looking at Conversions -> Ecommerce -> Transactions, and the reported revenue is correspondingly missing too. As an example of the discrepancy: listing all my API calls with non-zero transaction revenue, I should see 321 transactions in the Analytics dashboard, but I see only 106, roughly a third. This is about the same every day, even after tweaking some attributes that I would think would force uniqueness of a session or transaction.
A semantic difference is that a unique consumer (cid or uid) can send a "t=transaction" hit with a unique "ti" (transaction id), and these overlap and are not serial. I mention this to suggest that maybe there is some session-related deduplication happening, even though my "ti" attribute is definitely unique across my notion of a "transaction". In other words, a particular cid/uid may have many different ti's in the same minute.
I have no Google Analytics JavaScript or client-side components in use; they are simply not applicable to how I need to use Google Analytics, which is what leads me to the Measurement Protocol.
Using the hit builder, /debug/collect, and logging of any non-200 HTTP responses, I see absolutely no indication that any of my "t=transaction" messages would not be received and processed. I think the typical debugging points are eliminated by this list of what I have tried:
sent messages via /collect
sent multiple messages via /batch (t=transaction and t=item)
sent my consumer's UUID as cid=, uid=, and both
tried with and without "sc=start" to ensure there was no session deduplication of a transaction
tried with and without ua (user-agent) and uip (ip override) since it's server side but hits from consumers do come from different originations sometimes
took into consideration my timezone (UTC-8) and how my server logs these requests (UTC)
waited 24 to 48 hours to ensure data
ecommerce is turned on for my view
the number of calls to the Measurement Protocol is < 10,000 per day, so I don't think I am hitting any limits
I have t=event messages too, although I am taking a step back from using them for now until I can see that the transaction data is represented at least 90%+.
Here is an example t=transaction call.
curl \
--verbose \
--request POST \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data 'ta=customer1&t=transaction&sc=start&v=1&cid=4b014cff-ccce-44c2-bfb8-e0f05fc7827c&tr=0.0&uid=4b014cff-ccce-44c2-bfb8-e0f05fc7827c&tid=UA-xxxxxxxxx-1&ti=5ef618370b01009807f780c5' \
'https://www.google-analytics.com/collect'
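For completeness, the same hit can be validated programmatically against /debug/collect (a sketch with Python's requests library; the values mirror the curl example above):

import requests

payload = {
    "v": "1",
    "t": "transaction",
    "tid": "UA-xxxxxxxxx-1",
    "cid": "4b014cff-ccce-44c2-bfb8-e0f05fc7827c",
    "uid": "4b014cff-ccce-44c2-bfb8-e0f05fc7827c",
    "ti": "5ef618370b01009807f780c5",
    "ta": "customer1",
    "tr": "0.0",
    "sc": "start",
}

# The debug endpoint does not record the hit; it returns a JSON report whose
# "valid" and "parserMessage" fields say whether the hit would be accepted.
response = requests.post("https://www.google-analytics.com/debug/collect", data=payload)
print(response.json())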
You've done a very good job debugging, so unfortunately there isn't much left to do; a few things left to check:
View bot/spider filter: disable this option to be on the safe side.
500 hits / session: if you're sending lots of hits for the same cid/uid within 30 minutes (or whatever your session timeout is), these would be recorded as part of the same session and you could reach the quota limit.
10M hits / property / month: you didn't mention overall property volume, so I'm mentioning this just in case.
Payload limit of 8192 bytes: I've seen people run into this when tracking transactions with a huge number of products attached to them.
Other view filters: same thing, you didn't mention them, so I'm mentioning this just in case.
Further debugging could include:
Using events instead of transactions: the problem with transactions is that they're a black box; if they don't show up, you have nothing to debug. I personally always track my transactions via events and set a copy of the ecommerce payload as the event label (a JSON string) for debugging. If the event is present, I know it's not a data ingestion issue but most likely a malformed ecommerce payload (and I have the event label to debug it); if the event is missing, it's a data ingestion problem. See the example below, replacing UA-XXXXXXX-1 with your own (a scripted version of the same approach is sketched after this list):
v=1&t=event&tid=UA-XXXXXXX-1&cid=1373730658.1593409595&ec=Ecommerce&ea=Purchase&ti=T12345&ta=Online%20Store&tr=15.25&tt=0.00&ts=0.00&tcc=SUMMER_SALE&pa=purchase&pr1nm=Triblend%20Android%20T-Shirt&pr1id=12345&pr1pr=15.25&pr1br=Google&pr1ca=Apparel&pr1va=Gray&pr1qt=1&pr1cc=&el=%7B%22purchase%22%3A%7B%22actionField%22%3A%7B%22id%22%3A%22T12345%22%2C%22affiliation%22%3A%22Online%20Store%22%2C%22revenue%22%3A%2215.25%22%2C%22tax%22%3A%220.00%22%2C%22shipping%22%3A%220.00%22%2C%22coupon%22%3A%22SUMMER_SALE%22%7D%2C%22products%22%3A%5B%7B%22name%22%3A%22Triblend%20Android%20T-Shirt%22%2C%22id%22%3A%2212345%22%2C%22price%22%3A%2215.25%22%2C%22brand%22%3A%22Google%22%2C%22category%22%3A%22Apparel%22%2C%22variant%22%3A%22Gray%22%2C%22quantity%22%3A1%2C%22coupon%22%3A%22%22%7D%5D%7D%7D
Using a data collection platform like Segment: this will give you an extra level of debugging via their debugger (although their payload syntax differs from GA's, which introduces another layer of complexity, it does help me from time to time to spot issues with the underlying data, as I'm familiar with its syntax).
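As mentioned above, here is a rough sketch of sending such an event hit from Python with requests, copying the raw ecommerce payload into the event label for later debugging (the field values are illustrative, taken from the sample payload above):

import json
import requests

ecommerce = {
    "purchase": {
        "actionField": {"id": "T12345", "revenue": "15.25"},
        "products": [{"name": "Triblend Android T-Shirt", "id": "12345",
                      "price": "15.25", "quantity": 1}],
    }
}

hit = {
    "v": "1",
    "t": "event",
    "tid": "UA-XXXXXXX-1",    # replace with your property
    "cid": "1373730658.1593409595",
    "ec": "Ecommerce",
    "ea": "Purchase",
    "ti": "T12345",
    "tr": "15.25",
    "pa": "purchase",
    "pr1nm": "Triblend Android T-Shirt",
    "pr1id": "12345",
    "pr1pr": "15.25",
    "pr1qt": "1",
    "el": json.dumps(ecommerce),  # JSON copy of the payload as the event label
}

# requests form-encodes the dict, which is what /collect expects.
requests.post("https://www.google-analytics.com/collect", data=hit)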
I am using the Google Calendar API to preprocess events that are being added (adjust their content depending on certain values they may contain). This means that theoretically I need to update any number of events at any given time, depending on how many are created.
The Google Calendar API has usage quotas, especially one stating a maximum of 500 operations per 100 seconds.
To tackle this I am using a time-based trigger (every 2 minutes) that does up to 500 operations (and only updates sync tokens when all events are processed). The downside of this approach is that I have to run a check every 2 minutes, whether or not anything has actually changed.
I would like to replace the time-based trigger with a watch. I'm not sure, though, whether there is any way to limit the number of watch-triggered calls so that I can ensure the per-100-seconds quota is not exceeded.
My research so far shows me that it cannot be done. I'm hoping I'm wrong. Any ideas on how this can be solved?
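For context, this is roughly what registering a watch channel looks like (a sketch using the Python google-api-python-client; the channel id and callback address are placeholders, and service is assumed to be an authorized Calendar API client):

import uuid

channel = {
    "id": str(uuid.uuid4()),                          # your own channel identifier
    "type": "web_hook",
    "address": "https://example.com/notifications",   # HTTPS endpoint Google will call
}

# Google then POSTs a small notification to that address whenever the calendar
# changes; the notification carries no event data, it only tells you to sync.
response = service.events().watch(calendarId="primary", body=channel).execute()
print(response.get("resourceId"), response.get("expiration"))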
AFAIK, that is one of the best practices suggested by Google. Using watch and push notifications allows you to eliminate the extra network and compute costs involved with polling resources to determine whether they have changed. Here are some tips from this blog on how to best manage working within the quota:
Use push notifications instead of polling.
If you cannot avoid polling, make sure you only poll when necessary (for example, poll only very seldom at night).
Use incremental synchronization with sync tokens for all collections instead of repeatedly retrieving all the entries.
Increase page size to retrieve more data at once by using the maxResults parameter.
Update events when they change, avoid re-creating all the events on every sync.
Use exponential backoff for error retries.
Also, if you cannot avoid exceeding your current limit, you can always request additional quota.
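To illustrate the incremental-sync tip, here is a rough sketch of a sync-token loop using the Python google-api-python-client (stored_sync_token persistence and process_event are placeholders for your own logic; the same idea applies to the Apps Script advanced Calendar service):

def sync_calendar(service, stored_sync_token=None):
    # Fetch only what changed since the last sync; with no token this falls
    # back to a full sync. A 410 response means the token expired and a
    # full re-sync is required.
    page_token = None
    while True:
        response = service.events().list(
            calendarId="primary",
            syncToken=stored_sync_token,
            pageToken=page_token,
            maxResults=250,              # larger pages mean fewer requests
        ).execute()
        for event in response.get("items", []):
            process_event(event)         # placeholder for your own handling
        page_token = response.get("nextPageToken")
        if not page_token:
            # Persist this token and pass it back in on the next run.
            return response.get("nextSyncToken")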
Given that Evernote doesn't publish their exact API rate limits (at least I can't find them), I'd like to ask for some guidance on their usage.
I'm creating an application that will sync the user's notes and store them locally. I'm using getFilteredSyncChunk to do this.
I'd like to know how often I can make this API call without hitting the limits. I understand that the limits are on a per-user basis, so would it be acceptable to call this every 5 minutes to get the latest notes?
TIA
The rate limit is on a per API key basis. You'll be okay calling getFilteredSyncChunk every five minutes, although it's a little more efficient to call getSyncState instead.
In case you haven't seen it yet, check out this guide for info on sync (accessible from this page).
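To illustrate, a rough sketch with the Evernote Python SDK that checks getSyncState first and only downloads chunks when the update count has moved past your last-seen USN (the client setup, token, and persisted last_usn are assumptions):

from evernote.api.client import EvernoteClient
from evernote.edam.notestore.ttypes import SyncChunkFilter

client = EvernoteClient(token="<dev or oauth token>", sandbox=False)  # placeholder
note_store = client.get_note_store()

def fetch_new_chunks(last_usn):
    # Cheap check first: has anything changed since our last sync?
    state = note_store.getSyncState()
    if state.updateCount <= last_usn:
        return last_usn  # nothing new, no need to download chunks

    # Something changed: pull chunks until we catch up to updateCount.
    chunk_filter = SyncChunkFilter(includeNotes=True)
    while last_usn < state.updateCount:
        chunk = note_store.getFilteredSyncChunk(last_usn, 250, chunk_filter)
        for note in chunk.notes or []:
            pass  # store/update the note locally (placeholder)
        last_usn = chunk.chunkHighUSN
    return last_usn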