Apigee Quota type not defined - apigee

When the Quota type is not defined in a Quota policy (which means that it is a default Quota configuration), what happens when the limit is exceeded? For how long will requests be throttled before the client can successfully send requests again?
The Apigee documentation only states the following; it does not describe the behavior:
Default: A Quota policy that does not explicitly define a type is a default Quota configuration.

In the default configuration, quota enforcement is of the calendar type without a StartTime configured.
The StartTime is the calendar/clock start of the corresponding interval (hour, day, week or month). Example:
<Quota name="QuotaPolicy">
    <Allow count="10000"/>
    <Interval>1</Interval>
    <TimeUnit>hour</TimeUnit>
</Quota>
Let's say the first message is received at 2013-07-08 07:35:28. Then the StartTime will be 2013-07-08 07:00:00 and the corresponding end time is 2013-07-08 08:00:00 (1 hour from the StartTime). Once the allowed 10000 calls have been consumed within that window, further requests are rejected until the end time is reached; at 08:00:00 the counter resets and the client can successfully send requests again.
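For comparison, here is a sketch (illustrative values, not taken from the question) of the same policy with the type set explicitly to calendar and a StartTime of your own choosing:
<Quota name="QuotaPolicy" type="calendar">
    <StartTime>2013-7-8 12:00:00</StartTime>
    <Interval>1</Interval>
    <TimeUnit>hour</TimeUnit>
    <Allow count="10000"/>
</Quota>
With an explicit calendar type, the one-hour windows are anchored to the configured StartTime rather than to the clock boundary of the hour.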

Related

Can Azure Application Insights Kusto queries obtain the user's current time zone?

We have a chart that restricts details to "today". Today means something different depending on the viewer's time zone: right now, for me (in Adelaide, AUS), today means everything in the last 15 hours, but for someone in the UK it only means everything in the last 6.5 hours. Is there any way of retrieving the client's time zone in a Kusto query in Application Insights?
Go to the settings of the query console:
Change the time zone to "Local Time" (the default is UTC)
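If you would rather handle it in the query itself, the engine has no way to detect the viewer's locale, so the time zone has to be supplied by you. A minimal sketch, assuming an Application Insights requests table and that the datetime_utc_to_local() function is available in your workspace (the Australia/Adelaide value is only a placeholder):
// The viewer's IANA time zone must be hard-coded or passed in as a parameter.
let tz = "Australia/Adelaide";
requests
| extend localTimestamp = datetime_utc_to_local(timestamp, tz)
// Keep only rows that fall on "today" in the chosen time zone.
| where localTimestamp >= startofday(datetime_utc_to_local(now(), tz))
| summarize requestCount = count() by bin(localTimestamp, 1h)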

Google Analytics Limits and Quotas on API Requests (Reporting APIs request errors)

GA doc:
If your request to the Reporting API fails and you get a response code 500 or 503, you can resubmit it. Google Analytics allows:
10 failed requests per project per profile per hour
50 failed requests per project per profile per day
If the number of your failed requests exceeds these quotas, you'll get the following error...
How does GA count these failed requests and enforce the limits? For example, say you get a response code 500 at 15:50. At 15:50 you then have 1 failed request counted against both the hourly and the daily quota. Is the hourly error counter reset to zero at 16:00 or at 16:50?
In the documentation (another section) it is written that
Daily quotas are refreshed at midnight Pacific Standard Time
Does this also apply to failed requests (the daily counter)?
How long does the profile restriction last after the limits are exceeded (i.e., when you attempt another request to GA after 10 failed requests)?
For the failed-requests quota, GA counts failed requests over a continuously rolling window. In your example, the hourly error counter is reset to zero at 16:50, and the daily error counter is reset at 15:50 on the next day.

update policy query and ingestion retry in ADX

By default, an update policy on a Kusto table is non-transactional. Let's say I have an update policy defined on a table MyTarget whose source is defined in the policy as MySource, and the policy is marked transactional. Ingestion is set up on MySource, so data is continuously being loaded into it. Now say a certain ingestion batch is loaded into MySource; right after that, the query defined in the update policy is triggered. Suppose this query fails, due to memory issues etc. -- because the update policy is transactional, even the batch loaded into MySource will not be committed. I have heard that in this case the ingestion will be retried automatically. Is that so? I haven't found any documentation on this retry. Anyway, my question is simple: how many times will the retry be attempted, and what is the interval between attempts? Are these configurable properties (I am talking about the ADX cluster available through Azure) if I am the owner of the cluster?
Yes, there is an automatic retry for ingestions that fail due to a failure in a transactional update policy.
the full details can be found here: https://learn.microsoft.com/en-us/azure/kusto/management/updatepolicy#failures
Failures are treated as follows:
Non-transactional policy: The failure is ignored by Kusto. Any retry is the responsibility of the data owner.
Transactional policy: The original ingestion operation that triggered the update will fail as well. The source table and the database will not be modified with new data.
If the ingestion method is pull (i.e., Kusto's Data Management service is involved in the ingestion process), there is an automated retry of the entire ingestion operation, orchestrated by the Data Management service, according to the following logic:
Retries are performed until the maximum retry period (2 days) or the maximum number of retry attempts (10) is reached, whichever comes first.
The backoff period starts from 2 minutes, and grows exponentially (2 -> 4 -> 8 -> 16 ... minutes)
In any other case, any retry is the responsibility of the data owner.
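For reference, a minimal sketch of how such a transactional update policy might be declared, using the MyTarget/MySource names from the question; the transformation function name ExpandMySource is hypothetical, and MyTarget's schema must match the function's output:
// Hypothetical transformation the update policy runs on each ingestion into MySource.
.create-or-alter function ExpandMySource() {
    MySource
    | extend IngestedAt = ingestion_time()
}

// Attach the policy to the target table and mark it transactional,
// so a failing query also fails (and later retries) the source ingestion.
.alter table MyTarget policy update @'[{"IsEnabled": true, "Source": "MySource", "Query": "ExpandMySource()", "IsTransactional": true}]'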

Quota violation is not working as per quota set in API proxies

I have created the quota below, which should allow the API to be consumed 6 times per hour. The proxy uses Verify API Key authentication.
URL is http://damuorgn-test.apigee.net/weatherforecastforlongandlat?apikey=dJAXoH8y6GfVNJSjlDhpVIB4XCVyJZ1R
But the Quota exception occurs only after the 8th call (it should actually occur on the 7th). Also, when I change the quota limit and re-deploy the API proxy, I still see the Quota exception on the very first call. Please advise.
I am using a free cloud organization.
<Quota name="Quota-1">
    <DisplayName>Quota 1</DisplayName>
    <Allow count="6"/>
    <Interval ref="request.header.quota_count">1</Interval>
    <Distributed>false</Distributed>
    <Synchronous>false</Synchronous>
    <TimeUnit>hour</TimeUnit>
    <StartTime>2014-6-11 19:00:00</StartTime>
    <AsynchronousConfiguration>
        <SyncIntervalInSeconds>20</SyncIntervalInSeconds>
        <SyncMessageCount>5</SyncMessageCount>
    </AsynchronousConfiguration>
</Quota>
Okay, two things...
1) Your Quota is set to <Distributed>false</Distributed>.
By default your Apigee instance runs on two separate Message Processors (the servers that do the heavy lifting). With Distributed set to false each MP keeps its own count, so with a quota of 6 you effectively allow 6 * 2 servers = 12 calls.
2) Your Quota is Distributed but asynchronous in the second example.
When <Distributed> is true, Apigee shares quota counts through a central data store, and there is always some lag in that. Because <Synchronous> is false, your AsynchronousConfiguration tells each Message Processor to check in with the central store only every 20 seconds or every 5 messages, so each MP can count up to 5 calls before it sees the other servers' counts.
Keep in mind that in a distributed processing model like Apigee you will never get an absolutely precise count: even with <Distributed> set to true and <Synchronous> set to true (i.e., asynchronous updates turned off) there will always be some lag while the servers talk to each other.
Also, you might want to strip out request.header.quota_count and the other request.header references -- if I pass a number (say 100000) as a header like
quota_count: 100000
Apigee will use 100000 rather than your configured value of 1 (the policy uses the referenced variable and falls back to the default value only when the reference is NULL).
And... you probably want to add an <Identifier ref="client.ip"/> or something similar; otherwise the quota is shared globally across all users. See the Apigee variables reference at http://apigee.com/docs/api-services/api/variables-reference for the variables that are available in every flow.
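As a sketch (not a drop-in replacement, and assuming client.ip is an acceptable key for your use case), a more predictable version of the policy with hard-coded values, distributed synchronous counting, and a per-client identifier would look roughly like this:
<Quota name="Quota-1" type="calendar">
    <DisplayName>Quota 1</DisplayName>
    <Allow count="6"/>
    <Interval>1</Interval>
    <TimeUnit>hour</TimeUnit>
    <StartTime>2014-6-11 19:00:00</StartTime>
    <Distributed>true</Distributed>
    <Synchronous>true</Synchronous>
    <Identifier ref="client.ip"/>
</Quota>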

symfony2 logout after a period of inactivity

I have a Symfony2 (2.0.9) project and, for the back-office part, I do not understand where the idle time is set.
Logging in generates a valid session cookie "PHPSESSID" in line with the config.yml setting framework: session: lifetime: 86400 (one day), BUT I am automatically logged out after an hour!
Did you check the session lifetime on the server? See the documentation for session.gc_maxlifetime. The default value is:
session.gc_maxlifetime = 1440
which means 24 minutes.
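If that is the culprit, a minimal fix, assuming you can edit php.ini (or an equivalent override such as ini_set() in your front controller), is to raise the server-side lifetime to match the cookie lifetime:
session.gc_maxlifetime = 86400
In newer Symfony versions the session garbage-collection lifetime can also be configured in config.yml, but on a 2.0.x project the php.ini setting is the most direct place to change it.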
