Multiple Spike Arrest policies in Apigee?

I want to enforce a rate of 1ps for a specific client that will send a Partner-ID header, and some other rate limit (e.g. 5ps) for all other clients. How can I achieve this in Apigee using Spike Arrest?
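One hedged sketch of how this could be wired up (the policy names and the "acme" Partner-ID value are placeholders, not from the question): attach two SpikeArrest policies to the proxy PreFlow and select between them with a Condition on the Partner-ID header.

```xml
<!-- Sketch only: "SpikeArrest-Partner" and "SpikeArrest-Default" are
     hypothetical policy names; "acme" is a placeholder header value. -->
<PreFlow name="PreFlow">
    <Request>
        <Step>
            <!-- 1ps for the specific partner -->
            <Name>SpikeArrest-Partner</Name>
            <Condition>request.header.Partner-ID = "acme"</Condition>
        </Step>
        <Step>
            <!-- 5ps for everyone else -->
            <Name>SpikeArrest-Default</Name>
            <Condition>request.header.Partner-ID != "acme"</Condition>
        </Step>
    </Request>
</PreFlow>

<SpikeArrest name="SpikeArrest-Partner">
    <Rate>1ps</Rate>
</SpikeArrest>

<SpikeArrest name="SpikeArrest-Default">
    <Rate>5ps</Rate>
</SpikeArrest>
```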

Related

Can I get/download the traffic flow data of a specific date and time?

Is there a way to get the traffic flow data of a specific date and time?
For example, the below request gives real-time traffic data.
https://traffic.ls.hereapi.com/traffic/6.1/flow.json?bbox=29.6890%2C-95.4008%3B29.7165%2C-95.5007&apiKey={API_KEY}
How can I specify date-time here? Thanks.
HERE provides several ways to get historical traffic information:
Get the Traffic Pattern from HERE Map Content catalog
https://developer.here.com/documentation/here-map-content/dev_guide/topics-attributes/traffic-pattern-attributes.html
Get the Traffic Pattern from Mobile SDK
https://developer.here.com/documentation/android-premium/3.19/dev_guide/topics/traffic-history.html
Get the Traffic Pattern and Speed from HERE Map Attributes API with TRAFFIC_PATTERN_FC* layers and TRAFFIC_SPEED_RECORD_FC* layers.
https://developer.here.com/documentation/content-map-attributes/dev_guide/topics/here-map-content.html

Terraform + Dynamodb - understand aws_appautoscaling_target and aws_appautoscaling_policy

I am trying to implement dynamodb autoscaling using terraform but I am having a bit of difficulty in understanding the difference between aws_appautoscaling_target and aws_appautoscaling_policy.
Do we need both specified for the autoscaling group? Can someone kindly explain what each is meant for?
Thanks a ton!!
The aws_appautoscaling_target ties your policy to the DynamoDB table. You can define a policy once and use it over and over (i.e. build a standard set of scaling policies for your organization to use); the target is what binds a policy to a resource.
An Auto Scaling group doesn't have to have either a target or a policy. An ASG can scale EC2 instances in/out based on other triggers, such as instance health (defined by EC2 health checks or load-balancer health checks) or desired capacity. This allows a load-balanced application to replace bad instances when they are unable to respond to traffic, and to recover from failures to keep your cluster at the right size. You can add additional scaling policies to react better to demand: for example, if your cluster has 2 instances at max capacity, a scaling policy can watch those instances, add more when needed, and then remove them when demand falls.
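To make the target/policy relationship concrete, here is a hedged Terraform sketch (the table reference aws_dynamodb_table.example and the capacity numbers are assumptions, not from the question): the target registers the table's read capacity as a scalable dimension, and the policy, which references that target, says how to scale it.

```hcl
# Registers ReadCapacityUnits of a (hypothetical) table as scalable.
resource "aws_appautoscaling_target" "table_read" {
  max_capacity       = 100
  min_capacity       = 5
  resource_id        = "table/${aws_dynamodb_table.example.name}"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  service_namespace  = "dynamodb"
}

# The policy is bound to the resource via the target's attributes.
resource "aws_appautoscaling_policy" "table_read" {
  name               = "dynamodb-read-utilization"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.table_read.resource_id
  scalable_dimension = aws_appautoscaling_target.table_read.scalable_dimension
  service_namespace  = aws_appautoscaling_target.table_read.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }
    target_value = 70 # scale to keep consumed/provisioned reads near 70%
  }
}
```

So yes, you need both: without the target, the policy has nothing to attach to.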

rate limit in yii2 vs using nginx for rate limiting

What is the difference between rate limiting via Yii2 versus using nginx, for example, as a reverse proxy and rate limiter?
REF: Yii2 Rate Limiting Api
An application-level rate limit (like Yii2's) is more flexible. You can define different limits per user, for example, or put requests in a queue for future execution. But each request over the limit still hits your PHP scripts.
Nginx limits are less flexible, but they stop requests before any PHP script runs.
Nginx limits are usually used as DoS protection. A typical task: don't allow too many PHP processes to be spawned from one IP, for example.
Application rate limits are used to protect the application backend from overload; that backend may be a database or an external API. Application limits can also be part of business logic (different rate limits for different tariff plans, etc.).
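As an illustration of the nginx side, a minimal limit_req sketch (the zone name, rate, location, and PHP-FPM socket path are assumptions for the example):

```nginx
# Shared 10 MB zone keyed by client IP, refilled at 5 requests/second.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;

server {
    listen 80;

    location /api/ {
        # Allow short bursts of up to 10 extra requests; reject the rest
        # before any PHP process is spawned.
        limit_req zone=api_limit burst=10 nodelay;
        limit_req_status 429;

        fastcgi_pass unix:/run/php-fpm.sock;  # hypothetical PHP backend
        include fastcgi_params;
    }
}
```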
The difference is in which layer of your web application the rate limit for your API server's calls is configured.
In the first case, Yii2, you configure the limitation directly in PHP code.
With yii\filters\RateLimitInterface you implement the methods in an identity class (the model used to manage the data for the API calls); Yii will then automatically use yii\filters\RateLimiter to add the rate-limit headers to the response.
Conversely, in nginx you set this limitation directly in the HTTP server configuration; the server takes charge of the headers and limits the requests.
The real question here is: "Should I use the Yii2 or the nginx approach?" The answer depends on how you will build your API services.
Many people say that letting the HTTP server take care of this aspect is the most "natural" way; however, Yii2 lets you use PHP to customize the rate limiting, which is an advantage when you are developing an API server of medium/high complexity.
In some (very) rare cases you can combine Yii2 with nginx to obtain something even more custom.

Apigee - Encrypt syslog policy

We are using Apigee Cloud Edge and want to log some additional information about our requests. The Syslog policy seems ideal, but I want to ensure that the log messages are encrypted over the wire. Is this possible using the policy?
Alternatively, I could expose a logging service in our back end and log over HTTPS, but I don't want to slow things down with a synchronous call.
Any thoughts on the best way to achieve this?
No. The syslog policy does not encrypt.
To use your own service to receive data asynchronously, you can try a service callout policy in the PostFlow - Response section of a proxy and make it asynchronous by setting async="true" attribute in the policy XML.
Some variables, however (e.g. request-scoped default variables), are not available in the PostFlow. So you may need to create your own variables at the appropriate point in the flow to log correctly.
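A hedged sketch of such a callout (the target URL, payload fields, and policy name are placeholders; the variables referenced in the payload must be ones you know are still populated at this point in the flow):

```xml
<!-- Sketch only: "LogToBackend" and the URL are hypothetical. -->
<ServiceCallout name="LogToBackend" async="true" continueOnError="true">
    <Request variable="logRequest">
        <Set>
            <Verb>POST</Verb>
            <Payload contentType="application/json">
                {"path": "{proxy.pathsuffix}", "status": "{response.status.code}"}
            </Payload>
        </Set>
    </Request>
    <Response>logResponse</Response>
    <HTTPTargetConnection>
        <URL>https://logging.example.com/events</URL>
    </HTTPTargetConnection>
</ServiceCallout>
```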

Apigee spike arrest applies to each API bundle or all API bundles

When I add a spike arrest policy as pasted below, to my Apigee APIs, does it count all the API calls from that client IP to Apigee to calculate whether the limit was exceeded? Or does it maintain a count per API individually and apply the policy per API/ API bundle?
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest enabled="true" continueOnError="true" async="false" name="SpikeArrestCheck">
<DisplayName>Spike Arrest Policy</DisplayName>
<FaultRules/>
<Properties/>
<Identifier ref="proxy.client.ip"/>
<Rate>100ps</Rate>
</SpikeArrest>
Count is maintained per API bundle, per policy name (org and env is a given). Even if you use the same Identifier across bundles, there is no way to tie different API bundle spike arrests together.
I have tested this using the SpikeArrest policy, observing the value of ratelimit.<spike arrest policy name>.used.count across 2 different API bundles, both policies with the same name and the same Identifier. The 2 buckets/counters are treated independently.
You can set a spike arrest identifier like this:
<SpikeArrest name="SpikeArrest">
<Rate>10ps</Rate>
<Identifier ref="someVariable" />
</SpikeArrest>
The scope of the spike arrest policy above is limited to the current organization, environment, bundle, and policy name. No traffic traveling through a different policy, bundle, environment, or organization will affect the spike arresting of the above policy. In addition, since an identifier is specified, only traffic that has the same value stored in "someVariable" will be "counted" together. If the policy had no identifier specified, all traffic for the same policy, bundle, environment and organization would be counted together.
Note that spike arrests are tracked separately per message processor. They are also currently implemented as rate limiting, not a count. If you specify 100 per second, it means that your requests can only come in one per 10 ms (1/100 sec). A second request within 10 ms on the same message processor will be rejected. A small number is generally not recommended. Even with a large number, if two requests come in nearly simultaneously to the same message processor, one will be rejected.
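The smoothing described above can be sketched as follows (an assumption-laden simplification for illustration, not Apigee's actual implementation): a rate of 100ps is enforced as a minimum gap of 1/100 s = 10 ms between requests on a message processor.

```python
def make_spike_arrest(rate_per_second):
    """Simplified model: allow a request only if at least 1/rate seconds
    have passed since the last allowed request."""
    min_gap = 1.0 / rate_per_second
    last_allowed = [None]  # timestamp of the last allowed request

    def allow(now):
        if last_allowed[0] is None or now - last_allowed[0] >= min_gap:
            last_allowed[0] = now
            return True
        return False

    return allow

allow = make_spike_arrest(100)  # 100ps -> one request per 10 ms
print(allow(0.000))  # True
print(allow(0.005))  # False: only 5 ms after the previous allowed request
print(allow(0.012))  # True: 12 ms gap
```

This is why two near-simultaneous requests to the same message processor can see one rejected even though the nominal per-second budget is far from exhausted.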
Some observations for best practice:
Ideally you should track traffic to your API based on a key that is static regardless of the source. Using an IP address for consumer apps can be too broad: each mobile device is assigned a different IP address, so a per-IP Spike Arrest may never trigger. As a best practice, retrieve the consumer key either through the OAuthV2 policy after validating the token, or directly when the key is provided in the request. The exception is an API that is not publicly accessible to consumer apps, where access is granted to app servers only; even then you may want to manage traffic by implementing key verification.
The counter "bucket" is determined by how you use Identifier. If you don't specify Identifier, then the bucket is the entire API Proxy. Or you can use Identifier Ref to make a more granular bucket. For example, if you wanted to make the bucket be per-developer (assuming you previously did a VerifyApiKey or VerifyAccessToken), you would do this:
<Identifier ref="client_id" />.
And if you wanted to, you could set the bucket to be based on ip address by doing this:
<Identifier ref="client.ip"/>
So the way you configured it, the count would be per client IP.
