Preventing Replay Attacks in Google Apigee

We are working on a payment service where we want to make sure that a request to the service is not being replayed, whether on purpose or accidentally. We are going to be using Google Apigee as our API gateway. Is there some policy or configuration setting so that we can set this up in Apigee itself? We are hoping to avoid having to code this in our services.
I am finding some hits on Google with "Apigee" and "replay attack", but they just include the term in a sentence, and then never explain how Apigee does it or how to set it up.

Apigee itself provides strong security features for your APIs, but Google also offers a separate product, Apigee Sense, that you should take a look at. As a recommendation, you should include the Spike Arrest policy in your API proxies; it will help prevent misuse or attacks.
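For example, a minimal Spike Arrest policy attached to a proxy might look like this (the policy name and rate here are placeholders for your own values):

```xml
<SpikeArrest name="SA-ProtectBackend">
  <!-- Allow roughly 30 requests per second; Apigee smooths this
       into short sub-intervals rather than counting a raw total -->
  <Rate>30ps</Rate>
</SpikeArrest>
```

Note that Spike Arrest throttles traffic volume; it does not by itself detect that a specific request has been replayed.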
Hope this helps,
Regards.

Related

How do the Spike Arrest and Quota policies work in parallel in Apigee?

I am investigating an issue related to rate limiting on my server. The previous developer set up the Spike Arrest and Quota policies in Apigee. I have read the documentation, but I am unable to understand how the two policies work in parallel.
For example:
A client (web or mobile) calls the API, and more than 100 concurrent users access it. Which policy is applied: Spike Arrest or Quota?
If anyone has real-world experience with this, please provide some insight.
Thanks
The particular behavior of the API proxy will depend on the placement of the two policies within flows, but assuming a standard request flow with serial policies, then generally the spike-arrest policy will protect your back-end services in aggregate, while the quota policy will enforce rate-limits on some chosen client-specific criteria. Thus one is a general overall safety protection for your business-logic back-end (spike arrest), and the other is more for enforcing client-specific constraints as dictated by your end-to-end application design and expected use-case interactions (quota). Both are configurable though, so the details of those configurations matter in the final analysis.
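As a sketch (names and values are illustrative), the two policies are configured quite differently: Spike Arrest takes a smoothed rate, while Quota counts requests per identifier over an interval:

```xml
<!-- Smooths overall traffic to protect the backend -->
<SpikeArrest name="SA-Overall">
  <Rate>100ps</Rate>
</SpikeArrest>

<!-- Enforces a per-client allowance, keyed here on the client_id variable -->
<Quota name="Q-PerApp">
  <Identifier ref="client_id"/>
  <Allow count="1000"/>
  <Interval>1</Interval>
  <TimeUnit>hour</TimeUnit>
</Quota>
```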
Comparison docs are here: https://docs.apigee.com/api-platform/develop/comparing-quota-spike-arrest-and-concurrent-rate-limit-policies

Cloud Functions - DDoS protection with max instances cap + node express rate limiter?

I've been using Cloud Functions for a while and it's been great so far - though, it seems like there's no builtin way to set limits on how often the function is invoked.
I've set the max # instances to a reasonable number, but for the # invocations, Firebase doesn't really provide a way to set this. Would using a Node package that limits or slows down requests, when combined with the limited max instances be sufficient to slow down attacks if they happen?
Also know Cloud Endpoints exist - I'm pretty new to OpenAPI and it seems like something that should just be integrated with Functions at an additional cost... but wondering if that would be a good solution too.
Pretty new to all this so appreciate any help!
If you use only Google Cloud services (I don't know what other cloud providers offer to solve your issue, or whether an existing framework covers it), you can limit unwanted access at several layers.
Firstly, Google Front End (GFE) protects all Google resources (Gmail, Maps, Cloud, your Cloud Functions, ...), especially against common layer 3 and layer 4 DDoS attacks. This layer is also in charge of TLS connection establishment and will discard bad connections.
Activate "private mode". This mode forbids unauthenticated requests. With this feature, Google Front End checks whether:
An id_token is present in the request header
The token is valid (correct signature, not expired)
The identity in the token is authorized to access the resource
-> Only valid requests reach your service, and you pay only for those. All the bad traffic is processed, and "paid for", by Google.
Use a load balancer with Cloud Armor activated. You can also customize your WAF policies if you need them. Put it in front of your Cloud Functions thanks to the serverless NEG feature.
If you use API keys, you can use Cloud Endpoints (or API Gateway, a managed version of Cloud Endpoints), where you can enforce rate limits per API key. I wrote an article on this (Cloud Endpoints + ESPv2).
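On the question of using a Node rate-limiting package: it can help, but remember that with several function instances each instance keeps its own counters, so an in-memory limit is per instance rather than global. A minimal fixed-window sketch of the idea (a hypothetical helper, no external dependencies):

```javascript
// Fixed-window rate limiter; counters live in this instance's memory only.
function makeRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key (e.g. client IP) -> { count, resetAt }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      // First request in a fresh window: reset the counter
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}
```

A package such as express-rate-limit implements the same idea as Express middleware; for a truly global limit across instances you would need a shared store behind it.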

NGINX as a reverse proxy for Firebase Functions to protect from DDoS attacks?

We are currently evaluating whether it is ideal to add an NGINX web server layer in front of Firebase Functions for the following reasons:
Handle DDoS attacks
Rate Limiting
OAuth token validation
We see that Firebase Functions are very open to any kind of abuse attack.
Does this kind of architecture add any extra problems?
There are other ways you can handle DDoS, rate limiting, and OAuth token validation; I would suggest you take a look at this other question, where there is an explanation of your options for securing Firebase Functions.
Another resource you might want to check is the Firebase documentation, specifically where they suggest using Express.js middleware to deal with DDoS attacks and to secure your functions.
Finally, you can use NGINX as a sort of reverse proxy if you are more familiar with it; the only extra problem would really be that you add an extra layer that you then need to manage yourself.
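If you do go the NGINX route, its built-in limit_req module covers the rate-limiting piece. A hedged sketch (the zone name, rate, and upstream URL are placeholders for your setup, and the server block would still need your TLS certificate directives):

```nginx
# Track clients by IP; allow a sustained 10 requests/second each
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name api.example.com;

    location / {
        # Absorb short bursts of up to 20 requests, reject the rest
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass https://us-central1-your-project.cloudfunctions.net/;
    }
}
```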
Hope you find this useful!

How To Validate HTTP Reports from User Agents

With the new HTTP reporting headers being developed and refined, it seems more important than ever to be able to tell/validate where the reports are coming from.
For example, someone attempting to "hack" the site can very easily flood the reporting endpoint with false reports, drowning out the details of what they're attempting. It's also a vector for a DDOS attack.
Is there some mechanism for doing this aside from obfuscation?
Do the User Agents sign their reports?
Any advice would be much appreciated!
I took a quick glance through the standard draft for the Report-To header, but it doesn't seem to touch on it.
One thought on application-level mitigation: record the IPs of all clients that are connected and authenticated, and only accept reports from IPs that are whitelisted in this way. This assumes that the browser sends its reports directly from the client machine (I believe this is the case, but can anyone confirm?).
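That application-level idea can be sketched as a small allowlist with a time-to-live (a hypothetical helper; the names and the expiry scheme are assumptions):

```javascript
// Remember authenticated client IPs for a while; accept reports only from them.
function makeReportAllowlist(ttlMs) {
  const seen = new Map(); // ip -> expiry timestamp (ms)
  return {
    // Call on each authenticated request from a client
    touch(ip, now = Date.now()) { seen.set(ip, now + ttlMs); },
    // Call at the reporting endpoint before accepting a report
    allows(ip, now = Date.now()) {
      const until = seen.get(ip);
      return until !== undefined && now < until;
    },
  };
}
```

Note that clients behind the same NAT share an IP, so this filters out noise rather than providing strong authentication of a report's origin.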

Restrict number of requests for particular mapping in Spring context

I am just unsure whether Spring has any mechanism to prevent users or malicious bots from spamming, for example, a registration request hundreds of times on my web app.
Does Spring offer this kind of protection under the hood, and if it does not, in which direction should I look? Some magical property in Spring Security?
Also, does AWS provide any protection against this kind of brute-force attack when my application is deployed there?
The short answer to both your questions is no. There are no built-in mechanisms in either Spring or Amazon Web Services to prevent this.
You will likely have to provide your own implementation to prevent excessive access to your API.
There are a couple of useful resources that can help:
Jeff Atwood's piece on throttling failed log-in attempts should give you a good starting point on how to implement a good strategy for this.
Spring Security's Authorization architecture is really well designed and you can plug in your own implementations fairly easily. It is well documented too.
There is the official Amazon Web Services documentation for using Security Groups, which again should help you ensure you're running on AWS with the least permissions possible in terms of network access.
Finally you could look at a service like Fail2Ban for monitoring log files and blocking malicious requests.
So, in short, there isn't a simple ready-to-roll solution, but the above resources should get you on the road to something that follows best practices for preventing malicious attempts to access your system.
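As a language-agnostic sketch of the throttling strategy Atwood describes (escalating lockouts after failed attempts; the names and delays here are illustrative), which you could adapt into a Spring filter or an authentication-failure listener:

```javascript
// Escalating lockout: each consecutive failure doubles the wait time.
function makeLoginThrottle(baseDelayMs) {
  const failures = new Map(); // key (username or IP) -> { count, lockedUntil }
  return {
    canAttempt(key, now = Date.now()) {
      const f = failures.get(key);
      return !f || now >= f.lockedUntil;
    },
    recordFailure(key, now = Date.now()) {
      const f = failures.get(key) ?? { count: 0, lockedUntil: 0 };
      f.count += 1;
      // Lock for baseDelay, then 2x, 4x, ... on repeated failures
      f.lockedUntil = now + baseDelayMs * 2 ** (f.count - 1);
      failures.set(key, f);
    },
    recordSuccess(key) { failures.delete(key); },
  };
}
```

Keying on the username (rather than only the IP) also slows distributed guessing against a single account.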
