We had a third-party security company review our site, and they came back saying we should update to TLS 1.2, but I'm not sure how to do that on GCP. They also said we should update our SSL ciphers to more than 112 bits; I'm not sure how to do that either. If anyone knows how to fix these, or has links explaining how, that would be amazing.
According to the information you provided, you're using Apache with Cloud Functions and Firestore; please correct me if I'm wrong.
On the Google side, if you have a Google Cloud Load Balancer (GCLB) with serverless NEGs (Cloud Run, GAE, or GCF custom domains), you can define an SSL policy to restrict the TLS versions and cipher suites used. Before you configure your TLS version, I suggest checking the SSL policies overview.
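As a rough sketch only, assuming a GCLB with an existing global target HTTPS proxy and the @google-cloud/compute Node.js client (v3+), it could look roughly like the snippet below. The policy and proxy names are placeholders, the exact request field names should be checked against the client reference, and the same thing can be done in the console or with gcloud:

```typescript
// Rough sketch: create an SSL policy that requires TLS 1.2+ and strong
// cipher suites, then attach it to the load balancer's HTTPS proxy.
// Assumes @google-cloud/compute v3+ and application default credentials.
import {SslPoliciesClient, TargetHttpsProxiesClient} from '@google-cloud/compute';

const project = 'my-gcp-project'; // placeholder project ID

async function enforceTls12(): Promise<void> {
  const sslPolicies = new SslPoliciesClient();

  // The RESTRICTED profile drops weak (e.g. 112-bit 3DES) cipher suites,
  // and minTlsVersion TLS_1_2 rejects TLS 1.0/1.1 handshakes.
  // In real code, wait for the returned operation to complete.
  await sslPolicies.insert({
    project,
    sslPolicyResource: {
      name: 'min-tls12-policy',       // placeholder policy name
      minTlsVersion: 'TLS_1_2',
      profile: 'RESTRICTED',
    },
  });

  // Point the load balancer's target HTTPS proxy at the new policy.
  const proxies = new TargetHttpsProxiesClient();
  await proxies.setSslPolicy({
    project,
    targetHttpsProxy: 'my-https-proxy', // placeholder proxy name
    sslPolicyReferenceResource: {
      sslPolicy: `projects/${project}/global/sslPolicies/min-tls12-policy`,
    },
  });
}

enforceTls12().catch(console.error);
```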
Otherwise, if you have another setup, I suggest checking the link that #John Hanley shared with you in the comments.
Context: I just got an email, which I believe is spam, from admin#typingchimp.com saying my auth users' accounts can be stolen and asking if there are security bounties. I use Firebase Auth, and it should be easy to see that by checking the client-side JS code. Although I think it's spam, it leads me to ask:
Are there any known security vulnerabilities or recommended security-related settings for Firebase Auth? Perhaps an article or documentation beyond https://firebase.google.com/docs/rules/basics or https://firebase.google.com/docs/rules/rules-and-auth?
PS: This is an auth-only question, but yes, my Realtime DB restricts read access to the signed-in user and doesn't allow write access. No other settings have been changed beyond this. My site uses SSL, of course.
I know Google limits individual IPs from making a bunch of failed login attempts and will block them temporarily.
I regularly get these types of emails (spam!) indicating that they have already found security flaws, or that they will, for a "finder's fee". It is a marketing campaign trying to drum up sales activity.
Firebase Authentication was designed for, and is in use by, millions of apps. Hundreds of millions (or billions?) of accounts live in Firebase Auth. If there are vulnerabilities in the service, we will learn of them rapidly.
There is the potential that your particular use of Firebase Auth does not follow secure practices; for example, if you have checked your API keys into a publicly available code repository.
However, if you follow the (fairly straightforward) "getting started" and recommendations docs from the Firebase team, odds are that your app is just fine.
I've been using Cloud Functions for a while and it's been great so far, though it seems like there's no built-in way to set limits on how often a function is invoked.
I've set the max number of instances to a reasonable number, but Firebase doesn't really provide a way to cap the number of invocations. Would using a Node package that limits or slows down requests (roughly the sketch below), combined with the limited max instances, be sufficient to slow down attacks if they happen?
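To be concrete, I mean something roughly like this, with express and express-rate-limit as an illustration (the window, limit, route, and instance cap are placeholders I made up); as far as I understand, the counter lives in each instance's memory, so with several instances it only slows abuse rather than enforcing a hard global cap:

```typescript
// Rough sketch: an HTTPS function wrapped in a per-IP rate limiter.
// Note: the counter is kept in this instance's memory, so the effective
// global limit is roughly max * number_of_running_instances.
import * as functions from 'firebase-functions';
import express from 'express';
import rateLimit from 'express-rate-limit';

const app = express();

app.use(
  rateLimit({
    windowMs: 60 * 1000, // 1-minute window
    max: 30,             // 30 requests per IP per window, per instance
  })
);

app.get('/data', (_req, res) => {
  res.json({ok: true}); // placeholder handler
});

// Cap the number of instances as well, as mentioned above.
export const api = functions
  .runWith({maxInstances: 5})
  .https.onRequest(app);
```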
I also know Cloud Endpoints exists - I'm pretty new to OpenAPI, and it seems like something that should just be integrated with Functions at an additional cost... but I'm wondering if that would be a good solution too.
I'm pretty new to all this, so I'd appreciate any help!
If you use only Google Cloud services (I don't know what the other cloud providers offer to solve your issue, or whether an existing framework does), you can limit the unwanted access at several layers.
Firstly, the Google Front End (GFE) protects all Google resources (Gmail, Maps, Cloud, your Cloud Functions, ...), especially against common layer 3 and layer 4 DDoS attacks. In addition, this layer is in charge of establishing TLS communication and will discard bad connections.
Activate the "private mode". This mode forbid the unauthenticated request. With this feature, Google Front End will check if
An id_token is present in the request header
The token is valid (correct signature, not expired)
The identity in the token is authorized to access the resource.
-> Only valid requests reach your service, and you pay only for those. All the bad traffic is processed by Google and "paid for" by Google.
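For illustration, with unauthenticated access turned off a legitimate caller has to present an ID token. With the Node.js google-auth-library, that looks roughly like this (the function URL is a placeholder, and the caller's service account still needs the invoker role on the function):

```typescript
// Rough sketch: calling a Cloud Function that requires authentication.
// google-auth-library mints an ID token whose audience is the function URL;
// requests without a valid token are rejected before they reach your code.
import {GoogleAuth} from 'google-auth-library';

const functionUrl = 'https://REGION-PROJECT.cloudfunctions.net/myFunction'; // placeholder

async function callPrivateFunction(): Promise<void> {
  const auth = new GoogleAuth();

  // The returned client adds "Authorization: Bearer <id_token>" automatically.
  const client = await auth.getIdTokenClient(functionUrl);
  const res = await client.request({url: functionUrl});

  console.log(res.data);
}

callPrivateFunction().catch(console.error);
```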
Use a load balancer with Cloud Armor activated. You can also customize your WAF policies if you need them. Put it in front of your Cloud Functions thanks to the serverless NEG feature.
If you use API keys, you can use Cloud Endpoints (or API Gateway, a managed version of Cloud Endpoints), where you can enforce a rate limit per API key. I wrote an article on this (Cloud Endpoints + ESPv2).
I want to set specific domains in the Azure QnA Maker app CORS settings, not *. Can anyone let me know what the required domains are for that? I can't find any documentation about this online.
When you create your QnA Maker resource, you are creating an application (web app) to host your endpoint for queries.
This is due to the QnA Maker architecture.
So if you want to implement specific CORS rules, go to your Web App and set the CORS rules in the dedicated tab.
Additional edit:
Based on the comments (thanks #sumit sharma), the necessary domains are:
qnamaker-service.trafficmanager.net
qnamaker.ai
The answer from Nicholas R gives you everything you need to solve this issue except the domains to add (I tried to edit it, but the edit queue is full). If you are looking for the domains to add, they should be, at a minimum, https://qnamaker-service.trafficmanager.net and https://www.qnamaker.ai. I added these and have not had any issues with the service since removing "allow all".
I have successfully deployed a Google Cloud Endpoints Developer Portal for my API running on Endpoints. I would like to provide testing access to people outside my organisation who are not using GCP in their projects.
Logging in to the portal works correctly if I enable the Service Consumer role for these people (on a per-email basis). However, when they open it for the first time, they are asked to grant some extra permissions to the portal.
This form can raise totally unnecessary security concerns. Does anyone know why it is needed?
I only want my clients to be able to test my API using a GUI before they start connecting their projects (not necessarily on GCP) to mine. This seems like a valid use case to me; however, I might be misunderstanding some basic concepts.
Or should I submit a feature request to Google for a new role that only enables access to the portal, and nothing else, so no such forms are shown?
Since Endpoints APIs must be explicitly shared with customers, the portal needs to verify that the logged-in user has permission to view that Endpoints API. So the short answer is that these scopes are being requested primarily so the portal can check the user's access to this API.
The longer answer is that we (the Endpoints team) are looking into whether it's possible to build narrower OAuth scopes that correspond to the access checks we perform. We agree that the access request is unnecessarily broad and are hoping to improve this in the future. Thanks for your comment!
I'm trying to set up a very simple Azure deployment that consists of a few Web Apps and an API Management gateway through which all traffic is directed. The problem I currently have is that I am unsure of the best way to block traffic from going directly to the Web Apps and bypassing the gateway. Is there a 'best practice' mechanism for ensuring only traffic from the gateway is allowed through?
I've seen suggestions for IP range blocking and 'secret key' implementations; however, I wonder whether there is a better way.
Thanks
There are a few options:
1. IP whitelisting
2. Secret key
3. Basic auth
4. Mutual cert auth
IMO #4 is the best way. You can find more information on how to enable the feature in API Management and Web Apps here (a rough sketch of the Web App side follows the links):
https://azure.microsoft.com/en-us/blog/enabling-client-certificate-authentication-for-an-azure-web-app/
https://azure.microsoft.com/en-us/documentation/articles/api-management-howto-mutual-certificates/
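For illustration only (this is not from the linked articles): with client certificates enabled on the Web App, App Service forwards the certificate that API Management presents in the X-ARR-ClientCert header, and if the Web App happens to be a Node app, the check could look roughly like this (the expected fingerprint is a placeholder):

```typescript
// Rough sketch: reject any request that doesn't carry the gateway's client certificate.
// App Service forwards the client certificate as base64-encoded DER in the
// X-ARR-ClientCert header; requires Node 15.6+ for crypto.X509Certificate.
import express, {NextFunction, Request, Response} from 'express';
import {X509Certificate} from 'crypto';

// Placeholder: SHA-256 fingerprint of the certificate uploaded to API Management.
const EXPECTED_FINGERPRINT = 'AA:BB:CC:...';

function requireGatewayCert(req: Request, res: Response, next: NextFunction): void {
  const header = req.header('X-ARR-ClientCert');
  if (!header) {
    res.status(403).send('Client certificate required');
    return;
  }
  try {
    const cert = new X509Certificate(Buffer.from(header, 'base64'));
    if (cert.fingerprint256 !== EXPECTED_FINGERPRINT) {
      res.status(403).send('Unknown client certificate');
      return;
    }
    next();
  } catch {
    res.status(403).send('Invalid client certificate');
  }
}

const app = express();
app.use(requireGatewayCert);
app.get('/', (_req, res) => {
  res.send('Hello from behind the gateway');
});
app.listen(Number(process.env.PORT) || 3000);
```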
I thought I would add my own answer, as I found that the new portal now has an 'API App', which allows access to the API to be internal only, e.g. through the gateway.
This seems to do exactly what I was trying to achieve!
https://azure.microsoft.com/en-gb/documentation/articles/app-service-api-apps-why-best-platform/