Are the Google Cloud Endpoints only for REST?

I have a virtual machine running Cassandra, and now I need (temporarily) to expose this machine to the world (the idea is to run a Cassandra client on some computers in my home/office/...). Is Google Cloud Endpoints the best way to expose this machine to the world?

I am assuming that you are running Cassandra on a Google Compute Engine (GCE) instance. When you create an instance, you can specify that you want a public Internet address to be associated with it, which allows an Internet-connected client application to reach it at that address. The IP address can be ephemeral (GCP may change it over time) or static (I believe there is a modest charge for its allocation). When a client attempts to connect to the software running on the instance, the firewall will (by default) block the vast majority of incoming connections. Fortunately, since you own the instance, you also own the firewall configuration. If we look here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureFireWall.html
we see the set of ports needed for different purposes. This gives us a hint as to what firewall rule changes to make.
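For example, here is a minimal gcloud sketch. The address name, firewall rule name, instance tag, region, and client CIDR are placeholders to adapt; 9042 is Cassandra's native CQL transport port from that list:

    # Reserve a static external IP in the VM's region (an existing ephemeral
    # address can also be promoted by passing it via --addresses).
    gcloud compute addresses create cassandra-ip --region=us-central1

    # Open the CQL native-transport port (9042) -- but only to networks you
    # trust, never 0.0.0.0/0, since this exposes the database to the Internet.
    gcloud compute firewall-rules create allow-cassandra-cql \
        --allow=tcp:9042 \
        --source-ranges=203.0.113.0/24 \
        --target-tags=cassandra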
Cloud Endpoints is for exposing APIs that YOU develop in your own applications, and it doesn't feel like an appropriate component for accessing Cassandra.

Related

In Transit Encryption

I'm currently developing an application for a client, and their requirement is that the application encrypts data in transit and at rest. I assured them that it does, and was asked to provide documentation to back that up. I referenced this documentation from Google Cloud's website. They replied by asking whether my claim stands in light of the following section:
Using a connection directly to a VM using an external IP or network load balancer IP
If you are connecting via the VM's external IP, or via a network-load-balanced IP, the connection does not go through the GFE. This connection is not encrypted by default and its security is provided at the user's discretion.
My mobile application uses the Firebase SDK to talk to the Firebase database and Firebase functions. I have no background in networking, nor do I understand what exactly is being referenced here despite Googling the concepts. Is my data still encrypted? Does the above section apply to my use case?
No, that section applies only to VMs and network load balancers. Both Cloud Functions (as long as you're using HTTPS for all requests) and the Firebase Realtime Database encrypt data in transit.
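If you want to see the TLS handshake for yourself, a quick check with curl works; the function URL below is a placeholder for your own deployed HTTPS function:

    # Fetch the function over HTTPS and print the negotiated TLS details.
    # (curl's verbose output goes to stderr, hence the 2>&1.)
    curl -svo /dev/null https://us-central1-my-project.cloudfunctions.net/myFunction 2>&1 \
        | grep -E 'SSL connection|subject:|issuer:'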

Firebase Cloud Functions fixed IP [duplicate]

I would like to develop a Google Cloud Function that will subscribe to file changes in a Google Cloud Storage bucket and upload the file to a third party FTP site. This FTP site requires allow-listed IP addresses of clients.
As such, is it possible to get a static IP address for Google Cloud Functions containers?
Update: This feature is now available in GCP: https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
First of all, this is not an unreasonable request, so don't get gaslit. AWS Lambda has supported this feature for a while now. If you're interested in this feature, please star this feature request: https://issuetracker.google.com/issues/112629904
Secondly, we arrived at a workaround, which I also posted to that issue; maybe it will work for you too:
Set up a VPC Connector.
Create a Cloud NAT on the VPC.
Create a proxy host without a public IP, so its egress traffic is routed through Cloud NAT.
Configure a Cloud Function that uses the VPC Connector and routes all outbound traffic through the proxy server (a gcloud sketch of these steps follows below).
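Here is a rough gcloud sketch of that workaround. All names, the region, the IP range, and the proxy address/port are hypothetical placeholders, and the proxy software itself (e.g. Squid) still has to be installed on the proxy host:

    # 1. Serverless VPC Access connector for the functions
    gcloud compute networks vpc-access connectors create fn-connector \
        --region=us-central1 --network=default --range=10.8.0.0/28

    # 2. Cloud NAT on the VPC, using a reserved static address
    gcloud compute addresses create nat-ip --region=us-central1
    gcloud compute routers create nat-router --network=default --region=us-central1
    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --nat-external-ip-pool=nat-ip \
        --nat-all-subnet-ip-ranges

    # 3. Proxy host with no public IP, so its egress leaves via Cloud NAT
    gcloud compute instances create egress-proxy \
        --zone=us-central1-a --no-address

    # 4. Function on the connector; the proxy address is handed to the HTTP
    #    client (many clients honor an HTTP_PROXY environment variable)
    gcloud functions deploy my-function \
        --runtime=nodejs20 --trigger-http \
        --vpc-connector=fn-connector \
        --set-env-vars=HTTP_PROXY=http://10.128.0.10:3128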
A caveat to this approach:
We wanted to put the proxy in a Managed Instance Group behind a GCP Internal Load Balancer so that it would scale dynamically, but GCP Support has confirmed this is not possible: the GCP ILB essentially allow-lists the subnet, and the Cloud Function CIDR is outside that subnet.
I hope this is helpful.
Update: Just the other day, they announced an early-access beta for this exact feature!!
"Cloud Functions PM here. We actually have an early-access preview of this feature if you'd like to test it out.
Please complete this form so we can add you..."
The form can be found in the Issue linked above.
See answer below -- it took a number of years, but this is now supported.
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
For those wanting to associate Cloud Functions with a static IP address in order to whitelist the IP for an API or something of the sort, I recommend this step-by-step guide, which helped me a lot:
https://dev.to/alvardev/gcp-cloud-functions-with-a-static-ip-3fe9
I also want to note that this solution works for both Google Cloud Functions and Firebase Functions (since the latter are based on GCP).
This functionality is now natively part of Google Cloud Functions (see here).
It's a two-step process according to the GCF docs (a gcloud sketch follows the quote):
Associating function egress with a static IP address
In some cases, you might want traffic originating from your function to be associated with a static IP address. For example, this is useful if you are calling an external service that only allows requests from whitelisted IP addresses.
Route your function's egress through your VPC network. See the previous section, Routing function egress through your VPC network.
Set up Cloud NAT and specify a static IP address. Follow the guides at Specify subnet ranges for NAT and Specify IP addresses for NAT to set up Cloud NAT for the subnet associated with your function's Serverless VPC Access connector.
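Concretely, the two steps might look like this with gcloud; the connector, router, and subnet names here are placeholders and are assumed to already exist:

    # Step 1: send ALL egress (not just private ranges) through the connector.
    gcloud functions deploy my-function \
        --vpc-connector=fn-connector \
        --egress-settings=all

    # Step 2: NAT the connector's subnet through a reserved static address.
    gcloud compute addresses create fn-egress-ip --region=us-central1
    gcloud compute routers nats create fn-nat \
        --router=nat-router --region=us-central1 \
        --nat-external-ip-pool=fn-egress-ip \
        --nat-custom-subnet-ip-ranges=fn-subnet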
Refer to the link below:
https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip
As per Google, the feature has been released; check out the whole thread:
https://issuetracker.google.com/issues/112629904
It's not possible to assign a static IP to Google Cloud Functions, as a fixed address is pretty much at odds with the 'serverless' nature of the architecture, i.e. servers being allocated and deallocated on demand.
You can, however, leverage an HTTP proxy to achieve a similar effect. Set up a Google Compute Engine instance, assign it a static IP, and install a proxy library such as https://www.npmjs.com/package/http-proxy. You can then route all your external API calls etc. through this proxy (a sketch of the instance-side setup follows).
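A minimal sketch of the instance side; the address name, instance name, zone, and machine type are placeholders, and the proxy process itself still needs to be installed and run on the VM:

    # Reserve a static address and boot a small proxy VM bound to it.
    gcloud compute addresses create proxy-ip --region=us-central1
    gcloud compute instances create api-proxy \
        --zone=us-central1-a \
        --machine-type=e2-small \
        --address=proxy-ip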
This probably reduces scale and flexibility, but it can serve as a workaround.

Latency between two Azure VMs

I have a chatty ASP.NET 4.5 web application hosted on one Large (4 cores, 7 GB) Azure VM. The web application is loosely coupled to the data tier via a dedicated WCF service. The application database is hosted by a dedicated SQL Server instance on another Large (4 cores, 7 GB) Azure VM. The WCF endpoint communicates with the DB VM via an ASP.NET connection string that uses the DB VM's public DNS name, e.g. xyz.cloudapp.net.
The WEB and DB VMs appear to be in different subnets (their addresses differ from the second octet onwards), but both are in the same Azure location.
When running the exact same solution on one Medium (2 cores, 3.5 GB) Azure VM, latency is much lower.
I am looking for suggestions on how to reduce the WEB-to-DB latency as much as possible.
If you have two VMs in the same data center that need to communicate with each other, don't use their public DNS names. Create an Affinity Group, create a virtual network in that affinity group, and then place both VMs in the virtual network (you might need to shut them down, delete them without deleting their VHDs, and then recreate them from those disks in the new vnet).
Accessing VMs through their public DNS (and thus through the Azure load balancer) adds about 0.5 ms of latency to each request. For a chatty app that makes, say, 200 sequential DB calls per page, that is an extra ~100 ms per page, so it is not recommended.
It almost sounds like you have the two VMs running in two separate cloud services. Might I suggest placing both machines in the same cloud service? This should allow you to access the database server from the web tier via its short DNS name (aka the server name). This should not only help secure the database server by allowing you to remove any input endpoints you have declared on it, but also reduce latency, since calls will be made directly from one VM to another and will not pass through the Azure Fabric load balancer (which fronts all calls coming to the cloud service URL).
In this blog post, I have measured latency in various network configurations:
https://nicolgit.github.io/azure-measuring-latency-across-availability-zones-in-we/
I think it is useful for understanding the latency impact of different "typical" network architectures.

Azure: How to connect one cloud service with other in one virtual network

I want to deploy a backend WCF service in a WebRole in Cloud Service 1, with only an Internal Endpoint, and deploy an ASP.NET MVC frontend in a WebRole in Cloud Service 2.
Is it possible to use an Azure Virtual Network to call the backend from the frontend via the Internal Endpoint?
UPDATE: I am just trying to build a simple SOA architecture like this:
Yes and No.
An internal endpoint essentially means that the role instance has been configured to accept traffic on a given port, but that port can NOT receive traffic from outside of the cloud service (hence it being "internal" to the cloud service). Internal endpoints are also not load balanced, so you're going to need to "juggle" traffic management from the callers yourself.
Now here is where the issues arise: a virtual network allows you to securely traverse cloud service boundaries, letting a role instance in cloud service 1 call a role instance in cloud service 2. However, to do this, the calling role instance needs to know how to address the receiving instance. If they were in the same cloud service, you could crawl the cloud service topology via the RoleEnvironment class. But this class only works within the cloud service it exists in; it is not aware of the virtual network.
Now, you could have the receiving role instance publish its FQDN to a shared area (say, Azure Table Storage). However, a cloud service will only use its own internal DNS resolution (which can only resolve short names within the same cloud service) unless you have configured the virtual network with a self-hosted DNS server.
So yes, you can do what you're trying to accomplish, but it does present some challenges. Given this, I'd have to ask whether the convenience of separate deployment is enough to justify the additional complexity of the solution. If so, I'd also look at whether there's a better way to interconnect the two services than direct calls (such as a queue-based pattern).
@BrentDaCodeMonkey makes some very valid points in his answer, so read that first.
I, personally, would not want to give up automatic discovery and scale via load balancing. My suggestion would be to expose the WCF endpoint via an Azure Service Bus Relay endpoint. This gives you a "fixed" address to communicate with (solving the discovery issue) and effectively unlimited scalability, because multiple servers can register and listen on the same Service Bus relay address. Additionally, it introduces some basic security into the mix via shared-key authentication when your web application(s) connect to your WCF services.
If you co-locate the Service Bus instance with your Cloud Services the overhead of the relay in the middle is totally negligible and, IMHO, worth it for the benefits explained above.
