Internal-only HTTP Cloud Function unreachable - firebase

I am trying to set up an HTTP Cloud Function that allows only internal traffic, as explained in Google's public docs.
However, when I try to access the function from a GCE instance that does not have an external IP address, it does not work and gives me the following error:
[screenshot: error message]
As you can see in the following screenshots, I have both the Function and the GCE instance in the same region:
[screenshot: Functions]
[screenshot: GCE network interface]
This project has only one VPC network, the default one, and the source code for the Cloud Function is the default suggested by the Console:
[screenshot: Cloud Function source code]
Strangely enough, if I give the GCE instance an external IP address it works. Does that mean the traffic is still going over the internet?

If a Compute Engine instance lacks an external IP address, it can only send packets to internal IP destinations. Google APIs and services, however, are served from a set of external IP addresses. You can still reach them by enabling Private Google Access on the subnet used by the VM/function.
Documentation: https://cloud.google.com/vpc/docs/configure-private-google-access
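If you manage the project with gcloud, that's a one-line change. A minimal sketch, assuming the function and VM sit on the default subnet in us-central1 (both placeholders):

    # Enable Private Google Access on the subnet used by the VM/function
    # ("default" and "us-central1" are placeholders for your subnet and region)
    gcloud compute networks subnets update default \
        --region=us-central1 \
        --enable-private-ip-google-access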

Related

GCP - No Cloud NAT but given public IP leaves VPC

We have a VPC whose VMs have private IP addresses only. There is no Cloud NAT attached to this VPC, so we should not be able to reach public IPs.
Despite the above, we found that we were able to curl the following public IP address from an internal VM:
64.233.166.153
The subnet of the VM has Private Google Access enabled and there is a default route to the default internet gateway, no other route entry matches for this IP. But there is no Cloud NAT.
My questions:
How is it possible to reach public IPs without NAT at all?
Are there other reachable public IPs? (without Cloud NAT)
What are these IPs used for?
It looks like the IP address belongs to a GCP resource/API.
As per the GCP documentation [1], when Private Google Access (PGA) is enabled on the subnet used by a VM's network interface, instances without an external IP can connect to the set of external IP addresses used by Google APIs and services.
That is the likely reason your VM was able to talk to the public IP.
[1] https://cloud.google.com/vpc/docs/configure-private-google-access
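If you want to confirm that PGA is what makes this work, you can read the flag back from the subnet. A quick sketch, with subnet name and region as placeholders:

    # Prints "True" when Private Google Access is enabled on the subnet
    gcloud compute networks subnets describe default \
        --region=us-central1 \
        --format="get(privateIpGoogleAccess)"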
The answer provided by @dp nulletla is right.
@Robert - for the use case you mentioned in the comments - reaching the BigQuery API from a GCE instance with a private IP without leaving Google's backbone network - I believe VPC Private Service Connect (PSC) for Google APIs is the right solution approach for you.
By default, if you have an application that uses a Google service, such as Cloud Storage, your application connects to the default DNS name for that service, such as storage.googleapis.com. Even though the IP addresses for the default DNS names are publicly routable, traffic sent from Google Cloud resources remains within Google's network.
With Private Service Connect, you can create private endpoints using global internal IP addresses within your VPC network. You can assign DNS names to these internal IP addresses with meaningful names like storage-vialink1.p.googleapis.com and bigtable-adsteam.p.googleapis.com. These names and IP addresses are internal to your VPC network and any on-premises networks that are connected to it using Cloud VPN tunnels or VLAN attachments. You can control which traffic goes to which endpoint, and can demonstrate that traffic stays within Google Cloud.
Basically, when you create a PSC endpoint, you assign a private IP address to it, and you then reach the respective Google API (e.g. BigQuery) via that endpoint IP. This way you can control egress traffic in your VPC firewall rules with a deny-all rule plus an allow for the PSC endpoint IP only.
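For illustration, creating such an endpoint with gcloud looks roughly like this; the endpoint name, internal IP, and network below are assumptions, not values from your setup:

    # Reserve a global internal IP address for the PSC endpoint
    gcloud compute addresses create psc-endpoint-ip \
        --global \
        --purpose=PRIVATE_SERVICE_CONNECT \
        --addresses=10.10.0.2 \
        --network=default

    # Create the endpoint; the "all-apis" bundle covers BigQuery among others
    gcloud compute forwarding-rules create pscgoogleapis \
        --global \
        --network=default \
        --address=psc-endpoint-ip \
        --target-google-apis-bundle=all-apis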
Additionally, you can go one step further and restrict the traffic/data going to the BigQuery APIs from your GCE/VPC at a more granular level with VPC Service Controls. By setting a VPC SC perimeter you can define and enforce more restrictive policies to avoid any sort of data exfiltration.
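A rough sketch of such a perimeter (the perimeter name, project number, and policy ID are placeholders):

    # Restrict BigQuery so it is only reachable from resources inside the perimeter
    gcloud access-context-manager perimeters create bq_perimeter \
        --title="bq-perimeter" \
        --resources=projects/123456789012 \
        --restricted-services=bigquery.googleapis.com \
        --policy=1234567890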

How to connect to Community Edition Databricks Cluster via Outside Public Address / Application

Can someone let me know if it's possible to connect to, or PING, a Databricks cluster via its public IP address?
For example, I have issued the command ping --all-ip-addresses and I get the IP address 10.172.226.115.
I would like to be able to PING that IP address (10.172.226.115) from my on-premises PC, or connect to the cluster with an application using that IP address.
Can someone let me know if that is possible?
That public IP is not guaranteed to be your cluster. Unless you've somehow installed Databricks into your own cloud provider account, where you fully control the network routes, you would be connecting to Databricks-managed infrastructure, where the public IP is likely an API gateway or router that serves traffic for more than one account.
Note: just because you can ping Google DNS with outbound traffic doesn't mean inbound traffic from the internet is even allowed through the firewall.
As for "connect to the cluster with an application": I'd suggest using other Databricks support channels (i.e. their community forum) to see if that's even possible, but I thought you're just supposed to upload and run code within their ecosystem, at least on the community plans.
Specifically, they have a REST API to submit a remote job from your local system, but if you want to be able to send data back to your local machine, I think you'd have to write to and download from DBFS or another cloud filesystem.
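For reference, a one-off submit through the Jobs REST API looks roughly like this. The workspace URL, token, and file paths are placeholders, and I'm not certain the Community Edition exposes this API at all:

    # Submit a one-time run; requires a workspace URL and a personal access token
    curl -X POST "https://<databricks-instance>/api/2.1/jobs/runs/submit" \
      -H "Authorization: Bearer $DATABRICKS_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "run_name": "remote-submit-example",
        "tasks": [{
          "task_key": "main",
          "spark_python_task": { "python_file": "dbfs:/scripts/job.py" },
          "new_cluster": {
            "spark_version": "11.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 1
          }
        }]
      }'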

Get the IP address of my Firebase Cloud Function

I need to access an external API from my Firebase Cloud Function.
However, they need me to add my server's IP to their IP whitelist. Is there a way to get the external IP address of a Firebase Cloud Function?
P.S.: an IP identification website gives me 216.239.36.54 as the address. Is it right?
Cloud Functions are automatically provisioned and unprovisioned as requests are made to them, so they are not on a fixed IP address. If you wait 15 minutes, you might get a different IP address, and if you get many users they will be served from multiple different IP addresses.
The external API will need to allow access from the entire range of IP addresses that Cloud Functions may use. Alternatively, the documentation suggests that you can associate function egress traffic with a static IP address. Note that this last option seems non-trivial to me from a quick scan of the documentation.
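In outline, that setup routes the function's egress through a Serverless VPC Access connector, and the VPC then needs a Cloud NAT that uses a reserved static IP. A sketch with placeholder names, region, and runtime:

    # Create a Serverless VPC Access connector in the function's region
    gcloud compute networks vpc-access connectors create fn-connector \
        --region=us-central1 \
        --network=default \
        --range=10.8.0.0/28

    # Route ALL egress from the function through the connector; the VPC
    # also needs a Cloud NAT configured with a reserved static IP
    gcloud functions deploy myFunction \
        --runtime=nodejs18 \
        --trigger-http \
        --region=us-central1 \
        --vpc-connector=fn-connector \
        --egress-settings=all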
Also see:
Possible to get static IP address for Google Cloud Functions?

How are external ips supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external IPs are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external IPs to a service regardless of the setting of spec.externalIP.policy on the cluster network. Is that expected?
Once an external IP is assigned to a service, what's supposed to happen? The OpenShift docs are silent on this topic. The k8s docs say:
"Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints."
This suggests that if I (a) assign an external IP to a service and (b) configure that address on a node interface, I should be able to reach the service on the service port at that address, but that doesn't appear to work.
Poking around the nodes after setting up a service with an external ip, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is supposed to operate.
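For concreteness, the experiment described above amounts to something like the following (service name, port, IP, and interface are placeholders; 192.0.2.10 is a documentation address). On a plain kube-proxy cluster I'd expect the final curl to succeed:

    # 1. Assign an external IP to an existing service
    oc patch svc my-service -p '{"spec":{"externalIPs":["192.0.2.10"]}}'

    # 2. Put that address on a node interface so the node answers for it
    sudo ip addr add 192.0.2.10/32 dev ens3

    # 3. From a host with a route to that node, hit the service port
    curl http://192.0.2.10:8080/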

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized.
This is problematic because nodes are started and destroyed by Kubernetes, and the external IP changes each time. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would authorize requests coming from them. I only found a way to fix the service IP, but that does not change the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
IP addresses can be created by:
opening the Google Cloud Console
going to VPC Network -> External IP addresses
clicking "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work); an equivalent gcloud sketch follows below.
The easiest way to have a single static IP for GKE nodes or the entire cluster is to use a NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster
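A minimal Cloud NAT sketch, assuming a private cluster on the default network in us-central1 and a reserved address so the egress IP stays fixed (all names are placeholders):

    # Reserve the NAT's external IP
    gcloud compute addresses create nat-ip --region=us-central1

    # Cloud Router + Cloud NAT pinned to the reserved address
    gcloud compute routers create nat-router \
        --network=default \
        --region=us-central1

    gcloud compute routers nats create nat-config \
        --router=nat-router \
        --region=us-central1 \
        --nat-all-subnet-ip-ranges \
        --nat-external-ip-pool=nat-ip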
