Unexpected error when trying to set up a VPC for my Firebase cloud functions to use a dedicated IP address

I am using Firebase cloud functions as a backend for my app, and I want to set up a dedicated IP address for them using a VPC. I need this because the functions interact with a MongoDB Atlas database, and as a security measure I want to whitelist a single IP address from which Atlas can receive requests. According to the Google Cloud docs (https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip), this is possible by routing the functions' egress through a VPC.
Networking is murky and uncertain territory for me, so I am proceeding on my best reading of what the docs are telling me.
1) Set up a VPC. I did this using manual setup. When you set up a VPC and create a new subnet, you are required to put in a valid IP address range. According to this document (https://cloud.google.com/vpc/docs/vpc#manually_created_subnet_ip_ranges), 10.0.0.0/8 is a valid IP range, so I used it for my VPC subnet. The gcloud equivalent of what I did is sketched after the note below.
NOTE: I tried the other IP ranges in that document but they were rejected as invalid and threw an error.
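For reference, roughly the gcloud equivalent of what I did in the console (my-vpc, my-subnet, and us-central1 are placeholder names, not anything the docs mandate):

# Custom-mode VPC so subnets are created manually.
gcloud compute networks create my-vpc --subnet-mode=custom

# Subnet using the 10.0.0.0/8 range from the valid-ranges doc.
gcloud compute networks subnets create my-subnet \
    --network=my-vpc \
    --region=us-central1 \
    --range=10.0.0.0/8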
THIS IS WHERE I AM STUCK:
2) The next step is to set up a Serverless VPC Access connector. I started to do this, and it too requires me to define an IP range. According to this document (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#creating_a_connector), the range 10.8.0.0/28 "will work in most new projects." However, when I use this range and create the connector, I get an error that says: "Connector is in a bad state, manual deletion recommended" (see below).
My end goal is a single static IP address from which my cloud functions connect to my Mongo instance, so that I can whitelist that one address in Atlas as a security measure. I think I am going about this correctly, but I could be wrong. How can I get past this step and clear the error? Am I doing something wrong in the initial setup? Thank you.
UPDATE
Screenshot of my VPC and subnet

The reason you were not able to create the Serverless VPC connector is that the CIDR range of the connector cannot overlap with the CIDR range of your VPC subnet.
This is well documented here:
[IP_RANGE] is an unreserved internal IP network, and a '/28' of unallocated space is required. The value supplied is the network in CIDR notation (10.8.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network. For example, 10.8.0.0/28 works in most new projects.
In your Serverless VPC connector creation you omitted the most important part:
This IP range must not overlap with any existing IP address reservations in your VPC network.
"The range 10.8.0.0 (/28) will work in most new projects." which is true but "most" does not mean "all" so you should check your settings always.
Since you had 10.0.0.0/8 in your VPC, the CIDR 10.8.0.0/28 overlaps with the VPC's range. For this reason, and as suggested by @guillaume-blaquiere in the comments, a range outside of it such as 192.168.0.0/28 will work.
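A minimal sketch of the connector creation with a non-overlapping range (connector, network, and region names are placeholders):

# 192.168.0.0/28 lies outside the VPC's 10.0.0.0/8 subnet, so it
# satisfies the "must not overlap" requirement quoted above.
gcloud compute networks vpc-access connectors create my-connector \
    --network=my-vpc \
    --region=us-central1 \
    --range=192.168.0.0/28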

Related

Whitelist IP range for connecting from ADF to Snowflake

We have a client that has data in Snowflake, but that also limits IP connectivity to the data warehouse. If we are going to use the default Azure Data Factory Snowflake connector, which IPs do we have to give the client to whitelist? Is it the entire range for the datacenter location? I understand we may be able to run the ADF in a separate vnet, but we don't want to add that to the deployment.
If you are using a self-hosted integration runtime (SHIR) for connectivity, you can whitelist the IP address of your SHIR machine.
If you want to use a custom Azure Integration Runtime specific to a region, you need to whitelist the complete Azure IR IP range of that region.
If you are using the default Azure IR, you will have to whitelist the complete Azure IR IP range.
You can get the IP range list of service tags from the service tags IP range download link. For example, if the Azure region is AustraliaEast, you can get the IP range list from DataFactory.AustraliaEast. For more info, refer to "Azure Integration Runtime IP addresses: Specific regions".
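If you have the Azure CLI available, a sketch of pulling those ranges programmatically (using the AustraliaEast example above; the tag name follows the DataFactory.<Region> pattern):

# List all service tags for the region, then filter down to the
# Data Factory tag to get the prefixes to hand to the client.
az network list-service-tags --location australiaeast \
    --query "values[?name=='DataFactory.AustraliaEast'].properties.addressPrefixes" \
    --output json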

Get IP address of my Firebase Cloud Function

I need to access some external API from my Firebase cloud function.
However, they need me to add my server IP to their IP whitelist. Is there a way to get the external IP address of a Firebase cloud function?
P.S.: an IP identification website gives me 216.239.36.54 as the address. Is it right?
Cloud Functions are automatically provisioned and deprovisioned as requests are made to them, so they are not on a fixed IP address. If you wait 15 minutes you might get a different IP address, and if you get many users they will be served from multiple different IP addresses.
The external API will need to allow access from the entire range of IP addresses that Cloud Functions may use. Alternatively, the documentation suggests that you can associate function egress traffic with a static IP address. Note that this last option seems non-trivial to me from a quick scan of the documentation.
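For reference, a rough sketch of that static-IP setup from the networking docs, assuming a VPC and a Serverless VPC Access connector already exist (all names, the region, and the runtime below are placeholders):

# Reserve a static external IP for the functions' egress traffic.
gcloud compute addresses create functions-egress-ip --region=us-central1

# Send the VPC's outbound traffic through Cloud NAT using that address.
gcloud compute routers create my-router --network=my-vpc --region=us-central1
gcloud compute routers nats create my-nat \
    --router=my-router \
    --router-region=us-central1 \
    --nat-external-ip-pool=functions-egress-ip \
    --nat-all-subnet-ip-ranges

# Deploy the function so all of its egress goes through the connector
# (and therefore out through the NAT's static address).
gcloud functions deploy my-function \
    --runtime=nodejs18 \
    --trigger-http \
    --vpc-connector=my-connector \
    --egress-settings=all-traffic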
Also see:
Possible to get static IP address for Google Cloud Functions?

Setting up VPN between GCP Projects to access SQL Engine subnetwork

Please bear with me as my background is development and not sysadmin. Networking is something I'm learning as I go and thus why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having one project host a VPN tunnel to the on-premises resources, and some other projects host our products once they are moved off the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": that means, for us, databases, buckets for static data to be accessed from everywhere, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available to access the SQL databases from internal IPs was through the "parent project" subnetwork.
I realised that Cloud SQL creates a subnetwork dedicated to just that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled Private Service Connection, created an allocated IP range in the VPC management and set it to export routes.
Then I went back to Cloud SQL and created a new database. As expected, the new one had its IP assigned from the allocated range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well, but apparently not. Again, RTFM, you silly goose. It was written there as well.
I activated a bronze support subscription with GCP to get some guidance, but what I got was a repeated "create a VPN tunnel between the two projects", which left me a little disappointed, as the concept of peered VPCs is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway on the project that will host the K8s clusters, and vice versa.
The dashboard tells me the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides, since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASNs are reserved, etc.
I really need someone to point out the obvious and tell me what I messed up here, so:
This is the VPN tunnel on the projects that hosts the databases:
And this is the VPN tunnel on the project where the products will be deployed, that need to access the databases.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" in your VPN tunnel, I believe this is due to the configured Cloud Router BGP IP and BGP peer IP. When configuring, the Cloud Router BGP IP address of tunnel1 is going to be the BGP Peer IP address for tunnel2, and the BGP Peer IP address for tunnel1 is going to be the Router BGP IP address of tunnel2.
Referring to your scenario, the IP address for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnels BGP session status in "BGP established".
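For reference, a sketch of the mirrored configuration in gcloud (router, tunnel, interface, region, and ASN values are placeholders; the 169.254.1.x addresses are the ones above):

# Side 1: Cloud Router BGP IP 169.254.1.2, peering with 169.254.1.1.
gcloud compute routers add-interface router-1 \
    --interface-name=if-tunnel-1 \
    --ip-address=169.254.1.2 --mask-length=30 \
    --vpn-tunnel=tunnel-1 --region=us-central1
gcloud compute routers add-bgp-peer router-1 \
    --peer-name=peer-2 --interface=if-tunnel-1 \
    --peer-ip-address=169.254.1.1 --peer-asn=65002 \
    --region=us-central1

# Side 2 mirrors side 1: its BGP IP is side 1's peer IP, and vice versa.
gcloud compute routers add-interface router-2 \
    --interface-name=if-tunnel-2 \
    --ip-address=169.254.1.1 --mask-length=30 \
    --vpn-tunnel=tunnel-2 --region=us-central1
gcloud compute routers add-bgp-peer router-2 \
    --peer-name=peer-1 --interface=if-tunnel-2 \
    --peer-ip-address=169.254.1.2 --peer-asn=65001 \
    --region=us-central1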
You can't achieve what you want by VPN or by VPC peering. In fact, there is a rule for VPC networks that prevents peering transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, take what you want to achieve. When you use a Cloud SQL private IP, a peering is created between your VPC and the VPC of the Cloud SQL instance. And you have another peering (or VPN tunnel) for the SQL Engine project:
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Like this you can't.
But you can use a shared VPC. Create a shared VPC, attach your two projects to it, and create a common subnet for the SQL Engine project and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with shared VPC. For example, Serverless VPC connectors aren't compatible with it yet.
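A sketch of the shared VPC wiring in gcloud (project IDs are placeholders):

# Make one project the shared VPC host, then attach the other project
# as a service project so it can use the host's subnets.
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id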
Hope this helps!
The original setup in the OP's question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> Cloud SQL network
(the network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the Cloud SQL network.

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which instead of implementing authentication, relies on the request IP to determine if a client is authorized or not.
This is problematic because nodes are started and destroyed by Kubernetes and each time the external IP changes. Is there a way to make sure the external IP is chosen among a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change at all the single nodes' IPs.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach addresses from the pool to Kubernetes nodes.
The IP addresses can be created by:
opening the Google Cloud Console
going to VPC Network -> External IP addresses
clicking "Reserve Static Address" and following the wizard (I think the Network Service Tier needs to be "Premium" for this to work); the gcloud equivalent is sketched below.
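A rough gcloud equivalent (address name and region are placeholders):

# Reserve a regional static external IP in the Premium tier; repeat
# for as many addresses as the pool needs.
gcloud compute addresses create kubeip-pool-1 \
    --region=us-central1 \
    --network-tier=PREMIUM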
The easiest way to have a single static IP for GKE nodes, or for the entire cluster, is to use a NAT.
You can either use a custom NAT solution, or use Google Cloud NAT with a private cluster.
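A sketch of the private-cluster route (cluster name, region, and master CIDR are placeholders):

# Nodes get only private IPs; --enable-ip-alias is required for
# private clusters.
gcloud container clusters create my-cluster \
    --region=us-central1 \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr=172.16.0.16/28

All node egress then leaves through Cloud NAT, which can be pinned to a reserved static address via --nat-external-ip-pool on the NAT configuration, so the third party only ever sees that address.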

How do IP aliases work on a Google Cloud Compute Engine instance?

When I set up an IP alias via the gcloud command or the web interface, it works out of the box. But on the machine itself I do not see any configuration: no ip addr entries, no firewall rules, no routes that would explain why the machine is pingable at the alias address. Yet it is pingable, both locally and remotely! (For example at 10.31.150.70, when you set up a 10.31.150.64/26 alias range and your primary IP is 10.31.150.1.)
On the other hand, the primary IP of the machine has a /32 netmask, for example 10.31.150.1/32 with gateway 10.31.0.1/16. So how can the machine reach the gateway, 10.31.0.1, when the gateway is outside that range?
And when I remove the primary IP via ip addr del, the aliases aren't pingable anymore.
Google runs a networking daemon on your instance. It runs as the google-network-daemon service. The code is open source and viewable in this repo. The repo contains a Python module called google_compute_engine which manages IP aliasing, among other things. You can browse the code to understand how Google implements this (they use either ip route or ifconfig, depending on the platform).
To see the alias route added by Google on a Debian box (where they use ip route underneath for aliasing) run the following command.
ip route ls table local type local dev eth0 scope host proto 66
If you know your Linux commands, you can stop the daemon, remove the appropriate routes, and then assign the alias IP address to your primary interface as a second IP address, to see the ifconfig-style approach in action as well.
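For example, on a Debian instance (a sketch only, using the question's example alias; the daemon will re-create its route once restarted):

# Stop the daemon so it stops managing the alias route.
sudo systemctl stop google-network-daemon

# Remove the proto-66 local route it installed (visible via the
# 'ip route ls' command above) ...
sudo ip route del local 10.31.150.70 dev eth0 table local

# ... and attach the alias as a plain secondary address instead.
sudo ip addr add 10.31.150.70/32 dev eth0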
When alias IP ranges are configured, GCP automatically installs VPC network routes for primary and alias IP ranges for the subnet of the primary network interface. Alias IP ranges are routable within the GCP virtual network without requiring additional routes. That is the reason why there is no configuration on the VM itself but still it's pingable. You do not have to add a route for every IP alias and you do not have to take route quotas into account.
More information regarding Alias IP on Google Cloud Platform (GCP) can be found in this help center article.
Be aware that Compute Engine networks only support IPv4 unicast traffic, and the netmask will show as /32 on the VM. However, the VM will still be able to reach the gateway of the subnet that it belongs to. For example, 10.31.0.0/16 includes hosts ranging from 10.31.0.1 to 10.31.255.254, and the host 10.31.150.1 is within that range.
To further clarify why VM instances are assigned with the /32 mask, it is important to note that /32 is an artificial construct. The instance talks to the software defined network, which creates and manages the "real" subnets. So, it is really a link between the single address and the gateway for the subnet. As long as the link layer is there, communications are established and everything works.
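Concretely, the guest's routing table shows how this works: the gateway is reached through an explicit on-link route rather than through the netmask (approximate output, using the question's example addresses):

$ ip route
default via 10.31.0.1 dev eth0
10.31.0.1 dev eth0 scope link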
In addition to that, network masks are enforced at the network layer. This helps avoid generation of unnecessary broadcast traffic (which underlying network wouldn't distribute anyway).
Note that removing the primary IP will break the reachability to the metadata server and therefore the IP aliases won't be accessible.
