Dataflow job on Google Cloud fails with IP_SPACE_EXHAUSTED

We extended the subnet in the VPC, but starting the job still fails with IP_SPACE_EXHAUSTED (creation failed). Can I test or verify that the subnet extension actually took effect? We have already extended the subnet.
thanks A
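One way to verify that the extension took effect is to describe the subnet and check its ipCidrRange. This is only a sketch: the subnet name, region, and prefix length below are placeholders.

# Show the subnet's current primary range
gcloud compute networks subnets describe my-subnet --region europe-west1 --format="value(ipCidrRange)"

# Widen the range further if needed (the prefix length can only shrink, i.e. the range can only grow)
gcloud compute networks subnets expand-ip-range my-subnet --region europe-west1 --prefix-length 20

If the Dataflow job pins a specific network/subnetwork in its pipeline options, also make sure it is the same subnet that was extended.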

Related

Unexpected error when trying to set up a VPC for my Firebase cloud functions to use a dedicated IP address

I am using Firebase cloud functions as a backend for my app and I want to set up a dedicated IP address using a VPC for my cloud functions since I also need to interact with a Mongo Atlas DB and want to whitelist a single IP address from which it can receive requests as a security measure. According to Firebase docs (https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip), it seems this is possible using a workaround of using a VPC.
This is a very murky and uncertain area for me, so I am proceeding based on what I think the docs are telling me.
1) Set up a VPC. I did this using manual setup. When you set up a VPC and create a new subnet, it requires you to enter a valid IP address range. According to this document (https://cloud.google.com/vpc/docs/vpc#manually_created_subnet_ip_ranges#subnet-ranges), the IP address range 10.0.0.0/8 is a valid range. I used this range for my VPC subnet.
NOTE: I tried the other IP addresses in that document but they were invalid and threw an error
THIS IS WHERE I AM STUCK vvvvvvv
2) The next step is to set up a VPC serverless access connector. I started to do this. This too requires me to define an IP range. According to this document (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access?&_ga=2.204931472.-1046973627.1608007278#creating_a_connector), the range 10.8.0.0 (/28) "will work in most new projects." However, when I use this range and create the VPC serverless access connector, I get an error that says: "Connector is in a bad state, manual deletion recommended" (see below).
Again, my end goal is to have a single IP address from which I can connect to my Mongo instance. I think I am going about this correctly, but could be wrong. How can I proceed from this step and silence the error I am getting? Am I doing something wrong in the initial setup? Again, the end goal is to get a single IP address from which I can connect Firebase cloud functions to MongoDB so I can whitelist that IP address on Mongo as a security measure. Thank you.
UPDATE
Screenshot of my VPC and subnet
The reason you were not able to create a Serverless VPC connector is that the CIDR of your VPC and the CIDR of the connector cannot overlap.
This is well documented here:
[IP_RANGE] is an unreserved internal IP network, and a '/28' of unallocated space is required. The value supplied is the network in CIDR notation (10.8.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network. For example, 10.8.0.0/28 works in most new projects.
In your Serverless VPC connector creation you omitted the most important part:
This IP range must not overlap with any existing IP address reservations in your VPC network.
"The range 10.8.0.0 (/28) will work in most new projects." which is true but "most" does not mean "all" so you should check your settings always.
Since you had 10.0.0.0/8 in your VPC, the CIDR 10.8.0.0/28 overlaps with the VPC CIDR, for this reason and as suggested by #guillaume-blaquiere in the comments, the CIDR 192.168.0.0/28 will work.
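As a minimal sketch, the connector can be created with a range outside the 10.0.0.0/8 subnet; the connector name, region, and network name below are placeholders:

gcloud compute networks vpc-access connectors create my-connector \
    --region us-central1 \
    --network my-vpc \
    --range 192.168.0.0/28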

Setting up VPN between GCP Projects to access SQL Engine subnetwork

Please bear with me as my background is development and not sysadmin. Networking is something I'm learning as I go and thus why I'm writing here :)
A couple of months ago I started the process of designing the network structure of our cloud. After a couple of exchanges here, I settled on having a project that will host a VPN tunnel to the on-premise resources, and some other projects that will host our products once they are moved from the on-premises servers.
All is good and I managed to set things up.
Now, one of the projects is dedicated to "storage": that means, for us, databases, buckets for static data to be accessed around, etc.
I created a first MySQL database (2nd gen) to start testing and noticed that the only option available for accessing the SQL databases from internal IPs was the "parent project" subnetwork.
I realised that SQL Engine creates a subnetwork dedicated just to that. It's written in the documentation as well, silly me.
No problem: I tore it down, enabled Private Service Connection, created an allocated IP range in the VPC management and set it to export routes.
Then I went back to the SQL Engine and created a new database. As expected, the new one had its IP assigned from the allocated IP range set up previously.
Now, I expected every peered network to be able to see the SQL subnetwork as well but apparently not. Again, RDFM you silly goose. It was written there as well.
I activated a bronze support subscription with GCP to have some guidance but what I got was a repeated "create a vpn tunnel between the two projects" which left me a little disappointed as the concept of Peered VPC is so good.
But anyway, let's do that then.
I created a tunnel pointing to a gateway on the project that will have K8s clusters and vice-versa.
The dashboard tells me that the tunnels are established, but apparently there is a problem with the BGP settings, because they have been hanging on "Waiting for peer" on both sides, since forever.
At this point I'm looking for anything related to BGP, but all I can find is how it works in theory, what it is used for, which ASNs are reserved, etc.
I really need someone to point out the obvious and tell me what I fucked up here, so:
This is the VPN tunnel on the projects that hosts the databases:
And this is the VPN tunnel on the project where the products will be deployed, that need to access the databases.
Any help is greatly appreciated!
Regarding the BGP status "Waiting for peer" in your VPN tunnel, I believe this is due to the configured Cloud Router BGP IP and BGP peer IP. When configuring, the Cloud Router BGP IP address of tunnel1 is going to be the BGP Peer IP address for tunnel2, and the BGP Peer IP address for tunnel1 is going to be the Router BGP IP address of tunnel2.
Referring to your scenario, the IP address for stage-tunnel-to-cerberus should be:
Router BGP IP address: 169.254.1.2
and,
BGP Peer IP address: 169.254.1.1
This should put your VPN tunnels BGP session status in "BGP established".
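As a rough illustration of that mirroring with gcloud (router, tunnel, and interface names are placeholders and the ASNs are examples; adjust them to your actual resources):

# Side A: Cloud Router BGP IP 169.254.1.1, peering with 169.254.1.2
gcloud compute routers add-interface router-a --region us-central1 \
    --interface-name tunnel-a-if --ip-address 169.254.1.1 --mask-length 30 \
    --vpn-tunnel tunnel-to-b
gcloud compute routers add-bgp-peer router-a --region us-central1 \
    --peer-name peer-b --interface tunnel-a-if \
    --peer-ip-address 169.254.1.2 --peer-asn 65002

# Side B (stage-tunnel-to-cerberus): the two addresses are swapped
gcloud compute routers add-interface router-b --region us-central1 \
    --interface-name tunnel-b-if --ip-address 169.254.1.2 --mask-length 30 \
    --vpn-tunnel stage-tunnel-to-cerberus
gcloud compute routers add-bgp-peer router-b --region us-central1 \
    --peer-name peer-a --interface tunnel-b-if \
    --peer-ip-address 169.254.1.1 --peer-asn 65001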
You can't achieve what you want by VPN or by VPC Peering. In fact there is a rule for VPC peering that prevents transitivity, described in the restrictions section:
Only directly peered networks can communicate. Transitive peering is not supported. In other words, if VPC network N1 is peered with N2 and N3, but N2 and N3 are not directly connected, VPC network N2 cannot communicate with VPC network N3 over VPC Network Peering.
Now, take what you want to achieve. When you use a Cloud SQL private IP, you create a peering between your VPC and the VPC of the Cloud SQL. And you have another peering (or VPN tunnel) for the SQL engine.
SQL Engine -> Peering -> Project -> Peering -> Cloud SQL
Set up like this, it can't work.
But you can use a shared VPC. Create a shared VPC, add your 2 projects to it, and create a common subnet for the SQL Engine and the Cloud SQL peering. That should work.
But be careful: not all VPC features are available with shared VPC. For example, serverless VPC connectors aren't yet compatible with it.
Hope this helps!
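A minimal sketch of that shared VPC setup with gcloud; the project IDs are placeholders, and this assumes you hold the Shared VPC Admin role:

# Enable shared VPC on the host project (the one owning the common network)
gcloud compute shared-vpc enable host-project-id

# Attach the service projects that need to reach Cloud SQL
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project host-project-id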
The original setup in the OP question should work, i.e.
Network 1 <--- (VPN) ---> Network 2 <--- (Peered) ---> CloudSQL network
(the network and the peering are created by GCP)
Then a resource in Network 1 is able to access a MySQL instance created in the CloudSQL network.

Google Cloud Platform networking: Resolve VM hostname to its assigned internal IP even when not running?

Is there any way in the GCP, to allow VM hostnames to be resolved to their IPs even when the VMs are stopped?
Listing VMs in a project reveals their assigned internal IP addresses even when the VMs are stopped. This means that, as long as the VMs aren't re-created, their internal IPs are statically assigned.
However, when our VMs are stopped, the DNS resolution stops working:
ping: my-vm: Name or service not known
even though the IP is kept assigned to it, according to gcloud compute instances list.
I've tried reserving the VM's current internal IP:
gcloud compute addresses create my-vm --addresses 10.123.0.123 --region europe-west1 --subnet default
However, the address name my-vm above is not related to the VM name my-vm and the reservation has no effect (except for making the IP unavailable for automatic assignment in case of VM re-creation).
But why?
Some fault-tolerant software will have a configuration for connecting to multiple machines for redundancy, and if at least one of the connections could be established, the software will run fine. But if the hostname cannot be resolved, this software would not start at all, forcing us to hard-code the DNS in /etc/hosts (which doesn't scale well to a cluster of two dozen VMs) or to use IP addresses (which gets hairy after a while). Specific example here is freeDiameter.
Ping uses the IP ICMP protocol. This requires that the target is running and responding to network requests.
Google Compute Engine VMs use DHCP for private IP addresses. DHCP is integrated with (communicates with) Google DNS. DHCP informs DNS about running network services (VM IP address and hostname). If the VM is shutdown, this link does not exist. DHCP/DNS information is updated/replaced/deleted hourly.
You can set up Google Cloud DNS private zones, create entries for your VPC resources and resolve private IP addresses and hostnames that persist.
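A minimal sketch of such a private zone, using the example IP from the question; the zone name and DNS suffix are placeholders (newer gcloud versions support record-sets create directly, older ones use record-sets transaction):

# Create a private zone visible to the default VPC network
gcloud dns managed-zones create internal-zone \
    --dns-name="internal.example." --visibility=private \
    --networks=default --description="Static VM records"

# Add an A record for the VM that survives VM shutdown
gcloud dns record-sets create my-vm.internal.example. \
    --zone=internal-zone --type=A --ttl=300 --rrdatas=10.123.0.123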

I wanted to create an Uptime check for my API which is an internal TCP load balancer under VPC network?

I wanted to create an Uptime check for my API, which is an internal TCP load balancer under a VPC network. I have a firewall set up and I have allowed the IP addresses for the US region to access this internal TCP load balancer. But I am getting the error "responded with Skipping Unsafe Address". I have provided the IP address of my internal TCP load balancer with port 8082, the protocol is HTTP, the Resource Type is URL, and I have given the value in path.
Currently, Stackdriver checks cannot check "non-routable" (also known as "private") IP addresses: 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, 192.168.0.0 - 192.168.255.255. This is what accounts for the "Unsafe Address" error.
However, Private Checkers will be entering EAP soon, and you should be able to use them once they get to Beta. Private Checkers live on your VPC, and can probe non-routable addresses on your network.
The capability is now (June 2021) in beta. If you are interested, please contact us (or me directly) and ask to join the beta program.
The new capability allows you to run Private uptime checks - which means you can run your uptime checks on a private network.

How IP-Aliases does work on Google Cloud Computing Instance?

When setting up an IP alias via the gcloud command or the console, it works out of the box. But on the machine itself, I do not see any configuration: no ip addr entries, no firewall rules, no routes that would make the machine pingable. Yet it is pingable (locally and remotely)! (For example 10.31.150.70, when you set up a 10.31.150.64/26 alias range and your primary IP is 10.31.150.1.)
On the other hand, the primary IP of the machine has a /32 netmask. For example:
10.31.150.1/32, Gateway: 10.31.0.1/16. So, how can the machine reach the gateway, 10.31.0.1, when the gateway is outside that range?
When removing the main IP via ip addr del, the aliases aren't pingable anymore.
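For reference, an alias range like the one described can be attached to an existing instance roughly like this; the instance name and zone are placeholders:

gcloud compute instances network-interfaces update my-instance \
    --zone europe-west1-b \
    --aliases "10.31.150.64/26"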
Google runs a networking daemon on your instance. It runs as the google-network-daemon service. This code is open source and viewable at this repo. This repo has a Python module called google_compute_engine which manages IP aliasing among other things. You can browse their code to understand how Google implements this (they use either ip route or ifconfig depending on the platform)
To see the alias route added by Google on a Debian box (where they use ip route underneath for aliasing) run the following command.
ip route ls table local type local dev eth0 scope host proto 66
If you know your Linux commands, you can remove appropriate routes after stopping the daemon, and then assign the alias IP address to your primary interface as the second IP address to see the ifconfig approach in action as well.
When alias IP ranges are configured, GCP automatically installs VPC network routes for primary and alias IP ranges for the subnet of the primary network interface. Alias IP ranges are routable within the GCP virtual network without requiring additional routes. That is the reason why there is no configuration on the VM itself but still it's pingable. You do not have to add a route for every IP alias and you do not have to take route quotas into account.
More information regarding Alias IP on Google Cloud Platform (GCP) can be found in this help center article.
Be aware that Compute Engine networks only support IPv4 unicast traffic and it will show the netmask as /32 on the VM. However, it will still be able to reach the Gateway of the subnet that it belongs to. For example, 10.31.0.0/16 includes hosts ranging from 10.31.0.1 to 10.31.255.254 and the host 10.31.150.1 is within that range.
To further clarify why VM instances are assigned with the /32 mask, it is important to note that /32 is an artificial construct. The instance talks to the software defined network, which creates and manages the "real" subnets. So, it is really a link between the single address and the gateway for the subnet. As long as the link layer is there, communications are established and everything works.
In addition to that, network masks are enforced at the network layer. This helps avoid generation of unnecessary broadcast traffic (which underlying network wouldn't distribute anyway).
Note that removing the primary IP will break the reachability to the metadata server and therefore the IP aliases won't be accessible.
