Is there a way to start an Airflow worker with a stable IP address on GKE? - airflow

I am using Airflow on GKE and I need to access an SFTP server. This server is protected by IP whitelisting. The problem is that every time a worker starts, a new pod is created along with a new IP address.
Does anybody know how to assign a static outgoing IP to the cluster/workers?

You can use Cloud NAT.
All outgoing connections from GKE will then be automatically NAT-ed behind the egress IP of the gateway.
You can read more about it here:
Cloud NAT (https://cloud.google.com/nat/docs/overview)
It gives you a stable egress IP.
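As a rough sketch, setting up Cloud NAT with a reserved static egress address could look like the following; the names, region, and network below are placeholders, and note that Cloud NAT only applies to nodes without their own external IPs (typically a private cluster):

# Reserve a static external IP so it can be whitelisted on the SFTP server:
gcloud compute addresses create nat-egress-ip --region=europe-west1

# Create a Cloud Router in the cluster's VPC and region:
gcloud compute routers create nat-router --network=default --region=europe-west1

# Create the NAT gateway using the reserved address for all subnet ranges:
gcloud compute routers nats create nat-config \
  --router=nat-router --region=europe-west1 \
  --nat-external-ip-pool=nat-egress-ip \
  --nat-all-subnet-ip-ranges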

Related

GKE: IP Addresses

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I launch kubectl get services I can see my service's EXTERNAL-IP. Let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of the API sees a different IP address from me: 35.205.57.21
Can you explain why? And is it possible to make this second IP static?
I ask because my app has to access an external API, and the owner of this API filters access by IP address.
Thanks!
The IP address you see on the service as EXTERNAL-IP is a load balancer IP address reserved and assigned to your service, and it is only used for incoming traffic.
When your pod tries to reach a service outside the cluster, two scenarios can happen:
The destination API is inside the same VPC, which means no address translation is needed; on recent versions of Kubernetes the API will see the pod IP address assigned by Kubernetes, in the 10.0.0.0/8 range.
When the target is outside the VPC, it has to be reached through some kind of NAT. In that case the default gateway of your VPC is used, and the NAT applies the IP address of the node where the pod is running.
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
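To see which egress IP the external API actually observes, one quick check (a hypothetical sketch; ifconfig.me is just an example endpoint) is to run a throwaway pod and ask a public "what is my IP" service:

# Run a temporary pod and print the source IP seen by an external service:
kubectl run egress-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s https://ifconfig.me

Without Cloud NAT this should return the external IP of the node the pod is scheduled on; with Cloud NAT it should return the gateway's static address.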

Unable to SSH into VM instance on Google Cloud Platform

I have created a firewall rule in VPC network for port 22, using a specific IP (e.g. 192.168.xx.yy) as the source instead of 0.0.0.0/0. Now, when I create a Compute Engine VM instance in Google Cloud Platform and SSH into it, it says "cannot connect to port 22".
I don't want tcp:22 to be open to the range 0.0.0.0/0, only to the single IP stated above. How can I solve this issue?
The 192.168.x.x is an internal IP address, and in your situation would apply to a VM instance within the same network as the instance you want to connect to.
If you want to connect from outside that network, you'll need to set the source of the firewall rule to the external IP of the instance/machine you want to connect from. You can get your external IP by going to https://whatismyipaddress.com for example.
The firewall rule setting would be something like this:
Direction of traffic: Ingress
Action on match: Allow
Targets: Specified target tags (for example)
Source filter: IP ranges
Source IP ranges: x.x.x.x/32 (your external IP)
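The same rule via the gcloud CLI could look roughly like this; the rule name, target tag, and source address are placeholders:

# Allow SSH only from your own external IP (203.0.113.5 is a documentation example):
gcloud compute firewall-rules create allow-ssh-from-my-ip \
  --direction=INGRESS --action=ALLOW --rules=tcp:22 \
  --source-ranges=203.0.113.5/32 \
  --target-tags=ssh-allowed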
If you don't want your GCE instance's port 22 open to the internet but still want to connect to it, I propose two different solutions:
Create a bastion host. This VM acts as a proxy for accessing your GCE instances. You log into the bastion and then perform an SSH hop to your GCE instance. Only the bastion host has port 22 open to the internet, and you can start this bastion VM only when you need to connect to your other GCE instances, which increases security and decreases the risk of attack on this "backdoor" instance.
For both the bastion and for reaching your VM directly on port 22, you can limit the source IP of your firewall rule to your current IP.
But remember, an IP address is not a source of truth.
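If you go the bastion route, OpenSSH's ProxyJump makes the hop a single command; this is only a sketch, and both addresses below are placeholders (the bastion's external IP and the target instance's internal IP):

# Jump through the bastion to the private instance in one step (OpenSSH 7.3+):
ssh -J you@203.0.113.10 you@10.128.0.5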

openstack instance can not access internet

An instance created in OpenStack cannot access the internet. I created the instance from the Ubuntu cloud image.
In the security groups, I allowed all ports for ingress and egress for ICMP, TCP and UDP. I can SSH into the instance, and ping the floating IP of the instance and all the other instances on the private network, but I cannot ping any IP address outside the network. In the network topology, the router connects the public and private networks, yet the instance cannot access the internet and I cannot ping 8.8.8.8.
Does anyone know how to resolve this issue?
Check your ML2 and Linux bridge or OVS agent configuration. This is usually caused by a misconfiguration: presumably a type-driver and mechanism-driver mismatch, or the provider network is not set up correctly.
Please post your config here, so we can find the problem.
Thanks for your answer. I was able to resolve this issue by allowing ICMP ingress requests, in addition to port 22, in the security groups.
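For reference, adding those security group rules with the OpenStack CLI could look like this; it assumes the instance uses the "default" security group, which is only an example:

# Allow ICMP (ping) into instances using the default security group:
openstack security group rule create --ingress --protocol icmp default

# Allow SSH on TCP port 22:
openstack security group rule create --ingress --protocol tcp --dst-port 22 default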

How can I open my local TCP port to public?

I have a TCP server for my personal chat application. I want to make it reachable beyond my local network, so I want to open port 28752 on my PC's public IP so that I can connect from anywhere while my computer is on.
I have seen different solutions, for example a DMZ, to associate my local IP with the public IP, but I want to do this without modifying the router's settings; I would like to do it from a program. Is it possible?
It is possible to open up ports, but how depends on the OS you are working with. On Linux you can use iptables to control which ports are open or closed on the machine (see the iptables documentation and examples).
The port also has to be opened on any firewall layer outside the machine, e.g. an AWS access policy or security group, or macOS's built-in firewall.
Your laptop, when connected to the internet, has a public IP address that you can share, but that address changes whenever you connect through a different router. If you need a stable endpoint, you can run the server on a cloud VM and assign it a static IP address (for example with the AWS CLI), and put a public DNS name in front of it. Once you have a domain (e.g. via AWS Route 53, or an ingress IP from Kubernetes), you can manage it from your program; this is not language specific.
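As a minimal sketch on Linux (assuming the chat server listens on TCP 28752), opening the port in the host firewall looks like this; note that the router still has to forward the traffic (port forwarding or DMZ) for connections from the internet to arrive:

# Allow inbound TCP connections to port 28752 with iptables:
sudo iptables -A INPUT -p tcp --dport 28752 -j ACCEPT

# Or, if ufw is managing the firewall:
sudo ufw allow 28752/tcp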

VirtualBox networking for an NGINX client having multiple hostnames

I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host /etc/hosts file and keep complex hostnames
The client should have access to the Internet
No other machines need to access the client
What is the best way to set up the Attached to settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, you'll need to bring the new network adapter up by executing sudo ifconfig eth1 up, and to get an IP address, run sudo dhclient eth1.
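A rough sketch of the VirtualBox side, assuming the VM is named "debian-client" and the host-only network is vboxnet0 (both names are examples):

# Create the host-only network on the laptop and give it an address:
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.10.1 --netmask 255.255.255.0

# With the VM powered off, attach adapter 1 as host-only and adapter 2 as NAT:
VBoxManage modifyvm "debian-client" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "debian-client" --nic2 nat

Inside the guest, the host-only interface can then be pinned to 192.168.10.10 with a static entry in /etc/network/interfaces, while the NAT interface keeps using DHCP for internet access.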
