devstack installation - floating ips - openstack

I'm playing with a multi-node DevStack installation that uses nova-network, with Quantum disabled.
My problem is that I cannot connect to an instance created on node A from another node B of the installation.
Some comments regarding the installation:
For the fixed_ip and floating_ip ranges I use two sets of private IPs.
The fixed_ips seem to work; after the services are up I can see a related entry in the routing table.
The floating_ips also work, in the sense that they can be assigned to created instances; however, they are not accessible from other nodes (or even the same node), and no routing entry exists for them (nor any other entry in iptables).
Should floating_ips be public ones? Why is no routing entry created for floating IPs?

Yes, floating IPs should be public ones that can be accessed from the internet.
Please also check the security group rules attached to those instances, as the default security group does not allow incoming traffic.
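For example, with the legacy nova client of that era, rules to allow ping and SSH would look roughly like this (a sketch assuming the instances use the default security group):
# allow ICMP (ping) from anywhere
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# allow SSH from anywhere
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0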

Verify this: OpenStack VM is not accessible on LAN.
In my case I did just:
# answer ARP requests for the floating IP range on behalf of the instances
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
# NAT traffic from the instances out through the host interface
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That made my DevStack VMs visible to the world!
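If it doesn't seem to take effect, it can help to confirm both settings actually landed:
# packet counters should increase as VM traffic is NATed
iptables -t nat -L POSTROUTING -n -v
# should print 1 when proxy ARP is enabled on the interface
cat /proc/sys/net/ipv4/conf/ens160/proxy_arp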

What does iptables know about pods?

Let's say we have 2 nodes in a cluster.
Node A has 1 replica of a pod, Node B has 2 replicas. According to this talk (a YouTube video with a time tag) from Google Cloud engineers, a request that was routed to Node A might be rerouted to Node B by the iptables rules inside Node A. I have several questions regarding this behavior:
What information does the iptables of Node A have about the replicas of a pod outside of it? How does it know where to send the traffic?
Can the iptables of Node B reroute this request to Node C? If so, will the return traffic go back via Node B -> Node A -> client?
I think you might be mixing up two subsystems: service proxies and CNI. CNI comes first; it's a plugin-based system that sets up the routing rules across all your nodes so that the network appears flat. A pod IP will work like normal from any node. Exactly how that happens varies by plugin; Calico, for example, uses BGP between the nodes. Then there are the service proxies, usually implemented using iptables, though also somewhat pluggable. Those define the service IP -> endpoint IP (read: pod IP) load balancing. But the actual routing is handled by whatever your CNI plugin set up. There are a lot of special modes and cases, but that's the basic overview.
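To make the service-proxy part concrete, here is a minimal sketch of the kind of nat-table rules kube-proxy generates for a Service with two endpoints; the chain names, pod IPs, and the 50/50 split are invented for illustration:
# create the service and endpoint chains (names invented for illustration)
iptables -t nat -N KUBE-SVC-EXAMPLE
iptables -t nat -N KUBE-SEP-EP1
iptables -t nat -N KUBE-SEP-EP2
# pick endpoint 1 with 50% probability, otherwise fall through to endpoint 2
iptables -t nat -A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-EP1
iptables -t nat -A KUBE-SVC-EXAMPLE -j KUBE-SEP-EP2
# each endpoint chain DNATs the traffic to a concrete pod IP; the CNI plugin routes the rest
iptables -t nat -A KUBE-SEP-EP1 -p tcp -j DNAT --to-destination 10.244.1.5:8080
iptables -t nat -A KUBE-SEP-EP2 -p tcp -j DNAT --to-destination 10.244.2.7:8080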
Packets can move between nodes, services and pods before reaching the final destination.
All the intra-cluster routing (node-to-node, pod-to-pod, service-to-service, pod-to-service, service-to-pod, pod-to-node, node-to-pod, etc.) in Kubernetes is done by:
CNI
load-balancing algorithm
kube-proxy
iptables.
The route a packet takes in k8s also depends on many things like load in the cluster, per-node load, affinity/anti-affinity rules, nodeSelectors, taints/tolerations, autoscaling, the number of pod replicas, etc.
Intra-cluster routing is transparent to the caller, and ideally the user need not know about it unless there are networking issues to debug.
Running sudo iptables -L -n -v on any k8s node shows the low-level iptables rules and chains used for packet forwarding.
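Since the service rules live in the nat table, a more targeted look (assuming a kube-proxy based setup) is something like:
# dump the nat table and keep only the chains kube-proxy manages
sudo iptables-save -t nat | grep 'KUBE-' | head -n 20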

How to create docker containers with the same internal IP address?

I have an environment where I need to run some external software in Docker containers. This software tries to connect to our product by a specific IP address - let's say 192.168.255.2 - and this address is fixed and cannot be changed. Moreover, the host IP address must also be set to a specific IP - let's say 192.168.255.3.
The product supports 2 Ethernet interfaces:
the first of them has strict restrictions regarding IP addressing - let's call it "first"
the second does not have such restrictions and provides similar functionality - for this example let's assume that the IP address of this interface is set to 10.1.1.2/24 - let's call it "second"
I need to run multiple Docker containers simultaneously; each container shall be connected to one product (a 1-to-1 relationship).
The things that run inside the containers must think that they're reaching the product through the "first" network interface (the one with the static IP assignment that cannot be changed).
All I want to do is create containers with the same IP address, to pretend that the application inside each container is using the "first" Ethernet interface of the product, and then at the host level redirect all traffic to the "second" interface using iptables.
Therefore I have one major problem: how do I create multiple Docker containers with the same IP address?
Going by the exact phrasing of your question, Docker has the option to share the network stack of another container. Simply run:
docker run -d --name containera yourimage
docker run -d --net container:containera anotherimage
And you'll see that the second container has the same IP interfaces and can even see ports being used by the first container.
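You can verify the shared stack by comparing the interfaces in both containers, for example (assuming the images ship the ip tool; busybox does):
docker exec containera ip -o -4 addr show
docker run --rm --net container:containera busybox ip -o -4 addr show
Both commands should print exactly the same interfaces and addresses.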
I'd recommend instead that you configure both interfaces on your Docker host and bind to the host IP that you need; then you don't need to worry about the actual IP of the container. The result will be much simpler to manage. Here's how you bind to a single IP on the host, with ports 8080 and 8888 mapped to two different containers' port 80:
docker run -d -p 192.168.255.2:8080:80 --name nginx8080 nginx
docker run -d -p 192.168.255.2:8888:80 --name nginx8888 nginx
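A quick check that the mappings work, run from the host (assuming 192.168.255.2 is already configured on a host interface):
docker port nginx8080
curl -s http://192.168.255.2:8080 | head -n 5
docker port should print 80/tcp -> 192.168.255.2:8080, and curl should return the nginx welcome page.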

Block IP from accessing Google Compute Engine instance

I'm trying to block a certain IP address or range from reaching my WordPress server, which is configured on my Google Compute Engine server.
I know I can block it via Apache, but even if I do, my access_logs will still be filled with 403 errors from requests from this IP.
Is there any way to block the IP entirely and not even let it reach Apache?
Thanks in advance for any help.
If you want to block a single IP address, but allow all other traffic, the simplest option is probably to use iptables on the host. The GCE firewall rules are designed to control which IP addresses can reach your instance, but allowing everything on the internet except one address would probably be annoying to write.
To block a single IP address with iptables:
iptables -A INPUT -s $IP_ADDRESS -j DROP
or to just drop HTTP (but not HTTPS or other protocols):
iptables -A INPUT -s $IP_ADDRESS -p tcp --destination-port 80 -j DROP
Note that you'll need to run the above commands as root in either case.
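To confirm the rule is in place, or to remove it again later, something like this works:
# list INPUT rules with line numbers and packet counters
iptables -L INPUT -n -v --line-numbers
# delete the rule by re-specifying it exactly
iptables -D INPUT -s $IP_ADDRESS -j DROP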
By default all incoming traffic to GCE is blocked, except for the ports and ranges of IPs that are allowed to have access. Allowing everything to connect except a specific IP or range of IP addresses is not supported by the GCE firewall. As a workaround, you can set up a load balancer and allow incoming traffic only from the LB IP address to the instance. You can find more information in this Help Center article.
Yes, you can block it using the gcloud firewall.
Try creating the firewall rule from the command line or by logging into Google Cloud.
Example:
gcloud compute firewall-rules create tcp-deny --network example-network --source-ranges 10.0.0.0/8 --deny tcp:80
The above rule will block the range 10.0.0.0/8 from reaching port 80 (TCP).
The same can be done to block other IP ranges over TCP and UDP.
For more info check this: gcloud network config
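To double-check the rule once it's created, the gcloud CLI can list or describe it, e.g.:
gcloud compute firewall-rules list --filter="network:example-network"
gcloud compute firewall-rules describe tcp-deny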
Bitnami developer here.
If you want to block a certain IP, you can use iptables as pointed out in this post.
Also, if you want your iptables rules to remain active after you reboot your machine, you have to do the following:
sudo su
iptables-save > /opt/bitnami/iptables-rules
crontab -e
Now edit the file and include this line at the end:
@reboot /sbin/iptables-restore < /opt/bitnami/iptables-rules
This way, on every boot, the system will load the iptables rules and apply them.
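You can confirm the crontab entry was saved, and dry-run the restore to catch syntax problems in the saved rules (the --test flag is available on reasonably recent iptables versions):
crontab -l
iptables-restore --test < /opt/bitnami/iptables-rules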
To block an offending IP, there are methods at several different levels. From a performance perspective, generally:
Network firewall > VM iptables > VM web server > VM application.
Google Cloud has a built-in firewall at no cost.
For example, this gcloud command creates one firewall rule that can block one or more IPs:
gcloud compute --project=your-project-id firewall-rules create your-firewall-rule-name --direction=INGRESS --priority=900 --network=default --action=DENY --rules=all --source-ranges=ip1,ip2,ip3…
For the command parameters' reference, see https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create
You can also use the Google Cloud console or the REST API to create it, but in the console it's not easy to input a lot of IPs.
The built-in firewall's current limits:
One project can create 100 firewall rules.
One firewall rule can block 256 IP sources.
If there are 10 other firewall rules, you can still block 90x256 = 23040 standalone IPs, which is enough for the general case.
Note: the Google Cloud App Engine firewall is separate from the built-in firewall.
Linux iptables
See the other answers.
Web server
Apache and Nginx can also block IPs.
Application
Blocking IPs here is not recommended, but the application can help analyze which IPs need to be blocked, for example those with many failed logins.
If you want your system to automatically block all bad IP addresses in the GCP firewall, you can check out Gatekeeper for Google Cloud Firewall.
It analyses your network connections and WordPress/Apache logs dynamically and creates appropriate rules to ward off DoS and DDoS attacks as well as spying bots.

OpenStack, make my instance accessible from a different machine

How do I make my instance accessible from another machine in the same network? I've already assigned a floating IP.
Once you have assigned the FIP:
1. Verify you have ingress/egress allow rules for CIDR 0.0.0.0/0 configured on the security group (example commands below).
2. Ping from another machine that is in the same network as the FIP.
If step 2 succeeds, then you should be able to access the VM over the network.
If step 2 fails, check the following:
Run neutron floatingip-list and check whether a FIP is configured for the instance.
Check the nova-api logs for clues.
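For step 1, on a Neutron-based cloud the allow rules can be created roughly like this (a sketch assuming the instance uses the default security group):
# allow ICMP (ping) from anywhere
neutron security-group-rule-create --direction ingress --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
# allow SSH from anywhere
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 default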

Connecting Azure VMs by internal IP without a virtual network?

I have two virtual machines which, due to some historical reasons, are under two different subscriptions. I am trying to find a way to connect them through their internal IPs.
Normally, for the public virtual IP, I open the relevant port on the Azure portal and then add an iptables rule like:
iptables -I INPUT -p tcp -m tcp -s 198.1.1.1/32 --dport 11211 -j ACCEPT
And then I can connect via the public IP. I did the same, replacing the public IP above with the internal IP, but it didn't work.
After some searching, it seems the normal way is to create a virtual network and add the two machines to it. But I have two questions:
Is there a way, like the iptables rule above, to achieve what I want without the need to set up a virtual network?
Can one add a non-Azure machine, like a VPS, to the virtual network?
Q1:
Is there a way, like the iptables rule above, to achieve what I want without the need to set up a virtual network?
No, not really. A possible workaround would be to still create an InputEndpoint (an Endpoint in the Portal) for both virtual machines, and then change your iptables rules to cover both the public and private addresses. But there are no guarantees it will work. Moreover, when a VM is not part of a Virtual Network, its internal IP address is very likely to change sooner or later, especially on restart.
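For example, if the other VM's internal address happened to fall in 10.0.0.0/8 (purely illustrative; check the actual address, and remember it can change), the extra rule would look like:
iptables -I INPUT -p tcp -m tcp -s 10.0.0.0/8 --dport 11211 -j ACCEPT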
Q2:
Can one add a non-Azure machine, like a VPS, to the virtual network?
Technically, yes. You have to use either a Site-to-Site VPN (GA) or a Point-to-Site VPN (Preview). You can read more on Site-to-Site VPN here and Point-to-Site VPN here.
