I've recently found out that the external network for our OpenStack (Ocata) setup has run out of available IP addresses in its allocation pool. In fact, it has over-allocated and now shows -9 free IPs. To manage the limited IP addresses, is it possible to reach an instance in a project directly from an external network (the internet) via the project's router? That way only a single IP address would need to be allocated per project instead of one per instance.
The short answer is no, but a couple of workarounds come to mind (not that they are pretty, but they will work).
If any instance in your private network has a floating IP, you can use that host as a jump host (bastion host) to SSH into the target host. This also brings the benefits of port forwarding/SSH tunneling if you need to reach some other port.
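For example, with a reasonably recent OpenSSH client you can hop through the bastion in one step, or forward a single port through it (the names and addresses here are placeholders):

ssh -J user@BASTION-FLOATING-IP user@INTERNAL-IP
ssh -L 8080:INTERNAL-IP:80 user@BASTION-FLOATING-IP    # tunnel a web port through the bastion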
You can always reach any host on a private network through the qdhcp or qrouter namespace on the network node:
ip netns exec qdhcp-XXXXXXX ssh user@internal-IP
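The same works through the router namespace; list the namespaces first to find the right ID (the IDs here are placeholders):

ip netns
ip netns exec qrouter-XXXXXXX ssh user@internal-IP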
Is there any way in GCP to allow VM hostnames to be resolved to their IPs even when the VMs are stopped?
Listing VMs in a project reveals their assigned internal IP addresses even when the VMs are stopped. This means that, as long as the VMs aren't re-created, their internal IPs are statically assigned.
However, when our VMs are stopped, the DNS resolution stops working:
ping: my-vm: Name or service not known
even though the IP is kept assigned to it, according to gcloud compute instances list.
I've tried reserving the VM's current internal IP:
gcloud compute addresses create my-vm --addresses 10.123.0.123 --region europe-west1 --subnet default
However, the address name my-vm above is not related to the VM name my-vm and the reservation has no effect (except for making the IP unavailable for automatic assignment in case of VM re-creation).
But why?
Some fault-tolerant software is configured to connect to multiple machines for redundancy, and as long as at least one of the connections can be established, the software runs fine. But if a hostname cannot be resolved, such software will not start at all, forcing us to hard-code the names in /etc/hosts (which doesn't scale well to a cluster of two dozen VMs) or to use IP addresses (which gets hairy after a while). A specific example is freeDiameter.
Ping uses the ICMP protocol, which requires that the target is running and responding to network requests.
Google Compute Engine VMs use DHCP for private IP addresses. DHCP is integrated with (communicates with) Google's DNS: it informs DNS about running network services (the VM's IP address and hostname). If the VM is shut down, this link no longer exists, and the DHCP/DNS information is updated/replaced/deleted hourly.
You can set up a Google Cloud DNS private zone, create entries for your VPC resources, and resolve private IP addresses and hostnames that persist.
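A hedged sketch of that setup with gcloud (the zone name, domain, and address are examples; older clients may need the record-sets transaction workflow instead of record-sets create):

gcloud dns managed-zones create my-private-zone \
    --description="persistent VM names" \
    --dns-name=internal.example. \
    --visibility=private --networks=default
gcloud dns record-sets create my-vm.internal.example. \
    --zone=my-private-zone --type=A --ttl=300 \
    --rrdatas=10.123.0.123

Since the record is static, my-vm.internal.example. resolves whether or not the VM is running.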
When an IP alias is set up via the gcloud command or the web interface, it works out of the box. But on the machine itself I do not see any configuration: no ip addr entries, no firewall rules, no routes that would make the machine pingable. Yet it is pingable (locally and remotely)! (For example 10.31.150.70, when you set up a 10.31.150.64/26 alias range and your primary IP is 10.31.150.1.)
On the other hand, the primary IP of the machine has a /32 netmask. For example: 10.31.150.1/32, gateway: 10.31.0.1/16. So how can the machine reach the gateway, 10.31.0.1, when the gateway is outside that range?
When the main IP is removed via ip addr del, the aliases aren't pingable anymore.
Google runs a networking daemon on your instance. It runs as the google-network-daemon service. This code is open source and viewable in this repo, which contains a Python module called google_compute_engine that manages IP aliasing among other things. You can browse their code to understand how Google implements this (they use either ip route or ifconfig depending on the platform).
To see the alias route added by Google on a Debian box (where they use ip route underneath for aliasing), run the following command.
ip route ls table local type local dev eth0 scope host proto 66
If you know your Linux commands, you can stop the daemon, remove the appropriate routes, and then assign the alias IP address to your primary interface as a second IP address to see the ifconfig approach in action as well.
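A hedged sketch of that experiment on Debian (the alias address is an example; the exact route entry may differ by image version, so check the ip route ls output above first):

sudo systemctl stop google-network-daemon
sudo ip route del local 10.31.150.70 dev eth0 table local proto 66
sudo ip addr add 10.31.150.70/32 dev eth0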
When alias IP ranges are configured, GCP automatically installs VPC network routes for the primary and alias IP ranges of the subnet of the primary network interface. Alias IP ranges are routable within the GCP virtual network without requiring additional routes. That is why there is no configuration on the VM itself but it is still pingable. You do not have to add a route for every IP alias, and you do not have to take route quotas into account.
More information regarding Alias IP on Google Cloud Platform (GCP) can be found in this help center article.
Be aware that Compute Engine networks only support IPv4 unicast traffic, and the netmask will show as /32 on the VM. The VM will still be able to reach the gateway of the subnet it belongs to: for example, 10.31.0.0/16 includes hosts ranging from 10.31.0.1 to 10.31.255.254, and the host 10.31.150.1 is within that range.
To further clarify why VM instances are assigned the /32 mask, it is important to note that /32 is an artificial construct. The instance talks to the software-defined network, which creates and manages the "real" subnets. So it is really a link between the single address and the gateway for the subnet. As long as the link layer is there, communications are established and everything works.
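Concretely, on the VM this shows up as a /32 address plus a link-scope route to the gateway (a hedged sketch; addresses are examples and the exact output varies by image):

ip addr show eth0   # inet 10.31.150.1/32 ...
ip route show       # default via 10.31.0.1 dev eth0
                    # 10.31.0.1 dev eth0 scope link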
In addition to that, network masks are enforced at the network layer. This helps avoid generating unnecessary broadcast traffic (which the underlying network wouldn't distribute anyway).
Note that removing the primary IP will break reachability to the metadata server, and therefore the IP aliases won't be accessible.
I have two environments on Jelastic 4.7. In one of them I have a Java stack and a Redis server that need to be kept private, without a public IP address. In the other environment, I have a Node.js stack that has a public IP.
I've searched the docs exhaustively and can't find the answer to this question.
Can I access the private IP and port of my Redis from the Node.js app? Every node on Jelastic has a local IP address. Can I access those between environments?
I think it's a simple question. I'm trying to avoid the overhead of creating a public IP address for Redis.
"Can I access the private IP and port of my Redis from the node app? Every node on Jelastic has a local IP address. Can I access those between environments?"
Yes, you can connect to nodes of different environments using just a local IP within one hosting provider or its regions (depending on the provider's setup). You can also use Endpoints to connect to the local IPs of other providers, or to other regions within one provider, if a direct connection can't be established.
Besides that, you can use, for example, the CNAME of the database node instead of a local IP.
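As a quick check before wiring it into the app, you can test reachability from a Node.js environment node, assuming redis-cli is installed there (the host, port, and password are placeholders):

redis-cli -h 10.101.1.15 -p 6379 -a "$REDIS_PASSWORD" ping
redis-cli -h node1234-myenv.jelastic.example.com -p 6379 ping

If either returns PONG, the Node.js app can use the same host and port in its Redis client configuration.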
I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To allocate a floating IP for your VM from the public pool, you can use this command:
openstack floating ip create public
To associate the IP with your VM, use the command below:
openstack server add floating ip your-vm-name your-ip-number
To list all the ports used by applications, ssh to your instance and run:
sudo lsof -i
Assuming you know the VM name, do the following. On the controller, run:
nova interface-list VM-NAME
It will give you the port ID, IP address, and MAC address of the VM's interface.
You can log in to the VM and run netstat -tlnp to see which IPs and ports are being used by applications running inside the VM.
As for how a VM gets an IP, it depends on your deployment. On a basic OpenStack deployment, when you create a network and a subnet under that network, a DHCP namespace gets created on the network node (run ip netns on the network node to see it). The namespace name will be qdhcp-<network-id>. The dnsmasq process running inside that namespace allots IPs to VMs. This is just one of many ways in which a VM can get an IP.
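A hedged sketch of what to look for on the network node (the namespace ID is a placeholder):

ip netns                                # lists qdhcp-<network-id> namespaces
ip netns exec qdhcp-XXXXXXX ip addr     # the interface dnsmasq listens on
ps aux | grep dnsmasq                   # the process handing out the leases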
This particular End User Guide page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are, of course, deeper layers to look at in this section of the Admin Guide.
Regarding how to find out about ports and IPs, you have two options: command line interface or API.
For example, if you are using Neutron* and want to find out the IPs or networks in use with the API:
GET v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets, as shown below; however, I haven't personally tested whether you can get information about the application running inside the VM this way.
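For example (hedged; the --server filter needs a reasonably recent python-openstackclient):

$ neutron port-list
$ neutron subnet-list

or, with the unified client:

$ openstack port list --server your-vm-name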
*Check which OpenStack release you're running. If it's an old one, chances are it's using the Compute service (Nova) for networking.
I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host /etc/hosts file and keep complex hostnames
The client should have access to the Internet
No other machines need to access the client
What is the best way to set up the Attached to settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, bring the new network adapter up by executing sudo ifconfig eth1 up, and to get an IP address, run sudo dhclient eth1.
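To satisfy the fixed-address requirement (always 192.168.10.10), a static stanza in /etc/network/interfaces is an alternative to DHCP on the host-only interface. Here is a hedged sketch for Debian's ifupdown; the name eth1 for the host-only adapter is an assumption, so check with ip link first:

# Host-only adapter: fixed IP so the /etc/hosts entries on the host stay valid
auto eth1
iface eth1 inet static
    address 192.168.10.10
    netmask 255.255.255.0

The NAT adapter can stay on DHCP, since it only provides outbound internet access.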