Assign domain name to a floating ip - openstack

I want to assign a domain name to an internal OpenStack floating IP so I can access the instance over the internet.
I found that you can set dnsmasq_dns_servers = 1.1.1.1 and configure dhcp_agent.ini accordingly, which seems to be a step in the right direction, but I couldn't find a way to assign a domain name to an OpenStack instance (via Horizon or the CLI).
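For reference, that setting can be applied roughly like this (config path, tool, and service name are assumptions and vary by distribution):

# point the DHCP agent's dnsmasq at an upstream resolver, then restart the agent
sudo crudini --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_dns_servers 1.1.1.1
sudo systemctl restart neutron-dhcp-agent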

The dnsmasq server that is managed by the DHCP agent is used to implement DHCP in subnets where DHCP is enabled. It does not resolve hostnames. If you want to be able to resolve hostnames internally, you could look into running a DNS server in your subnet or maintaining a hosts file on each instance that needs to communicate with the instance.
You could look at Designate. That is the DNS as a Service component of OpenStack. It is also possible to integrate Designate with an external service to manage external DNS.

See SysEleven's How to set up DNS for a Server/Website.
It walks you through the process of:
Creating the zone,
adding the DNS record, and finally
making the zone authoritative in global DNS.
It assumes you can use the OpenStack CLI, but there's also documentation on doing the same thing with Terraform, which I'd recommend as it fully automates the entire infrastructure with infrastructure as code (IaC).
It should apply to any OpenStack provider.
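A rough sketch of those steps with the Designate CLI plugin (zone name, contact email, and IP address below are placeholders):

# create the zone (note the trailing dot on the zone name)
openstack zone create --email admin@example.com example.com.
# add an A record pointing at the floating IP
openstack recordset create --type A --record 203.0.113.10 example.com. www
# verify
openstack recordset list example.com.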

Related

How are external IPs supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external IPs
are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external IPs to a service
regardless of the setting of spec.externalIP.policy on the cluster
network. Is that expected?
Once an external IP is assigned to a service, what's supposed to
happen? The OpenShift docs are silent on this topic. The Kubernetes docs
say:
Traffic that ingresses into the cluster with the external
IP (as destination IP), on the Service port, will be routed to one
of the Service endpoints.
This suggests that if I (a) assign an external IP to a service and
(b) configure that address on a node interface, I should be able to
reach the service on the service port at that address, but that
doesn't appear to work.
Poking around the nodes after setting up a service with an external IP, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is
supposed to operate.
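For reference, step (a) above can be done with a simple patch; a minimal sketch (service name, namespace, and the address 192.168.1.100 are made up for illustration):

# assign an external IP to an existing service
oc patch svc my-service -n my-namespace --type merge -p '{"spec":{"externalIPs":["192.168.1.100"]}}'
# with kube-proxy handling externalIPs, traffic that reaches a node with that
# destination address on the service port should be forwarded to the endpoints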

VM on GCP lost network after setting a static IP in ifcfg-eth0

CentOS 7 with WHM.
The Compute Engine VM instance was working fine, and GCP had assigned the external static IP xx.135 and the internal IP 10.xx.x.2.
Upon checking, I found that the network settings used DHCP, so I
modified /etc/sysconfig/network-scripts/ifcfg-eth0 with BOOTPROTO=static and the static IP given by GCP, then restarted the network service. After that I lost
control of the VM. What went wrong? How do I resolve the issue and regain control?
I do not think you needed to modify the DHCP configuration. You could follow the link here for Reserving a Static External IP Address. Also, this is the documentation if you would like to Reserve a Static Internal IP Address.
One way to fix a messed-up config like this is to use the serial console, where you can revert that config. Note that you might have to have set a password beforehand for this to work.
Another way, if the attached disk is a persistent disk, is to attach it to another instance and fix the config there. Here is the documentation for that. One caution: some VM types won't allow this; it won't work if the disk is a local SSD.
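A rough sketch of that rescue-disk approach with gcloud (instance, disk, zone, and device names below are placeholders; it assumes the broken VM has been stopped first):

# move the boot disk from the broken VM to a helper VM
gcloud compute instances detach-disk broken-vm --disk=broken-boot-disk --zone=us-central1-a
gcloud compute instances attach-disk rescue-vm --disk=broken-boot-disk --zone=us-central1-a
# on the rescue VM: mount the disk and switch the interface back to DHCP
sudo mkdir -p /mnt/rescue && sudo mount /dev/sdb1 /mnt/rescue
sudo sed -i 's/^BOOTPROTO=.*/BOOTPROTO=dhcp/' /mnt/rescue/etc/sysconfig/network-scripts/ifcfg-eth0
sudo umount /mnt/rescue
# move the disk back and boot the original VM
gcloud compute instances detach-disk rescue-vm --disk=broken-boot-disk --zone=us-central1-a
gcloud compute instances attach-disk broken-vm --disk=broken-boot-disk --boot --zone=us-central1-a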

Kubernetes LoadBalancer with new IP per service from LAN DHCP

I am trying out Kubernetes on bare metal; as an example, I have Docker containers exposing port 2002 (this is not HTTP).
I do not need to load-balance traffic among my pods, since each new pod does its own job and does not serve the same network clients.
Is there software that will allow each newly created service to be reached at a new IP from the internal DHCP server, so I can preserve my original container port?
I can create a service with NodePort and access the pod via some randomly generated port that is forwarded to my port 2002.
But I need to preserve that port 2002 when accessing my containers.
Each new service would need to be accessible at a new LAN IP but on the same port as the containers.
Is there some network plugin (a LoadBalancer?) that will forward from an IP assigned by DHCP back to this randomly generated service port, so I can access containers on their original ports?
In other words: start a service in Kubernetes and access it at IP:2002, then start another service from the same container image and access it at another_new_IP:2002.
Ah, that happens automatically within the cluster -- each Pod has its own IP address. I know you said bare metal, but this post by Lyft may give you some insight into how you can skip or augment the SDN and surface the Pod's IPs into routable address space, doing exactly what you want.
In more real terms: I haven't ever had the need to attempt such a thing, but CNI is likely flexible enough to interact with a DHCP server and pull a Pod's IP from a predetermined pool, so long as the pool is big enough to accommodate the frequency of Pod creation and termination.
Either way, I would absolutely read a blog post describing your attempt -- successful or not -- to pull this off!
On a separate note, be careful because the word Service means something specific within Kubernetes, even though it is regrettably often used in a more generic sense (as I suspect you did). Thankfully, a Service is designed to do the exact opposite of what you want to happen, so there was little chance of confusion -- just be aware.
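To make the CNI idea above concrete, here is a minimal sketch using the reference macvlan plugin with the dhcp IPAM plugin, so Pods get addresses from the LAN DHCP server (interface name, network name, and file paths are assumptions, and this is not a complete bare-metal setup):

# the dhcp IPAM plugin needs its daemon running on each node
sudo /opt/cni/bin/dhcp daemon &
# write a macvlan + dhcp CNI config
cat <<'EOF' | sudo tee /etc/cni/net.d/10-lan-dhcp.conf
{
  "cniVersion": "0.3.1",
  "name": "lan-dhcp",
  "type": "macvlan",
  "master": "eth0",
  "ipam": { "type": "dhcp" }
}
EOF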

How do IP aliases work on a Google Compute Engine instance?

When setting up an IP alias via the gcloud command or the web interface, it works out of the box. But on the machine itself, I do not see any configuration: no ip addr entries, no firewall rules, no routes that would make the machine pingable - yet it is pingable (locally and remotely)! (For example 10.31.150.70, when you set up a 10.31.150.64/26 alias range and your primary IP is 10.31.150.1.)
On the other hand, the primary IP of the machine has a /32 netmask. For example:
10.31.150.1/32, gateway: 10.31.0.1/16. So how can the machine reach the gateway, 10.31.0.1, when the gateway is outside that range?
When the main IP is removed via ip addr del, the aliases aren't pingable anymore.
Google runs a networking daemon on your instance. It runs as the google-network-daemon service. This code is open source and viewable at this repo. The repo has a Python module called google_compute_engine which manages IP aliasing among other things. You can browse the code to understand how Google implements this (they use either ip route or ifconfig depending on the platform).
To see the alias route added by Google on a Debian box (where they use ip route underneath for aliasing) run the following command.
ip route ls table local type local dev eth0 scope host proto 66
If you know your Linux commands, you can remove appropriate routes after stopping the daemon, and then assign the alias IP address to your primary interface as the second IP address to see the ifconfig approach in action as well.
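A hedged sketch of that manual experiment, reusing the alias 10.31.150.70 from the question (route details vary by image; try this on a throwaway VM):

sudo systemctl stop google-network-daemon
# remove the alias route the daemon installed (matches the route listed above)
sudo ip route del local 10.31.150.70 dev eth0 table local proto 66
# add the alias as a secondary address on the primary interface instead
sudo ip addr add 10.31.150.70/32 dev eth0
ping -c1 10.31.150.70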
When alias IP ranges are configured, GCP automatically installs VPC network routes for primary and alias IP ranges for the subnet of the primary network interface. Alias IP ranges are routable within the GCP virtual network without requiring additional routes. That is the reason why there is no configuration on the VM itself but still it's pingable. You do not have to add a route for every IP alias and you do not have to take route quotas into account.
More information regarding Alias IP on Google Cloud Platform (GCP) can be found in this help center article.
Be aware that Compute Engine networks only support IPv4 unicast traffic and it will show the netmask as /32 on the VM. However, it will still be able to reach the Gateway of the subnet that it belongs to. For example, 10.31.0.0/16 includes hosts ranging from 10.31.0.1 to 10.31.255.254 and the host 10.31.150.1 is within that range.
To further clarify why VM instances are assigned with the /32 mask, it is important to note that /32 is an artificial construct. The instance talks to the software defined network, which creates and manages the "real" subnets. So, it is really a link between the single address and the gateway for the subnet. As long as the link layer is there, communications are established and everything works.
In addition to that, network masks are enforced at the network layer. This helps avoid generating unnecessary broadcast traffic (which the underlying network wouldn't distribute anyway).
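As an illustration of how a /32 primary address still reaches its gateway, the guest routing table on such an instance typically looks something like the following (addresses taken from the question; exact output varies by image):

ip route show
# default via 10.31.0.1 dev eth0
# 10.31.0.1 dev eth0 scope link   <- on-link host route makes the gateway reachable despite the /32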
Note that removing the primary IP will break the reachability to the metadata server and therefore the IP aliases won't be accessible.

How to get the associated IP address in an OpenStack instance

I am trying to set up a Consul server in an OpenStack cluster. I have the server provisioned and have associated an IP with the server that is accessible from Vagrant boxes on developer machines.
I am able to join the server from a local Vagrant box if I use the -advertise flag on the consul agent -server command and use the floating IP I set. However, I am provisioning the server with Salt and need the machine to be able to determine that IP automatically.
By default, the server uses its bind address, which is set to its 10.x.x.x local IP. That local IP is the only one I seem to be able to determine easily.
Is there a way to get an instance's floating IP(s)?
Bonus points: Is there a way to get an instance's name?
The information you are looking for is available to an instance through the OpenStack metadata service. It is basically a REST API that an instance can hit to get information specific to that instance. See more information here:
http://docs.openstack.org/grizzly/openstack-compute/admin/content/metadata-service.html
You should be able to get both the instance name and its floating IP (look for "public-ipv4").
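For example, from inside the instance (endpoints as documented for the EC2-compatible and OpenStack-native metadata APIs; what is returned depends on your deployment):

# floating IP via the EC2-compatible metadata endpoint
curl http://169.254.169.254/latest/meta-data/public-ipv4
# instance hostname
curl http://169.254.169.254/latest/meta-data/hostname
# instance name and other details via the OpenStack-native endpoint
curl http://169.254.169.254/openstack/latest/meta_data.json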
