I have made a Heat template that starts up some servers and installs Puppet. In the template I set each server's hostname by doing:
properties:
  name: dir
Some servers actually get the hostname I set, but a few get ".novalocal" appended to it.
For example, for one server I have:
properties:
  name: server1
actual hostname: server1.novalocal
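For context, the resource in the template looks roughly like this (a minimal sketch; the image and flavor values are placeholders):
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: server1
      image: ubuntu-16.04
      flavor: m1.small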
Any idea what causes this? I am at a total loss.
Reference:
Neutron Network DNS Suffix via DHCP
Nova appends the default domain name .novalocal to the hostname. This can be resolved by setting dhcp_domain to an empty string in nova.conf on the Control node.
# This option allows you to specify the domain for the DHCP server.
#
# Possible values:
#
# * Any string that is a valid domain name.
#
#dhcp_domain = novalocal
dhcp_domain =
FYI, as @Дмитрий Работягов mentioned, this option has been moved to the [api] section; here is change 480616 on the OpenStack code-review system.
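On releases that include that change, the same setting lives under the [api] section of nova.conf instead, e.g.:
[api]
dhcp_domain =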
I'm attempting to re-run a state from another state. I'm not using watch or watch_in etc. because I want it to run each time. I configure all my nginx virtual hosts, and then at the end another state called nginx-certs runs; the relevant portion is here:
nginx-frontend:
  module.run:
    - name: state.sls
    - mods:
      - nginx-frontend
During the highstate I see the state ID is executed, but it has no comments and does not show that it re-runs that state; it just completes with Result: True. I can then jump to the Salt master and run
sudo salt webserver state.sls nginx-certs
and when it hits nginx-frontend, it does reload all of the virtual hosts, putting the new cert in the config.
I'm curious why this does not run in the highstate.
I have attempted all sorts of different variations of the simple block outlined above. This one works, but not in the highstate, which is what I'm trying to fix.
If you wonder why I do it this way: all certificates for production and staging terminate at HAProxy, and nginx only serves 80/http1 and 81/h2, but when building out dev servers I want to assign the cert directly to the server, as it will be public facing. I need to build out the virtual hosts first to get port 80 open, which is used for Let's Encrypt. Then, when the cert is available, I update the nginx vhosts' listen directive and cert paths.
From what I understand, you have one server that you want temporarily configured with Nginx on port 80, then you generate its certificate with letsencrypt, and then you change the Nginx configuration to listen on port 443.
What you can do is:
have one state which installs and configures Nginx to listen on port 80
have another state which installs/configures/runs letsencrypt
a third state which configures Nginx as you want it to be at the end [1]
then you just include them in Salt so they run in that specific order, like
# custom_nginx.sls
include:
  - temp_nginx_on_port_80
  - letsencrypt_cert
  - nginx
[1] For this I think it's better to use a formula like the one from the community, https://github.com/saltstack-formulas/nginx-formula/, and configure it with pillar data. Obviously, if you use it for step 3, you won't be able to use it for step 1 (or at least I don't see right now how).
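If you want the ordering to be explicit rather than relying on include order, you can also add a requisite on the whole included SLS. A minimal sketch, assuming the state file names from the include above and a hypothetical certbot invocation:
# letsencrypt_cert.sls
obtain-cert:
  cmd.run:
    - name: certbot certonly --webroot -w /var/www/html -d dev.example.com
    - require:
      - sls: temp_nginx_on_port_80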
I'm having a hard time configuring an OpenStack environment based on the All-In-One Single Machine installer for bridged networking in my LAN.
My objective is to SSH into the instances created in OpenStack from my LAN.
The server is an Ubuntu 16.04 LTS with minimal installation and OpenSSH. The network configuration of the server is:
auto enp3s0
iface enp3s0 inet static
    address 10.4.4.1
    netmask 255.255.255.0
    gateway 10.4.4.254
    broadcast 10.4.4.255
    network 10.4.4.0
    dns-nameservers 10.4.1.12 10.4.1.10
Basically my network details are the following:
LAN 10.4.4.0
MASK 255.255.255.0
Gateway/DHCP Server 10.4.4.254
The local.conf file I've used for deploying DevStack is the following:
# Sample ``local.conf`` for user-configurable variables in ``stack.sh``
# NOTE: Copy this file to the root DevStack directory for it to work properly.
# ``local.conf`` is a user-maintained settings file that is sourced from ``stackrc``.
# This gives it the ability to override any variables set in ``stackrc``.
# Also, most of the settings in ``stack.sh`` are written to only be set if no
# value has already been set; this lets ``local.conf`` effectively override the
# default values.
# This is a collection of some of the settings we have found to be useful
# in our DevStack development environments. Additional settings are described
# in https://docs.openstack.org/devstack/latest/configuration.html#local-conf
# These should be considered as samples and are unsupported DevStack code.
# The ``localrc`` section replaces the old ``localrc`` configuration file.
# Note that if ``localrc`` is present it will be used in favor of this section.
[[local|localrc]]
# Minimal Contents
# ----------------
# While ``stack.sh`` is happy to run without ``localrc``, devlife is better when
# there are a few minimal variables set:
# If the ``*_PASSWORD`` variables are not set here you will be prompted to enter
# values for them by ``stack.sh`` and they will be added to ``local.conf``.
FLOATING_RANGE=10.4.4.192/27
FIXED_RANGE=192.168.0.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=enp3s0
ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
# ``HOST_IP`` and ``HOST_IPV6`` should be set manually for best results if
# the NIC configuration of the host is unusual, i.e. ``eth1`` has the default
# route but ``eth0`` is the public interface. They are auto-detected in
# ``stack.sh`` but are often indeterminate on later runs due to the IP moving
# from an Ethernet interface to a bridge on the host. Setting it here also
# makes it available for ``openrc`` to include when setting ``OS_AUTH_URL``.
# Neither is set by default.
HOST_IP=10.4.4.1
#HOST_IPV6=2001:db8::7
# Logging
# -------
# By default ``stack.sh`` output only goes to the terminal where it runs. It can
# be configured to additionally log to a file by setting ``LOGFILE`` to the full
# path of the destination log file. A timestamp will be appended to the given name.
LOGFILE=$DEST/logs/stack.sh.log
# Old log files are automatically removed after 7 days to keep things neat. Change
# the number of days by setting ``LOGDAYS``.
LOGDAYS=2
# Nova logs will be colorized if ``SYSLOG`` is not set; turn this off by setting
# ``LOG_COLOR`` false.
#LOG_COLOR=False
# Using milestone-proposed branches
# ---------------------------------
# Uncomment these to grab the milestone-proposed branches from the
# repos:
#CINDER_BRANCH=milestone-proposed
#GLANCE_BRANCH=milestone-proposed
#HORIZON_BRANCH=milestone-proposed
#KEYSTONE_BRANCH=milestone-proposed
#KEYSTONECLIENT_BRANCH=milestone-proposed
#NOVA_BRANCH=milestone-proposed
#NOVACLIENT_BRANCH=milestone-proposed
#NEUTRON_BRANCH=milestone-proposed
#SWIFT_BRANCH=milestone-proposed
# Using git versions of clients
# -----------------------------
# By default clients are installed from pip. See LIBS_FROM_GIT in
# stackrc for details on getting clients from specific branches or
# revisions. e.g.
# LIBS_FROM_GIT="python-ironicclient"
# IRONICCLIENT_BRANCH=refs/changes/44/2.../1
# Swift
# -----
# Swift is now used as the back-end for the S3-like object store. Setting the
# hash value is required and you will be prompted for it if Swift is enabled
# so just set it to something already:
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
# For development purposes the default of 3 replicas is usually not required.
# Set this to 1 to save some resources:
SWIFT_REPLICAS=1
# The data for Swift is stored by default in (``$DEST/data/swift``),
# or (``$DATA_DIR/swift``) if ``DATA_DIR`` has been set, and can be
# moved by setting ``SWIFT_DATA_DIR``. The directory will be created
# if it does not exist.
SWIFT_DATA_DIR=$DEST/data
At the end of the deployment I'm able to ping my LAN from the instance and do an nslookup on google.com, for example, but I can't do the reverse: ping/SSH/telnet to the instance in OpenStack.
The security group permits all traffic, all ICMP ingress/egress, SSH from everywhere.
I've tried to telnet from the OpenStack instance to my local computer, and the connection shows up with the IP of the OpenStack host, not the instance. So I'm missing something in the network topology.
netstat -ant | grep 1716
tcp6 0 0 :::1716 :::* LISTEN
tcp6 0 0 10.4.3.34:1716 10.4.4.1:42992 ESTABLISHED
Is there any type of network deployment I'm missing?
Any advice would be much appreciated!
If you are trying to access your instances from the "outside", you will need to create a floating IP pool and assign a floating IP to one of your instances.
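With the FLOATING_RANGE configured above, that would look something like this (a sketch; the external network name "public", the server name, and the address are placeholders for whatever your deployment actually uses):
openstack floating ip create public
openstack server add floating ip my-instance 10.4.4.193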
I have an ansible host file (called my_host_file) similar to this:
[my_group_name]
MY_PUBLIC_IP_FOR_VM_XYZ
Then I am attempting a few different approaches in a YAML playbook (called my_playbook.yml) similar to this:
---
- hosts: my_group_name
  sudo: yes
  tasks:
    - debug: var=hostvars
    - setup:
      register: allfacts
    - debug: var=allfacts
    - debug: var=ansible_default_ipv4.address
    - debug: var=ansible_hostname
    - command: bash -c "dig +short myip.opendns.com @resolver1.opendns.com"
      register: my_public_ip_as_ansible_var
I run everything like this: ansible-playbook -v -i my_host_file my_playbook.yml
I would like to get the public IP address from the my_host_file file (MY_PUBLIC_IP_FOR_VM_XYZ) at runtime in some way other than running the dig command against OpenDNS and storing the result in the variable my_public_ip_as_ansible_var.
After all, this address has been used by Ansible itself to establish the SSH session, so it must be stored somewhere.
I cannot find this information:
in hostvars (actually I can find it there, but I can also see all the other hosts, so I have no way to tell which entry corresponds to the current SSH session)
in allfacts (registered from setup:), which only contains the private-network IP address, among lots of other useful info about the VM like disk size, networking, OS kernel version, etc.
in ansible_default_ipv4.address (this is the IP on the private network)
in ansible_hostname (this is the host name, not the public IP I've used in my_host_file)
Is there a cleaner / more Ansible-ish way of getting the host used for the SSH session, as it comes from my_host_file?
inventory_hostname : host name declared in your inventory (can be the IP, the DNS name, or a logical name)
inventory_hostname_short : the same, but with everything after the first dot removed
ansible_nodename : hostname of the host (result of the command hostname)
ansible_hostname : short hostname of the host (result of the command hostname --short)
ansible_fqdn : full hostname of the host, with domain (result of the command hostname --fqdn)
ansible_default_ipv4.address : IPv4 address used to reach 8.8.8.8 from the host (i.e. the address on the default-route interface)
ansible_ethX.ipv4.address : IPv4 address of the ethX interface of the host
ansible_ssh_host : hostname or IP used to access the host with SSH, if defined in the inventory
Example:
# hosts
[mygroup]
myremote.foo.bar ansible_ssh_host=my-machine.mydomain.com
inventory_hostname: myremote.foo.bar
inventory_hostname_short: myremote
ansible_nodename: my-host
ansible_hostname: my-host
ansible_fqdn: my-host.domain.local
ansible_default_ipv4.address: 1.2.3.4
ansible_eth1.ipv4.address: 5.6.7.8
ansible_ssh_host: my-machine.mydomain.com
To get the host alias from the inventory file you would use the inventory_hostname variable.
There is also the ansible_host variable, because the inventory alias and the actual host may differ.
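For example, a minimal playbook to print both values (a sketch, reusing the group name from the question; the default filter covers inventories that don't set ansible_host explicitly):
---
- hosts: my_group_name
  tasks:
    - debug: msg="inventory_hostname={{ inventory_hostname }} ansible_host={{ ansible_host | default(inventory_hostname) }}"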
I have tried several combinations of proposed configurations I found either in the official documentation or on the internet. However, I keep getting the same error shown below regardless of the configuration.
[2016-12-14T13:47:37,706][INFO ][o.e.p.PluginsService ] [vc6lzXh] no plugins loaded
[2016-12-14T13:47:43,048][INFO ][o.e.n.Node ] initialized
[2016-12-14T13:47:43,054][INFO ][o.e.n.Node ] [vc6lzXh] starting ...
[2016-12-14T13:47:43,495][INFO ][o.e.t.TransportService ] [vc6lzXh] publish_address {192.168.34.84:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-12-14T13:47:43,514][INFO ][o.e.b.BootstrapCheck ] [vc6lzXh] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2016-12-14T13:47:43,538][INFO ][o.e.n.Node ] [vc6lzXh] stopping ...
[2016-12-14T13:47:43,680][INFO ][o.e.n.Node ] [vc6lzXh] stopped
[2016-12-14T13:47:43,681][INFO ][o.e.n.Node ] [vc6lzXh] closing ...
[2016-12-14T13:47:43,700][INFO ][o.e.n.Node ] [vc6lzXh] closed
Elasticsearch v5.1.1 is installed in a virtual machine which has a bridged IP address, 192.168.34.84.
The configurations I've tried are as follows:
1.
network.host, 192.168.34.84
2.
network.host: 192.168.34.84
network.public_host: 192.168.34.84
3.
network.bind_host: 192.168.34.84
4.
network.bind_host: 0.0.0.0
network.publish_host: 192.168.34.84
None of these worked for me. I guess there are significant changes in v5.1.1. Any help?
According to the Bootstrap Checks, make sure you've got the important settings set on your machine.
Set your network.host to something like this; network.bind_host and network.publish_host pick up the value of network.host by default:
network.host: 192.168.34.84
Try having only the above property set, without the others. Maybe you could have a look at these for more:
Blog Post
Network Settings
EDIT:
So according to your logs, you have to increase vm.max_map_count to 262144, which you set as the root user on your machine:
sysctl -w vm.max_map_count=262144
Have a look at the details about virtual memory here.
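To make the setting survive a reboot, persist it with standard sysctl configuration (plain Linux behavior, nothing Elasticsearch-specific):
# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
vm.max_map_count=262144
Then reload it with sysctl -p.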
Hope it helps!
After doing some research and trying a few solutions, I found this one working. In our case we wanted to run Elasticsearch on one of our dev servers instead of the local machine.
network.host
The node will bind to this hostname or IP address and publish (advertise) this host to other nodes in the cluster. Accepts an IP address, hostname, a special value, or an array of any combination of these. Note that any values containing a : (e.g., an IPv6 address or containing one of the special values) must be quoted because : is a special character in YAML. 0.0.0.0 is an acceptable IP address and will bind to all network interfaces. The value 0 has the same effect as the value 0.0.0.0.
Open the configuration file elasticsearch.yml and set:
network.host: "192.168.1.1"  # your desired IP address
After changing the value, restart your Elasticsearch server and it will resolve the issue.
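Once it is back up, you can verify the bind from another machine on the network (9200 is Elasticsearch's default HTTP port):
curl http://192.168.1.1:9200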
Hope it helps.
According to the plugin:
The following environment variables are used
ping_args - Arguments to ping (default "-c 2")
ping_args2 - Arguments after the host name (required for Solaris)
ping - Ping program to use
host - Host to ping
Configuration example
[multiping]
env.host www.example.org mail.example.org
Where do I specify the [multiping] section?
You should write this configuration into a new file located in
/etc/munin/plugin-conf.d
or add these two lines at the end of the file
/etc/munin/plugin-conf.d/munin-node
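For example (a sketch reusing the hosts from the configuration example above; the restart command depends on your init system):
# /etc/munin/plugin-conf.d/multiping
[multiping]
env.host www.example.org mail.example.org
Then restart munin-node so the change is picked up:
service munin-node restart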