Accessing nginx on a virtual machine from another computer - nginx

I am currently following a tutorial on using Chef. The teacher uses Vagrant to set up a virtual machine; his Vagrantfile contains this:
config.vm.network :hostonly, "33.33.33.10"
I didn't want to use Vagrant, so I created a VM from scratch and implemented all the other parts of his Vagrantfile manually. However, I'm not quite sure what this line actually does. He then goes on to update his hosts file, /etc/hosts, to include:
33.33.33.10 kayak.test
Then he can access his nginx server using "kayak.test" in the browser on another computer. I can access my server using its private IP address "192.168.169.129" in the browser, but when I added a name for it to my hosts file I could not access it the same way. My hosts file now looks like this:
127.0.0.1 localhost
127.0.1.1 jack.www.jack.co.uk jack
192.168.169.129 jack.test
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
But I cannot access jack.test in the browser of my other computer. What do I need to do to get the same functionality as he has?
Thanks,
Jack.

This issue is probably related to Avahi, if you have it on your system.
RFC 2606 explicitly reserves .test, .example, .localhost and other TLDs:
http://www.rfc-editor.org/rfc/rfc2606.txt
As a possible workaround you can change the hosts line in /etc/nsswitch.conf to this:
hosts: files mdns4_minimal dns mdns4
Do not forget to restart the Avahi daemon after this.
Source of the workaround: http://avahi.org/wiki/AvahiAndUnicastDotLocal
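On an Ubuntu-style system, the edit-and-restart step would look roughly like this (the systemd unit name is the one Avahi normally installs; adjust if your init system differs):
sudo nano /etc/nsswitch.conf                # set the hosts: line as shown above
sudo systemctl restart avahi-daemon         # or: sudo service avahi-daemon restart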

Thanks @AndreySabitov, you made me see where I was going wrong.
I was getting confused and updating my server's /etc/hosts file rather than my laptop's. When I updated my laptop's /etc/hosts file with
192.168.169.129 jack.srv
I can now access that server from my laptop.
Thanks!
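For anyone checking the same thing later: assuming the laptop's /etc/hosts now contains the 192.168.169.129 jack.srv entry above, a quick sanity check from the laptop could be:
getent hosts jack.srv        # should print 192.168.169.129  jack.srv
ping -c 1 jack.srv           # confirms the name resolves and the VM answers
curl -I http://jack.srv/     # nginx should return HTTP response headers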

Related

data is not seen from the ros robot when published from the workstation (laptop)?

I have followed the tutorial at https://www.youtube.com/watch?v=YMG6D... for setting up and troubleshooting ROS networks. I have configured everything exactly as shown in the video. But when sending rostopic data from the laptop to the robot, the robot cannot receive the data, although rostopic list shows the topic name. I have tried disabling the firewall too, but this has no effect. What could be the possible solution to this?
OS on robot: ubiquityrobot image, Ubuntu 16.04.
OS on laptop: Ubuntu 16.04.
ROS distro: Kinetic
PS:
I have tried using roswtf, and I understand that the two nodes are unable to establish a connection, but I don't know what is blocking the node from publishing data when it is run from the workstation.
However, topic data published from the robot is received on the workstation.
HOSTNAME and IP are set as described in the YouTube link mentioned above.
Edit 1:
workstation configuration
.bashrc -> last lines
export ROS_MASTER_URI=http://ubiquityrobot.local:11311
ROS_HOSTNAME=$(hostname).local
#ROS_IP=0.0.0.0
/etc/hosts -> file
127.0.0.1 localhost
127.0.1.1 maisa-K53E
10.42.0.1 ubiquityrobot.local
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
robot side
.bashrc -> last line
ROS_IP=0.0.0.0
/etc/hosts -> lines
127.0.0.1 localhost
10.42.0.201 maisa-K53E
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 ubiquityrobot ubiquityrobot.local
roswtf output:
https://drive.google.com/file/d/1aSdPzWWtV0FwyZBTCkgjQu1knXXOJFIm/view?usp=sharing
Edit 2:
When publishing data on a topic from the robot, we can see both the topic and the data sent from the robot on the workstation, but not vice versa.
We have solved the problem. It was caused by the firewall. Even though we had disabled the firewall using sudo ufw disable, that was not enough in our case; it seems we had to change the rules using iptables. Interestingly, this is observed only on some Linux machines. The following link helped:
ROS communication from PC to RaspberryPi
Edit:
After disabling the firewall I had not rebooted the computer; it works fine now after a reboot.
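For completeness, the kind of iptables change involved is along these lines; the exact rules are an assumption, since ROS uses TCP port 11311 for the master plus ephemeral ports for node-to-node connections, and 10.42.0.0/24 is inferred from the hosts files above:
# allow incoming ROS traffic from the other machine (run on the machine whose firewall blocks it)
sudo iptables -A INPUT -p tcp --dport 11311 -s 10.42.0.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 1024:65535 -s 10.42.0.0/24 -j ACCEPT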

Why does changing the fully qualified domain name in /etc/hosts not update the fully qualified domain name?

I am running Ubuntu 18.04 in a VM. When I check the hostname using hostname or the fully qualified domain name using hostname -f, hostname --fqdn or hostnamectl I get the default ubuntu for each. I want to permanently update the hostname to host and the fully qualified domain name to host.okd.dns.
I have changed the file /etc/hostname to include only the name host. I have also changed the file /etc/hosts to appear as follows (excluding IPv6 hosts):
127.0.0.1 localhost
127.0.1.1 host.okd.dns
After saving and rebooting the VM, when I check hostname it returns host as expected, but when I check the FQDN using hostname -f, hostname --fqdn or hostnamectl it also returns host only without the .okd.dns appended to it as I would expect.
There seem to be several methods of updating the FQDN for Ubuntu 18.04 and I have tried most of them, including this method, which seems to be the most common. What do I need to do to get the changes to the FQDN to update and stick?
Apparently, I needed to add host after host.okd.dns in the /etc/hosts file. I was sure I had tried this in the past, but perhaps I had made some other error somewhere and the change wasn't reflected. After doing this and rebooting, hostname -f and hostname --fqdn both return host.okd.dns as expected.
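In other words, the working /etc/hosts line is:
127.0.1.1 host.okd.dns host
and the FQDN can then be checked with hostname -f or hostnamectl.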

Map host's /etc/hosts in a Docker container having bridge networking

I have a host machine with some hosts resolution defined in its /etc/hosts file.
On this machine I'm running my Docker containers configured with a bridge network. Since I'm not on the host network, my Docker containers have no access to the host definitions in my machine's /etc/hosts file.
Unfortunately, setting up a DNS server is not an option at the moment.
My question is: how can I make use of those definitions in my containers while using bridge networking? I have read that mounting the host's /etc/hosts file into the container is not a good choice, since that file is handled internally by the Docker daemon.
Do you know how else I can achieve this?
I think it may be better to use the command-line option --add-host to add entries to the container's /etc/hosts.
Here is an excerpt from the official Docker Reference
Managing /etc/hosts
Your container will have lines in /etc/hosts which define the hostname of the container itself as well as localhost and a few other common things. The --add-host flag can be used to add additional lines to /etc/hosts.
$ docker run -it --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts
172.17.0.22 09d03f76bf2c
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
86.75.30.9 db-static
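If you want to carry every entry of the host's /etc/hosts over instead of listing them by hand, a rough sketch (not an official Docker feature, just shell around --add-host; the filtering of comments, localhost and IPv6 lines is an assumption) is:
docker run -it $(awk '!/^#/ && !/localhost|^::|^f[ef]0[02]/ && NF>=2 {printf "--add-host=%s:%s ", $2, $1}' /etc/hosts) ubuntu cat /etc/hosts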
You have 2 options
docker run -v /etc/hosts:/etc/hosts <yourimage>
The problem with this option is that your container's hosts file is overwritten, which will backfire if you want to contact any other service in that Docker network.
Thus I would do
docker run -v /etc/hosts:/tmp/hosts <yourimage>
and use an entrypoint in your image which does something along these lines:
cat /tmp/hosts >> /etc/hosts
a) You may want to filter out some lines, like localhost, or select specific lines using grep.
b) You want to ensure you do not repeat this on every container start, so write a semaphore or similar (a file; check whether the file exists).
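A minimal entrypoint sketch along those lines; the marker-file path and the grep filter are assumptions, adjust as needed:
#!/bin/sh
# append the mounted host entries exactly once per container filesystem
if [ ! -f /etc/.hosts-merged ]; then
    grep -v 'localhost' /tmp/hosts >> /etc/hosts
    touch /etc/.hosts-merged
fi
exec "$@"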

Hostname to IP Conflict in hadoop

I am running Hadoop 2.2.0, installed on Linux 12.04. The sample wordcount and pi-estimator jobs worked correctly. The problem is with the web interfaces.
my /etc/hosts file contains:
127.0.0.1 localhost
127.0.1.1 master
192.168.2.81 master
When I go with "localhost" it works fine, as shown in the first figure, but when I change it to "master" it shows an error, as shown in the following figure.
How do I solve this, and why is it not determining the IP address from the hostname "master"?
Just have these two entries:
127.0.0.1 localhost
192.168.2.81 master
Then it should be fine. With the extra 127.0.1.1 master line, "master" resolves to the loopback address first, so the daemons end up bound to 127.0.1.1 instead of 192.168.2.81.
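You can confirm the fix on the same machine with, for example:
getent hosts master     # should now print 192.168.2.81  master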

Can't form mpi ring

I am facing a problem configuring and running MPI on my systems.
Here is what I tried:
1) I ran 'mpd &' on one machine and then I ran 'mpdtrace -l' on the same machine. I got this as output: "my-lappy_53430 (127.0.1.1)"
2) On another machine I ran 'mpd -h -p 53430 &' and got this error:
akshey-desktop_39993: conn error in connect_lhs: Connection timed out
akshey-desktop_39993 (connect_lhs 924): failed to connect to lhs at 10.2.28.137 52430
akshey-desktop_39993 (enter_ring 879): lhs connect failed
akshey-desktop_39993 (run 267): failed to enter ring
Can you please help with this issue? I tried to ping and SSH into the first machine (on which mpd is running) from the second machine, and both worked.
After this I executed 'mpdcheck' on the first machine and got this output:
* * * first ipaddr for this host (via my-lappy) is: 127.0.1.1
These are the contents of /etc/hosts of the first machine:
127.0.0.1 localhost
127.0.1.1 my-lappy
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Then I ran 'mpdcheck -l' and got this as output:
**********
Your unqualified hostname resolves to 127.0.0.1, which is
the IP address reserved for localhost. This likely means that
you have a line similar to this one in your /etc/hosts file:
127.0.0.1 $uqhn
This should perhaps be changed to the following:
127.0.0.1 localhost.localdomain localhost
**********
Even after changing the first line of /etc/hosts to "127.0.0.1 localhost.localdomain localhost" I still got the same output from 'mpdcheck -l'
Please note that I do not have access to the network's DNS server, and these machines do not have entries in it. (I think this should not be a problem, because we can always use IP addresses instead of hostnames. Isn't that so?)
Two points:
You probably don't want to wire up an MPD ring by hand. Unless you are just doing some troubleshooting with a raw mpd command, you probably want to use mpdboot. Its usage is described in the User's Guide, and a minimal example is sketched below.
Since you are using MPD, you are using MPICH2 or an MPICH2 derivative. Starting with MPICH2 1.1 there is a new process manager available, called "hydra". I encourage you to update to the latest version of MPICH2 that you can and give hydra a try. It is much more robust than MPD and has many more features, including better performance.
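For the mpdboot route, a typical invocation looks roughly like this (host names and count are placeholders):
# mpd.hosts lists one machine per line
printf 'host1\nhost2\n' > mpd.hosts
mpdboot -n 2 -f mpd.hosts
mpdtrace -l             # should list both machines in the ring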
From my personal and recent experience, I would say that
127.0.1.1 my-lappy
must be changed to your LAN address and must match your hostname. You can change the hostname with hostname <new hostname> and/or edit /etc/hostname to make the change permanent.
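For example, if the laptop's LAN address really is the 10.2.28.137 that shows up in the error above (an assumption), the line would become:
10.2.28.137 my-lappy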
Then on host1 you need to start mpd --echo and note the port on which mpd will listen:
mpd_port=N
then on host2 start:
mpd --host=host1 --port=N
It's very important that the /etc/hosts files of all the machines resolve correctly the names to the IPs.
mpdtrace -l will confirm that the ring is correctly setup.
Check for a firewall on your systems that might be blocking the default ports. Turn the firewall off by disabling ipchains and iptables to test whether that is the problem.
In addition, make sure the hostnames/IP addresses are correct and can be resolved successfully.
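A quick, temporary way to rule the firewall out (most distributions restore their saved rules on reboot, so re-enable your rules afterwards):
sudo iptables -L -n     # inspect the current rules
sudo iptables -F        # flush all rules to test with the firewall wide open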
