I've currently got a cluster of servers running CentOS 7 and Docker, and I want to use Keepalived to allocate a floating IP between them. I've configured Keepalived to run a check command on each node which just does curl --silent --fail localhost:80 to ensure an HTTP server is listening.
The web app is running in a Docker container bound to port 80 with --net=host on Docker 1.10.3. Firewalld is also completely disabled.
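For context, the check is wired in with a vrrp_script block roughly like the following (the instance settings and names here are illustrative, not my exact config):

vrrp_script chk_http {
    script "curl --silent --fail localhost:80"
    interval 2   # run the check every 2 seconds
    fall 2       # treat the node as failed after 2 consecutive failures
}

vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.100/24   # the floating IP
    }
    track_script {
        chk_http
    }
}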
The problem I'm having is that the curl never succeeds. If I change the check command to echo '' or anything else that exits 0 (without any network interaction) it works fine, but for some reason the curl doesn't. When I run it from a normal bash terminal it is fine, and echo $? prints 0.
I'm not even sure how to debug this, as Keepalived doesn't provide any documentation on the matter and doesn't seem to log anything about errors coming from the VRRP script.
Any help or suggestions would be greatly appreciated.
Turns out I was using an ancient version of Keepalived. Compiling the latest version from source (rather than using the binary from the CentOS repos) fixed the issue.
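Roughly the steps that involved (the version below is a placeholder for whatever the current release is, and the build dependencies are what I'd expect on CentOS 7):

sudo yum install -y gcc make openssl-devel libnl3-devel
curl -LO https://www.keepalived.org/software/keepalived-2.x.y.tar.gz   # substitute the current release version
tar xzf keepalived-2.x.y.tar.gz
cd keepalived-2.x.y
./configure
make && sudo make install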
I am developing a UEFI app that will need to perform a GET request over HTTP.
As a starting point, I want to make sure my setup is working properly so that the HTTP requests can actually go through.
To that end, I spent the last few days trying to make the http command work in the EFI Shell launched inside QEMU.
I can get the ping command to work properly, but calling:
http httpbin.org/get
Always returns:
Unable to open http protocol on `eth0` - Unsupported
Unable to download the file `/get` on `eth0` - Unsupported
This is my startup.nsh script to configure the EFI Shell's interface:
connect
ifconfig -r eth0
ifconfig -s eth0 dhcp
ifconfig -l eth0
These were my different attempts at invoking QEMU properly:
-netdev user,id=mynet0,hostfwd=tcp::8080-:80 -device e1000,netdev=mynet0 \
-netdev user,id=user.0 -device e1000,netdev=user.0 \
-nic user,ipv6=off,model=e1000,mac=52:54:98:76:54:32 \
And following this guide I tried to set up a TAP device, albeit without luck; I'd launch QEMU with the following configuration:
-netdev tap,id=mynet0,ifname=tap0,script=no,downscript=no -device e1000,netdev=mynet0,mac=52:55:00:d1:55:01 \
Do you have any clue what step I am missing?
Where do you believe I could be failing in making eth0 supported?
Is the tap crucial?
Are you able to make this setup work on your side?
Update:
Very good suggestion @MiSimon, I hadn't realized that the HttpDxe driver wasn't being built with the OvmfPkg.
I have now added its INF to OvmfPkgX64.dsc and OvmfPkgX64.fdf.
However, running drivers displays a duplicate entry:
0000000A D - - 1 - HttpDxe HttpDxe
0000000A ? - - - - HttpDxe HttpDxe
With respect to calling the http command, the error has progressed to:
Downloading 'http://httpbin.org/get'
Unable to download the file '/get' on 'eth0' - Unsupported
The debug log shows:
HttpNotify: Event - 0, EventStatus - Unsupported
Error: Could not retrieve the host address from DNS server.
The tool requires nearly all network drivers to be loaded.
Make sure your image contains the following drivers:
SnpDxe
MnpDxe
ArpDxe
Ip4Dxe/Ip6Dxe
Dhcp4Dxe/Dhcp6Dxe
Udp4Dxe/Udp6Dxe
DnsDxe
TcpDxe
HttpDxe
HttpUtilitiesDxe
All of them can be found in EDK2 inside the NetworkPkg; roughly, the entries to add are sketched below.
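For an IPv4-only setup, a sketch of what that amounts to in OvmfPkgX64.dsc (with matching INF lines in OvmfPkgX64.fdf); the paths assume a recent edk2 tree, and the IPv6 variants follow the same pattern:

NetworkPkg/SnpDxe/SnpDxe.inf
NetworkPkg/MnpDxe/MnpDxe.inf
NetworkPkg/ArpDxe/ArpDxe.inf
NetworkPkg/Ip4Dxe/Ip4Dxe.inf
NetworkPkg/Dhcp4Dxe/Dhcp4Dxe.inf
NetworkPkg/Udp4Dxe/Udp4Dxe.inf
NetworkPkg/DnsDxe/DnsDxe.inf
NetworkPkg/TcpDxe/TcpDxe.inf
NetworkPkg/HttpDxe/HttpDxe.inf
NetworkPkg/HttpUtilitiesDxe/HttpUtilitiesDxe.inf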
I am using Sauce Labs to test my application in a Mac/Chrome configuration, as I am on a Windows machine.
As per the Sauce Labs documentation, I downloaded Sauce Connect Proxy, extracted the file, went to the bin folder on the command line, and executed the command below:
bin/sc -u <sauce_username> -k <sauce_accesskey> -x <sauce_data_center> -i <tunnel_id>
I got the message "Sauce Connect is up, you may start your tests." on the command line, and one tunnel showed as active under the Tunnels tab of my Sauce Labs account.
I started the session by going to Live --> Cross Browser; selected the tunnel, the localhost application URL, the browser (Chrome 90), and Mac Sierra; and clicked Start Session.
The application opened, but it didn't show the features that are on localhost.
Can anyone please help me with this? Is there anything wrong with how I'm setting up the proxy connection? The same application works fine if I open the URL directly on my Windows machine with Chrome.
I found the answer in the Sauce Labs documentation itself. The problem I was having is related to SSL, and here is the solution.
If you don't want any domains to be SSL re-encrypted, you can specify all with the argument (i.e., -B all or --no-ssl-bump-domains all)
Now when I run the command below to start the tunnel, it resolves the issue.
bin/sc -u <sauce_username> -k <sauce_accesskey> -x <sauce_data_center> -i <tunnel_id> -B all
I successfully implemented this, which blocks all internet connections on my Linux machine UNLESS it connects via a specific VPN:
https://www.comparitech.com/blog/vpn-privacy/how-to-make-a-vpn-kill-switch-in-linux-with-ufw/
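For context, the kill switch from that guide boils down to ufw rules roughly of this shape (the VPN server address and port below are placeholders, not my real values):

sudo ufw default deny outgoing
sudo ufw default deny incoming
sudo ufw allow out to <vpn_server_ip> port 1194 proto udp   # allow traffic to the VPN endpoint only
sudo ufw allow out on tun0 from any to any                  # allow everything over the tunnel interface
sudo ufw enable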
If I manually execute openvpn3 session-start --config ~/Desktop/config.ovpn, it successfully connects via the VPN.
I used to have this command in a script (with #!/bin/bash as its header) that ran at device bootup without any issues, UNTIL I configured ufw for the kill switch above (ufw now runs at device bootup).
I use openvpn3, so the instructions in the above tutorial for the openvpn commands didn't work at all.
I even tried using a sleep in my bash script to make it wait a while after bootup. That doesn't work. But if I issue the connection command manually at the command prompt, it works.
Please help! I need it to connect automatically. Much appreciated!
After spending a whole day on this, I figured out a solution. I found an article that guided me: https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/
I set up a service unit using systemd (systemctl) just for that connection command. Here is what my entry looks like:
#/etc/systemd/system/connectvpn.service
[Unit]
Description=Connect VPN
After=ufw.service network.target
Requires=ufw.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/connect
#/usr/local/bin/connect
#!/bin/bash
openvpn3 session-start --config /home/xyz/Desktop/config.ovpn
Working nicely now, connects to the VPN on bootup.
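For completeness, the unit only runs at boot once it is enabled, which as far as I understand also needs an [Install] section that isn't shown above; a sketch under that assumption:

# assumed addition to /etc/systemd/system/connectvpn.service
[Install]
WantedBy=multi-user.target

# then register the unit so it starts at boot
sudo chmod +x /usr/local/bin/connect
sudo systemctl daemon-reload
sudo systemctl enable connectvpn.service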
NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point out where I am going wrong?
Thanks.
Check that the NFS service is started, or restart it:
sudo systemctl status nfs-kernel-server
In my case the service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines.
So I commented out the IP address for one of the machines and restarted nfs-kernel-server using
sudo systemctl restart nfs-kernel-server, and then rebooted the machine.
It worked.
A clarification that might be useful for the dumb (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional data
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit the file /etc/exports, because NFS tries to connect to the deleted machine, fails, and doesn't continue with the next entry; it just dies.
After that you can manually restart as mentioned in other answers.
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where there is type: 'nfs', I added a comma and nfs_version: 4, nfs_udp: false (sketched below).
Here is a more detailed explanation: NFS
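A sketch of what that ends up looking like in the Vagrantfile (the folder paths here are just examples):

config.vm.synced_folder ".", "/vagrant",
  type: "nfs",
  nfs_version: 4,
  nfs_udp: false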
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my NFS server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
which specifically asks for UDP. My server was running, but it was not configured to allow connections over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable UDP:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
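After adding the file, the server presumably needs a restart for the setting to take effect (the unit is called nfs-server on many distributions, nfs-kernel-server on Debian/Ubuntu):

sudo systemctl restart nfs-server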
Try to ping the IP address of the server from the client; if you get a reply, then install the NFS server on the host. Then edit the /etc/exports file, and don't forget to add the port along with the IP address.
I got the solution: make an entry in the NFS server's /etc/nfsmount.conf with Defaultvers=3.
There will be a commented-out # Defaultvers=3 line; just uncomment it and then mount from the NFS client.
The issue will be resolved!
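For reference, the uncommented line sits under the global options section of /etc/nfsmount.conf, roughly like this (section name taken from the stock file shipped with nfs-utils):

[ NFSMount_Global_Options ]
# Default NFS protocol version to use when mounting
Defaultvers=3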
I am trying to access etcd from within a running Docker container. When I run
curl http://172.17.42.1:4001/v2/keys
I get
curl: (7) Failed to connect to 172.17.42.1 port 4001: Connection refused
I have four other hosts where this works fine, but every container on this machine has this problem. I'm really at a loss as to what's going on, and I don't know how to debug it.
My etcd environment variables are
ETCD_ADVERTISE_CLIENT_URLS=http://10.242.10.2:2379
ETCD_DISCOVERY=https://discovery.etcd.io/<token_removed>
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.242.10.2:2380
ETCD_LISTEN_CLIENT_URLS=http://10.242.10.2:2379,http://127.0.0.1:2379,http://0.0.0.0:4001
ETCD_LISTEN_PEER_URLS=http://10.242.10.2:2380
I can also access etcd from the host with
curl http://localhost:4001/v2/keys
So there seems to be some error when routing from the container out to the host. But I can't figure out what it is. Can anyone point me in the right direction?
I observed I had to use both the --advertise-client-urls and --listen-client-urls flags, like so:
./etcd --advertise-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001' --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
Then I was able to successfully do
curl -L http://hostname:2379/version
from any machine that could reach that server and it worked.
It turns out etcd was only listening on localhost:4001 on that machine, which is why I couldn't access it from within a container. This is despite me configuring one of the listen client urls to 0.0.0.0:4001.
It turns out that I had run sudo systemctl enable etcd2, which caused it to run before the cloud-config service ran. As such, etcd started with default configuration instead of the one that I had specified in my cloud config. Running sudo systemctl disable etcd2 fixed the issue.
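A quick way to confirm this kind of fix (the etcd2/cloud-config pairing here is CoreOS-specific; the socket check itself is generic):

sudo systemctl disable etcd2     # let cloud-config bring etcd up with the intended flags
sudo reboot
# after the reboot, the client port should be bound to 0.0.0.0 rather than 127.0.0.1
sudo ss -ltnp | grep 4001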