Ucarp update switch's arp cache - networking

I'm using ucarp over linux bonding for high availability and automatic failover of two servers.
Here are the commands I used on each server to start ucarp:
Server 1:
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.229 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
Server 2:
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.242 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
and the content of the scripts :
vip-up.sh:
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
vip-down.sh:
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr del "$2"/24 dev "$1"
Everything works well and the servers switch from one to another correctly when the master becomes unavailable.
The problem arises when I unplug both servers from the switch for too long (approximately 30 min). While unplugged they both think they are master,
and when I plug them back in, the one with the lowest IP address tries to stay master by sending gratuitous ARPs. The other one switches to backup as expected, but I'm unable to access the master through its virtual IP.
If I unplug the master, the second server goes from backup to master and is accessible through its virtual IP.
My guess is that the switch "forgets" about my servers when they are disconnected for too long, and when I reconnect them, a backup-to-master transition is needed to correctly update the switch's ARP cache, even though the gratuitous ARPs sent by the master should do the job. Note that restarting ucarp on the master does fix the problem, but I would need to restart it every time it is disconnected for too long...
Any idea why it does not work as I expected and how I could solve the problem?
Thanks.
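(For illustration only, a hedged sketch rather than part of the original setup: one commonly suggested workaround is to announce the VIP explicitly from the up-script with arping, assuming the iputils arping is installed, so the switch relearns the MAC/IP mapping when the link comes back.)
#!/bin/sh
# hypothetical vip-up.sh variant (assumes iputils arping is available)
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
# -U sends unsolicited (gratuitous) ARP, -c 3 sends three announcements on the carp interface
/sbin/arping -U -c 3 -I "$1" "$2"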

Related

Routing all console traffic through SOCKS

I'm running a couple of bash scripts from the console on my local computer, they need to access the internet, and I would like them to appear as if they're running on a server I own, so I set up a SOCKS connection:
ssh -D 8123 -f -C -q -N user@IP
but then when I check my IP with
curl https://ipinfo.io/ip
my IP is still the same.
What should I do so that all the traffic and all the scripts I run in the console use the socks tunnel once it has been created?
Use
export http_proxy=socks5://127.0.0.1:8123 https_proxy=socks5://127.0.0.1:8123
after you set up your SOCKS tunnel.
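Putting it together (using the same port and commands from the question), the flow looks roughly like this; note that only programs which honor these environment variables will use the tunnel:
ssh -D 8123 -f -C -q -N user@IP
export http_proxy=socks5://127.0.0.1:8123 https_proxy=socks5://127.0.0.1:8123
curl https://ipinfo.io/ip   # should now print the server's IP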

Find out which network interface belongs to docker container

Docker creates these virtual ethernet interfaces veth[UNIQUE ID] listed in ifconfig. How can I find out which interface belongs to a specific docker container?
I want to listen to the tcp traffic.
To locate the interface:
In my case, getting the value from the container looked like this (check eth0 too):
$ docker exec -it my-container cat /sys/class/net/eth1/iflink
123
And then:
$ ip ad | grep 123
123: vethd3234u4@if122: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
Check with tcpdump -i vethd3234u4
Reference about mysterious iflink from http://lxr.free-electrons.com/source/Documentation/ABI/testing/sysfs-class-net:
What: /sys/class/net/<iface>/iflink
Date: April 2005
KernelVersion: 2.6.12
Contact: netdev@vger.kernel.org
Description:
Indicates the system-wide interface unique index identifier the
interface is linked to. Format is decimal. This attribute is
used to resolve interfaces chaining, linking and stacking.
Physical interfaces have the same 'ifindex' and 'iflink' values.
Based on the provided answer (which worked for me), I made this simple bash script:
#!/bin/bash
# List each running container together with the host-side veth it is attached to.
containers=$(sudo docker ps --format "{{.ID}}|{{.Names}}")
interfaces=$(sudo ip ad)
for x in $containers
do
  name=$(echo "$x" | cut -d '|' -f 2)
  id=$(echo "$x" | cut -d '|' -f 1)
  # iflink of eth0 inside the container = ifindex of the matching veth on the host
  ifaceNum="$(echo $(sudo docker exec -it "$id" cat /sys/class/net/eth0/iflink) | sed s/[^0-9]*//g):"
  ifaceStr=$(echo "$interfaces" | grep "$ifaceNum" | cut -d ':' -f 2 | cut -d '@' -f 1)
  echo -e "$name: $ifaceStr"
done
My answer is more of an improvement on this important topic: it doesn't directly help to "find out which network interface belongs to a docker container", but, as the author noted, he wants "to listen to the tcp traffic" inside the docker container, and that is what I'll try to help with while you troubleshoot the network.
Since veth network devices are all about network namespaces, it is useful to know that we can execute a program in another namespace via the nsenter tool as follows (remember: you need privileged permissions (sudo/root) to do that):
Get the ID of the container whose traffic you want to capture; for this example it will be 78334270b8f8.
Then we need the PID of that containerized application (I assume you are running only one network-related process inside the container and want to capture its traffic; otherwise this approach is hard to apply):
sudo docker inspect 78334270b8f8 | grep -i pid
For example, the output for the pid will be 111380 - that's the PID of your containerized app; you can also check it via the ps command (ps aux | grep 111380), just out of curiosity.
The next step is to check which network interfaces you have inside your container:
sudo nsenter -t 111380 -n ifconfig
This command returns the list of network devices in the network namespace of the containerized app (ifconfig does not need to be installed inside your container, only on your node/machine).
For example, say you need to capture traffic on interface eth2 and filter it to TCP destination port 80 (it may vary, of course); use this command:
sudo nsenter -t 111380 -n tcpdump -nni eth2 -w nginx_tcpdump_test.pcap 'tcp dst port 80'
Remember that in this case you do not need the tcpdump tool to be installed inside your container.
Then, after capturing packets, the .pcap file will be available on your machine/node; to read it, use any tool you prefer, e.g. tcpdump -r nginx_tcpdump_test.pcap
Pros of this approach:
no need to have network tools inside the container, only on the docker node
no need to search for a mapping between network devices in the container and on the node
Cons:
you need a privileged user on the node/machine to run the nsenter tool
One-liner of the solution from @pbaranski
num=$(docker exec -i my-container cat /sys/class/net/eth0/iflink | tr -d '\r'); ip ad | grep -oE "^${num}: veth[^@]+" | awk '{print $2}'
If you need to find this out for a container that does not include cat, then try this tool: https://github.com/micahculpepper/dockerveth
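Another hedged option, assuming nsenter and ip are available on the host (as in the nsenter approach above): inside the container's network namespace, ip shows eth0 with an @ifNN suffix, and NN is the ifindex of the matching veth on the host, so nothing has to run inside the container at all:
PID=$(docker inspect --format '{{.State.Pid}}' my-container)
sudo nsenter -t "$PID" -n ip -o link show eth0
# prints something like: 2: eth0@if123: <...>  -> the host-side veth has ifindex 123
ip -o link | grep '^123:'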
You can also read the interface names via /proc/PID/net/igmp, like this (container name as argument 1):
#!/bin/bash
# $1: container name
NAME=$1
PID=$(docker inspect $NAME --format "{{.State.Pid}}")
# /proc/$PID/net/igmp lists the interface indices and names inside the container
while read iface id; do
  [[ "$iface" == lo ]] && continue
  # the host-side veth carries the container-side ifindex as its @if<id> suffix
  veth=$(ip -br addr | sed -nre "s/(veth.*)@if$id.*/\1/p")
  echo -e "$NAME\t$iface\t$veth"
done < <(</proc/$PID/net/igmp awk '/^[0-9]+/{print $2 " " $1;}')

Need to Run batch script in UNIX server and display the output through vbscript

I am currently developing a new VBScript to execute a shell script (through the PuTTY software) on a UNIX server:
Set shell = WScript.CreateObject("WScript.Shell")
shell.Exec "D:\Putty.exe hostname -l username -pw password 1.sh"
I am getting a connection refused error.
When I run the command below without my script (1.sh):
shell.Exec "D:\Putty.exe hostname -l username -pw password"
the connection is established without any issues.
Also, I just want to extract the output; once extracted, the session should close automatically.
This doesn't work with putty.exe. PuTTY does, however, have a dedicated program for this kind of thing, called plink.exe - there you can pass commands and read the output just as you would expect, and your example should work just as you specified it.
PuTTY Link: command-line connection utility
Release 0.63
Usage: plink [options] [user@]host [command]
("host" can also be a PuTTY saved session name)
Options:
-V print version information and exit
-pgpfp print PGP key fingerprints and exit
-v show verbose messages
-load sessname Load settings from saved session
-ssh -telnet -rlogin -raw -serial
force use of a particular protocol
-P port connect to specified port
-l user connect with specified username
-batch disable all interactive prompts
The following options only apply to SSH connections:
-pw passw login with specified password
-D [listen-IP:]listen-port
Dynamic SOCKS-based port forwarding
-L [listen-IP:]listen-port:host:port
Forward local port to remote address
-R [listen-IP:]listen-port:host:port
Forward remote port to local address
-X -x enable / disable X11 forwarding
-A -a enable / disable agent forwarding
-t -T enable / disable pty allocation
-1 -2 force use of particular protocol version
-4 -6 force use of IPv4 or IPv6
-C enable compression
-i key private key file for authentication
-noagent disable use of Pageant
-agent enable use of Pageant
-m file read remote command(s) from file
-s remote command is an SSH subsystem (SSH-2 only)
-N don't start a shell/command (SSH-2 only)
-nc host:port
open tunnel in place of session (SSH-2 only)
-sercfg configuration-string (e.g. 19200,8,n,1,X)
Specify the serial configuration (serial only)
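For example (assuming 1.sh is a local file containing the commands to run remotely; the hostname/username/password placeholders are from the question), the original line could become:
plink.exe -ssh hostname -l username -pw password -m 1.sh
or, if 1.sh already lives on the UNIX server:
plink.exe -ssh hostname -l username -pw password "sh 1.sh"
Since plink writes the remote command's output to standard output, the WshScriptExec object returned by shell.Exec can read it via its StdOut stream.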

Setting Up Docker Dnsmasq

I'm trying to set up a docker dnsmasq container so that I can have all my docker containers look up the domain names rather than having hard-coded IPs (if they are on the same host). This fixes an issue with the fact that one cannot alter the /etc/hosts file in docker containers, and this allows me to easily update all my containers in one go, by altering a single file that the dnsmasq container references.
It looks like someone has already done the hard work for me and created a dnsmasq container. Unfortunately, it is not "working" for me. I wrote a bash script to start the container as shown below:
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p='127.0.0.1:53:5353/udp' \
-d sroegner/dnsmasq
Before running that, I created the dnsmasq.hosts directory and inserted a single file within it called hosts.txt with the following contents:
192.168.1.3 database.mydomain.com
Unfortunately whenever I try to ping that domain from within:
the host
the dnsmasq container
another container on the same host
I always receive the ping: unknown host error message.
I tried starting the dnsmasq container without daemon mode so I could debug its output, which is below:
dnsmasq: started, version 2.59 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
dnsmasq: reading /etc/resolv.dnsmasq.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /dnsmasq.hosts//hosts.txt - 1 addresses
I am guessing that I have not specified the -p parameter correctly when starting the container. Can somebody tell me what it should be for other docker containers to lookup the DNS, or whether what I am trying to do is actually impossible?
The build script for the docker dnsmasq service needs to be changed in order to bind to your server's public IP, which in this case is 192.168.1.12 on my eth0 interface.
#!/bin/bash
NIC="eth0"
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
MY_IP=$(ifconfig $NIC | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}')
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p=$MY_IP:53:5353/udp \
-d sroegner/dnsmasq
On the host (in this case Ubuntu 12), you need to update /etc/resolv.conf or the /etc/network/interfaces file so that your public IP (on the eth0 or eth1 device) is registered as the nameserver.
You may want to set a secondary nameserver to Google's for whenever the container is not running, by changing the line to dns-nameservers xxx.xxx.xxx.xxx 8.8.8.8 (i.e. no comma and no extra line).
If you updated the /etc/network/interfaces file, you then need to restart your networking service with sudo /etc/init.d/networking restart so that it auto-updates the /etc/resolv.conf file that docker copies into the container during the build.
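A minimal sketch of the relevant /etc/network/interfaces stanza (the addresses below are assumptions based on the example IP above; the gateway in particular is hypothetical, so adapt it to your network):
auto eth0
iface eth0 inet static
    address 192.168.1.12
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.12 8.8.8.8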
Now restart all of your containers
sudo docker stop $CONTAINER_ID
sudo docker start $CONTAINER_ID
This causes their /etc/resolv.conf files to update so they point to the new nameserver settings.
DNS lookups in all your docker containers (that you built since making the changes) should now work using your dnsmasq container!
As a side note, this means that docker containers on other hosts can also take advantage of your dnsmasq service on this host, as long as their host's nameserver settings are set to use this server's public IP.
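For example, another host could point at it with an /etc/resolv.conf along these lines (192.168.1.12 being the example IP from above, 8.8.8.8 as a fallback):
nameserver 192.168.1.12
nameserver 8.8.8.8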

bidirectional encrypted communication using spiped for port forwarding

I would like to establish bidirectional encrypted communication between two machines using spiped (http://www.tarsnap.com/spiped.html) but I suspect that this is really a question about port forwarding... here's what I have working thus far (where my local machine is OS X Mavericks, and the remote is a Ubuntu 12.04 Virtualbox VM):
Remotely (listen on 8025 for external requests and redirect to 8000,
where nc displays on stdout):
remote % killall spiped
remote % spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:8000' -k keyfile
remote % while true; do nc -l 8000; done
Then, locally (listen on 8001 locally and redirect to 8025, where it is sent to the remote machine):
local % killall spiped
local % spiped -e -s '[127.0.0.1]:8001' -t '[192.168.56.10]:8025' -k keyfile
Now when I do the following, "hello" is printed to stdout remotely:
local % echo hello | nc 127.0.0.1 8001
All of this is great. But what about sending data from the remote machine and receiving it locally? I naively assume I can do this remotely:
remote % echo hello | nc 127.0.0.1 8000
And read the data locally with
local % nc -l 8001
But nc does not receive any data locally. I assume I am fundamentally misunderstanding something. In the absence of specific answers, can anyone suggest resources to read up on relevant topics? I'm not looking for a solution using an ssh tunnel - I know how to do that.
In order to provide bi-directional communication with spiped you will need to set up the following on both machines:
A server daemon using the pre-shared key which forwards to the requested local service
A client which sends traffic using the same pre-shared key to the desired spiped port
One listens & one receives on both systems. For more information take a look at the source code for the client & for the server.
You can run the spiped service on both systems but each will require manual (or scripted) connections using the spipe client.
For example using the server (on both machines you would run the following):
spiped {-e | -d} -s <source socket> -t <target socket> -k <key file>
[-DfFj] [-n <max # connections>] [-o <connection timeout>] [-p <pidfile>]
[{-r <rtime> | -R}]
And on the clients wishing to communicate (bi-directionally) with each other you would have to manually invoke the client:
spipe -t <target socket> -k <key file> [-fj] [-o <connection timeout>]
Or, as a real-world example using your setup (two services bound to 8025, forwarding to nc on 8000):
remote % spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:8000' -k keyfile
remote % while true; do nc -l 8000; done
local % spiped -d -s '[0.0.0.0]:8025' -t '[127.0.0.1]:8000' -k keyfile
local % while true; do nc -l 8000; done
Each (remote & local) runs the following (spiped client bound locally to 8001, sending to the other side's server on 8025):
remote % spiped -e -s '[127.0.0.1]:8001' -t '[192.168.56.10]:8025' -k keyfile
local % spiped -e -s '[127.0.0.1]:8001' -t '[192.168.56.11]:8025' -k keyfile
Sending data to 8001 on both remote & local forwards it to local & remote respectively:
remote % echo "hello client" | nc 127.0.0.1 8001
local % echo "hello server" | nc 127.0.0.1 8001
Listening on each:
remote % nc -l 8001
local % nc -l 8001
This reflects how the software was designed: to protect the transport layer of the tarsnap backup software, which only requires the payloads to be encrypted TO the service.
Their example within the documentation for protecting the SSH daemon further illustrates this by making use of the 'ProxyCommand' option for SSH. Eg:
You can also use spiped to protect SSH servers from attackers: Since
data is authenticated before being forwarded to the target, this can
allow you to SSH to a host while protecting you in the event that
someone finds an exploitable bug in the SSH daemon -- this serves the
same purpose as port knocking or a firewall which restricts source IP
addresses which can connect to SSH. On the SSH server, run
dd if=/dev/urandom bs=32 count=1 of=/etc/ssh/spiped.key
spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/ssh/spiped.key
then copy the server's /etc/ssh/spiped.key to
~/.ssh/spiped_HOSTNAME_key on your local system and add the lines
Host HOSTNAME
    ProxyCommand spipe -t %h:8022 -k ~/.ssh/spiped_%h_key
to the ~/.ssh/config file. This will cause "ssh HOSTNAME" to
automatically connect using the spipe client via the spiped daemon;
you can then firewall off all incoming traffic on port tcp/22.
For a detailed list of the command-line options to spiped and spipe,
see the README files in the respective subdirectories.
