Boot command in OpenWrt - rtl_tcp

I am using the LuCI interface to add the command at startup, but since it emits output I do not know whether it is recommended to append >> /dev/null:
# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.
#rtl_tcp -a 192.168.1.1
# or
#rtl_tcp -a 192.168.1.1 >> /dev/null
exit 0
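
If rtl_tcp is meant to keep running, rc.local also needs to reach exit 0, so the process should be backgrounded. A minimal sketch (an assumption, not tested on your device) that discards the output:

# Run rtl_tcp detached so rc.local can finish;
# > /dev/null 2>&1 silences both stdout and stderr (>> also works for /dev/null)
rtl_tcp -a 192.168.1.1 > /dev/null 2>&1 &
exit 0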

Related

SSH between N servers using a script

I have n servers like c0001.test.cloud.com, c0002.test.cloud.com, c0003.test.cloud.com, and I want to ssh between them like this:
from server c0001, ssh to c0002 and then exit that server;
come back to c0001, ssh to c0003 and then exit that server.
This way the script should run without requiring any input at runtime, and we can have n servers.
I have written this script:
str1=c0001.test.cloud.com,c0002.test.cloud.com,c0003.test.cloud.com
string="$( cut -d ',' -f 2- <<< "$str1" )"
echo "$string"
for j in $(echo $string | sed "s/,/ /g")
do
ssh appAccount@$j
done
But this script is not running fine. I have also tried passing parameters
like -o StrictHostKeyChecking=no and using a heredoc (<<'ENDSSH'), but it is not working.
Assuming the number of commands you want to run is small, you could:
Create a script of commands that will run from c0001.test.cloud.com to each of the servers. For example, create a file on your local machine called commands.sh with:
hosts="c0002.test.cloud.com c0003.test.cloud.com"
for host in $hosts; do
ssh -o StrictHostKeyChecking=no -q appAccount@$host <command 1> && <command 2>
done
On your local machine, ssh to c0001.test.cloud.com and execute the commands in commands.sh:
ssh -o StrictHostKeyChecking=no -q appAccount@c0001.test.cloud.com 'bash -s' < commands.sh
However, if your requirements become more complex, a more robust solution might be to use a cluster administration tool such as ClusterShell.
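
For instance, a hedged sketch with ClusterShell's clush tool (assuming it is installed and passwordless SSH is set up for appAccount; uptime is an arbitrary example command):

# Run a command on c0002 and c0003 in one call; [2-3] is a clush node range
clush -l appAccount -w c000[2-3].test.cloud.com uptime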

Forwarding logs from file to journald

I have an application on an isolated machine. It writes logs to /var/log/app/log.txt, for example. However, I want it to write its logs to the journald daemon, and I can't change the way the application runs, because it is encapsulated.
That is, I cannot do something like app | systemd-cat.
1) Am I right that all services started with systemd write logs to journald?
2) If so, will the children of a process started by systemd also write their logs to journald?
3) Is there any way to tell journald to take logs from a specific file?
4) If not, are there any workarounds?
Warning: this is not tested.
You could bind mount /dev/stdout onto the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/mount --bind /dev/stdout /var/log/app/log.txt
Or symlink /dev/stdout to the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/ln -s /dev/stdout /var/log/app/log.txt
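
For context, an equally untested unit sketch, assuming a hypothetical service file (the service name and /opt/app/run.sh are placeholders, not from the original); systemd connects a service's stdout to the journal by default:

[Service]
# Hypothetical: replace the app's log file with a link to stdout,
# which systemd wires to the journal for this service
ExecStartPre=/usr/bin/ln -sf /dev/stdout /var/log/app/log.txt
ExecStart=/opt/app/run.sh
StandardOutput=journal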
4) I can only try to help with a workaround:
MY_LOG_FILE=/var/log/app/log.txt
# Create a FIFO (named pipe)
PIPE=/tmp/my_fifo_pipe
mkfifo $PIPE
MY_IDENTIFIER="my_app_name" # just a label for later searching in journalctl
# Start logging to the journal: systemd-cat reads from the pipe
systemd-cat -t $MY_IDENTIFIER -p info < $PIPE &
exec 3>$PIPE # hold a write end open so systemd-cat does not see EOF too early
tail -f $MY_LOG_FILE > $PIPE &
exec 3>&- # closing file descriptor 3 closes the fifo (tail now holds it open)
This is the basic idea; you should now think about timing, i.e. when this needs to be started and when it should be stopped.
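
A hedged wrapper sketch putting the pieces together (untested; paths and identifier are the examples from above):

#!/bin/sh
# Forward an app's log file to journald through a FIFO
MY_LOG_FILE=/var/log/app/log.txt
PIPE=/tmp/my_fifo_pipe
MY_IDENTIFIER="my_app_name"
mkfifo "$PIPE" || exit 1
trap 'rm -f "$PIPE"' EXIT INT TERM # remove the FIFO when the wrapper exits
systemd-cat -t "$MY_IDENTIFIER" -p info < "$PIPE" &
tail -F "$MY_LOG_FILE" > "$PIPE" # -F follows across log rotation; runs until killed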

Find out which network interface belongs to docker container

Docker creates these virtual ethernet interfaces veth[UNIQUE ID] listed in ifconfig. How can I find out which interface belongs to a specific docker container?
I want to listen to the tcp traffic.
To locate the interface:
In my case, getting the value from inside the container looked like this (check eth0 too):
$ docker exec -it my-container cat /sys/class/net/eth1/iflink
123
And then:
$ ip ad | grep 123
123: vethd3234u4@if122: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
Check with tcpdump -i vethd3234u4
Reference for the mysterious iflink, from http://lxr.free-electrons.com/source/Documentation/ABI/testing/sysfs-class-net:
What: /sys/class/net/<iface>/iflink
Date: April 2005
KernelVersion: 2.6.12
Contact: netdev@vger.kernel.org
Description:
Indicates the system-wide interface unique index identifier of the interface it is linked to. Format is decimal. This attribute is used to resolve interface chaining, linking and stacking.
Physical interfaces have the same 'ifindex' and 'iflink' values.
Based on the provided answer (which worked for me), I made this simple bash script:
#!/bin/bash
containers=$(sudo docker ps --format "{{.ID}}|{{.Names}}")
interfaces=$(sudo ip ad)
for x in $containers
do
name=$(echo "$x" | cut -d '|' -f 2)
id=$(echo "$x" | cut -d '|' -f 1)
ifaceNum="$(echo $(sudo docker exec -it "$id" cat /sys/class/net/eth0/iflink) | sed s/[^0-9]*//g):"
ifaceStr=$(echo "$interfaces" | grep "$ifaceNum" | cut -d ':' -f 2 | cut -d '@' -f 1)
echo -e "$name: $ifaceStr"
done
This answer is more of an improvement on this important topic: it doesn't directly help to "find out which network interface belongs to a specific docker container", but, as the author noted, the goal is "to listen to the tcp traffic" inside the container, so I'll try to help with that part of the network troubleshooting.
Since veth network devices are tied to network namespaces, it is useful to know that we can execute a program in another namespace via the nsenter tool, as follows (remember: you need privileged permission (sudo/root) to do that):
Get the ID of the container whose traffic you want to capture; for example, 78334270b8f8.
Then find the PID of the containerized application (I assume you are running only one network-related process inside the container and want to capture its traffic; otherwise this approach is hard to apply):
sudo docker inspect 78334270b8f8 | grep -i pid
For example, if the output shows the pid 111380, that's the PID of your containerized app; out of curiosity you can also check it via ps: ps aux | grep 111380.
The next step is to check which network interfaces you have inside the container:
sudo nsenter -t 111380 -n ifconfig
This command returns the list of network devices in the network namespace of the containerized app (the ifconfig tool only needs to exist on your node/machine, not inside the container).
For example, to capture traffic on interface eth2 and filter it to TCP destination port 80 (it may vary, of course), use this command:
sudo nsenter -t 111380 -n tcpdump -nni eth2 -w nginx_tcpdump_test.pcap 'tcp dst port 80'
Remember that in this case you do not need the tcpdump tool to be installed inside your container.
Then, after capturing, the .pcap file will be available on your machine/node; read it with any tool you prefer, e.g. tcpdump -r nginx_tcpdump_test.pcap.
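
The steps above can be combined into a short sketch (hedged; the container ID is the example from above, and docker inspect can print the PID directly):

# Get the app's PID from docker inspect, then capture inside its netns
PID=$(sudo docker inspect -f '{{.State.Pid}}' 78334270b8f8)
sudo nsenter -t "$PID" -n tcpdump -nni eth2 -w nginx_tcpdump_test.pcap 'tcp dst port 80'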
Pros of this approach:
no need to have network tools inside the container, only on the docker node
no need to search for the mapping between network devices in the container and on the node
Cons:
you need a privileged user on the node/machine to run the nsenter tool
One-liner version of the solution from @pbaranski:
num=$(docker exec -i my-container cat /sys/class/net/eth0/iflink | tr -d '\r'); ip ad | grep -oE "^${num}: veth[^@]+" | awk '{print $2}'
If you need to do this for a container that does not include cat, then try this tool: https://github.com/micahculpepper/dockerveth
You can also read the interface names via /proc/PID/net/igmp, like this (container name as argument 1):
#!/bin/bash
NAME=$1
PID=$(docker inspect $NAME --format "{{.State.Pid}}")
# /proc/$PID/net/igmp lists the container's interfaces with their index;
# the matching host-side veth is named veth...@if<index>
while read iface id; do
[[ "$iface" == lo ]] && continue
veth=$(ip -br addr | sed -nre "s/(veth.*)@if$id.*/\1/p")
echo -e "$NAME\t$iface\t$veth"
done < <(</proc/$PID/net/igmp awk '/^[0-9]+/{print $2 " " $1;}')
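
Usage, assuming the script above is saved as veth-of.sh (a hypothetical name):

./veth-of.sh my-container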

Ucarp: update the switch's ARP cache

I'm using ucarp over linux bonding for high availability and automatic failover of two servers.
Here are the commands I used on each server for starting ucarp:
Server 1 :
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.229 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
Server 2 :
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.242 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
and the content of the scripts:
vip-up.sh :
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
vip-down.sh :
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr del "$2"/24 dev "$1"
Everything works well and the servers switch from one to another correctly when the master becomes unavailable.
The problem arises when I unplug both servers from the switch for too long (approximately 30 minutes). While unplugged they both think they are master,
and when I plug them back in, the one with the lowest IP address tries to stay master by sending gratuitous ARPs. The other one switches to backup as expected, but I'm unable to access the master through its virtual IP.
If I unplug the master, the second server goes from backup to master and is accessible through its virtual IP.
My guess is that the switch "forgets" about my servers when they are disconnected for too long, and when I reconnect them, a backup-to-master transition is needed to correctly update the switch's ARP cache, even though the gratuitous ARPs sent by the master should do the job. Note that restarting ucarp on the master does fix the problem, but I would need to restart it every time it has been disconnected for too long...
Any idea why it does not work as I expected and how I could solve the problem?
Thanks.
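
One hedged workaround sketch, assuming the iputils arping tool is available on the servers (untested): have the up-script itself announce the VIP with a few gratuitous ARPs so the switch relearns the mapping without a failover.

#!/bin/sh
# vip-up.sh variant: announce the VIP right after adding it
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
arping -U -I "$1" -c 3 "$2" # -U sends unsolicited (gratuitous) ARP replies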

Need to run a script on a UNIX server and display the output through VBScript

I am currently developing a new VBScript to execute a shell script (through the PuTTY software) on a UNIX server:
Set shell = WScript.CreateObject("WScript.Shell")
shell.Exec "D:\Putty.exe hostname -l username -pw password 1.sh"
I am getting a connection refused error.
When I run the command below without my script (1.sh):
shell.Exec "D:\Putty.exe hostname -l username -pw password"
the connection is established without any issues.
Also, I just want to extract the output; once it is extracted, the session should close automatically.
This doesn't work with putty.exe. PuTTY does, however, have a dedicated program for this kind of thing, called plink.exe; there you can pass commands and read the output just as you would expect, and your example should work just as you specified it.
PuTTY Link: command-line connection utility
Release 0.63
Usage: plink [options] [user@]host [command]
("host" can also be a PuTTY saved session name)
Options:
-V print version information and exit
-pgpfp print PGP key fingerprints and exit
-v show verbose messages
-load sessname Load settings from saved session
-ssh -telnet -rlogin -raw -serial
force use of a particular protocol
-P port connect to specified port
-l user connect with specified username
-batch disable all interactive prompts
The following options only apply to SSH connections:
-pw passw login with specified password
-D [listen-IP:]listen-port
Dynamic SOCKS-based port forwarding
-L [listen-IP:]listen-port:host:port
Forward local port to remote address
-R [listen-IP:]listen-port:host:port
Forward remote port to local address
-X -x enable / disable X11 forwarding
-A -a enable / disable agent forwarding
-t -T enable / disable pty allocation
-1 -2 force use of particular protocol version
-4 -6 force use of IPv4 or IPv6
-C enable compression
-i key private key file for authentication
-noagent disable use of Pageant
-agent enable use of Pageant
-m file read remote command(s) from file
-s remote command is an SSH subsystem (SSH-2 only)
-N don't start a shell/command (SSH-2 only)
-nc host:port
open tunnel in place of session (SSH-2 only)
-sercfg configuration-string (e.g. 19200,8,n,1,X)
Specify the serial configuration (serial only)
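
Adapted to the question (a hedged sketch; host, credentials and script name are the question's placeholders), the call would become:

D:\plink.exe -batch -l username -pw password hostname ./1.sh

The output can then be read in VBScript from the StdOut stream of the object returned by shell.Exec.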
