Socat: run script in bidirectional tunnel - tcp

I am running a tunnel like this:
socat TCP-LISTEN:9090,fork TCP:192.168.1.3:9090
I would like to run a script that executes code on the strings passing through the tunnel.
The script should not change the strings; it only processes them independently while letting them pass unchanged between both ends.
Is this possible?

You should even be able to alter the communication using this approach:
Prepare a helper script helper.sh which gets executed for each connection:
#!/bin/bash
./inFilter.sh | socat - TCP:192.168.1.3:9090 | ./outFilter.sh
And start listening by using:
socat TCP-LISTEN:9090,fork EXEC:"./helper.sh"
The scripts inFilter.sh and outFilter.sh process the client-to-server and server-to-client directions of the communication, respectively.
Example inFilter.sh:
#!/bin/bash
while read -r l ; do echo "IN(${l})" ; done
Example outFilter.sh:
#!/bin/bash
while read -r l ; do echo "OUT(${l})" ; done
This method should work for line-based text communication (as almost everything is line-buffered).
To make it work for binary protocols, wrapping all processes with stdbuf -i0 -o0 might help (see e.g. stdbuf(1)), and getting rid of the shell read loops is probably a good idea in that case.
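For binary traffic, a minimal sketch of such a helper could look like this (assuming inFilter.sh and outFilter.sh are themselves binary-safe, e.g. small C or Perl filters rather than shell read loops):
#!/bin/bash
# disable stdio buffering on both filters so bytes pass through immediately
stdbuf -i0 -o0 ./inFilter.sh | socat - TCP:192.168.1.3:9090 | stdbuf -i0 -o0 ./outFilter.sh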
Good luck!

Related

Sending AT command through bash script

I'm testing a satellite modem with a USB-to-serial (RS232) converter. I have already tested the "connection", and it works: using minicom I am able to capture data sent from one terminal (running a bash script that echoes random numbers) to another.
To make this modem send things, I must send AT commands to it. What is the best way to do that? Should I just echo the AT command from my bash script, or is there a better way?
#!/bin/bash
while true; do
    number=$RANDOM
    echo "$number" > /dev/ttyUSB0
    sleep 4
done
Thank you for your time!
Since you're talking to a modem, the general approach is to use ModemManager, an application that handles the communication with the modem for you.
If you are unable to use ModemManager or you must use bash for some reason, note that commands must end with a \r\n to be used by the modem. The best way that I have found to do that is to use /bin/echo as follows:
/bin/echo -n -e "AT\r\n" > /dev/ttyUSB0
This ensures that echo will convert the escape characters to the carriage return and newline appropriately.
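If you also want to capture the modem's reply from the same script, here is a minimal sketch (the device path and baud rate are assumptions; adjust them for your setup):
#!/bin/bash
# configure the serial port: raw mode, no local echo (baud rate is an assumption)
stty -F /dev/ttyUSB0 115200 raw -echo
# print whatever the modem sends back, in the background
cat /dev/ttyUSB0 &
reader=$!
# send the command, terminated with CR+LF as the modem expects
/bin/echo -n -e "AT\r\n" > /dev/ttyUSB0
sleep 1   # give the modem a moment to answer
kill "$reader"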
I got a working answer for this from here:
https://unix.stackexchange.com/questions/97242/how-to-send-at-commands-to-a-modem-in-linux
First, using socat:
echo "AT" | socat - /dev/ttyUSB2,crnl
which worked great.
Then I tried atinout:
echo AT | atinout - /dev/ttyACM0 -
Ultimately I chose socat, because atinout isn't in any repository.

Using file locks with rsync

From the rsync manual documentation I see that by using the --rsync-path option, it is possible to specify what program is to be run on the remote machine to start up rsync. In particular, the program could be a wrapper script which calls the actual rsync command in the middle, but which does some actions before and/or after the rsync invocation. One possible interesting use would be to acquire/release a lock (e.g., a flock), so that the operations of rsync at the remote end could be co-ordinated with another process at the far end which is contending for write access to the same files. There could be multiple rsync processes simultaneously holding the shared lock (I am aware of the potential for starvation but am not concerned about that right now). The 'writer' process I'm dealing with would just be changing a few hard links, so it would not block the rsync process for any significant length of time.
I have looked at other co-ordination approaches, e.g., implementing a custom remote locking protocol between the client and server, but they all involve more development work and/or are unsatisfactory for other reasons, which is why I am interested in the wrapper/(f)lock approach.
My questions are:
1) Is this a reasonable way to solve the problem of co-ordinating rsync 'readers' with another, 'writer' process accessing the same directory?
2) Can you also put a wrapper around rsync when using the inetd (or xinetd) daemon approach to running rsync, by adding a line something like the following to /etc/inetd.conf (as per the rsyncd.conf man page):
rsync stream tcp nowait root /usr/bin/rsync rsyncd --daemon
but replacing /usr/bin/rsync with the path to your rsync-lookalike wrapper, which in this case would be a C/C++ program that seizes a lock, forks off rsync, waits for rsync to complete, then releases the lock?
Thanks,
Tom
One potential catch with the wrapper approach: the remote process seems to be called with extra arguments, which are appended to whatever command line you specify with --rsync-path. So if you need to pass your own arguments (such as the lock target), something like the following style is needed.
#! /bin/sh
lock_target=$1
shift
if ! lockfile ${lock_target}.lock ; then exit 1 ; fi
trap "rm -f ${lock_target}.lock" EXIT HUP TERM INT
/usr/bin/rsync "$@"
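For example, a hypothetical invocation from the client side (the wrapper path and lock target are placeholders) would pass the lock target as the first argument, with rsync appending its own --server arguments after it:
rsync -a --rsync-path="/usr/local/bin/rsync-lock-wrapper /remote/data" \
    remotehost:/remote/data/ /local/copy/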
Thanks to the question and the comments. Armed with your ideas I solved it (for me) using --rsync-path, but without any wrapper scripts on the remote host, simply by putting the whole payload script into --rsync-path, with a few tricks.
This particular example uses rsync to pull data from a remote host while holding a flock on that host; e.g. the remote host dumps data periodically while also holding the flock, so dump and pull must not be interleaved.
Points to note
rsync will append its arguments to the end of whatever command you specify in --rsync-path, so the command needs to cope with that; for that I rely on bash shell features on both the pulling and the remote hosts.
any pre- and post-processing on the remote host must not write to STDOUT, because that would corrupt the rsync protocol and rsync would bail. Any error output should go to STDERR, and it will turn up on the pulling host as rsync STDERR output. This is why all the error handling redirects with '1>&2'.
this probably relies on the remote command spawned by rsync being run by bash, because I think good old sh does not support arrays. This works for me between RHEL7 boxes. A possible workaround is proposed at the end.
With that in mind, here is my simplified, concept-only rehash (I've not run this particular script; my full solution has extra layers that would distract from the main point).
The script on the pulling host:
#!/bin/bash

function rsync_wrap() {
    {
        flock --exclusive --timeout ${LOCK_TIMEOUT} 100 || {
            echo "Failed to lock: ${LOCK_TIMEOUT}" 1>&2
            return 1
        }
        # call real rsync with original arguments
        rsync "$@"
        exit_code=$?
        if [ ${exit_code} -eq 0 ]; then
            # Do clean up when success
            # rm -f "${LOCK_FILE}"
            # rm -rf /eg/purge/data
            :
        else
            # Do clean up when failed
            :
        fi
        # Note, the return is important, do not let it fall out
        return ${exit_code}
    } 100<"${LOCK_FILE}"
    echo "Failed to open lock file: ${LOCK_FILE}" 1>&2
    return 1
}

# Define vars
LOCK_FILE=/var/somedir/name.lock; # or /dev/shm/name.lock
LOCK_TIMEOUT=600; # in seconds

# Build remote command, define vars and functions inside the command
remote_cmd="
# this approach deals with crazy chars in variables and function code
$( declare -p LOCK_FILE )
$( declare -p LOCK_TIMEOUT )
$( declare -f rsync_wrap )
rsync_wrap "

local_cmd=(
    rsync
    -a
    --rsync-path="${remote_cmd}"
    # I want to handle network timeouts in SSH, not in rsync,
    # because rsync does not know that waiting for the lock is expected
    -e "ssh -o BatchMode=yes -o ServerAliveCountMax=3 -o ServerAliveInterval=30 ${IDENTITY_FILE:+ -i '${IDENTITY_FILE}'}"
    /remote/source/path
    /local/destination/path/
)

# Do it
"${local_cmd[@]}"
If the remote side executes --rsync-path in something other than bash, then maybe the whole remote command could be wrapped in something like:
remote_cmd="bash -c '${remote_cmd//\'/\'\\\'\'}'"
As per the comments to the original post, it is indeed feasible to use the wrapper approach to implement (f)locks around rsync at the server end.

Alternative ways to issue multiple commands on a remote machine using SSH?

It appears that in this question, the answer was to separate statements with semicolons. However, that could become cumbersome if we get into complex scripting with multiple if statements and complex quoted strings, I would think.
I imagine another alternative would be to simply issue multiple SSH commands one after the other, but again that would be cumbersome, plus I'm not set up for public/private key authentication so this would be asking for passwords a bunch of times.
What I'd ideally like is something much like the interactive shell experience: at one point in the script you ssh into the remote server and it prompts for the password, which you type in (interactively); then from that point on, until your script issues the "exit" command, all commands in the script are interpreted on the remote machine.
Of course this doesn't work:
ssh user@host.com
cd some/dir/on/remote/machine
tar -xzf my_tarball.tgz
cd some/other/dir/on/remote
cp -R some_directory somewhere_else
exit
Is there another alternative? I suppose I could take that part right out of my script and stick it into a script on the remote host. Meh. Now I'm maintaining two scripts. Plus I want a little configuration file to hold defaults and other stuff and I don't want to be maintaining that in two places either.
Is there another solution?
Use a heredoc.
ssh user@host.com << EOF
cd some/dir/on/remote/machine
tar -xzf my_tarball.tgz
cd some/other/dir/on/remote
cp -R some_directory somewhere_else
EOF
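Note that with an unquoted EOF the heredoc is subject to local expansion, so variables like $HOME are expanded on your machine before the commands are sent; quote the delimiter if you want them expanded on the remote host instead:
ssh user@host.com << 'EOF'
cd some/dir/on/remote/machine
echo "$HOME"   # prints the remote $HOME because the delimiter is quoted
EOF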
Use heredoc syntax, like
ssh user@host.com <<EOD
cd some/dir/on/remote/machine
...
EOD
or pipe, like
echo "ls -al" | ssh user#host.com

killall on process of same name

I would like to use killall on processes with the same name as the process from which killall will be executed, without killing the process that spawns the killall.
So in more detail, say I have process foo, and process foo is running. I want to be able to run "foo -k", and have the new foo kill the old foo, without killing itself.
pgrep foo | grep -v $$ | xargs kill
If you don't have pgrep, you'll have to come up with some other way of generating the list of PIDs of interest. Some options are:
Use ps with appropriate options, followed by some combination of grep, sed and/or awk to match the processes and extract the PIDs.
killall can send a signal 0 instead of SIGTERM; the standard semantics of this is that it doesn't send a signal, but just determines if the process is alive or not. Perhaps you can use killall to select the process list and get it to print the PIDs of the matching ones that are alive. This would also probably require a bit of post-processing with sed.
There may be something along the lines of Linux's /proc filesystem with pseudo-files holding system data that you could grovel through. Again, grep/awk/sed are your friends here.
If you truly need particular details on how to do this, comment or send me mail, and I'll try expanding some of these options in more detail.
[Edits: added further options for those without pgrep.]
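For example, a rough sketch of the ps-based option (assuming the process is literally named foo and your ps supports the POSIX -o option):
ps -e -o pid= -o comm= | awk -v self="$$" '$2 == "foo" && $1 != self { print $1 }' | xargs kill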
This seems to work on OS X:
killall -s foo | perl -ne 'system $_ unless /\b'$PPID'\b/'
killall -s lists what it would do, one PID at a time. Do what it would do except for killing yourself.
The usual way to solve this is to have foo write its process ID to a file, say something like /var/run/foo.pid, when it is run in daemon mode. Then you can have the non-daemon version read the PID from the PID file and call kill(2) on it directly. This is usually how apache and the like handle it. Of course the newer OSX daemons go through launchd(8) instead, but there are still a few that use good old-fashioned signals.
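A minimal sketch of that pattern for the non-daemon ("foo -k") path, using the PID file location mentioned above:
#!/bin/bash
# read the daemon's PID from its PID file and signal it
pidfile=/var/run/foo.pid
if [ -f "$pidfile" ]; then
    kill "$(cat "$pidfile")" && rm -f "$pidfile"
else
    echo "foo does not appear to be running" >&2
fi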

A standard Unix command-line tool for piping to a socket

I have some applications and standard Unix tools sending their output to named pipes on Solaris. However, named pipes can only be read locally (on Solaris), so I can't access them over the network or place the pipes on NFS storage for networked access to their output.
Which got me wondering if there was an analogous way to forward the output of command-line tools directly to sockets, say something like:
mksocket mysocket:12345
vmstat 1 > mysocket 2>&1
Netcat is great for this. Here's a page with some common examples.
Usage for your case might look something like this:
Server listens for a connection, then sends output to it:
server$ my_script | nc -l 7777
Remote client connects to server on port 7777, receives data, saves to a log file:
client$ nc server 7777 >> /var/log/archive
netcat (also known as nc) is exactly what you're looking for. It's getting to be reasonably standard, but not available on all systems.
socat seems to be a beefed-up version of netcat, with lots more features, but less commonly available.
On Linux you can also redirect to /dev/tcp/<host>/<port>; note that this is a feature of bash itself rather than a real device. See the Advanced Bash-Scripting Guide for more information.
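For example, the question's vmstat case could be written like this when run under bash (loghost and 7777 are placeholders for the listening host and port):
vmstat 1 > /dev/tcp/loghost/7777 2>&1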
netcat will help establish a pipe over the network.
You may want to use one of:
ssh: secure (encrypted), already installed out-of-the-box on Solaris - but you have to set up a keypair for non-interactive sessions
e.g. vmstat 2>&1 | ssh -i private.key oss#remote.node "cat >vmstat.out"
netcat: simple to set up - but insecure and open to attacks
see http://www.debian-administration.org/articles/58 etc.
Everyone is on the right track with netcat. But I want to add that if you are piping into nc and expecting a response, you will need to use the -q <seconds> option. From the manual:
-q seconds
after EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.
For instance, if you want to interact with your SSH Agent you can do something like this:
echo -en '\x00\x00\x00\x01\x0b' | nc -q 1 -U $SSH_AUTH_SOCK | strings
A more complete example is at https://gist.github.com/RichardBronosky/514dbbcd20a9ed77661fc3db9d1f93e4
* I stole this from https://ptspts.blogspot.com/2010/06/how-to-use-ssh-agent-programmatically.html
