Using awk command and comparing with a command output in another machine - unix

I am looking to check whether a random UID on one machine is present on another machine, and print it if it exists. I am pretty new to awk and have hit a roadblock. Here is how I am approaching the problem:
Pick a random line in /etc/passwd and get the 3rd column, which is the UID; then ssh to another machine, get its /etc/passwd contents, check if the reference UID from the first machine is present in the 3rd column of any line, and print it.
I am only able to reach the point where I get the reference UID. How do I use this value, ssh into another machine, and check whether it exists there?
shuf -n 1 /etc/passwd | awk '{print $3}' <the reference UID> <ssh 10.0.0.0> cat /etc/passwd <compare if reference UID is present>

Here's one way of doing that:
awk -vuid=$(shuf -n 1 /etc/passwd | awk -F: '{print $3}') -F: '$3 == uid' <( ssh host cat /etc/passwd)
Breaking it down:
This part stores the UID in an awk variable: -vuid=$(shuf -n 1 /etc/passwd | awk -F: '{print $3}')
This will print the line if the 3rd field matches our uid: '$3 == uid'
Here we're treating the output of the ssh command as a file via process substitution: <( ssh host cat /etc/passwd)
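If you only want a yes/no answer rather than the matching line, here is a minimal sketch of the same idea (assuming key-based ssh access; 10.0.0.1 is a placeholder host):
uid=$(shuf -n 1 /etc/passwd | awk -F: '{print $3}')
if ssh 10.0.0.1 cat /etc/passwd | awk -F: -v uid="$uid" '$3 == uid {found=1} END {exit !found}'; then
    echo "UID $uid exists on the remote machine"
else
    echo "UID $uid was not found on the remote machine"
fi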

Related

Send grepped tail output to netcat

I am trying to run the following command, and nothing is getting sent to netcat
tail -F file.txt | grep test | nc host 9999
If I remove the grep, the tail output is successfully followed and sent to netcat.
If I just run the following, data comes back, so I know that data should be getting sent to the nc pipe:
tail -F file.txt | grep test
Any ideas?
UPDATE
I added the following to unbuffer the piped output and nothing goes through:
tail -F file.txt | stdbuf -o0 grep test | nc host 9999
When I turn on line buffering, output is cut off
tail -F file.txt | grep --line-buffered test | nc host 9999
Where
workid: ID:ITEST_HGR1-EMS12103.1A156BB6CEB1:10F76E5D
is sent as
workid: ID:ITEST_HGR1-EMS12103.1A156BB6CEB1:10F7
You need to change the default buffering behavior of grep. If you're using GNU grep, you can use grep --line-buffered, or try unbuffer (from the expect package).
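If --line-buffered is not available, a variant worth trying is stdbuf from GNU coreutils with line buffering (-oL) instead of the fully unbuffered -o0 used above; this is only a sketch and assumes a GNU userland:
tail -F file.txt | stdbuf -oL grep test | nc host 9999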

Line count in UNIX by reading file name from another file

I have a file FILE1.TXT. It contains only one file name, FILE2.TXT.
How can I find the record count / line count of FILE2.TXT using only FILE1.TXT? What I have already tried is:
cat FILE1.TXT | wc -l
But the above command did not work; it counts the lines in FILE1.TXT itself rather than in FILE2.TXT.
Actually, I need to display the output as below:
File name is FILE2.TXT and the count is 2.
What I have already tried is (using the below statement inside a script file):
echo "File name is "`cat FILE1.TXT`" and the count is " `wc -l < $(cat FILE1.TXT)`
But the above command did not work and gave this error:
syntax error at line 1: `(' unexpected
For a POSIX-compliant shell:
wc -l $(cat FILE1.txt)
or, with Bash:
wc -l $(<FILE1.txt)
These will both report the file name (and will also work if there are multiple file names in FILE1.txt). If you don't want the file name reported (and there is only one name in the file), you could use:
wc -l < $(cat FILE1.txt)
wc -l < $(<FILE1.txt)
file=$(cat FILE1.txt | grep -o "FILE2.txt")
cat "$file" | wc -l
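To print exactly the sentence asked for, here is a small sketch (assuming FILE1.TXT contains a single file name with no spaces):
fname=$(cat FILE1.TXT)
echo "File name is $fname and the count is $(wc -l < "$fname")."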

Passing Local IP as argument when running command line application in Unix

I have a command line application which I use and also have to pass my local ip address as an argument, like:
jekyll --url 'http://192.168.1.2:3000' --pygments --safe --server 3000 --auto
I would like to make the url argument get my IP automatically, since I am always on different networks and get different local IP addresses,
so that I can use this alias in my .bashrc:
alias jkl="jekyll --url 'http://$IP:3000' --pygments --safe --server 3000 --auto"
where $IP would be my local ip adress acquired dynamically.
Is there any way to do it?
First, use double quotes instead of single quotes around your $IP variable, or else it won't be interpolated.
#!/bin/bash
# tested on bash 4
while read -r line
do
    case "$line" in
        "inet "* )
            line="${line/inet /}"
            line="${line%% *}"
            if [[ ! $line =~ ^(127|172) ]]; then
                IP="$line"
                echo "IP: $IP"
            fi
            ;;
    esac
done < <(ifconfig)
echo jekyll --url "http://$IP:3000" --pygments --safe --server 3000 --auto
Note that you will see a few different IPs in the output. Choose the one that best fits your requirements.
A computer does not necessarily have "a local IP address", there are often several. For instance, you typically have the localhost address (127.0.0.1), and one or more "true" externally visible addresses. It's hard for an automated solution to know which one to pick.
One easy solution is perhaps to hard-code the "eth0" interface (or whatever your most typical interface is called).
On Linux, you could use something like this:
$ ifconfig | grep -A1 eth0 | cut -d: -f2 | cut -d ' ' -f1 | grep \\.
192.168.0.8
So to stuff this into a variable (assuming bash) you would use
MY_IP=$(ifconfig | grep -A1 eth0 | cut -d: -f2 | cut -d ' ' -f1 | grep \\.)
Note that this hard-codes the interface name as eth0.
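On a reasonably modern Linux box, a shorter sketch is possible; this assumes the GNU hostname -I flag or the iproute2 ip tool is available, and eth0 is just an example interface name:
IP=$(hostname -I | awk '{print $1}')    # first address in the list
IP=$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)    # or pin a specific interface
jekyll --url "http://$IP:3000" --pygments --safe --server 3000 --auto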

unix command to extract part of a hostname

I would like to extract the first part of this hostname testsrv1
from testsrv1.main.corp.loc.domain.com in UNIX, within a shell script.
What command can I use? It would be anything before the first period (.).
Do you have the server name in a shell variable? Are you using a sh-like shell? If so,
${SERVERNAME%%.*}
will do what you want.
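For example, a quick sketch (assuming the full name is already in a shell variable):
SERVERNAME=testsrv1.main.corp.loc.domain.com
echo "${SERVERNAME%%.*}"    # prints testsrv1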
You can use cut:
echo "testsrv1.main.corp.loc.domain.com" | cut -d"." -f1
To build upon pilcrow's answer, there is no need for a new variable; just use the built-in $HOSTNAME.
echo $HOSTNAME        --> my.server.domain
echo ${HOSTNAME%%.*}  --> my
Tested on two fairly different Linux systems:
2.6.18-371.4.1.el5, GNU bash, version 3.2.25(1)-release (i386-redhat-linux-gnu)
3.4.76-65.111.amzn1.x86_64, GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
Try the -s switch:
hostname -s
I use the cut, awk, or sed commands, or Bash variables.
Operation
Via cut
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | cut -d. -f1
testsrv1
[flying@lempstacker ~]$
Via awk
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | awk -v FS='.' '{print $1}'
testsrv1
[flying@lempstacker ~]$
Via sed
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | sed -r 's#([^.]*).(.*)#\1#g'
testsrv1
[flying@lempstacker ~]$
Via Bash Variables
[flying@lempstacker ~]$ hostName='testsrv1.main.corp.loc.domain.com'
[flying@lempstacker ~]$ echo ${hostName%%.*}
testsrv1
[flying@lempstacker ~]$
You could have used "uname -n" to get just the hostname.
You can use IFS to split text by whichever token you want. For domain names, we can use the dot/period character.
#!/usr/bin/env bash
# bash is needed here for arrays and 'local'
shorthost() {
    # Set IFS to dot, so that we can split $1 on dots instead of spaces.
    local IFS='.'
    # Break up the argument passed to shorthost so that each domain zone is
    # a new index in an array.
    zones=($1)
    # Echo out our first zone
    echo "${zones[0]}"
}
If this is in your script then, for instance, you'll get test when you run shorthost test.example.com. You can adjust this to fit your use case, but knowing how to break the zones into an array is the big thing here, I think.
I wanted to provide this solution because I feel like spawning another process is overkill when you can do it easily and completely within your shell with IFS. One thing to watch out for is that some users will recommend things like hostname -s, but that doesn't work everywhere in the BSD userland; for instance, macOS users don't have the -s flag, I don't think.
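A quick usage sketch of the function above (the domain name is just an illustration):
shorthost test.example.com    # prints: test
shorthost "$HOSTNAME"         # prints the first label of the current host's name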
Assuming the variable $HOSTNAME exists, try echo ${HOSTNAME%%.*} to get the left-most part of the fully-qualified hostname. Hope it helps.
If interested, the hint comes from the partial /etc/bashrc quoted below, from a RHEL7 host:
if [ -e /etc/sysconfig/bash-prompt-screen ]; then
    PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
else
    PROMPT_COMMAND='printf "\033k%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
fi

Suppress find & grep "cannot open" output

I was given this syntax by user phi
find . | awk '!/((\.jpeg)|(\.jpg)|(\.png))$/ {print $0;}' | xargs grep "B206"
I would like to suppress the grep: can't open ... and find: cannot open ... lines from the results. Sample output to be ignored:
grep: can't open ./cisc/.xdbhist
find: cannot open ./cisc/.ssh
Have you tried redirecting stderr to /dev/null?
2>/dev/null
The above redirects stream no. 2 (which is stderr) to /dev/null. That's shell-dependent, but the above should work for most shells. Because find and grep are different processes, you may have to do it for both, or (perhaps) execute them in a subshell, e.g.
find ... 2>/dev/null | xargs grep ... 2>/dev/null
Here's a reference to some documentation on bash redirection. Unless you're using csh, this should work for most shells.
The grep -s option flag will suppress these messages for the grep command.
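Putting the two together, a sketch of the original pipeline with both kinds of messages silenced (find errors sent to /dev/null, grep errors suppressed with -s):
find . 2>/dev/null | awk '!/((\.jpeg)|(\.jpg)|(\.png))$/ {print $0;}' | xargs grep -s "B206"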

Resources