Command-line substitution in ksh is not being assigned to a variable - unix

var2=$(echo "{$1}" | grep 'Objects that are still invalid after the validation:' | cut -d : -f2 | sed 's/ //g')
echo $var2
The above command-line substitution is not working in ksh; the variable is blank every time. I have also tried the commands below:
var2="$(echo "{$1}" | grep 'Objects that are still invalid after the validation:' | cut -d : -f2 | sed 's/ //g')"
var2=`echo "{$1}" | grep 'Objects that are still invalid after the validation:' | cut -d : -f2 | sed 's/ //g'`
var2=`echo "$1" | grep 'Objects that are still invalid after the validation:' | cut -d : -f2 | sed 's/ //g'`
Please help me resolve the issue. The command is being run on a remote server over ssh. The commands work on the remote server if executed directly on it, without ssh.

What is supposed to be in $1? The first issue is that it ought to be written either as $1 or as ${1}. Writing it as {$1} is plain wrong.
Then there is the useless use of grep and cut. The following works:
var2=$(echo ${1} | sed -ne 's/ //g' -e 's/Objectsthatarestillinvalidafterthevalidation:\(.*\)/\1/p')
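For reference, a minimal sketch of the corrected assignment wrapped in a ksh script (the script is hypothetical, not the asker's original). If the command string is passed to ssh, single-quote the whole remote command so the local shell does not expand ${1} and the command substitution before they reach the remote server:
#!/bin/ksh
# a minimal sketch, not the original script: expects the validation output as $1
var2=$(echo "${1}" | sed -ne 's/ //g' -e 's/Objectsthatarestillinvalidafterthevalidation:\(.*\)/\1/p')
echo "$var2"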

Related

pipe to kill command

I am trying this out in a terminal:
/tmp ps x | grep -m1 firefox | cut -d' ' -f1 | kill -KILL
kill: not enough arguments
How can I pipe the pid to kill?
I tried this, but it didn't work either:
/tmp ps x | grep -m1 firefox | kill -KILL $(cut -d' ' -f1)
cut: -: Input/output error
kill: not enough arguments
You can use xargs. This reads the output of command1 and uses it as the arguments to run command2:
command1 | xargs command2
In your case
ps x | grep -m1 firefox | cut -d' ' -f1 | xargs kill -KILL
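If pgrep and pkill are available (an assumption; they do not appear in the question), the pipeline is not needed at all:
pkill -KILL firefox                  # signal every process whose name matches firefox
kill -KILL "$(pgrep -n firefox)"     # or only the newest matching pid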

How to perform a static count of loaded packages in R?

I'd like to search a directory structure to count the number of times I've loaded various R packages. The source is contained in .org and .R files. I'm willing to assume that "library(" is the first non-blank entry on any line I care about, and I'm willing to assume that there is at most only one such call per line.
find . -regex ".*/.*\.org" -print
gets me a list of .org files, and
find . -regex ".*\.\(org\|R\)$" -print
gets me a list of .org and .R files (thanks to https://unix.stackexchange.com/questions/15308/how-to-use-find-command-to-search-for-multiple-extensions).
Given a particular file,
grep -h "library(" file | sed 's/library(//' | sed 's/)//'
gets me the package name. I'd like to hook them together and then possibly redirect the output to a file, from which I can use R to calculate frequencies.
The seemingly straightforward
find . -regex ".*/.*\.org" -print | xargs -0 grep -h "library(" | sed 's/library(//' | sed 's/)//'
doesn't work; I get
find . -regex ".*/.*\.org" -print | xargs -0 grep -h "library(" | sed 's/library(//' | sed 's/)//'
Usage: /usr/bin/grep [OPTION]... PATTERN [FILE]...
Try '/usr/bin/grep --help' for more information.
and I'm not sure what to do next.
I also tried
find . -regex ".*/.*\.org" -exec grep -h "library(" "{}" "\;"
and got
find . -regex ".*/.*\.org" -exec grep -h "library(" "{}" "\;"
find: missing argument to `-exec'
It seems simple. What am I missing?
UPDATE: Adding -t to the above xargs shows me the first command:
grep -h library ./dirname/filename.org
followed by, presumably, a list of all the matching files with paths relative to the PWD. Actually, that works if I only search for .org files; if I add .R files, too, I get "xargs: argument line too long". I think that means xargs is passing the entire list of files as the argument to one invocation of grep.
find ... -print | xargs OK
find ... -print0 | xargs -0 OK
find ... -print0 | xargs broken
find ... -print | xargs -0 broken (what you used)
Also, please don't:
grep -h "library(" | sed 's/library(//' | sed 's/)//'
when this is faster:
grep -h "library(" | sed -e 's/library(//' -e 's/)//'
and this is even faster, and more interesting:
grep -h "library(" | grep -o '(.*)' | tr -d ' ()'

How to find most frequent user agent in nginx access.log

In order to counter a botnet attack, I am trying to analyze a nginx access.log file to find which user agents are the most frequent, so that I can find the culprits and deny them. How can I do that?
Try something like this on your access log (replace the path with the path to your log). Also keep in mind that older log files may get rotated and gzipped, with a new one created in their place.
sudo awk -F" " '{print $1}' /var/log/nginx/access.log | sort | uniq -dc
EDIT:
Sorry, I just noticed you wanted the user agent instead of the IP:
sudo awk -F"\"" '{print $6}' /var/log/nginx/access.log | sort | uniq -dc
To sort by count in descending order, append | sort -nr, and to limit the output to the top 10, append | head -10,
so the final command would be
sudo awk -F"\"" '{print $6}' /var/log/nginx/access.log | sort | uniq -dc | sort -nr | head -10
To get user agent
sudo awk -F'"' '/GET/ {print $6}' /var/log/nginx-access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn
awk(1) - selects the full User-Agent string of GET requests
cut(1) - takes the first word from it
sort(1) - sorts
uniq(1) - counts occurrences
sort(1) - sorts by count, in reverse order (highest first)

problem in a shell command

I am trying the following command on the command line:
ps -u `id | cut -f2 -d"=" | cut -f1 -d"("` -f | grep ppLSN | awk '{print $9}' | awk '{FS="=";print $2}' | grep KLMN | wc -l
The value returned by the command is 7.
But when I put the same command inside a script, abc_sh, like below
ps -u `id | cut -f2 -d"=" | cut -f1 -d"("` -f | grep ppLSN | awk '{print $9}' | awk '{FS="=";print $2}' | grep $XYZ | wc -l
and call the script on the command line as abc_sh XYZ=KLMN, it does not work and returns 0.
The problem is with the grep in the command, grep $XYZ.
Could anybody please tell me why this is not working?
Because your $1 variable (first argument to the script) is set to XYZ=KLMN.
Just use abc_sh KLMN and grep $1 instead of grep $XYZ.
(Assuming we are talking about bash here)
The other alternative is defining a temporary environment variable, in which case you would call it like this: XYZ=KLMN abc_sh
EDIT:
I found what you were doing: for that invocation to work you have to use set -k (see SHELL BUILTIN COMMANDS in the bash manual)
-k  All arguments in the form of assignment statements are placed in the environment for a command, not just those that precede the command name.
So
vinko@parrot:~$ more abc
#!/bin/bash
echo $XYZ
vinko@parrot:~$ set -k
vinko@parrot:~$ ./abc XYZ=KLMN
KLMN
vinko@parrot:~$ set +k
vinko@parrot:~$ ./abc XYZ=KLMN
vinko@parrot:~$
So, the place where this was working probably has set -k in one of the startup scripts (bashrc or profile.)
Try any of these to set a temporary environment variable:
XYZ=KLMN abc_sh
env XYZ=KLMN abc_sh
(export XYZ=KLMN; abc_sh)
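Another option (a sketch of mine, not from the original answers) is to parse the KEY=VALUE argument inside the script itself, so neither set -k nor an environment variable is needed:
#!/bin/bash
# a sketch, not the original script: pull XYZ out of a KEY=VALUE argument
for arg in "$@"; do
    case $arg in
        XYZ=*) XYZ=${arg#XYZ=} ;;
    esac
done
# roughly the original pipeline, with the count done by grep -c
ps -u `id -u` -f | grep ppLSN | awk '{print $9}' | awk -F= '{print $2}' | grep -c "$XYZ"
With that in place the script can be called exactly the way the question tried: abc_sh XYZ=KLMN.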
You are chaining together a lot of commands; awk can do most of it on its own:
ps -u `id -u` -f | awk -v x="$XYZ" -v p="ppLSN" '$0~p{
m=split($9,a,"=")
if(a[2]~x){count++}
}
END{print count}'
Call this script:
#!/bin/ksh
ps -u $(id -u) -o args | grep $XYZ | cut -f2- -d " "
Like this:
XYZ=KLMN abc_sh

Multiple grep search/ignore patterns

I usually use the following pipeline to grep for a particular search string and yet ignore certain other patterns:
grep -Ri 64 src/install/ | grep -v \.svn | grep -v "file"| grep -v "2\.5" | grep -v "2\.6"
Can this be achieved in a succinct manner? I am using GNU grep 2.5.3.
Just pipe your unfiltered output into a single instance of grep and use an extended regexp to declare what you want to ignore:
grep -Ri 64 src/install/ | grep -v -E '(\.svn|file|2\.5|2\.6)'
Edit: To search multiple files maybe try
find ./src/install -type f -print |\
grep -v -E '(\.svn|file|2\.5|2\.6)' | xargs grep -i 64
Edit: Ooh, I forgot to add the simple trick that stops a cringeworthy use of multiple grep instances, namely
ps -ef | grep something | grep -v grep
Replacing that with
ps -ef | grep "[s]omething"
removes the need for the second grep: the pattern [s]omething still matches the text something, but not grep's own command line, which contains the literal brackets.
Use the -e option to specify multiple patterns:
grep -Ri 64 src/install/ | grep -v -e '\.svn' -e file -e '2\.5' -e '2\.6'
You might also be interested in the -F flag, which indicates that patterns are fixed strings instead of regular expressions. Now you don't have to escape the dot:
grep -Ri 64 src/install/ | grep -vF -e .svn -e file -e 2.5 -e 2.6
I noticed you were grepping out ".svn". You probably want to skip any directories named ".svn" in your initial recursive grep. If I were you, I would do this instead:
grep -Ri 64 src/install/ --exclude-dir .svn | grep -vF -e file -e 2.5 -e 2.6
You can use awk instead of grep:
awk '/64/&&!/(\.svn|file|2\.[56])/' file
You may also want to use ack-grep, which lets you exclude with Perl regexps as well and skips all the VC directories; it is great for grepping source code.
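For example, if ack is installed (an assumption; it is not part of the question), it ignores .svn and other version-control directories by default, so only the remaining noise needs filtering:
ack -i 64 src/install | grep -vE 'file|2\.5|2\.6'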
The following script removes all files except a given list of files (a sample invocation follows the script):
echo cleanup_all $#
if [[ $# -eq 0 ]]; then
    FILES=`find . -type f`
else
    EXCLUDE_FILES_EXP="("
    for EXCLUDED_FILE in "$@"
    do
        EXCLUDE_FILES_EXP="$EXCLUDE_FILES_EXP./$EXCLUDED_FILE|"
    done
    # strip the trailing |
    EXCLUDE_FILES_EXP="${EXCLUDE_FILES_EXP%?}"
    EXCLUDE_FILES_EXP="$EXCLUDE_FILES_EXP)"
    echo excluded files expression : "$EXCLUDE_FILES_EXP"
    FILES=`find . -type f | egrep -v "$EXCLUDE_FILES_EXP"`
fi
echo removing $FILES
for FILE in $FILES
do
    echo "cleanup: removing file $FILE"
    rm "$FILE"
done
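A hypothetical invocation, keeping two files (relative to the current directory) and deleting everything else under it; the script and file names are made up:
./cleanup_all.sh keep_me.txt notes/keep_this.org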
