Handling file permissions in UNIX using awk - unix

I want to know which permission is given to a file using a shell script. So I used the code below to test it on a file, but it shows nothing in the output. I just want to know where I have made the mistake. Please help me.
The file "1.py" has read, write, and execute permissions enabled for all users.
ls -l 1.py | awk ' {if($1 -eq "-rwxrwxrwx")print 'True'; }'

The single quotes (') around True terminate the awk program string early; they should be double quotes ("). Also, -eq is a shell test operator; awk uses == for string comparison.
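Putting both fixes together:
ls -l 1.py | awk '{ if ($1 == "-rwxrwxrwx") print "True" }'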
However, depending on what you're trying to do, it might be cleaner to use the Bash builtin tests:
if [ -r 1.py -a -x 1.py ]; then
echo "Yes, we can read (-r) and (-a) execute (-x) the file"
else
echo "No, we can't."
fi
This avoids having to parse ls output. For a longer list of checks, see tldp.org.
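If you want the permission string itself without parsing ls output, stat can print it directly (a sketch assuming GNU coreutils; on BSD/macOS the equivalent is stat -f '%Sp'):
if [ "$(stat -c '%A' 1.py)" = "-rwxrwxrwx" ]; then
echo "True"
fi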

In awk, you shouldn't write shell tests, e.g. [[ ... -eq ... ]]; you should do it the awk way:
if($1=="whatever")...

You could use:
ls -l 1.py | awk '{if ($1 == "-rwxrwxrwx") print "True" }'

Related

I need to make a unix script to read first word from a file

I need to make a unix script to read the first word from a file, and if that is one of "Mon, Tue, ..., Sat, Sun", then it should echo 0, or else echo 1.
I was trying with the grep command but it didn't work.
This could even be done without grep or awk, using just bash builtins (assuming your shell is bash; this should also work in ksh and zsh, and maybe in sh, but not in csh, where the syntax is quite a bit different):
read firstword otherstuff < myfile.txt
case "${firstword}" in
Sun|Mon|Tue|Wed|Thu|Fri|Sat) echo 0;;
*) echo 1;;
esac
You could also use regexp matching to avoid the case statement (this is definitely bash-only, though):
if [[ "${firstword}" =~ ^(Sun|Mon|Tue|Wed|Thu|Fri|Sat)$ ]]; then
That's just a matter of preference, though...
When you need to parse input word by word, awk is a better fit than grep; it can do everything grep does, but it can also process every line with simple scripts.
This is my take on a solution:
awk 'NR==1 && $1 ~ /^(Sun|Mon|Tue|Wed|Thu|Fri|Sat)$/ {d=1} END {print d ? 0 : 1}' test.txt
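A quick check with a hypothetical test.txt whose first word is a day:
$ printf 'Mon first line\nother line\n' > test.txt
$ awk 'NR==1 && $1 ~ /^(Sun|Mon|Tue|Wed|Thu|Fri|Sat)$/ {d=1} END {print d ? 0 : 1}' test.txt
0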
I encourage you to learn more about awk in this (short) tutorial.
Try the egrep command, like:
head -1 myfile.txt | egrep -q '^(Sun|Mon|Tue|Wed|Thu|Fri|Sat) '; echo $?
Square brackets would make a character class here; parentheses are what group the alternatives. The -q flag suppresses the matched line itself, so only the exit status is printed: 0 when the first word is a day, 1 otherwise.

using awk to get column values and then running another command on values and printing them

I've always used Stack Overflow to get help with issues but this is my first post. I am new to UNIX scripting and I was given a task to get the values of column two and then run a command on them. The command I am supposed to run is 'echo -n "$2" | openssl dgst -sha1;' which hashes a value. My problem is not hashing one value, but hashing them all and then printing them. Can someone maybe help me figure this out? This is how I am starting, but I think the path I am going down is wrong.
NOTE: this is a CSV text file and I know I need to use AWK command for this.
awk 'BEGIN { FS = "," } ; { print $2 }'
while [ "$2" != 0 ];
do
echo -n "$2" | openssl dgst -sha1
done
This prints the second column in its entirety and also prints some type of hashed value.
Sorry for the long first post, just trying to be as specific as possible. Thanks!
You don't really need awk just for extracting the second column. You can do it using the bash read builtin and setting IFS to the delimiter.
while IFS=, read -ra line; do
[[ ${line[1]} != 0 ]] && echo -n "${line[1]}" | openssl dgst -sha1
done < inputFile
You should probably post some sample input data and the error you are getting so that someone can debug your existing code better.
This will do the trick:
$ awk '{print $2}' file | xargs -n1 sh -c 'printf %s "$0" | openssl dgst -sha1'
Use awk to print the second field in the file, and xargs with -n1 to hand each value to a small sh one-liner that pipes it to openssl on stdin. (A bare value passed as an argument would be treated by openssl dgst as a filename, which is not what you want here.)
If by CSV you mean each record is separated by a comma, then you need to add -F, to awk:
$ awk -F, '{print $2}' file | xargs -n1 sh -c 'printf %s "$0" | openssl dgst -sha1'
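Note that openssl dgst labels its output; current versions print something like (stdin)= followed by the hex digest. If you only want the hash itself, strip the label:
$ echo -n "somevalue" | openssl dgst -sha1 | awk '{print $NF}'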

unable to run awk command as a shell script

I am trying to create a shell script to search for a specific index in a multiline csv file.
The code I am trying is:
#!/bin/sh
echo "please enter the line no. to search: "
read line
echo "please enter the index to search at: "
read index
awk -F, 'NR=="$line"{print "$index"}' "$1"
The awk command works absolutely fine when I run it directly in the shell. But when I try to use it in a shell script, it fails and gives no output. The script reads the line no. and the index, and then prints nothing at all.
Is there something I am doing wrong?
I run the file at the shell by typing:
./fetchvalue.sh newfile.csv
Your quoting is not going to work. Try this:
awk -F, 'NR=="'$line'"{print $'$index'}' "$1"
Rather than going through quoting hell, try this:
awk -F, -v line="$line" -v myindex="$index" 'NR==line {print $myindex}' "$1"
(index is the name of a built-in awk function, so I gave the variable a slightly different name.)
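A quick demonstration of the -v form on hypothetical input:
$ printf 'a,b,c\nd,e,f\n' | awk -F, -v line=2 -v myindex=3 'NR==line {print $myindex}'
f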

Unix - Need to cut a file which has multiple blanks as delimiter - awk or cut?

I need to get the records from a text file in Unix. The delimiter is multiple blanks. For example:
2U2133 1239
1290fsdsf 3234
From this, I need to extract
1239
3234
The delimiter for all records will be always 3 blanks.
I need to do this in a unix script (.scr) and write the output to another file, or use it as input to a do-while loop. I tried the below:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then
int_1=0
else
int_2=0
fi
done < awk -F' ' '{ print $2 }' ${Directoty path}/test_file.txt
test_file.txt is the input file and file1.txt is a lookup file. But the above approach is not working and gives me syntax errors near awk -F.
I tried writing the output to a file. The following worked in command line:
more test_file.txt | awk -F' ' '{ print $2 }' > output.txt
This works and writes the records to output.txt from the command line. But the same command does not work in the unix script (it is a .scr file).
Please let me know where I am going wrong and how I can resolve this.
Thanks,
Visakh
The job of replacing multiple delimiters with just one is left to tr:
cat <file_name> | tr -s ' ' | cut -d ' ' -f 2
tr translates or deletes characters, and is perfectly suited to prepare your data for cut to work properly.
The manual states:
-s, --squeeze-repeats
replace each sequence of a repeated character that is
listed in the last specified SET, with a single occurrence
of that character
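With the sample data from the question:
$ printf '2U2133   1239\n1290fsdsf   3234\n' | tr -s ' ' | cut -d ' ' -f 2
1239
3234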
It depends on the version or implementation of cut on your machine. Some versions support an option, usually -i, that means 'ignore blank fields' or, equivalently, allow multiple separators between fields. If that's supported, use:
cut -i -d' ' -f 2 data.file
If not (and it is not universal — and maybe not even widespread, since neither GNU nor MacOS X have the option), then using awk is better and more portable.
You need to pipe the output of awk into your loop, though:
awk -F' ' '{print $2}' ${Directory_path}/test_file.txt |
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done
The only residual issue is whether the while loop runs in a sub-shell and therefore does not modify your main shell's variables, only its own copy of those variables.
With bash, you can use process substitution:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done < <(awk -F' ' '{print $2}' ${Directory_path}/test_file.txt)
This leaves the while loop in the current shell, but arranges for the output of the command to appear as if from a file.
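As an aside, grep -c counts matching lines itself, so the grep | wc -l pipeline inside the loop can be shortened to:
cnt_exc=`grep -c "$read_int" ${Directory_path}/file1.txt`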
The blank in ${Directory path} is not legal; shell variable names cannot contain spaces, and bash will report a bad substitution. You also had a typo (Directoty) in one place.
Other ways of doing the same thing aside, the error in your program is this: You cannot redirect from (<) the output of another program. Turn your script around and use a pipe like this:
awk -F' ' '{ print $2 }' ${Directory path}/test_file.txt | while read readline
etc.
Besides, "readline" as a variable name is legal, though it may be confusing, since it is also the name of the shell's line-editing library.
In this particular case, you can use the following line
sed 's/  */\t/g' <file_name> | cut -f 2
to get your second column. The pattern collapses each run of blanks into a single tab, which is cut's default field delimiter (note that \t in the replacement is a GNU sed extension).
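For example:
$ printf '2U2133   1239\n' | sed 's/  */\t/g' | cut -f 2
1239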
In bash you can start from something like this:
for n in `cat ${Directory_path}/test_file.txt | cut -d " " -f 4`
do
grep -c "$n" ${Directory_path}/file*.txt
done
(Since the delimiter is exactly three blanks and cut counts the empty fields between adjacent blanks, the second column arrives as field 4.)
This should have been a comment, but since I cannot comment yet, I am adding this here.
This is from an excellent answer here: https://stackoverflow.com/a/4483833/3138875
tr -s ' ' <text.txt | cut -d ' ' -f4
tr -s '<character>' squeezes multiple repeated instances of <character> into one.
It's not working in the script because of the typo "Directoty" (for "Directory") in the last line of your script.
Cut isn't flexible enough. I usually use Perl for that:
cat file.txt | perl -F'   ' -ane 'print $F[1], "\n"'
Instead of the triple space after -F you can put any Perl regular expression. The -a switch autosplits each line into the @F array (and -n loops over the input lines); you access fields as $F[n], where n is the field number (counting starts at zero). This way there is no need for sed or tr.
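For example:
$ printf '2U2133   1239\n' | perl -F'   ' -ane 'print $F[1], "\n"'
1239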

Shell Script, Search File for String

I'm writing a shell script that opens a file and needs to find a tag like ##FIND_ME##. The string I'm searching for is a constant (and there is only ever one instance of it.)
Once I locate that string, I need it to start a new search for a different string from that point forward.
My *nix skills are a little rusty; should I try to implement this using grep, awk, or sed?
awk '/FINDME/{f=1}f&&/NEWSEARCH/{print}' file
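Here FINDME and NEWSEARCH stand in for your two strings: the first pattern sets the flag f when the first tag is seen, and from that line on, every line matching the second pattern is printed.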
Or in plain shell:
f=0
while read -r line
do
case "$line" in
*FINDME* ) f=1;;
esac
if [ "$f" -eq 1 ] ;then
case "$line" in
*NEWSEARCH*) echo "found next tag in: $line";;
esac
fi
done <"file"
