While loop skips the first line in output - unix

I'm using the command below in Terminal on a Mac to read a file of email addresses and convert them to an MD5 hash.
tr -d " " < em.txt | tr '[:upper:]' '[:lower:]' | while read line; do
(echo -n $line | md5); done | awk '{print $1}' > hashes1.txt
This produces a file of hashes that is one row shorter than the original input file, but I can't figure out why.
This code does a few things:
Converts each email address to all lower case
Converts the email address to an MD5 hash
Outputs the list of hashes to a hashes1.txt file
Thanks in advance!

There are a few problems with your loop; it should be:
tr -d " " < em.txt |
tr '[[:upper:]]' '[[:lower:]]' |
while IFS= read -r line; do
echo -n "$line" | md5 | awk '{print $1}' >> hashes1.txt
done
or
while IFS= read -r line; do
echo -n "$line" | md5 | awk '{print $1}' >> hashes1.txt
done < <(tr -d " " < em.txt | tr '[[:upper:]]' '[[:lower:]]')
Note that where the file is fed in has changed, too.
Also make sure your file doesn't contain strange characters (such as carriage returns):
od -c file
If it does, install dos2unix, then run:
dos2unix file
or, using perl:
perl -i -pe 's/\r//g' file
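As for why the output is one row shorter: if the last line of em.txt has no trailing newline, read returns a non-zero status even though it has filled the variable, so the loop body never runs for that final line. A minimal sketch of the same pipeline with a guard for that case:
# read fails at EOF when the final newline is missing, but $line is
# still filled, so test for a non-empty value as a fallback.
tr -d " " < em.txt | tr '[:upper:]' '[:lower:]' |
while IFS= read -r line || [ -n "$line" ]; do
    printf '%s' "$line" | md5 | awk '{print $1}'
done > hashes1.txt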

Related

Not able to read file content with sed command

I am trying to read the file below line by line and perform these operations:
Extract the name of the file/directory alone and assign it to one variable.
Extract the permissions in the line, add commas between them, and assign the result to another variable.
Finally, apply the setfacl logic as shown in the output section.
File
# file: /disk1/script_1/ user::rwx group::r-x group:service:r-x mask::r-x other::r-x
# file: /disk1/script_1//hello.txt user::rw- group::r-- other::r--
# file: /disk1/script_1//bkp_10.txt user::rwx group::r-x other::r-x
Code
input="bkp_23.txt"
while IFS= read -r line;
do
echo $line
file_name=`sed -e 's/# file:\(.*\)/\1/g' "$line" | awk '{print $1}'`
echo $file_name
file_perm=`sed -e 's/# file:\(.*\)/\1/g' "$line" | awk '{$1=""}{print}' | tr ' ' ',' | awk '{sub(",","")}1'`
echo $file_perm
echo "setfacl -m "$file_perm" "$file_name" executing"
done <"$input"
Output
setfacl -m user::rwx,group::r-x,group:service:r-x,mask::r-x,other::r-x /disk1/script_1/
setfacl -m user::rw-,group::r--,other::r-- /disk1/script_1//hello.txt
setfacl -m user::rwx,group::r-x,other::r-x /disk1/script_1//bkp_10.txt
Error
sed: can't read # file: /disk1/script_1/ user::rwx group::r-x group:service:r-x mask::r-x other::r-x: No such file or directory
$ cat input
# file: /disk1/script_1/ user::rwx group::r-x group:service:r-x mask::r-x other::r-x
# file: /disk1/script_1//hello.txt user::rw- group::r-- other::r--
# file: /disk1/script_1//bkp_10.txt user::rwx group::r-x other::r-x
$ while read _ _ path perms; do perms="$(echo "$perms" | tr -s ' ' ,)"; echo path="$path", perms="$perms"; done < input
path=/disk1/script_1/, perms=user::rwx,group::r-x,group:service:r-x,mask::r-x,other::r-x
path=/disk1/script_1//hello.txt, perms=user::rw-,group::r--,other::r--
path=/disk1/script_1//bkp_10.txt, perms=user::rwx,group::r-x,other::r-x
The error occurs because sed treats "$line" as a filename, not as input text. Echo the line content into sed instead, like this:
file_name=$(echo "$line" | sed 's/# file:\(.*\)/\1/g' | awk '{print $1}')
file_perm=$(echo "$line" | sed -e 's/# file:\(.*\)/\1/g' | awk '{$1=""}{print}' | tr ' ' ',' | awk '{sub(",","")}1')
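Putting it together, the corrected loop might look like this (a sketch built from the code above; the setfacl command is only echoed, as in the question):
input="bkp_23.txt"
while IFS= read -r line; do
    # Strip the "# file:" prefix, then split into path and permissions.
    rest=$(echo "$line" | sed 's/# file:\(.*\)/\1/')
    file_name=$(echo "$rest" | awk '{print $1}')
    # Blank out the path field, squeeze the rest into a comma-separated list.
    file_perm=$(echo "$rest" | awk '{$1=""; print}' | tr -s ' ' ',' | sed 's/^,//')
    echo "setfacl -m $file_perm $file_name executing"
done < "$input"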

SED command use for writing back to the same file

I have the code below, which adds a LOGGER.info line after every function definition; I need to run it on a Python script.
The only question is that the result has to be written back to the same file, so that the new file has all these LOGGER.info statements below each function definition.
e.g. the file abc.py currently has the code below:
def run_func(sql_query):
return run_func(sql_query)
and the code below should create the same abc.py file but with all the logger.info added to this new file
def run_func(sql_query):
LOGGER.info('MIPY_INVOKING run_func function for abc file in directory')
return run_func(sql_query)
I am not able to get sed to write the result back under the same file name, so that the original file is replaced and ends up containing all the LOGGER.info statements.
for i in $(find * -name '*.py');
do echo "#############################################" | tee -a auto_logger.log
echo "File Name : $i" | tee -a auto_logger.log
echo "Listing the python files in the current script $i" | tee -a auto_logger.log
for j in $(grep "def " $i | awk '{print $2}' | awk -F"(" '{print $1}');
do
echo "Function name : $j" | tee -a auto_logger.log
echo "Writing the INVOKING statements for $j function definition" | tee -a auto_logger.log
grep "def " $i |sed '/):/w a LOGGER.info (''INVOKING1 '"$j"' function for '"$i"' file in sam_utilities'')'
if [ $? -ne 0 ] ; then
echo " Auto Logger for $i filename - Not Executed Successfully" | tee -a auto_logger.log
else
echo "Auto Logger for $i filename - Executed Successfully" | tee -a auto_logger.log
fi
done
done
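For the write-back itself, GNU sed's -i option edits a file in place, writing the result back under the same name. A minimal sketch under the assumptions of GNU sed, top-level (unindented) def statements, and a four-space indent for the inserted line; -i.bak keeps a backup of each original:
# Append a LOGGER.info line after every "def name(...):" line, in place.
# \1 is the captured function name; | is the s/// delimiter because $i
# contains slashes.
for i in $(find . -name '*.py'); do
    sed -i.bak -E "s|^def ([A-Za-z0-9_]+)\(.*\):\$|&\n    LOGGER.info('MIPY_INVOKING \1 function for $i file')|" "$i"
done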

Append output of a command to file without newline

I have the following line in a unix script:
head -1 $line | cut -c22-29 >> $file
I want to append this output with no trailing newline, separated by commas instead. Is there any way to feed the output of this command to printf? I have tried:
head -1 $line | cut -c22-29 | printf "%s, " >> $file
I have also tried:
printf "%s, " head -1 $line | cut -c22-29 >> $file
Neither of those has worked. Anyone have any ideas?
You just want tr in this case:
tr '\n' ','
will replace all the newlines ('\n') with commas:
head -1 $line | cut -c22-29 | tr '\n' ',' >> $file
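If you specifically want printf, you can feed it the command's output via command substitution, which strips the trailing newline on its own (a sketch, using $line and $file as in the question):
# $(...) drops the trailing newline; printf then appends the comma.
printf '%s, ' "$(head -1 "$line" | cut -c22-29)" >> "$file"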
A very old topic, but even now I needed to do this (on a system with limited commands), and the command in the earlier reply didn't work for me due to its length.
Appending to a file can also be done using file descriptors:
touch file.txt (create new blank file),
exec 100<> file.txt (new fd with id 100),
echo -n test >&100 (echo test to new fd)
exec 100>&- (close new fd)
Writing starting from a specific position can be done by first reading the file up to that point, e.g.:
exec 100<> file.txt - new descriptor
read -n 4 <&100 - read 4 characters
echo -n test >&100 - write 'test' to the file, starting right after the fourth character
exec 100>&- - (close new fd)

How do I Get the distinct List of Special Characters from a File using GREP or SED?

I have a file which contains about 30000 Records delimited by '|'. I need to get a distinct list of special characters only from the file.
For Eg:
123|fasdf|%df&|pap,came|!
234|%^&asdf|34|'":|
My output should be:
|%&,!^'":
Any help would be greatly appreciated.
Thanks,
Velraj.
grep -o '[|%&,!^":]' input | sort -u
You have to list all your special characters inside brackets.
This will return each unique special character on its own line. If you really need a single string of these characters, remove the newlines afterwards, e.g.:
grep -o '[|%&,!^":]' input | sort -u | tr -d '\n'
UPDATE:
If you need to remove all characters which are not in the 'a-zA-Z0-9' set, then you can use this one:
grep -o '[^a-zA-Z0-9]' input | sort -u | tr -d '\n'
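An alternative that avoids listing the characters by hand, assuming your definition of "special" matches the POSIX punctuation class:
# [[:punct:]] matches any punctuation character, so nothing is listed manually.
grep -o '[[:punct:]]' input | sort -u | tr -d '\n'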
echo "123|fasdf|%df&|pap,came|! 234|%^&asdf|34|'\":|" \
| { tr -d '[:alnum:]'; printf "\n"; } \
| sed 's/\(.\)/\1_/g' \
| awk -v 'RS=_' '{print $0}' \
| sort -u \
| awk '{printf $0}END{printf "\n"}'
output
!"%&',:^||
You can replace the first line (echo ...) with cat fileName.

Unix script to delete file if it contains single line

Consider I have a file abcde.txt which may contain one or more lines of text. I want a script that will DELETE the file if it contains a single line.
Something like: if 'wc -l abscde.txt' = 1, then rm abscde.txt.
My system : Solaris
Here's a simple bash script:
#!/bin/bash
LINECOUNT=`wc -l abscde.txt | cut -f1 -d' '`
if [[ $LINECOUNT == 1 ]]; then
rm -f abscde.txt
fi
delifsingleline () {
if [ $(cat $1 | wc -l) = "1" ]
then
echo "Deleting $1"
echo "rm $1"
fi
}
Lightly tested on zsh. Should work on bash as well.
This is (mostly) just a reformat of Ben's answer (using $file rather than $PATH, since assigning to PATH clobbers the shell's command search path):
wc -l "$file" | grep '^1 ' > /dev/null && rm -f "$file"
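On Solaris in particular, wc pads its output with leading spaces, so matching '^1 ' can fail. Reading the file on stdin sidesteps both the padding and the filename (a sketch, assuming the abscde.txt name from the question):
# `wc -l < file` prints only the count; the unquoted substitution lets
# the shell strip wc's leading padding before the numeric test.
[ $(wc -l < abscde.txt) -eq 1 ] && rm abscde.txt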
