Need help with string concatenation in bash - unix

I have two files, System_Names and system_appendix_names. I want to concatenate every line of one file with every line of the other and save the output to a third file.
root#bt:~/kevin/new# cat system_appendix_names
adm
-adm
_adm
root#bt:~/kevin/new# cat System_Names
help
not
now
give
you
haha
what
where
if
I made the following script.
#!/usr/bin/env bash
while read -r line1
do
    while read -r line2
    do
        out="${line1}${line2}"
        echo "$out" >> 1.txt
    done < system_appendix_names
done < System_Names
Output of script:
root#bt:~/kevin/new# cat 1.txt
helpadm
help-adm
help_adm
notadm
not-adm
not_adm
nowadm
now-adm
now_adm
giveadm
give-adm
give_adm
youadm
you-adm
you_adm
hahaadm
haha-adm
haha_adm
whatadm
what-adm
what_adm
whereadm
where-adm
where_adm
ifadm
if-adm
if_adm
The script above works for files with a small number of lines. My actual files have many more lines, and when I tried the script on those, it could not concatenate the strings line by line.
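For large files, a nested shell loop spawns far too many reads; a single awk process can hold the shorter file in memory and print the whole cross-product in one pass. A sketch, recreating the question's sample files for illustration:

```shell
# Recreate the sample data from the question (for illustration only)
printf 'help\nnot\nnow\n' > System_Names
printf 'adm\n-adm\n_adm\n' > system_appendix_names

# First pass (NR==FNR): load every appendix name into an array.
# Second file: print each system name combined with every suffix.
awk 'NR == FNR { suffix[++n] = $0; next }
     { for (i = 1; i <= n; i++) print $0 suffix[i] }' \
    system_appendix_names System_Names > 1.txt
```

This runs one process total instead of one `read` per line pair, which is what makes it usable on files with many lines.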

Related

How can I remove specific characters in certain lines in a file?

How do I cut characters from columns 5 to 7 of lines 3 onwards?
I am trying to use sed/cut.
For example, If I have
this is amazing1 this is amazing11
this is amazing2 this is amazing21
this is amazing3 this is amazing31
this is amazing4 this is amazing41
this is amazing5 this is amazing51
this is amazing6 this is amazing61
this is amazing7 this is amazing71
Output should look like:
this is amazing1 this is amazing11
this is amazing2 this is amazing21
this amazing3 this is amazing31
this amazing4 this is amazing41
this amazing5 this is amazing51
this amazing6 this is amazing61
this amazing7 this is amazing71
The characters " is" (columns 5 to 7) are removed from line 3 onwards.
sed -E '3,$s/(....).../\1/' file
From line 3 to the end, this keeps the first four characters (the captured group) and drops the next three.
I'd just use awk for clarity, portability, etc.:
$ awk 'NR>2{$0=substr($0,1,4) substr($0,8)} 1' file
this is amazing1 this is amazing11
this is amazing2 this is amazing21
this amazing3 this is amazing31
this amazing4 this is amazing41
this amazing5 this is amazing51
this amazing6 this is amazing61
this amazing7 this is amazing71
or using variables populated with the values from your question:
$ awk -v n=3 -v beg=5 -v end=7 'NR>=n{$0=substr($0,1,beg-1) substr($0,end+1)} 1' file
this is amazing1 this is amazing11
this is amazing2 this is amazing21
this amazing3 this is amazing31
this amazing4 this is amazing41
this amazing5 this is amazing51
this amazing6 this is amazing61
this amazing7 this is amazing71
In two steps:
head -n2 infile; tail -n+3 infile | cut --complement -c5-7
The first command prints the first two lines unmodified; the second command pipes the lines starting with the third one to cut, where character 5 to 7 are removed (requires GNU cut).
If you need to do something with the output, like store it in a file, you have to group these commands before redirecting:
{
head -n2 infile
tail -n+3 infile | cut --complement -c5-7
} > outfile
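If GNU cut (with --complement) isn't available, the same selection can be expressed positively by naming the character ranges to keep, which plain POSIX cut supports. A sketch, recreating a few of the question's sample lines for illustration:

```shell
# Recreate part of the sample input from the question (for illustration)
printf '%s\n' 'this is amazing1 this is amazing11' \
              'this is amazing2 this is amazing21' \
              'this is amazing3 this is amazing31' > infile

# Keep characters 1-4 and 8 onward instead of dropping 5-7:
# equivalent to `cut --complement -c5-7`, but portable.
{
    head -n2 infile
    tail -n +3 infile | cut -c1-4,8-
} > outfile
```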
If you want to use sed:
sed '1,2!s/^\(\w*\)\s*\w*\(.*\)$/\1\2/' file
DETAILS
1,2!s - Do the substitution on every line except lines 1 and 2.
/^\(\w*\)\s*\w*\(.*\)$/ - The matching pattern: the first word (group 1), whitespace, the second word, then the rest of the line (group 2).
/\1\2/ - The replacement: groups 1 and 2, dropping the unmatched middle.
file - Your input file.
Note that \w and \s are GNU sed extensions.

To print gzipped file name and selected rows - continuation:

I would like to print the first 2 rows from every file in a directory, along with the file name.
All files have the *.gz extension; there are around 100 files in the directory.
sample_jan.csv.gz
10,Jan,100
30,Jan,300
50,Jan,500
sample_feb.csv.gz
10,Feb,200
20,Feb,400
40,Feb,800
60,Feb,1200
Expected Output:
Filename:sample_jan.csv.gz
10,Jan,100
30,Jan,300
Filename:sample_feb.csv.gz
10,Feb,200
20,Feb,400
I tried the command below, but the file name appears blank:
zcat sample_jan.csv.gz | awk 'FNR==1{print "Filename:" FILENAME} FNR<3' > Output.txt
Filename:-
10,Jan,100
30,Jan,300
I tried the command below, but the file name appears wrong:
awk 'FNR==1{print "Filename:" FILENAME} FNR<3' <(gzip -dc sample_jan.csv.gz) > Output.txt
Filename:/dev/fd/63
10,Jan,100
30,Jan,300
Looking for your suggestions; I don't have Perl or Python available.
You can use this one-liner,
for file in *.gz; do echo "Filename: $file"; zcat "$file" | head -2 ; done
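The reason FILENAME comes out blank or as /dev/fd/63 is that awk only ever sees a pipe, never the real file. One way to keep awk's row limiting while still printing the right name is to pass the name in as an awk variable. A sketch, creating one small sample file for illustration:

```shell
# Create a sample file mirroring the question's data (for illustration)
printf '10,Jan,100\n30,Jan,300\n50,Jan,500\n' > sample_jan.csv
gzip -f sample_jan.csv

# awk reads from the pipe, so FILENAME is useless; hand it the real
# name via -v and let FNR < 3 limit the rows.
for f in *.gz; do
    zcat "$f" | awk -v fn="$f" 'FNR == 1 { print "Filename:" fn } FNR < 3'
done > Output.txt
```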

How to get the search count for a particular string from each and every line in a file using Unix?

I am trying to search every line of a file for a particular string and route the offending records to an error file. Can someone tell me how to improve my code below? Please also share your thoughts if you have a better solution.
v_filename=$1
v_new_file="new_file"
v_error_file="error_file"
echo "The input file name is $v_filename"
while read -r line
do
    echo "Testing $line"
    # Count the commas on this line (grep ',' $line would wrongly
    # treat the line's text as a file name)
    v_cnt_check=$(echo "$line" | grep -o ',' | wc -l)
    echo "Testing $v_cnt_check"
    # if [ "$v_cnt_check" -gt 2 ]; then
    #     echo "$line" >> "$v_error_file"
    # else
    #     echo "$line" >> "$v_new_file"
    # fi
done < "$v_filename"
Input:
1,2,3
1,2,3,4
1,2,3
Output:
(New file)
1,2,3
1,2,3
(Error file)
1,2,3,4
awk -F ',' -v new_file="$v_new_file" -v err_file="$v_error_file" \
'BEGIN { OFS="," }
NF == 3 { print >new_file }
NF != 3 { print >err_file }' $v_filename
The first line sets the file name variables and sets the field separator to comma. The second line sets the output field separator to comma too. The third line prints lines with 3 fields to the new file; the fourth line prints lines with other than 3 fields to the error file.
Note that your code would be excruciatingly slow on big files because it executes two processes per line. This code has only one process operating on the whole file — which will really matter if the input grows to thousands or millions of lines.
From the grep manpage:
General Output Control
-c, --count
    Suppress normal output; instead print a count of matching lines for each input file. With the -v, --invert-match option (see below), count non-matching lines. (-c is specified by POSIX.)
You could do something like:
grep --count "your pattern" v_filename
to get the number of occurrences. If you just want the number of lines with your pattern, replace the grep shown above with:
grep "your pattern" v_filename | wc -l
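If what you actually need is a per-line count (how many commas each line contains, as in the original script), awk's field count gives that in one pass over the file. A sketch using the question's sample input, recreated here for illustration:

```shell
# Recreate the sample input from the question (for illustration)
printf '1,2,3\n1,2,3,4\n1,2,3\n' > v_file

# With -F',' a line split into NF fields contains NF-1 commas, so no
# per-line grep/wc processes are needed.
awk -F',' '{ print NR ": " NF - 1 " commas" }' v_file
```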

How to copy a file with a dynamic name and append a string to the name while copying to another directory in Unix

I have many files like ABC_Timestamp.txt and RAM_Timestamp.txt; the timestamp is different every time. I want to copy these files to another directory, but while copying I want to append a string to the end of each file name, so the results will be ABC_Timestamp.txt.OK and RAM_Timestamp.txt.OK. How do I append the string to a dynamic file name? Please suggest.
My 2 pence:
(cat file.txt; echo "append a line"; date +"perhaps with a timestamp: %T") > file.txt.OK
Or more complete for your filenames:
while sleep 3;
do
for a in ABC RAM
do
(echo "appending one string at the end of the file" | cat ${a}_Timestamp.txt -) > ${a}_Timestamp.txt.OK
done
done
Execute this on the command line:
ls -1 | awk '/ABC_.*\.txt/ || /RAM_.*\.txt/ {
    old = $0
    new = "/new_dir/" old ".OK"
    system("cp " old " " new)
}'
(The opening brace must sit on the same line as the pattern; on its own line, awk treats the pattern and the block as two separate rules.)
You can say:
for i in *.txt; do cp "${i}" targetdirectory/"${i}".OK ; done
or
for i in ABC_*.txt RAM_*.txt; do cp "${i}" targetdirectory/"${i}".OK ; done
How about first dumping the file names into another file and then copying the files one by one:
find . -name "*.txt" > fileNames
while read -r line
do
    newName="${line}appendText"
    echo "$newName"
    cp "$line" "$newName"
done < fileNames

How to grep a particular position line from the result?

I grep for a pattern in a directory along with the 4 lines before each match. I need to further extract the top line from each result, but I can't work out how to do it.
Please suggest an approach.
The problem explained with an example:
In a directory 'direktory' there are multiple files with different names, like 20130611 and 2013400.
The data written in the files that I am interested in looks like this:
[
My name is
.....
......
......
Name has been written above
]
In every instance, "Name has been written above" appears a fixed number of lines below, but the value in place of "My name is" keeps changing, so I want to grep that particular line from every occurrence.
Please suggest some method to get the result.
Thanks in advance.
a#x:/tmp$ cat namefile
[
My name is
.....
......
......
Name has been written above
]
a#x:/tmp$ grep -B 4 "Name has been written above" namefile | head -1
My name is
where "4" can be replaced by N, i.e. the number of lines the target data lies above the grepped line.
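Note that piping grep -B 4 through head -1 only yields the line above the first match. If the marker occurs several times in one file, a small awk ring buffer can print the line N lines above every match. A sketch with N=4, creating a two-occurrence sample file for illustration:

```shell
# Sample file with two occurrences of the marker (for illustration)
printf '%s\n' '[' 'My name is Alice' '...' '...' '...' \
              'Name has been written above' ']' \
              '[' 'My name is Bob'   '...' '...' '...' \
              'Name has been written above' ']' > namefile

# Keep the last n lines in a ring buffer; on each match, print the
# line that appeared n lines earlier.
awk -v n=4 '{ buf[NR % (n + 1)] = $0 }
    /Name has been written above/ && NR > n { print buf[(NR - n) % (n + 1)] }' namefile
```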
Try something like
for file in <wherever>/*
do
    # Tell the user which file we're looking at
    echo ""
    echo "$file"
    echo ""
    # Output the first line of the file
    head -1 "$file"
    # Output the line containing <pattern> and the four
    # preceding lines
    <your grep command here>
done
