Text replace on unix

I would like to replace this statement with the following, on a Unix system. Does anyone know how I can do that?
/www/docs/syndrome.ms.fcm
by
$_SERVER['DOCUMENT_ROOT']

Yes, with sed:
sed -i "s#/www/docs/syndrome.ms.fcm#\$_SERVER['DOCUMENT_ROOT']#g" $(
    grep -l "/www/docs/syndrome.ms.fcm" *files
)
If you don't have the -i switch:
for f in $(grep -l "/www/docs/syndrome.ms.fcm" *files); do
    sed "s#/www/docs/syndrome.ms.fcm#\$_SERVER['DOCUMENT_ROOT']#g" "$f" > newfile &&
        mv newfile "$f"
done
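As a quick check of the substitution itself (the trailing index.php below is just an invented example), the # delimiter avoids escaping the slashes in the path, and the backslash stops the shell from expanding $_SERVER:
echo "/www/docs/syndrome.ms.fcm/index.php" |
    sed "s#/www/docs/syndrome.ms.fcm#\$_SERVER['DOCUMENT_ROOT']#g"
# prints: $_SERVER['DOCUMENT_ROOT']/index.php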

Related

recovering deleted script running in background

I have a script running in the background (nohup), but it was accidentally deleted. It is still running, and now I need to edit the code that was deleted.
How can I get that code back? I assume it must still exist somewhere, since the script is running.
Try this:
#!/bin/bash

if [[ ! $1 || $1 == -h || $1 == --help ]]; then
    echo -e "Usage:\n\n\t$0 '[path/]<file name>'"
    exit 1
fi

files=(
    $(file 2>/dev/null /proc/*/fd/* |
        grep "(deleted)'$" |
        sed -r 's#(:.*broken\s+symbolic\s+link\s+to\s+.|\(deleted\).$)# #g' |
        grep "$1" |
        cut -d' ' -f1
    )
)

if [[ ${files[@]} ]]; then
    for f in "${files[@]}"; do
        echo "fd $f match... Try to copy this fd to another place quickly!"
    done
else
    echo >&2 "No matching fd found..."
    exit 2
fi
Not tested on non-GNU/Linux systems.
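Once it reports a match, copy the still-open descriptor out of /proc before the process exits. The script name, PID and fd number below are invented for illustration:
./recover_deleted.sh myscript.sh
fd /proc/12345/fd/255 match... Try to copy this fd to another place quickly!
cp /proc/12345/fd/255 /tmp/myscript.recovered.sh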

Not getting expected file result using awk

#!/bin/bash

delete_file () {
    for file in processor_list.txt currnet_username.txt unique_username.txt
    do
        if [ -e $file ]; then
            rm $file
        fi
    done
}

delete_file

ps -elf > processor_list.txt; chmod 755 processor_list.txt
awk '{print $3}' processor_list.txt > currnet_username.txt; chmod 755 currnet_username.txt
sort -u currnet_username.txt > unique_username.txt; chmod 755 unique_username.txt

while read line; do
    if [ -e $line.txt ]; then
        rm $line.txt
    fi
    grep $line processor_list.txt > $line.sh; chmod 755 $line.sh
    awk '{if($4 == "$line") print $0;}' $line.sh > ${line}1.txt; #mv ${line}1.txt $line.txt;chmod 755 $line.txt
done < unique_username.txt
I'm a beginner at Unix shell scripting. Please advise; I am not getting the expected results in ${line}1.txt.
For example, I have two UIDs, kplus and kplustp. My requirement is to find the string "kplus" in the ps -elf output, create a file with the same name (kplus.txt), and redirect whatever grep finds into it.
But I am getting both kplus and kplustp data in the kplus.txt file. I need only the kplus rows, based on the UID column of ps -elf, in kplus.txt.
This is the wrong way to read a shell variable in awk:
awk '{if($4 == "$line") print $0;}' $line.sh
Use:
awk '{if($4 == var) print $0;}' var="$line" $line.sh
Or shorten to
awk '$4==var' var="$line" $line.sh
The default action is {print $0} when no action is specified.
If you need to search for the literal text $line, escape the $ in the regex:
awk '$4~/\$line/' $line.sh
or as a plain string comparison it works directly:
awk '$4=="$line"' $line.sh
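Applied to the original problem, passing the UID with -v and comparing it against the UID column (field 3 of ps -elf, the same field the script already extracts) gives an exact match, so kplustp no longer ends up in kplus.txt. A minimal sketch, not the full script:
line=kplus
awk -v var="$line" '$3 == var' processor_list.txt > "$line.txt"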

Match specific lines in sed command

How can I match a set of specific lines for a substitution command?
My attempt (incorrect):
sed -e'71,116,211s/[ ]+$//' ...
I want to strip trailing whitespace on lines 71, 116 and 211 only.
You could try something like:
awk 'NR == 71 || NR == 116 || NR == 211 {sub(/ *$/,"",$0)} {print $0}'
or
sed '71s/ *$//;116s///;211s///'
sed '71bl;116bl;211bl;b;:l;s/[ ][ ]*$//' input
For each of the specified lines, this script branches to the label l; every other line hits the bare branch and jumps to the end of the script unchanged.
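Written out as a commented script file (an equivalent sketch, assuming a GNU or POSIX sed), the same logic reads:
# lines 71, 116 and 211 branch to the label "strip";
# every other line hits the bare b and is printed unchanged
71 b strip
116 b strip
211 b strip
b
:strip
s/[ ][ ]*$//
Save it as, say, strip.sed and run it with sed -f strip.sed input.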
And an awk solution:
awk -v k="71,116,211" 'BEGIN{n=split(k,a,","); for(i=1;i<=n;i++) lines[a[i]]}
(NR in lines) { sub(/ *$/,"",$0) } 1' input
This might work for you (GNU sed):
sed -r '/\s+$/!b;71s///;116s///;211s///' file
or perhaps:
sed -e '/ *$/!b' -e '71s///' -e '116s///' -e '211s///' file
or as has been said already:
sed -e '71ba' -e '116ba' -e '211ba' -e 'b' -e ':a' -e 's/ *$//' file
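Whichever variant you pick, a quick way to check it (assuming bash and GNU coreutils; input and output are placeholder names) is to make trailing blanks visible and diff the results:
sed -e '71s/ *$//' -e '116s///' -e '211s///' input > output
diff <(cat -A input) <(cat -A output)   # only lines 71, 116 and 211 should change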

Passing text to variable in KSH. Not Working

Hi, I am struggling with this simple program. I am not able to pass the value from the text file to the variable.
I am stuck at this: value=$( sed -n "${line}p" rpt1.txt|awk {$3} )
O/P:
1.sh[15]: test: argument expected
CODE:
wc `find /arbor/custom/gur/fold1` | grep -vi "total" | tee rpt1.txt
total1=`wc -l rpt1.txt`
wc `find /arbor/custom/gur/fold2` | grep -vi "total" | tee rpt2.txt
total2=`wc -l rpt2.txt`
line=1

if [ $line -le $total1 ]
then
    value=$( sed -n "${line}p" rpt1.txt|awk {$3} )
    if [ $value -eq 512 ];
    then
        sed -n "${line}p" rpt1.txt|awk '{print $4}'| tee direc.txt
    fi
    line =$line+1
else
    echo "loop over"
fi
Shouldn't there be a print in front of $3 in the suspect line?
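Beyond the missing print, a couple of other lines will also trip ksh. A minimal corrected sketch of the affected lines (my reading of the intent, not the poster's final code):
total1=$(wc -l < rpt1.txt)                              # redirecting avoids the file name in the output
value=$(sed -n "${line}p" rpt1.txt | awk '{print $3}')  # quoted awk program with an explicit print
line=$((line + 1))                                      # no space before =, and arithmetic expansion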

Unix script to delete file if it contains single line

Consider I have a file abcde.txt which may contain one or more lines of text. I want a script that will DELETE the file if it contains a single line.
Something like: if 'wc -l abscde.txt' = 1, then rm abscde.txt.
My system: Solaris.
Here's a simple bash script:
#!/bin/bash

LINECOUNT=`wc -l abscde.txt | cut -f1 -d' '`

if [[ $LINECOUNT == 1 ]]; then
    rm -f abscde.txt
fi
delifsingleline () {
    if [ $(cat $1 | wc -l) = "1" ]
    then
        echo "Deleting $1"
        echo "rm $1"
    fi
}
Lightly tested on zsh. Should work on bash as well.
This is (mostly) just a reformat of Ben's answer:
wc -l $FILE | grep '^1 ' > /dev/null && rm -f $FILE
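Since the asker is on Solaris, a plain POSIX sh sketch that avoids bash-specific syntax may be safer (untested on Solaris; the file name comes from the question):
f=abscde.txt
[ "$(wc -l < "$f")" -eq 1 ] && rm -f "$f"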
