I am not able to capture the output of sed and cut used together in a variable. Below is a snippet from a script:
max=$(sed -n '1,${/$i/p;q;}' $file | cut -d "," -f2)
When I print the value of max it is blank. But the same line works fine when I execute it directly in the terminal, like below:
sed -n '1,${/$i/p;q;}' $file | cut -d "," -f2
I am not able to understand why the assignment is failing. Could anyone please help me out here?
Regards,
Sayantan
As stated in the comments (and worked out for OP):
Inside single quotes, $i is not expanded as a shell variable; sed receives a literal $ followed by i, i.e. end-of-line followed by the character i after the end-of-line, which can never match.
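A minimal sketch of the fix (assuming $i holds the search pattern and the goal is the second comma-separated field of the first matching line): use double quotes so the shell expands $i before sed sees the script.
# double quotes let the shell expand $i; quoting $file also survives spaces in the name
max=$(sed -n "/$i/{p;q;}" "$file" | cut -d ',' -f2)
echo "$max"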
I would like to extract text that falls between two | signs in a file with multiple lines. For instance, I want to extract P16 from sp|P16|SM2. I have found a possible answer here. However, I cannot apply the answer to my case. I am using the following:
sed -n '/|/,/|/ p' filename
or this by escaping the | sign:
sed -n '/\|/,/\|/ p' filename
But what I receive as a result is all the lines in the file, unchanged, even though I am using -n to suppress automatic printing of the pattern space. Any ideas what I am missing?
[EDIT]:
I can get the desired result using the following. However, I would like an explanation of why the commands above are not working:
sed 's/^sp|//' filename | sed 's/|.*//'
The tool for this task is cut:
$ echo "sp|P16|SM2" | cut -d'|' -f2
P16
awk is a better choice for column-based data:
awk -F'|' '{print $2}'
will give you P16
sed one-liner:
The following sed one-liner will leave only the 2nd column for you:
kent$ echo "sp|P16|SM2"|sed 's/[^|]*|//;s/|[^|]*//'
P16
Or using grouping:
kent$ echo "sp|P16|SM2"|sed 's/.*|\([^|]*\)|.*/\1/'
P16
Short explanation of why your two commands didn't work:
1) sed -n '/|/,/|/ p' filename
This sed prints the lines between two lines that contain |; since every line here contains |, the ranges cover the whole file.
2) sed -n '/\|/,/\|/ p' filename
sed uses BRE by default. By escaping the | you gave it a special meaning, the logical OR (alternation, a GNU extension). And again, the /pat1/,/pat2/ address form is the wrong tool for your case: it selects whole lines, it does not extract text within a line.
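A small demo of the difference, using a made-up two-line file: the range form selects whole lines between matching lines (every line matches here, so everything is printed), while a substitution extracts text within each line.
kent$ printf 'sp|P16|SM2\nsp|Q99|SM3\n' > demo.txt
kent$ sed -n '/|/,/|/p' demo.txt
sp|P16|SM2
sp|Q99|SM3
kent$ sed 's/.*|\([^|]*\)|.*/\1/' demo.txt
P16
Q99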
I'm trying to write a script to swap out text in a file:
sed s/foo/bar/g myFile.txt > myFile.txt.updated
mv myFile.txt.updated myFile.txt
I invoke the sed program, which swaps out text in myFile.txt and redirects the changed text to a second file. mv then moves the .updated file back over myFile.txt, overwriting it. That command works in the shell.
I wrote:
#!/bin/sh
#First, I set up some descriptive variables for the arguments
initialString="$1"
shift
desiredChange="$1"
shift
document="$1"
#Then, I invoke sed on these (more readable) parameters
updatedDocument=`sed s/$initialString/$desiredChange/g $document`
#I want to make sure that was done properly
echo updated document is $updatedDocument
#then I move the output in to the new text document
mv $updatedDocument $document
I get the error:
mv: target `myFile.txt' is not a directory
I understand that mv thinks my new file's name is the first word of the string that was sed's output. I don't know how to correct that. I've been trying since 7am: every kind of quoting, creating a temporary file to store the output in (disastrous results), IFS... everything so far gives me more and more unhelpful errors. I need to clear my head and I need your help. How can I fix this?
Maybe try
echo $updatedDocument > $document
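If the document has more than one line, quote the variable so the newlines survive word splitting:
echo "$updatedDocument" > "$document"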
Change
updatedDocument=`sed s/$initialString/$desiredChange/g $document`
to
updatedDocument=${document}.txt
sed s/$initialString/$desiredChange/g $document > $updatedDocument
Backticks will actually put the entire piped output of the sed command into your variable value.
An even faster way would be to not use updatedDocument or mv at all by doing an in-place sed:
sed -i s/$initialString/$desiredChange/g $document
The -i flag tells sed to do the replacement in-place. This basically means creating a temp file for the output and replacing your original file with the temp file once it is done, pretty much exactly as you are doing.
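A minimal sketch of the whole script along those lines (assuming GNU sed; BSD/macOS sed needs an explicit backup suffix, e.g. sed -i ''):
#!/bin/sh
initialString="$1"
desiredChange="$2"
document="$3"
# quote the expansions so a pattern or replacement containing spaces
# stays a single sed argument
sed -i "s/${initialString}/${desiredChange}/g" "$document"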
#!/bin/sh
#First, I set up some descriptive variables for the arguments
initialString=$(echo "$1" | sed ...) #translation of special regex chars like . * \ / ? goes here
desiredChange=$(echo "$2" | sed 's|[\&/]|\\&|g')
document="$3"
#Then, I invoke sed
sed "s/${initialString}/${desiredChange}/g" ${document} | tee ${document}
don't forget that initialString and desiredChange are patterns interpreted as regexes, so a translation is certainly needed
the sed ... placeholder is to be replaced by the correct sed that escapes regex-special chars like . * \ / ? (discussed in several posts on the site)
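One possible version of that escaping step, as a sketch only (the character class below is an assumption about which characters matter for a BRE search pattern used with / as the s/// delimiter):
# escape the characters that are special in a BRE search pattern,
# plus the / used as the s/// delimiter
initialString=$(printf '%s\n' "$1" | sed 's|[][\.*^$/]|\\&|g')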
I have a log file a.log and I need to extract a piece of information from it.
To locate the start and end line numbers of the pattern, I am using the following:
start=$(sed -n '/1112/=' file9 | head -1)
end=$(sed -n '/true/=' file9 | head -1)
I need to use the variables (start, end) in the following command:
sed -n '16q;12,15p' orig-data-file > new-file
so that the above command appears something like:
sed -n '($end+1)q;$start,$end'p orig-data-file > new-file
I am unable to replace the line numbers with the variables. Please suggest the correct syntax.
Thanks,
Rosy
When I worked out how to do this, I was looking for a way to get the line number of the first line containing the requested info, and then display the file from that line to EOF.
So, this was my way.
with
PATTERN="pattern"
INPUT_FILE="file1"
OUTPUT_FILE="file2"
the line number of the first match of $PATTERN in $INPUT_FILE can be retrieved with
LINE=`grep -n ${PATTERN} ${INPUT_FILE} | awk -F':' '{ print $1 }' | head -n 1`
and the output file will be the text from that $LINE to EOF. This way:
sed -n ${LINE},\$p ${INPUT_FILE} > ${OUTPUT_FILE}
The point here is how variables can be used with the sed -n command:
first without using variables
sed -n 'N,$p' <file name>
using variables
LINE=<N>; sed -n ${LINE},\$p <file name>
Remove the single quotes: single quotes turn off the shell's parsing of the string, and you need shell parsing to do the variable substitutions. The +1 also has to be done by the shell, since sed does no arithmetic:
sed -n "$((end+1))q;${start},${end}p" orig-data-file > new-file
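Putting it together, a sketch of the whole extraction (assuming a.log is the same file referred to as file9 / orig-data-file in the question, and that both patterns actually occur in it):
#!/bin/sh
# first line containing 1112, and first line containing true
start=$(sed -n '/1112/=' a.log | head -1)
end=$(sed -n '/true/=' a.log | head -1)
# the +1 is done by the shell; sed only ever sees plain numbers
sed -n "$((end+1))q;${start},${end}p" a.log > new-file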
I need to get the records from a text file in Unix. The delimiter is multiple blanks. For example:
2U2133 1239
1290fsdsf 3234
From this, I need to extract
1239
3234
The delimiter for all records will always be 3 blanks.
I need to do this in a Unix script (.scr) and write the output to another file or use it as input to a do-while loop. I tried the below:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then
int_1=0
else
int_2=0
fi
done < awk -F' ' '{ print $2 }' ${Directoty path}/test_file.txt
test_file.txt is the input file and file1.txt is a lookup file. But the above does not work and gives me syntax errors near awk -F.
I tried writing the output to a file. The following worked in command line:
more test_file.txt | awk -F' ' '{ print $2 }' > output.txt
This works and writes the records to output.txt on the command line. But the same command does not work in the Unix script (it is a .scr file).
Please let me know where I am going wrong and how I can resolve this.
Thanks,
Visakh
The job of replacing multiple delimiters with just one is left to tr:
cat <file_name> | tr -s ' ' | cut -d ' ' -f 2
tr translates or deletes characters, and is perfectly suited to prepare your data for cut to work properly.
The manual states:
-s, --squeeze-repeats
replace each sequence of a repeated character that is
listed in the last specified SET, with a single occurrence
of that character
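With the sample data from the question, that pipeline looks like this (printf is only used here to feed in the two example lines):
$ printf '2U2133   1239\n1290fsdsf   3234\n' | tr -s ' ' | cut -d ' ' -f 2
1239
3234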
It depends on the version or implementation of cut on your machine. Some versions support an option, usually -i, that means 'ignore blank fields' or, equivalently, allow multiple separators between fields. If that's supported, use:
cut -i -d' ' -f 2 data.file
If not (and it is not universal — and maybe not even widespread, since neither GNU nor MacOS X have the option), then using awk is better and more portable.
You need to pipe the output of awk into your loop, though:
awk -F' ' '{print $2}' ${Directory_path}/test_file.txt |
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done
The only residual issue is whether the while loop is in a sub-shell and therefore not modifying your main shell script's variables, just its own copy of those variables.
With bash, you can use process substitution:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done < <(awk -F' ' '{print $2}' ${Directory_path}/test_file.txt)
This leaves the while loop in the current shell, but arranges for the output of the command to appear as if from a file.
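To see the sub-shell effect described above, here is a small self-contained bash demo (the input lines are made up):
count=0
printf 'a\nb\n' | while read -r line; do count=$((count+1)); done
echo "$count"    # prints 0 in bash: the loop ran in a sub-shell, so the parent's count is untouched
count=0
while read -r line; do count=$((count+1)); done < <(printf 'a\nb\n')
echo "$count"    # prints 2: with process substitution the loop runs in the current shell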
The blank in ${Directory path} is not normally legal — unless it is another Bash feature I've missed out on; you also had a typo (Directoty) in one place.
Other ways of doing the same thing aside, the error in your program is this: You cannot redirect from (<) the output of another program. Turn your script around and use a pipe like this:
awk -F' ' '{ print $2 }' ${Directory_path}/test_file.txt | while read readline
etc.
Besides, the use of "readline" as a variable name may or may not get you into problems.
In this particular case, you can squeeze each run of blanks into a single tab and then cut on the default tab delimiter (the \t escape needs GNU sed):
sed 's/  */\t/g' <file_name> | cut -f 2
to get your second column.
In bash you can start from something like this:
for n in `cat ${Directory_path}/test_file.txt | cut -d " " -f 4`
do
grep -c $n ${Directory_path}/file*.txt
done
This should have been a comment, but since I cannot comment yet, I am adding this here.
This is from an excellent answer here: https://stackoverflow.com/a/4483833/3138875
tr -s ' ' <text.txt | cut -d ' ' -f4
tr -s '<character>' squeezes multiple repeated instances of <character> into one.
It's not working in the script because of the typo in "Directo*t*y path" (last line of your script).
Cut isn't flexible enough. I usually use Perl for that:
cat file.txt | perl -F'   ' -lane 'print $F[1]'
Instead of the triple space after -F you can put any Perl regular expression. You access fields as $F[n], where n is the field number (counting starts at zero). This way there is no need for sed or tr.
I am using the below code for replacing a string
inside a shell script.
echo $LINE | sed -e 's/12345678/"$replace"/g'
but it's getting replaced with $replace instead of the value of that variable.
Could anybody tell what went wrong?
If you want to interpret $replace, you should not use single quotes since they prevent variable substitution.
Try:
echo $LINE | sed -e "s/12345678/${replace}/g"
Transcript:
pax> export replace=987654321
pax> echo X123456789X | sed "s/123456789/${replace}/"
X987654321X
pax> _
Just be careful to ensure that ${replace} doesn't have any characters of significance to sed (like / for instance) since it will cause confusion unless escaped. But if, as you say, you're replacing one number with another, that shouldn't be a problem.
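For instance (with a made-up value), a slash inside ${replace} ends the s command early unless you escape it or pick a different delimiter:
pax> replace="usr/local"
pax> echo X123456789X | sed "s/123456789/${replace}/"   # fails: the / in ${replace} closes the s/// early
pax> echo X123456789X | sed "s|123456789|${replace}|"   # a different delimiter avoids the clash
Xusr/localX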
You can use the shell's built-in string substitution instead (bash/ksh).
$ var="12345678abc"
$ replace="test"
$ echo ${var//12345678/$replace}
testabc
Not specific to the question, but for folks who need the same kind of functionality, expanded for clarity from previous answers:
# create some variables
str="someFileName.foo"
find=".foo"
replace=".bar"
# notice that str isn't prefixed with $ inside the braces
# this is just how this feature works :/
result=${str//$find/$replace}
echo $result
# result is: someFileName.bar
str="someFileName.sally"
find=".foo"
replace=".bar"
result=${str//$find/$replace}
echo $result
# result is: someFileName.sally because ".foo" was not found
Found a graceful solution.
echo ${LINE//12345678/$replace}
Single quotes are very strong. Once inside, there's nothing you can do to invoke variable substitution, until you leave. Use double quotes instead:
echo $LINE | sed -e "s/12345678/$replace/g"
Let me give you two examples.
Using sed:
#!/bin/bash
LINE="12345678HI"
replace="Hello"
echo $LINE | sed -e "s/12345678/$replace/g"
Without Using sed:
LINE="12345678HI"
str_to_replace="12345678"
replace_str="Hello"
result=${LINE//$str_to_replace/$replace_str}
echo $result
Hope you will find it helpful!
echo $LINE | sed -e 's/12345678/'$replace'/g'
You can still use single quotes, but then you have to "close" them where you want the variable expanded and reopen them afterwards; otherwise the string is taken "literally" (as @paxdiablo correctly stated, his answer is correct as well).
To let your shell expand the variable, you need to use double-quotes like
sed -i "s#12345678#$replace#g" file.txt
This will break if $replace contains characters that are special in a sed replacement (#, \, &). But you can preprocess $replace to quote them:
replace_quoted=$(printf '%s' "$replace" | sed 's/[#\&]/\\&/g')
sed -i "s#12345678#$replace_quoted#g" file.txt
I had a similar requirement to this but my replace var contained an ampersand. Escaping the ampersand like this solved my problem:
replace="salt & pepper"
echo "pass the salt" | sed "s/salt/${replace/&/\&}/g"
Use a different delimiter, such as #, if the strings contain characters like /:
result=$(echo $str | sed "s#$oldstr#$newstr#g")
The code above replaces all occurrences of the search term.
If you remove the trailing g, only the first occurrence is replaced.
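A quick illustration of the difference, with made-up strings:
$ echo "foo foo foo" | sed "s/foo/bar/"
bar foo foo
$ echo "foo foo foo" | sed "s/foo/bar/g"
bar bar bar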
Use this instead
echo $LINE | sed -e s/12345678/$replace/g
This works for me; just remove the quotes entirely (which is fine as long as $replace contains no spaces or other shell metacharacters).
I prefer to use double quotes, as single quotes are very strong: nothing inside them can be changed and variable substitution cannot happen. So use double quotes instead:
echo $LINE | sed -e "s/12345678/$replace/g"