Variable pattern matching awk - unix

I have an input file "myfile" and a shell variable "VAR" whose
contents are shown below:
# cat myfile
Hello World
Hello Universe
# echo $VAR
World
I need to process all lines in "myfile" which contain the pattern "World"
using awk. It works if I do it as shown below:
# awk '/World/ { print $0 }' myfile
But I could not use VAR to do the same operation. I tried the
following, even knowing that these would not work:
# awk '/$VAR/ { print $0 }' myfile
and
# awk -v lvar=$VAR '/lvar/ { print $0 }' myfile
and
# awk -v lvar=$VAR 'lvar { print $0 }' myfile
Please let me know how to match the contents of VAR in awk.
Thanks in advance

You could use:
awk -v lvar="$VAR" '$0~lvar {print}' myfile
Or (works but not recommended):
awk "/$VAR/ {print}" myfile

This is probably what you want:
awk -v lvar="$VAR" '$0~lvar' myfile
Never do
awk "/$VAR/" myfile
or any other syntax that expands a shell variable inline within the awk script so that its contents become part of the script text. That can produce all sorts of bizarre failures and error messages, depending on how the shell variable is populated, because with that syntax there is no variable inside awk; the shell variable's contents are pasted into the script as if you had hard-coded them.
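For example (a hypothetical value; the exact error text varies by awk implementation), a value containing the regex delimiter breaks the inline form but is harmless with -v:
VAR='this/that'                       # contains "/", the regex delimiter
awk "/$VAR/" myfile                   # becomes awk '/this/that/': syntax error
awk -v lvar="$VAR" '$0~lvar' myfile   # fine: the value is never parsed as awk code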
Always do the above using -v or
awk 'BEGIN{lvar=ARGV[1]; delete ARGV[1]} $0~lvar' "$VAR" myfile
depending on your requirements for the shell expanding backslashes. See http://cfajohnson.com/shell/cus-faq-2.html#Q24 for details.
Only do
awk '$0~lvar' lvar="$VAR" myfile
if you are processing multiple files, need to change initial values of the variable between files, and do not have gawk for BEGINFILE or need to populate those values from environment variables.
Note that all of the above do a regexp match on lvar; if you need a string match instead, then use awk -v lvar="$VAR" 'index($0,lvar)'.
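To illustrate the difference (a small sketch; works with any awk): with a value like a.c, the regexp match treats the dot as a wildcard, while index() matches literally:
VAR='a.c'
echo 'abc' | awk -v lvar="$VAR" '$0~lvar'          # prints abc: "." matches any character
echo 'abc' | awk -v lvar="$VAR" 'index($0,lvar)'   # prints nothing: literal match only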

If you export your variable, the solution becomes simpler:
export VAR
awk '$0~ENVIRON["VAR"]' myfile
This takes out any issues of the variable's value being reprocessed (e.g., backslash escapes being expanded) on the way in.
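One difference worth knowing (a small demonstration): -v processes C escape sequences in the value, while ENVIRON hands it over untouched:
VAR='a\tb'; export VAR
awk -v lvar="$VAR" 'BEGIN{print lvar}'   # prints "a", a tab, "b": -v expands the \t
awk 'BEGIN{print ENVIRON["VAR"]}'        # prints a\tb unchanged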

Related

How to read a value from recursive xml attribute in Unix using sed/awk/grep only

I have config.xml. I need to retrieve the value of the element at the XPath
/domain/server/name
and I can only use grep/sed/awk. I need help.
The content of the XML is below; I need to retrieve the server names only.
<domain>
<server>
<name>AdminServer</name>
<port>1234</port>
</server>
<server>
<name>M1Server</name>
<port>5678</port>
</server>
<machine>
<name>machine01</name>
</machine>
<machine>
<name>machine02</name>
</machine>
</domain>
The output should be:
AdminServer
M1Server
I tried:
sed -ne '/<\/name>/ { s/<[^>]*>(.*)<\/name>/\1/; p }' config.xml
sed is only for simple substitutions on individual lines; doing anything else with sed is strictly for mental exercise, not for real code. That's not what you are trying to do, so you shouldn't even be considering sed. Just use awk:
$ awk -F'[<>]' 'p=="server" && $2=="name"{print $3} {p=$2}' file
AdminServer
M1Server
That will work with any awk on any UNIX box. If that's not all you need, then edit your question to provide more truly representative sample input and expected output.
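Briefly, how that works (the same code, just spread out with comments):
awk -F'[<>]' '
p=="server" && $2=="name" { print $3 }   # a <name> tag directly after a <server> tag: print its text
{ p=$2 }                                 # remember this line's tag for the next line
' file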
Try this command, supplying your XML file (here name.xml) as the input:
awk '/<server>/,/<\/server>/' < name.xml | grep "name" | cut -d ">" -f2 | cut -d "<" -f1
Output:
AdminServer
M1Server
Based on your sample Input_file shown, could you please try the following:
awk -F"[><]" '/<\/server>/{a="";next} /<server>/{a=1;next} a && /<name>/{print $3}' Input_file
sed -n '/<server>/{n;s/\s*<[^>]*>//gp}' config.xml
For example, for the first match:
1. /<server>/ matches the line that contains "<server>".
2. The "n" command reads the next line into the pattern space; after "n" executes, the pattern space holds "<name>AdminServer</name>".
3. s/\s*<[^>]*>//gp replaces every "\s*<[^>]*>" with "" and then prints the pattern space.
Type "info sed" for more on sed commands.
You can get each <name> value with just sed (note that on the sample above this prints the machine names as well):
sed -n 's:.*<name>\(.*\)</name>.*:\1:p' config.xml
I feel dirty parsing XML in awk.
The following finds an entry at the correct depth with the right tag name. It does not verify the path, though it depends on the elements you specified. While this works on your example data, it makes certain ugly assumptions and is not guaranteed to work elsewhere:
awk -F'[<>]' '$2~/^(domain|server|name)$/{n++} n==3 && $2=="name"{print $3} /\/(domain|server|name)>/{n--}' input.xml
A better solution would be to check the actual path:
$ awk -F'[<>]' -v check="domain.server.name" '$2~/^[a-z]/ { path=path "." $2 } substr(path,2)==check { print path " = " $3 } /<\// { sub(/\.[^.]*$/,"",path) }' input.xml
.domain.server.name = AdminServer
.domain.server.name = M1Server
Here it is split out for easier commenting.
$ awk -F'[<>]' -v check="domain.server.name" '
# Split fields around the pointy brackets. Supply a path to check.
$2~/^[a-z]/ {               # If we see an open tag,
    path=path "." $2        # append the current tag to our path.
}
substr(path,2)==check {     # If we match the given path (checked before any close tag truncates it),
    print path " = " $3     # print the result.
}
/<\// {                     # If the line contains a close tag,
    sub(/\.[^.]*$/,"",path) # truncate the last component off the path.
}
' input.xml
Note that this solution barfs horribly if you feed it badly formatted XML. The recognition of tags could be improved, but may be sufficient if you have consistently formatted XML. It may barf horribly for other reasons too. Do not do this. Install the correct tools to parse XML properly.
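For instance, with a real XML tool (this assumes xmlstarlet is installed; it is not part of the original answer):
xmlstarlet sel -t -m '/domain/server/name' -v . -n config.xml
which prints AdminServer and M1Server, one per line.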

Retrieving a variable name that starts with a specific string

I have a variable name that appears in multiple locations of a text file. This variable will always start with the same string but not always end with the same characters. For example, it can be var_name or var_name_TEXT.
I'm looking for a way to extract the first occurrence in the text file of this string starting with var_name and ending with , (but I don't want the comma in the output).
Example1: var_name, some_other_var, another_one, ....
Output: var_name
Example2: var_name_TEXT, some_other_var, another_one, ...
Output: var_name_TEXT
grep -oPm1 '\bvar_name[^, ]*(?=,)' file | head -1
Match and output only the variables starting with var_name and ending with a comma; do not include the comma in the output; quit after the first matching line and pick the first match on that line (if there is more than one).
P.S. You have to include the space in the bracket expression as well.
I suggest with GNU grep:
grep -o '\bvar_name[^,]*' file | head -n 1
All you need is (GNU awk):
$ awk 'match($0,/\<var_name[^,]*/,a){print a[0]; exit}' file
var_name_TEXT
To print the field only (i.e., var_name or var_name_TEXT only; not the line containing it) you could use awk:
awk -F, '{for (i=1;i<=NF;i++) if ($i~/^var_name/) print $i}' file
If you actually have spaces before or after the commas (as you show in your example), you can change the awk field separator:
awk -F"[, ]+" '{for (i=1;i<=NF;i++) if ($i~/^var_name/) print $i}' file
You can also use GNU grep with a word boundary assertion:
grep -o '\bvar_name[^,]*' file
Or GNU awk:
awk '/\<var_name/' file
If you want only one match considered, add exit to the awk command or -m 1 to grep to stop after the first match.
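For example (a quick sketch on the same sample input; first match only):
grep -m 1 -o '\bvar_name[^,]*' file | head -n 1   # -m 1 stops grep at the first matching line
awk '/\<var_name/{print; exit}' file              # exit quits awk after the first matching line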

awk getline not accepting external variable from a file

I have a file test.sh from which I am executing the following awk command.
awk -f x.awk < result/output.txt >>difference.txt
x.awk
while (getline < result/$bld/$DeviceType)
The variables DeviceType and bld are available in test.sh.
I have declared them as exported:
export DeviceType=$line
Even then, while executing the test.sh file, the script stops at the following line
awk -f x.awk < result/output.txt >>difference.txt
and I am getting the error:
awk: x.awk:4: (FILENAME=- FNR=116) fatal: division by zero attempted
The awk script is read by awk, not touched by the shell. Inside an awk script, $bld means 'the field designated by the number in the variable bld' (that's the awk variable bld).
You can set awk variables on the command line (officially with the -v option):
awk -v bld="$bld" -v dev="$DeviceType" -f x.awk < result/output.txt >> difference.txt
Whether that does what you want is still debatable. Most likely you need x.awk to contain something like:
BEGIN { file = sprintf("result/%s/%s", bld, dev); }
{ while ((getline < file) > 0) print }
awk is not shell just like C is not shell. You should not expect to be able to access shell variables within an awk program any more than you can access shell variables within a C program.
To pass the VALUE of shell variables to an awk script, see http://cfajohnson.com/shell/cus-faq-2.html#Q24 for details but essentially:
awk -v awkvar="$shellvar" '{ ... use awkvar ...}'
is usually the right approach.
Having said that, whatever you're trying to do, getline looks like the wrong approach. If you are considering using getline, make sure to read http://awk.freeshell.org/AllAboutGetline first and understand all of the caveats, but if you tell us what you're trying to do, with sample input and expected output, we can almost certainly help you come up with a better approach that has nothing to do with getline.
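As a rough sketch of a getline-free pattern (file names here are made up, since the real requirement is unknown), two files can be handled in a single awk call with the NR==FNR idiom:
awk 'NR==FNR { seen[$0]; next }   # first file: remember each line
$0 in seen                        # second file: print lines also present in the first
' lookup.txt output.txt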

Unix command to prepend text to a file

Is there a Unix command to prepend some string data to a text file?
Something like:
prepend "to be prepended" text.txt
printf '%s\n%s\n' "to be prepended" "$(cat text.txt)" >text.txt
sed -i.old '1s;^;to be prepended;' inFile
-i writes the change in place and takes a backup if an extension is given (in this case, .old).
1s;^;to be prepended; substitutes the beginning of the first line by the given replacement string, using ; as a command delimiter.
Process Substitution
I'm surprised no one mentioned this.
cat <(echo "before") text.txt > newfile.txt
which is arguably more natural than the accepted answer (printing something and piping it into a substitution command is lexicographically counter-intuitive).
...and hijacking what ryan said above, with sponge you don't need a temporary file:
sudo apt-get install moreutils
<<(echo "to be prepended") < text.txt | sponge text.txt
EDIT: Looks like this doesn't work in Bourne Shell /bin/sh.
Here String (zsh only)
Using a here-string - <<< - you can do:
<<< "to be prepended" < text.txt | sponge text.txt
(zsh concatenates the two input redirections via its multios feature.)
This is one possibility:
(echo "to be prepended"; cat text.txt) > newfile.txt
You'll probably not easily get around an intermediate file.
Alternatives (can be cumbersome with shell escaping):
sed -i '0,/^/s//to be prepended/' text.txt
If it's acceptable to replace the input file:
Note:
Doing so may have unexpected side effects, notably potentially replacing a symlink with a regular file, ending up with different permissions on the file, and changing the file's creation (birth) date.
sed -i, as in Prince John Wesley's answer, tries to at least restore the original permissions, but the other limitations apply as well.
Here's a simple alternative that uses a temporary file (it avoids reading the whole input file into memory the way that shime's solution does):
{ printf 'to be prepended'; cat text.txt; } > tmp.txt && mv tmp.txt text.txt
Using a group command ({ ...; ...; }) is slightly more efficient than using a subshell ((...; ...)), as in 0xC0000022L's solution.
The advantages are:
It's easy to control whether the new text should be directly prepended to the first line or whether it should be inserted as new line(s) (simply append \n to the printf argument).
Unlike the sed solution, it works if the input file is empty (0 bytes).
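For example, to insert the text as a line of its own rather than gluing it onto the first line:
{ printf 'to be prepended\n'; cat text.txt; } > tmp.txt && mv tmp.txt text.txt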
The sed solution can be simplified if the intent is to prepend one or more whole lines to the existing content (assuming the input file is non-empty):
sed's i function inserts whole lines:
With GNU sed:
# Prepends 'to be prepended' *followed by a newline*, i.e. inserts a new line.
# To prepend multiple lines, use '\n' as part of the text.
# -i.old creates a backup of the input file with extension '.old'
sed -i.old '1 i\to be prepended' inFile
A portable variant that also works with macOS / BSD sed:
# Prepends 'to be prepended' *followed by a newline*
# To prepend multiple lines, escape the ends of intermediate
# lines with '\'
sed -i.old -e '1 i\
to be prepended' inFile
Note that the literal newline after the \ is required.
If the input file must be edited in place (preserving its inode with all its attributes):
Using the venerable ed POSIX utility:
Note:
ed invariably reads the input file as a whole into memory first.
To prepend directly to the first line (as with sed, this won't work if the input file is completely empty (0 bytes)):
ed -s text.txt <<EOF
1 s/^/to be prepended/
w
EOF
-s suppresses ed's status messages.
Note how the commands are provided to ed as a multi-line here-document (<<EOF\n...\nEOF), i.e., via stdin; by default string expansion is performed in such documents (shell variables are interpolated); quote the opening delimiter to suppress that (e.g., <<'EOF').
1 makes the 1st line the current line
function s performs a regex-based string substitution on the current line, as in sed; you may include literal newlines in the substitution text, but they must be \-escaped.
w writes the result back to the input file (for testing, replace w with ,p to only print the result, without modifying the input file).
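For instance, quoting the delimiter keeps the shell from interpolating $HOME into the ed script (a small illustration of the quoting point above):
ed -s text.txt <<'EOF'
1 s/^/$HOME stays literal here: /
w
EOF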
To prepend one or more whole lines:
As with sed, the i function invariably adds a trailing newline to the text to be inserted.
ed -s text.txt <<EOF
0 i
line 1
line 2
.
w
EOF
0 i makes 0 (the beginning of the file) the current line and starts insert mode (i); note that line numbers are otherwise 1-based.
The following lines are the text to insert before the current line, terminated with . on its own line.
This will work to form the output. The - means standard input, which is provided via the pipe from echo.
echo -e "to be prepended \n another line" | cat - text.txt
To rewrite the file, a temporary file is required, as you cannot pipe back into the input file:
echo "to be prepended" | cat - text.txt > text.txt.tmp
mv text.txt.tmp text.txt
Prefer Adam's answer.
We can make this easier by using sponge; then we don't need to create a temporary file and rename it:
echo -e "to be prepended \n another line" | cat - text.txt | sponge text.txt
Probably nothing built-in, but you could write your own pretty easily, like this:
#!/bin/bash
echo -n "$1" > /tmp/tmpfile.$$
cat "$2" >> /tmp/tmpfile.$$
mv /tmp/tmpfile.$$ "$2"
Something like that at least...
Editor's note:
This command will result in data loss if the input file happens to be larger than your system's pipeline buffer size, which is typically 64 KB nowadays. See the comments for details.
In some circumstances the prepended text may be available only from stdin.
Then this combination shall work:
echo "to be prepended" | cat - text.txt | tee text.txt
If you want to omit the tee output, append > /dev/null.
Another way using sed:
sed -i.old '1 {i to be prepended
}' inFile
If the text to be prepended is multiline:
sed -i.old '1 {i\
to be prepended\
multiline
}' inFile
Solution:
printf '%s\n%s' 'text to prepend' "$(cat file.txt)" > file.txt
Note that this is safe on all kinds of input, because there are no expansions. For example, if you want to prepend !##$%^&*()ugly text\n\t\n, it will just work:
printf '%s\n%s' '!##$%^&*()ugly text\n\t\n' "$(cat file.txt)" > file.txt
The last part left for consideration is whitespace removal at the end of the file during the command substitution "$(cat file.txt)". All work-arounds for this are relatively complex. If you want to preserve the newlines at the end of file.txt, see https://stackoverflow.com/a/22607352/1091436
As tested in Bash (on Ubuntu), starting with a test file created via:
echo "Original Line" > test_file.txt
you can execute:
echo "$(echo "New Line"; cat test_file.txt)" > test_file.txt
or, if your version of bash is too old for $(), you can use backticks:
echo "`echo "New Line"; cat test_file.txt`" > test_file.txt
and receive the following contents of test_file.txt:
New Line
Original Line
No intermediary file, just bash/echo.
Another fairly straightforward solution is:
$ echo -e "string\n" $(cat file)
% echo blaha > blaha
% echo fizz > fizz
% cat blaha fizz > buzz
% cat buzz
blaha
fizz
You can do that easily with awk:
cat text.txt | awk '{print "to be prepended" $0}'
Note, though, that this prepends the string to every line. The question is about prepending a string to the file, not to each line of the file; in that case, as suggested by Tom Ekberg, the following command should be used instead:
awk 'BEGIN{print "to be prepended"} {print $0}' text.txt
If you like vi/vim, this may be more your style.
printf '0i\n%s\n.\nwq\n' prepend-text | ed file
For future readers who want to append one or more lines of text (with variables or even subshell code) and keep it readable and formatted, you may enjoy this:
echo "Lonely string" > my-file.txt
Then run
cat <<EOF > my-file.txt
Hello, there!
$(cat my-file.txt)
EOF
Results of cat my-file.txt:
Hello, there!
Lonely string
This works because the read of my-file.txt happens first and in a subshell. I use this trick all the time to append important rules to config files in Docker containers rather than copy over entire config files.
You can use variables
Even though a bunch of answers here work pretty well, I want to contribute this one-liner, just for completeness. At least it is easy to keep in mind, and maybe it contributes to some general understanding of bash for some people.
PREPEND="new line 1"; FILE="text.txt"; printf "${PREPEND}\n`cat $FILE`" > $FILE
In this snippet, just replace text.txt with the text file you want to prepend to and new line 1 with the text to prepend.
example
$ printf "old line 1\nold line 2" > text.txt
$ cat text.txt; echo ""
old line 1
old line 2
$ PREPEND="new line 1"; FILE="text.txt"; printf "${PREPEND}\n`cat $FILE`" > $FILE
$ cat text.txt; echo ""
new line 1
old line 1
old line 2
$
# create a file with content..
echo foo > /tmp/foo
# prepend a line containing "jim" to the file
sed -i "1s/^/jim\n/" /tmp/foo
# verify the content of the file has the new line prepened to it
cat /tmp/foo
I'd recommend defining a function and then importing and using that where needed.
prepend_to_file() {
file=$1
text=$2
if ! [[ -f $file ]]; then
touch $file
fi
echo "$text" | cat - $file > $file.new
mv -f $file.new $file
}
Then use it like so:
prepend_to_file test.txt "This is first"
prepend_to_file test.txt "This is second"
Your file contents will then be:
This is second
This is first
I'm about to use this approach for implementing a change log updater.
With ex,
ex - $file << PREPEND
1
i
prepended text
.
wq
PREPEND
The ex commands are:
1 Go to the first line of the file
i Begin insert mode (text is inserted before the current line)
. End insert mode
wq Save (write) and quit

Unix - Need to cut a file which has multiple blanks as delimiter - awk or cut?

I need to get the records from a text file in Unix. The delimiter is multiple blanks. For example:
2U2133   1239
1290fsdsf   3234
From this, I need to extract
1239
3234
The delimiter for all records will always be 3 blanks.
I need to do this in a unix script (.scr) and write the output to another file or use it as input to a do-while loop. I tried the below:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then
int_1=0
else
int_2=0
fi
done < awk -F' ' '{ print $2 }' ${Directoty path}/test_file.txt
test_file.txt is the input file and file1.txt is a lookup file. But the above is not working; it gives me syntax errors near awk -F.
I tried writing the output to a file. The following worked on the command line:
more test_file.txt | awk -F' ' '{ print $2 }' > output.txt
This works and writes the records to output.txt when run on the command line. But the same command does not work in the unix script (it is a .scr file).
Please let me know where I am going wrong and how I can resolve this.
Thanks,
Visakh
The job of replacing multiple delimiters with just one is left to tr:
cat <file_name> | tr -s ' ' | cut -d ' ' -f 2
tr translates or deletes characters, and is perfectly suited to prepare your data for cut to work properly.
The manual states:
-s, --squeeze-repeats
replace each sequence of a repeated character that is
listed in the last specified SET, with a single occurrence
of that character
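Applied to the sample data from the question, as a quick check:
$ printf '2U2133   1239\n1290fsdsf   3234\n' | tr -s ' ' | cut -d ' ' -f 2
1239
3234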
It depends on the version or implementation of cut on your machine. Some versions support an option, usually -i, that means 'ignore blank fields' or, equivalently, allow multiple separators between fields. If that's supported, use:
cut -i -d' ' -f 2 data.file
If not (and it is not universal — and maybe not even widespread, since neither GNU nor MacOS X have the option), then using awk is better and more portable.
You need to pipe the output of awk into your loop, though:
awk -F' ' '{print $2}' ${Directory_path}/test_file.txt |
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done
The only residual issue is whether the while loop runs in a sub-shell and is therefore unable to modify your main shell script's variables, only its own copies of those variables.
With bash, you can use process substitution:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done < <(awk -F' ' '{print $2}' ${Directory_path}/test_file.txt)
This leaves the while loop in the current shell, but arranges for the output of the command to appear as if from a file.
The blank in ${Directory path} is not normally legal (unless it is another Bash feature I've missed out on); you also had a typo (Directoty) in one place.
Other ways of doing the same thing aside, the error in your program is this: You cannot redirect from (<) the output of another program. Turn your script around and use a pipe like this:
awk -F' ' '{ print $2 }' ${Directory path}/test_file.txt | while read readline
etc.
Besides, the use of "readline" as a variable name may or may not get you into problems.
In this particular case, you can use the following line (note the three blanks in the sed pattern, matching the three-blank delimiter; \t in the replacement requires GNU sed):
sed 's/   /\t/g' <file_name> | cut -f 2
to get your second column.
In bash you can start from something like this:
for n in `cat ${Directory path}/test_file.txt | cut -d " " -f 4`
{
grep -c $n ${Directory path}/file*.txt
}
(With the three-blank delimiter, the second column shows up as field 4 when cutting on a single space.)
This should have been a comment, but since I cannot comment yet, I am adding this here.
This is from an excellent answer here: https://stackoverflow.com/a/4483833/3138875
tr -s ' ' <text.txt | cut -d ' ' -f4
tr -s '<character>' squeezes multiple repeated instances of <character> into one.
It's not working in the script because of the typo in "Directoty path" (last line of your script).
Cut isn't flexible enough. I usually use Perl for that:
cat file.txt | perl -F'   ' -ane 'print $F[1]."\n"'
Instead of the triple space after -F you can put any Perl regular expression. You access fields as $F[n], where n is the field number (counting starts at zero). This way there is no need for sed or tr.
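For instance, to split on any run of whitespace instead of exactly three blanks (a variant, not from the original answer):
cat file.txt | perl -F'\s+' -ane 'print $F[1]."\n"'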
