Display an empty line for non-existing fields with jq

I have the following json data:
{"jsonrpc":"2.0","result":[],"id":1}
{"jsonrpc":"2.0","result":[{"hostmacroid":"2392","hostid":"10953","macro":"{$GATEWAY}","value":"10.25.230.1"}],"id":1}
{"jsonrpc":"2.0","result":[{"hostmacroid":"1893","hostid":"12093","macro":"{$GATEWAY}","value":"10.38.118.1"}],"id":1}
{"jsonrpc":"2.0","result":[{"hostmacroid":"2400","hostid":"14471","macro":"{$GATEWAY}","value":"10.25.230.1"}],"id":1}
{"jsonrpc":"2.0","result":[{"hostmacroid":"799","hostid":"10798","macro":"{$GATEWAY}","value":"10.36.136.1"}],"id":1}
{"jsonrpc":"2.0","result":[],"id":1}
{"jsonrpc":"2.0","result":[{"hostmacroid":"1433","hostid":"10857","macro":"{$GATEWAY}","value":"10.38.24.129"}],"id":1}
{"jsonrpc":"2.0","result":[{"hostmacroid":"842","hostid":"13159","macro":"{$GATEWAY}","value":"10.38.113.1"}],"id":1}
{"jsonrpc":"2.0","result":[],"id":1}
I am trying to extract the value of the "value" field from each line. jq -r '.result[].value' <jsonfile> works perfectly, but it does not take into account the JSON lines where there is no "value" field. I would like it to print an empty line for those. Is this possible with jq?

You can use this:
jq -r '.result[].value // "" ' a.json
This uses the alternative operator //. If .result[].value produces a value, that value gets printed; otherwise (including when result is an empty array, so .result[] produces no outputs at all) an empty line gets printed.

This would work:
jq -r '.result | if length > 0 then .[0].value else "" end'

Since false // X and null // X produce X, .result[].value // "" may not be what you want in all cases.
To achieve the stated goal as I understand it, you could use the following filter:
.result[] | if has("value") then .value else "" end
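As a quick sanity check, the alternative-operator filter can be run against two of the sample lines (a minimal sketch; assumes jq is installed):

```shell
# An empty "result" array makes .result[] produce no outputs,
# so // falls back to "" and an empty line is printed.
printf '%s\n' \
  '{"jsonrpc":"2.0","result":[],"id":1}' \
  '{"jsonrpc":"2.0","result":[{"value":"10.25.230.1"}],"id":1}' |
  jq -r '.result[].value // ""'
```

This prints an empty line followed by 10.25.230.1.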


Need of awk command explanation

I want to know how the below command is working.
awk '/Conditional jump or move depends on uninitialised value/ {block=1} block {str=str sep $0; sep=RS} /^==.*== $/ {block=0; if (str!~/oracle/ && str!~/OCI/ && str!~/tuxedo1222/ && str!~/vprintf/ && str!~/vfprintf/ && str!~/vtrace/) { if (str!~/^$/){print str}} str=sep=""}' file_name.txt >> CondJump_val.txt
I'd also like to know how to check the texts Oracle, OCI, and so on from the second line only. 
The first step is to write it so it's easier to read
awk '
/Conditional jump or move depends on uninitialised value/ {block=1}
block {
    str = str sep $0
    sep = RS
}
/^==.*== $/ {
    block = 0
    if (str !~ /oracle/ && str !~ /OCI/ && str !~ /tuxedo1222/ && str !~ /vprintf/ && str !~ /vfprintf/ && str !~ /vtrace/) {
        if (str !~ /^$/) {
            print str
        }
    }
    str = sep = ""
}
' file_name.txt >> CondJump_val.txt
It accumulates lines into the variable str, starting at a line matching "Conditional jump ..." and ending at a line matching "==...== ". If the accumulated string does not match any of the listed patterns, the string is printed.
I'd also like to know how to check the texts Oracle, OCI, and so on from the second line only.
What does that mean? I assume you don't want to see the "Conditional jump..." line in the output. If that's the case then use the next command to jump to the next line of input.
/Conditional jump or move depends on uninitialised value/ {
block=1
next
}
Perhaps consolidate those regexes into a single alternation?
if (str !~ "oracle|OCI|tuxedo1222|v[f]?printf|vtrace") {
print str
}
There are two idiomatic awkisms to understand.
The first can be simplified to this:
$ seq 100 | awk '/^22$/{flag=1}
/^31$/{flag=0}
flag'
22
23
...
30
Why does this work? In awk, a variable can be tested even if it has not yet been defined, which is what the standalone flag pattern does: the input line is printed only while flag is true, and flag=1 is executed only after the regex /^22$/ matches. The condition ends when /^31$/ matches in this simple example.
This is an awk idiom for executing code between two regex matches on different lines.
In your case, the two regex's are:
/Conditional jump or move depends on uninitialised value/ # start
# in-between, block is true and collect the input into str separated by RS
/^==.*== $/ # end
The other 'awkism' is this:
block {str=str sep $0; sep=RS}
When block is true, collect $0 into str. The first time through, sep is empty so nothing is prepended; afterwards sep is RS, so each new record is joined with the record separator. The result is:
str="first lineRSsecond lineRSthird lineRS..."
Both idioms depend on awk being able to use an undefined variable without error.
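The separator trick can be watched in isolation (a minimal sketch, joining with - instead of RS so the result fits on one line):

```shell
# sep is empty on the first record, so nothing is prepended;
# afterwards sep is "-" and each new record is joined with it.
printf 'a\nb\nc\n' | awk '{str = str sep $0; sep = "-"} END {print str}'
```

This prints a-b-c.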

remove a substring from a string

I want to remove , from a string in jq. Take the following example: how do I remove the , when outputting 1,2?
$ jq -r .x <<< '{"x":"1,2"}'
1,2
To remove specific positions from a string, use the indices you want to keep:
jq -r '.x | .[:1] + .[2:]' <<< '{"x":"1,2"}'
12
To remove one occurrence at any position, use sub to replace with the empty string
jq -r '.x | sub(","; "")' <<< '{"x":"1,2,3"}'
12,3
To remove all occurrences, use gsub the same way
jq -r '.x | gsub(","; "")' <<< '{"x":"1,2,3"}'
123
You didn't make clear what output you wanted. A literal reading suggests you want 12, but I find it more likely that you want each of the comma-separated items to be output on separate lines. The following achieves this:
jq -r '.x | split(",")[]'
For the provided input, this outputs
1
2
You can use sub.
Filter
.x | sub(","; " ")
Input
{"x":"1,2"}
Output
1 2
Demo: https://jqplay.org/s/IaViogZTsI
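Yet another way to drop every comma, shown here as a hedged alternative, is to split on the comma and rejoin the pieces:

```shell
# split produces an array of the comma-separated parts;
# join("") glues them back together with nothing in between.
jq -r '.x | split(",") | join("")' <<< '{"x":"1,2,3"}'
```

This outputs 123.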

finding first and last occurrence of a string using awk or sed

I couldn't find what I am looking for online so I hope someone can help me here. I have a file with the following lines:
CON/Type/abc.sql
CON/Type/bcd.sql
CON/Table/last.sql
CON/Table/first.sql
CON/Function/abc.sql
CON/Package/foo.sql
What I want to do is to find the first occurrence of Table, print a new string and then find last occurrence and print another string. For example, output should look like this:
CON/Type/abc.sql
CON/Type/bcd.sql
set define on
CON/Table/last.sql
CON/Table/first.sql
set define off
CON/Function/abc.sql
CON/Package/foo.sql
As you can see, after finding first occurrence of Table I printed "set define on" before the first occurrence. For the last occurrence I printed "set define off" after last match of Table. Can someone help me write an awk script? Using sed would be okay too.
Note: The lines with Table can appear in the first line of the file or middle or last. In this case they appear in the middle of the rest of the lines.
$ awk -F/ '$2=="Table"{if (!f)print "set define on";f=1} f && $2!="Table"{print "set define off";f=0} 1' file
CON/Type/abc.sql
CON/Type/bcd.sql
set define on
CON/Table/last.sql
CON/Table/first.sql
set define off
CON/Function/abc.sql
CON/Package/foo.sql
How it works
-F/
Set the field separator to /
$2=="Table"{if (!f)print "set define on";f=1}
If the second field is Table, then do the following: (a) if flag f is zero, then print set define on; (b) set flag f to one (true).
f && $2!="Table"{print "set define off";f=0}
If flag f is true and the second field is not Table, then do the following: (a) print set define off; (b) set flag f to zero (false).
1
Print the current line.
Alternate Version
As suggested by Etan Reisner, the following does the same thing with the logic slightly reorganized, eliminating the need for the if statement:
awk -F/ '$2=="Table" && !f {print "set define on";f=1} $2!="Table" && f {print "set define off";f=0} 1' file
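The question notes the Table lines may also appear at the very end of the file; in that case the flag is still set when input runs out and set define off is never printed. A sketch of one way to handle that edge case adds an END block:

```shell
awk -F/ '$2=="Table" && !f {print "set define on"; f=1}
         $2!="Table" && f {print "set define off"; f=0}
         1
         END {if (f) print "set define off"}' file
```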

I've been searching this to get a sense but I am still confused

I'm confused about the $ symbol in Unix.
According to the definition, it stands for the value stored by the variable following it. I'm not following the definition; could you please give me an example of how it is used?
Thanks
You define a variable like this:
greeting=hello
export name=luc
and use like this:
echo $greeting $name
If you use export, the variable will also be visible to child processes (such as subshells and scripts you run).
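A minimal sketch of that difference: only the exported variable is visible inside a child shell.

```shell
greeting=hello        # not exported: stays in the current shell
export name=luc       # exported: passed to child processes
sh -c 'echo "greeting=$greeting name=$name"'
```

Run as a script, this prints greeting= name=luc, because greeting was never exported.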
EDIT: If you want to assign a string containing spaces, you have to quote it either using double quotes (") or single quotes ('). Variables inside double quotes will be expanded whereas in single quotes they won't:
axel@loro:~$ name=luc
axel@loro:~$ echo "hello $name"
hello luc
axel@loro:~$ echo 'hello $name'
hello $name
In the case of shell scripts: when you assign a value to a variable you do not need to use the $ symbol, only when you want to access the value of that variable.
Examples:
VARIABLE=100000
echo "$VARIABLE"
othervariable=$((VARIABLE + 10))
echo "$othervariable"
One other thing: when assigning, do not leave spaces before and after the = symbol.
Here is a good bash tutorial:
http://linuxconfig.org/Bash_scripting_Tutorial
mynameis.sh:
#!/bin/sh
finger | grep "$(whoami) " | tail -n 1 | awk '{print $2, $3}'
finger: prints all logged-in users. Example result:
login Name Tty Idle Login Time Office Office Phone
xuser Forname Nickname tty7 3:18 Mar 9 07:23 (:0)
...
grep: filters lines containing the given string (in this example we need the line for xuser if our login name is xuser)
http://www.gnu.org/software/grep/manual/grep.html
whoami: prints my loginname
http://linux.about.com/library/cmd/blcmdl1_whoami.htm
tail -n 1 : shows only the last line of results
http://unixhelp.ed.ac.uk/CGI/man-cgi?tail
the awk script: prints the second and third columns of the result: Forname, Nickname
http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_toc.html

sort out selected records based on key in unix

my input file is like this.
01,A,34
01,A,35
01,A,36
01,A,37
02,A,40
02,A,41
02,A,42
02,A,45
my output needs to be
01,A,37
01,A,36
01,A,35
02,A,45
02,A,42
02,A,41
i.e. select only the top three records (highest values in the 3rd column) for each key (1st and 2nd columns)
Thanks in advance...
You can use a simple bash script to do this provided the data is as shown.
pax$ cat infile
01,A,34
01,A,35
01,A,36
01,A,37
02,A,40
02,A,41
02,A,42
02,A,45
pax$ ./go.sh
01,A,37
01,A,36
01,A,35
02,A,45
02,A,42
02,A,41
pax$ cat go.sh
keys=$(sed 's/,[^,]*$/,/' infile | sort -u)
for key in ${keys} ; do
grep "^${key}" infile | sort -r | head -3
done
The first line gets the full set of keys, constructed from the first two fields by removing the final column with sed, then sorting the output and removing duplicates with sort -u. In this particular case, the keys are 01,A, and 02,A,.
It then extracts the relevant data for each key (the for loop in conjunction with grep), sorting in descending order with sort -r, and keeping only the first three lines for each key with head -3.
Now, if your key is likely to contain characters special to grep such as . or [, you'll need to watch out.
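A hedged single-pipeline alternative (using infile as in the transcript above): pre-sort by the key columns, and by the third column in descending numeric order, then let awk pass through only the first three lines of each key group.

```shell
sort -t, -k1,1 -k2,2 -k3,3nr infile |
  awk -F, '
    # the key is the first two columns
    {key = $1 FS $2}
    # a new key group starts: reset the counter
    key != prev {n = 0; prev = key}
    # print while still within the top three of the group
    ++n <= 3'
```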
With Perl:
perl -F, -lane'
    push @{$_{join ",", @F[0,1]}}, $F[2];
    END {
        for $k (keys %_) {
            print join ",", $k, $_
                for (sort { $b <=> $a } @{$_{$k}})[0..2]
        }
    }' infile
