I'm trying to format a string as JSON using jq, and I noticed differing behavior between bash and zsh; specifically, when zsh runs jq directly, the outcome is different from when it runs it inside a command substitution: a \n in the input gets output as \\n in the first case, but as \n in the latter.
I'm puzzled and not sure what's going on there:
Is this a known zsh behavior?
Is this a jq bug?
Or does it work as designed and I'm missing something?
BTW: Use newline with jq suggests using printf %b to obtain \n instead of \\n, which works for bash, but the discrepancy between the two modes in zsh is still there.
$ jq --version
jq-1.6
# ---
# Using \n directly
bash-3.2$ jq --null-input --compact-output --raw-output --monochrome-output --arg test 'A\nB' '{test: $test}'
{"test":"A\\nB"}
bash-3.2$ OUT=$(jq --null-input --compact-output --raw-output --monochrome-output --arg test 'A\nB' '{test: $test}'); echo $OUT
{"test":"A\\nB"}
zsh-5.8.1> jq --null-input --compact-output --raw-output --monochrome-output --arg test 'A\nB' '{test: $test}'
{"test":"A\\nB"}
zsh-5.8.1> OUT=$(jq --null-input --compact-output --raw-output --monochrome-output --arg test 'A\nB' '{test: $test}'); echo $OUT
{"test":"A\nB"}
# -----
# Using `printf %b` to convert `\n` to real newline
bash-3.2$ jq --null-input --compact-output --raw-output --monochrome-output --arg test "$(printf %b 'A\nB')" '{test: $test}'
{"test":"A\nB"}
bash-3.2$ OUT=$(jq --null-input --compact-output --raw-output --monochrome-output --arg test "$(printf %b 'A\nB')" '{test: $test}'); echo $OUT
{"test":"A\nB"}
zsh-5.8.1> jq --null-input --compact-output --raw-output --monochrome-output --arg test "$(printf %b 'A\nB')" '{test: $test}'
{"test":"A\nB"}
zsh-5.8.1> OUT=$(jq --null-input --compact-output --raw-output --monochrome-output --arg test "$(printf %b 'A\nB')" '{test: $test}'); echo $OUT
{"test":"A
B"}
printf behaves the same in both shells, as do all the shell expansions involved in your jq invocation -- but the default behavior of echo differs: zsh's builtin echo interprets backslash escapes like \n by default, whereas bash's does not unless you pass -e.
You can avoid this by switching from echo to printf.
% OUT=$(jq --null-input --compact-output --raw-output --monochrome-output --arg test "$(printf %b 'A\nB')" '{test: $test}')
% printf '%s\n' "$OUT"
{"test":"A\nB"}
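The zsh-vs-bash difference can be reproduced with echo alone, from bash (a sketch: bash's echo -e mimics what zsh's builtin echo does by default):

```shell
# bash's builtin echo prints backslash escapes literally by default:
echo 'A\nB'

# zsh's builtin echo interprets escapes by default; in bash, echo -e
# behaves the same way and prints "A", a real newline, then "B":
echo -e 'A\nB'

# printf '%s\n' is unambiguous and behaves identically in both shells:
printf '%s\n' 'A\nB'
```

This is why quoting the variable and using printf '%s\n' instead of a bare echo makes the output consistent across shells.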
I don't understand why sub() does not replace "%s" with "my string" in the first jq command below. How can I make it work?
$ jq -r --arg format '|%s|' '$format | sub("%s"; .desc)' <<< '{"desc": "my string"}'
||
$ jq -r --arg format '|%s|' '$format | sub("%s"; "my string")' <<< '{"x": "y"}'
|my string|
$ jq -r .desc <<< '{"desc": "my string"}'
my string
You have lost the input context. Save it in a variable (e.g. . as $dot) to reference it later (e.g. $dot.desc):
$ jq -r --arg format '|%s|' '. as $dot | $format | sub("%s"; $dot.desc)' <<< '{"desc": "my string"}'
|my string|
You can also use null input (-n) and read the original JSON with the input builtin:
jq -nr --arg format '|%s|' '$format | sub("%s"; input.desc)' <<< '{"desc": "my string"}'
|my string|
I have this JSON:
{
  "a": "jdsdjhsandks"
}
How can I compute the modular hash of a field using a jq expression?
jq does not implement hash functions; you have to export the data, apply an external tool, and re-import the hash.
For instance, if your JSON lived in a file called input.json and you were using bash to call jq, you could do:
# Export the data
data="$(jq -r '.a' input.json)"
# Apply an external tool
md5sum="$(printf '%.32s' "$(md5sum <<< "${data}")")"
# Re-import the hash
jq --arg md5sum "${md5sum}" '.a_md5 = $md5sum' input.json
or without using variables
jq --arg md5sum "$(
printf '%.32s' "$(
md5sum <<< "$(
jq -r '.a' input.json
)"
)"
)" '.a_md5 = $md5sum' input.json
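The same pipeline can be sketched in a self-contained form, with the JSON inlined instead of read from input.json (this assumes jq and md5sum are installed; the field value is the made-up one from the question):

```shell
# Inlined stand-in for input.json:
json='{"a":"jdsdjhsandks"}'

# 1. Export the field
data=$(jq -r '.a' <<< "$json")

# 2. Hash it externally; md5sum prints "<32 hex chars>  -", so keep only
#    the first 32 characters
md5=$(printf '%.32s' "$(md5sum <<< "$data")")

# 3. Re-import the hash
jq --arg md5sum "$md5" '.a_md5 = $md5sum' <<< "$json"
```

Note that <<< appends a trailing newline, so the hash covers the field value plus a newline, exactly as in the variable-based version above.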
I am working on automating CUPS (Create user provided services) in Cloud Foundry. I have a cups.sh file which contains the corresponding cf cups commands to be executed for a particular application. Below is sample:
cf cups service-A -p '{"uri": "https://sample uri"}'
cf cups service-B -p '{"uri": "https://sample uri","id": "abcd","token": "xyz"}'
I am trying to write a script that will perform the following use case:
Parse cups.sh line by line and extract the service name (e.g. service-A) and the argument following -p (e.g. '{"uri": "https://sample uri","id": "abcd","token": "xyz"}').
I am currently using below script:
File=cups.sh
sed -e 's/[[:space:]]*#.*// ; /^[[:space:]]*$/d' "$File" | while read line
do
temp=$(echo $line | cut -d' ' -f4)
echo $temp
done
This is not accurate, as it returns "https://sample uri"}'. Is there a more accurate way of extracting the -p argument and using it for further operations?
Extract what we need:
$> sed -n -r '/^\s*cf\s+cups\s+.+.*-p\s+'"'"'.+'"'"'/{s/\s*cf\s+cups\s+(.+)\s.*-p\s+('"'"'.+'"'"').*$/\1 \2/;p}' cups.sh
service-A '{"uri": "https://sample uri"}'
service-B '{"uri": "https://sample uri","id": "abcd","token": "xyz"}'
Now, form commands to push all results into 2 arrays (_srv and _cmd):
$> sed -n -r '/^\s*cf\s+cups\s+.+.*-p\s+'"'"'.+'"'"'/{s/\s*cf\s+cups\s+(.+)\s.*-p\s+('"'"'.+'"'"').*$/_srv+=('"'"'\1'"'"') _cmd+=(\2)/;p}' cups.sh
_srv+=('service-A') _cmd+=('{"uri": "https://sample uri"}')
_srv+=('service-B') _cmd+=('{"uri": "https://sample uri","id": "abcd","token": "xyz"}')
Finally, put everything in a bash file
#!/bin/bash
_fil=cups.sh
_srv=()
_cmd=()
eval `sed -n -r '/^\s*cf\s+cups\s+.+.*-p\s+'"'"'.+'"'"'/{s/\s*cf\s+cups\s+(.+)\s.*-p\s+('"'"'.+'"'"').*$/_srv+=('"'"'\1'"'"') _cmd+=(\2)/;p}' "$_fil"`
# test
_len=${#_srv[@]}
for (( i=0; i<_len; i++ )); do
  echo "${_srv[$i]}" "${_cmd[$i]}"
done
^\s*cf\s+cups\s+: search for lines starting with cf cups ...
\s+(.+)\s: extract the third column (delimited by \s (spaces)) as \1
-p\s+('"'"'.+'"'"'): extract the stuff between '' right after -p as \2 ('' included)
Is this what you are looking for?
$ # only lines with successful substitutions will be printed
$ sed -n 's/.*service-A -p //p' ip.txt
'{"uri": "https://sample uri"}'
$ sed -n 's/.*service-B -p //p' ip.txt
'{"uri": "https://sample uri","id": "abcd","token": "xyz"}'
$ # to save results in variable
$ a=$(sed -n 's/.*service-A -p //p' ip.txt)
$ echo "$a"
'{"uri": "https://sample uri"}'
With awk
$ awk -F"'" -v sq="'" '/service-A/{print sq $2 sq}' ip.txt
'{"uri": "https://sample uri"}'
To pass search term as variable
$ st='service-A'
$ sed -n 's/.*'"$st"' -p //p' ip.txt
'{"uri": "https://sample uri"}'
$ awk -F"'" -v s="$st" -v sq="'" '$0 ~ s{print sq $2 sq}' ip.txt
'{"uri": "https://sample uri"}'
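If you'd rather avoid sed/awk entirely, plain read can do the split, because the last variable in a read list absorbs the remainder of the line verbatim, embedded spaces included (a sketch; the here-doc stands in for cups.sh):

```shell
# `read` splits on whitespace; `rest` receives everything after -p,
# so the quoted JSON argument keeps its embedded spaces.
while read -r cmd sub name flag rest; do
  # skip lines that are not "cf cups <name> -p '...'"
  [[ $cmd == cf && $sub == cups && $flag == -p ]] || continue
  printf '%s %s\n' "$name" "$rest"
done <<'EOF'
cf cups service-A -p '{"uri": "https://sample uri"}'
cf cups service-B -p '{"uri": "https://sample uri","id": "abcd","token": "xyz"}'
EOF
```

This prints the same "service name, then quoted -p argument" pairs as the sed extraction above.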
I have this file :
933|Mahinda|Perera|male|1989-12-03|2010-03-17T13:32:10.447+0000|192.248.2.123|Firefox
1129|Carmen|Lepland|female|1984-02-18|2010-02-28T04:39:58.781+0000|81.25.252.111|Internet Explorer
4194|Hồ ChÃ|Do|male|1988-10-14|2010-03-17T22:46:17.657+0000|103.10.89.118|Internet Explorer
8333|Chen|Wang|female|1980-02-02|2010-03-15T10:21:43.365+0000|1.4.16.148|Internet Explorer
8698|Chen|Liu|female|1982-05-29|2010-02-21T08:44:41.479+0000|14.103.81.196|Firefox
8853|Albin|Monteno|male|1986-04-09|2010-03-19T21:52:36.860+0000|178.209.14.40|Internet Explorer
10027|Ning|Chen|female|1982-12-08|2010-02-22T17:59:59.221+0000|1.2.9.86|Firefox
and with this command
./tool.sh --browsers -f <file>
I want to count the number of each browser, in a specific order, for example:
Chrome 143
Firefox 251
Internet Explorer 67
I use this command:
if [ "$1" == "--browsers" -a "$2" == "-f" -a "$4" == "" ]
then
awk -F'|' '{print $8}' $3 | sort | uniq -c | awk ' {print $2 , $3 , $1} '
fi
but it only works for browser names of up to two words, because the final awk prints just $2 and $3. How can I make it work for longer names, for example a browser name with four or more words?
Seems like an awk one-liner to count your browsers:
$ awk -F'|' '{a[$8]++} END{for(i in a){printf("%s %d\n",i,a[i])}}' inputfile
Firefox 3
Internet Explorer 4
This increments elements of an array, then at the end of the file steps through the array and prints the totals. If you want the output sorted, you can just pipe it through sort. I don't see a problem with multiple words in a browser name.
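For example, feeding three sample records through the one-liner and piping the result through sort (the x|a|b|... filler fields are placeholders for the first seven columns):

```shell
# Three fake records; only field 8 (the browser) matters here:
printf '%s\n' \
  'x|a|b|c|d|e|f|Firefox' \
  'x|a|b|c|d|e|f|Internet Explorer' \
  'x|a|b|c|d|e|f|Firefox' |
  awk -F'|' '{a[$8]++} END{for(i in a){printf("%s %d\n",i,a[i])}}' |
  sort
```

This prints "Firefox 2" and "Internet Explorer 1"; the multi-word name comes through intact because $8 is a single pipe-delimited field, never re-split on spaces.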
try this:
awk -F"|" '{print $8}' in | sort | uniq -c | awk '{print $2,$1}'
where in is the input file.
output
[myShell] ➤ awk -F"|" '{print $8}' in | sort | uniq -c | awk '{print $2,$1}'
Firefox 3
Internet 4
Also, for parsing arguments it is better to use getopts,
i.e.
#!/bin/bash
function usage {
echo "usage: ..."
}
while getopts b:o:h opt; do
case $opt in
b)
fileName=$OPTARG
echo "filename[$fileName]"
awk -F"|" '{print $8}' $fileName | sort | uniq -c | awk '{print $2,$1}'
;;
o)
otherargs=$OPTARG
echo "otherargs[$otherargs]"
;;
h)
usage && exit 0
;;
?)
usage && exit 2
;;
esac
done
output
[myShell] ➤ ./arg -b in
filename[in]
Firefox 3
Internet 4
Your final awk hard-codes two fields; you could continue with $4, $5, $6, etc. to print more fields, but each comma adds a spurious space when the name has fewer words.
Better yet, since the count field is fixed width (that's the output format from uniq -c, which right-justifies the count), you can do print substr($0,9), $1
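The substr trick looks like this (assuming GNU uniq, which right-justifies the count in a seven-character field, so the text starts at column 9; BSD uniq pads differently):

```shell
# Count duplicates, then print "name count" even for multi-word names:
printf '%s\n' Firefox Firefox 'Internet Explorer' |
  sort | uniq -c |
  awk '{print substr($0,9), $1}'
```

With GNU coreutils this prints "Firefox 2" then "Internet Explorer 1", with no field hard-coding at all.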
I'd do it in perl:
#!/bin/perl
use strict;
use warnings;
use Data::Dumper;
my %count_of;
while ( <> ) {
chomp;
$count_of{(split /\|/)[7]}++;
}
print Dumper \%count_of;
This can be cut down to a one liner:
perl -F'\|' -lane '$c{$F[7]}++; END{ print "$_ => $c{$_}" for keys %c }'
I figured it out.
GREPOUT=`grep "NOTE: Table $TABLE created," $LOGFILE | awk '{print $6}'`
NIW=`grep "SYMBOLGEN: Macro variable NIW resolves to" $LOGFILE | awk '{print $0}'`
if [ "$GREPOUT" -gt "0" ]; then
echo "$NIW" |\
$MAILX -s "SUCCESSFUL BATCH RUN: $PROG $RPTDATE" $MAILLIST
fi
From the body of the sent email:
SYMBOLGEN: Macro variable NIW resolves to 8
My script runs a SAS code and sends out an email after it completes.
I'm looking to print the contents of a table or list of macro variables in the email.
The SAS code has a %put all; statement at the end so all macro variables are listed in the log.
Thanks.
#If it's gotten this far, we can safely grab the number of rows
#of output from $LOGFILE.
GREPOUT=`grep "NOTE: Table $TABLE created," $LOGFILE | awk '{print $6}'`
NIW=`grep "GLOBAL NIW" $LOGFILE | awk '{print $6}'`
if [ "$GREPOUT" -gt "0" ]; then
#echo "$GREPOUT rows found in $TABLE." |\
echo "$NIW NIW" |\
$MAILX -s "SUCCESSFUL BATCH RUN: $PROG $RPTDATE" $MAILLIST
else
echo "$GREPOUT rows found in $TABLE." |\
$MAILX -s "SUCCESSFUL BATCH RUN: $PROG $RPTDATE" $MAILLIST
fi
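The grep-plus-awk extraction used here can be sketched against hypothetical SAS log lines (the table name, row count, and macro value below are made up for illustration):

```shell
# Two made-up SAS log lines, standing in for $LOGFILE:
log='NOTE: Table WORK.RESULT created, with 42 rows and 3 columns.
SYMBOLGEN: Macro variable NIW resolves to 8'

# Field 6 of the NOTE line is the row count:
# NOTE:(1) Table(2) WORK.RESULT(3) created,(4) with(5) 42(6) ...
rows=$(grep 'NOTE: Table WORK.RESULT created,' <<< "$log" | awk '{print $6}')

# Field 7 of the SYMBOLGEN line is the resolved value:
niw=$(grep 'SYMBOLGEN: Macro variable NIW resolves to' <<< "$log" | awk '{print $7}')

echo "rows=$rows niw=$niw"   # rows=42 niw=8
```

Counting fields this way is why the scripts above use $6 for the row count; if a log line's wording changes, the field number has to change with it.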