jq: parsing a date to a timestamp
I have the following script:
curl -s -S 'https://bittrex.com/Api/v2.0/pub/market/GetTicks?marketName=BTC-NBT&tickInterval=thirtyMin&_=1521347400000' | jq -r '.result|.[] |[.T,.O,.H,.L,.C,.V,.BV] | @tsv | tostring | gsub("\t";",") | "(\(.))"'
This is the output:
(2018-03-17T18:30:00,0.00012575,0.00012643,0.00012563,0.00012643,383839.45768188,48.465051)
(2018-03-17T19:00:00,0.00012643,0.00012726,0.00012642,0.00012722,207757.18765437,26.30099514)
(2018-03-17T19:30:00,0.00012726,0.00012779,0.00012698,0.00012779,97387.01596624,12.4229077)
(2018-03-17T20:00:00,0.0001276,0.0001278,0.00012705,0.0001275,96850.15260027,12.33316229)
I want to replace the date with a timestamp.
I can make this conversion with date in the shell:
date -d '2018-03-17T18:30:00' +%s%3N
1521325800000
I want this result:
(1521325800000,0.00012575,0.00012643,0.00012563,0.00012643,383839.45768188,48.465051)
(1521327600000,0.00012643,0.00012726,0.00012642,0.00012722,207757.18765437,26.30099514)
(1521329400000,0.00012726,0.00012779,0.00012698,0.00012779,97387.01596624,12.4229077)
(1521331200000,0.0001276,0.0001278,0.00012705,0.0001275,96850.15260027,12.33316229)
This data is stored in MySQL.
Is it possible to execute the date conversion with jq, or with another command such as awk, sed, or perl, in a single command line?
Here is an all-jq solution that assumes the "Z" (UTC+0) timezone.
In brief, simply replace .T by:
((.T + "Z") | fromdate | tostring + "000")
To verify this, consider:
timestamp.jq
[splits("[(),]")]
| .[1] |= ((. + "Z")|fromdate|tostring + "000") # milliseconds
| .[1:length-1]
| "(" + join(",") + ")"
Invocation
jq -rR -f timestamp.jq input.txt
Output
(1521311400000,0.00012575,0.00012643,0.00012563,0.00012643,383839.45768188,48.465051)
(1521313200000,0.00012643,0.00012726,0.00012642,0.00012722,207757.18765437,26.30099514)
(1521315000000,0.00012726,0.00012779,0.00012698,0.00012779,97387.01596624,12.4229077)
(1521316800000,0.0001276,0.0001278,0.00012705,0.0001275,96850.15260027,12.33316229)
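Folding that substitution back into the original pipeline gives a single command. Here is a sketch with an inline sample payload standing in for the live API response (assumes jq 1.5+ for fromdate; the UTC assumption from above still applies, so the timestamps differ from the local-time ones in the question):

```shell
# Sample JSON shaped like the GetTicks response (a stand-in for the curl call)
json='{"result":[{"T":"2018-03-17T18:30:00","O":0.00012575,"H":0.00012643,"L":0.00012563,"C":0.00012643,"V":383839.45768188,"BV":48.465051}]}'

echo "$json" | jq -r '.result[]
  | [((.T + "Z") | fromdate | tostring + "000"), .O, .H, .L, .C, .V, .BV]
  | map(tostring) | join(",")
  | "(\(.))"'
```

In the real pipeline, replace the echo with the curl command from the question. Using join(",") sidesteps the @tsv-then-gsub step and the quoting that @csv would add around the string field.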
Here is a non-portable awk solution. It is not portable because it relies on the system date command; on the (BSD-style) system I'm using, the relevant invocation looks like: date -j -f "%Y-%m-%eT%T" STRING "+%s"
awk -F, 'BEGIN{OFS=FS}
NF==0 { next }
{ sub(/\(/,"",$1);
  cmd="date -j -f \"%Y-%m-%eT%T\" " $1 " +%s";
  cmd | getline $1;
  close(cmd);       # avoid leaking a pipe per input line
  $1=$1 "000";      # milliseconds
  printf "%s", "(";
  print;
}' input.txt
Output
(1521325800000,0.00012575,0.00012643,0.00012563,0.00012643,383839.45768188,48.465051)
(1521327600000,0.00012643,0.00012726,0.00012642,0.00012722,207757.18765437,26.30099514)
(1521329400000,0.00012726,0.00012779,0.00012698,0.00012779,97387.01596624,12.4229077)
(1521331200000,0.0001276,0.0001278,0.00012705,0.0001275,96850.15260027,12.33316229)
Solution with sed:

sed -e 's/(\([^,]\+\)\(,.*\)/echo "(\$(date -d \1 +%s%3N)\2"/g' | ksh

(Note: the second capture group already begins with the comma, so no extra comma is needed after the $(date ...) substitution.)

Test:

<curl_command> | sed -e 's/(\([^,]\+\)\(,.*\)/echo "(\$(date -d \1 +%s%3N)\2"/g' | ksh

or:

<curl_command> > results_curl.txt
cat results_curl.txt | sed -e 's/(\([^,]\+\)\(,.*\)/echo "(\$(date -d \1 +%s%3N)\2"/g' | ksh
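Piping generated echo commands into a shell executes the data as code; if that is a concern, a plain read loop with GNU date performs the same conversion in the questioner's local timezone (a sketch; -d and %3N are GNU extensions):

```shell
while IFS=, read -r first rest; do
  ts=$(date -d "${first#(}" +%s%3N)   # strip the leading "(" before parsing
  printf '(%s,%s\n' "$ts" "$rest"
done < results_curl.txt
```

Because IFS=, splits only on the first comma for the two variables, "$rest" keeps the remaining fields, including the closing parenthesis, intact.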
Related
How to coerce AWK to evaluate string as math expression?
Is there a way to evaluate a string as a math expression in awk?

balter@spectre3:~$ echo "sin(0.3) 0.3" | awk '{print $1,sin($2)}'
sin(0.3) 0.29552

I would like to know a way to also have the first input evaluated to 0.29552.
You can just create your own eval function which calls awk again to execute whatever command you want it to:

$ cat tst.awk
{ print eval($1), sin($2) }

function eval(str,      cmd,line,ret) {
    cmd = "awk \047BEGIN{print " str "; exit}\047"
    if ( (cmd | getline line) > 0 ) {
        ret = line
    }
    close(cmd)
    return ret
}

$ echo 'sin(0.3) 0.3' | awk -f tst.awk
0.29552 0.29552

$ echo '4*7 0.3' | awk -f tst.awk
28 0.29552

$ echo 'tolower("FOO") 0.3' | awk -f tst.awk
foo 0.29552
awk lacks an eval(...) function. This means that you cannot do string-to-code translation based on input after the awk program initializes. OK, perhaps it could be done, but not without writing your own parsing and evaluation engine in awk. I would recommend using bc for this effort, like:

[edwbuck@phoenix ~]$ echo "s(0.3)" | bc -l
.29552020666133957510

Note that this requires sin to be shortened to s, as that's the bc sine operation.
Here's a simple one-liner!

math(){ awk "BEGIN{printf $1}"; }

Examples of use:

math 1+1
Yields "2"

math 'sqrt(25)'
Yields "5"

x=100; y=5; math "sqrt($x) + $y"
Yields "15"
With gawk version 4.1.2:

echo "sin(0.3) 0.3" | awk '{split($1,a,/[()]/);f=a[1];print @f(a[2]),sin($2)}'

It's ok with tolower(FOO) too.
You can try Perl, as it has an eval() function:

$ echo "sin(0.3)" | perl -ne ' print eval '
0.29552020666134

For the given input:

$ echo "sin(0.3) 0.3" | perl -ne ' /(\S+)\s+(\S+)/ and print eval($1), " ", $2 '
0.29552020666134 0.3
awk to sort two fields:
Would like to sort the Input.csv file based on fields $1 and $5 and generate country-wise A-Z order. While sorting, the country name needs to be taken from either $1 or $5, whichever is non-blank.

Input.csv
Country,Amt,Des,Details,Country,Amt,Des,Network,Details
abc,10,03-Apr-14,Aug,abc,10,DL,ABC~XYZ,Sep
,,,,mno,50,DL,ABC~XYZ,Sep
abc,10,22-Jan-07,Aug,abc,10,DL,ABC~XYZ,Sep
jkl,40,11-Sep-13,Aug,,,,,
,,,,ghi,30,AL,DEF~PQZ,Sep
abc,10,03-Apr-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,19-Feb-14,Aug,abc,10,MN,ABC~XYZ,Sep
def,20,02-Jul-13,Aug,,,,,
def,20,02-Aug-13,Aug,,,,,

Desired Output.csv
Country,Amt,Des,Details,Country,Amt,Des,Network,Details
abc,10,03-Apr-14,Aug,abc,10,DL,ABC~XYZ,Sep
abc,10,22-Jan-07,Aug,abc,10,DL,ABC~XYZ,Sep
abc,10,03-Apr-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,19-Feb-14,Aug,abc,10,MN,ABC~XYZ,Sep
def,20,02-Jul-13,Aug,,,,,
def,20,02-Aug-13,Aug,,,,,
,,,,ghi,30,AL,DEF~PQZ,Sep
jkl,40,11-Sep-13,Aug,,,,,
,,,,mno,50,DL,ABC~XYZ,Sep

I have tried the below command but am not getting the desired output. Please suggest.

head -1 Input.csv > Output.csv; sort -t, -k1,1 -k5,5 <(tail -n +2 Input.csv) >> Output.csv
awk to the rescue!

$ awk -F, '{print ($1==""?$5:$1) "\t" $0}' file | sort | cut -f2-
Country,Amt,Des,Details,Country,Amt,Des,Network,Details
abc,10,03-Apr-14,Aug,abc,10,DL,ABC~XYZ,Sep
abc,10,03-Apr-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,19-Feb-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,22-Jan-07,Aug,abc,10,DL,ABC~XYZ,Sep
def,20,02-Aug-13,Aug,,,,,
def,20,02-Jul-13,Aug,,,,,
,,,,ghi,30,AL,DEF~PQZ,Sep
jkl,40,11-Sep-13,Aug,,,,,
,,,,mno,50,DL,ABC~XYZ,Sep

Here the header starts with uppercase and the data is lowercase. If this is not a valid assumption, the header needs special handling as you did above, or better, within awk:

$ awk -F, 'NR==1{print; next} {print ($1==""?$5:$1) "\t" $0 | "sort | cut -f2-"}' file
Is this what you want? (It omits the first line.)

cat file_containing_your_lines | awk 'NR != 1' | sed "s/,/\t/g" | sort -k 1 -k 5 | sed "s/\t/,/g"
Remove duplicated string stored in variable
I have a variable $var with this content:

var=word1,word2,word3,word1,word3

and I need to delete the duplicate words, storing the result back in the same variable $var.
Try

var="word1,word2,word3,word1,word3"
list=$(echo $var | tr "," "\n")
var=($(printf "%s\n" "${list[@]}" | sort | uniq -c | sort -rnk1 | awk '{ print $2 }'))
echo "${var[@]}"
If open to perl, then:

$ var="word1,word2,word3,word1,word3"
$ var=$(perl -F, -lane'{$h{$_}++ or push @a, $_ for @F; print join ",", @a}' <<< "$var")
$ echo "$var"
word1,word2,word3
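If perl isn't available, the classic `!seen[$0]++` awk idiom also deduplicates while preserving the order of first appearance (a sketch; RS=, makes each comma-separated word its own record):

```shell
var="word1,word2,word3,word1,word3"
# printf '%s' avoids a trailing newline polluting the last record
var=$(printf '%s' "$var" | awk -v RS=, -v ORS=, '!seen[$0]++' | sed 's/,$//')
echo "$var"    # word1,word2,word3
```

The sed at the end only strips the trailing comma that ORS=, leaves behind.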
df -h unix command output
How can I get the df -h command output into Excel format or a CSV file? I have tried:

df -k | tr -s " " | sed 's/ /, /g' | sed '1 s/, / /g' | column -t
df -h | column -t

but the format is not right and I'm not able to load it into Excel or a table. Can you please help?
Try this:

df -k | tr -s " " | sed 's/ /, /g' | sed '1 s/, / /g'
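One reason naive comma insertion misbehaves is the header's two-word "Mounted on" column and mount points that contain spaces. A sketch that uses POSIX df -P output and rebuilds the last field explicitly (the rewritten header names are assumptions based on typical df output):

```shell
df -Pk | awk '
NR==1 { print "Filesystem,1K-blocks,Used,Available,Capacity,MountedOn"; next }
{
    # Re-join mount points that contain spaces (fields 6..NF).
    mount = $6
    for (i = 7; i <= NF; i++) mount = mount " " $i
    printf "%s,%s,%s,%s,%s,%s\n", $1, $2, $3, $4, $5, mount
}' > df.csv
```

df -P guarantees one line per filesystem, which is what makes the field positions reliable.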
cygwin help trimming output
ping google.com -n 10 | grep Minimum | sed s/^\ \ \ \ //

will output:

Minimum = 29ms, Maximum = 49ms, Average = 32ms

I want to trim from the space after the = to the , after Minimum, so it would only show:

29ms
One way using awk:

ping google.com -n 10 | awk '/Minimum =/ { sub(",","",$3); print $3 }'
$ echo "Minimum = 29ms, Maximum = 49ms, Average = 32ms" | awk '{print $3}' | sed s/,//
29ms

So this should work, but might not be the most elegant expression of your requirement:

ping google.com -n 10 | grep Minimum | awk '{print $3}' | sed s/,//

You could also use cut instead of awk.
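The grep and the field extraction can also be collapsed into a single sed invocation (a sketch; it matches the Windows-style ping summary line shown above):

```shell
ping google.com -n 10 | sed -n 's/.*Minimum = \([0-9]*ms\).*/\1/p'
```

The -n plus the p flag prints only the line that matched, so no separate grep is needed.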