If I do:
$ jq -cn 'now | localtime'
[2022,3,12,21,9,29.65448808670044,2,101]
It correctly gives the "broken down time" representation of current local time.
But if I do:
$ jq -cn 'now | localtime | mktime | localtime'
[2022,3,13,7,10,36,3,102]
It gives back a "broken down time" representation that is different from the current local time.
I think that when the output of localtime is converted to seconds since the Unix epoch by mktime, it is converted wrongly, because mktime wrongly assumes the input is GMT?
If I do:
$ jq -cn 'now | gmtime | mktime | localtime'
Now this gives the correct result (the "broken down time" representation of the current local time).
Am I correct? Thanks.
Yes.
From the jq docs:
The mktime builtin consumes "broken down time" representations of time output by gmtime and strptime.
You originally passed a local time, but it expects a UTC time. As you surmised, this is why your original code failed and the latter code worked: jq's mktime is the inverse of gmtime.
$ jq -rn 'now | ., ( gmtime | mktime )'
1649770973.430903
1649770973
jq does not appear to provide a means to convert from a local time to epoch time.
This differs from the behaviour of C's mktime. C's mktime expects a local time, making it the inverse of localtime.
In C, mktime can serve both roles. While it normally converts from local time, it can also convert from UTC by setting the local time zone to UTC.
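The same idea carries over to jq: run it with TZ set to UTC so that local time and UTC coincide, or measure the local UTC offset and subtract it. A sketch, assuming jq 1.6 or later (the $off variable is my own naming, not a jq builtin):

```shell
# With TZ=UTC, localtime and gmtime agree, so localtime|mktime round-trips:
TZ=UTC jq -n 'now | floor == (localtime | mktime)'
# prints: true

# In any zone, (localtime|mktime) is off by exactly the UTC offset,
# which can be measured and subtracted to recover the true epoch time:
jq -n 'now as $t
       | (($t | localtime | mktime) - ($t | gmtime | mktime)) as $off
       | ($t | localtime | mktime) - $off == ($t | floor)'
# prints: true
```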
Related
How to use jq to convert seconds since Unix epoch to a time string in human readable format but adjusted to Sydney, Australia time zone?
I tried the filter:
now | strftime("%Y-%m-%dT%H:%M:%SZ")
But I don't know how to adjust the time format string to convey the Sydney, Australia time zone.
Possibly I need to replace "Z" with the relevant time zone?
Both of the following convert to the time zone indicated by the TZ environment variable:
localtime | strftime(...)
strflocaltime(...)
For example,
$ jq -nr 'now | strftime("%FT%T")'
2022-02-14T06:14:07
$ jq -nr 'now | gmtime | strftime("%FT%T")'
2022-02-14T06:14:07
$ jq -nr 'now | localtime | strftime("%FT%T")'
2022-02-14T02:14:07
$ jq -nr 'now | strflocaltime("%FT%T")'
2022-02-14T02:14:07
That uses your local time, as determined by the TZ environment variable. Adjust as needed.
$ TZ=America/Halifax jq -nr 'now | strflocaltime("%FT%T")'
2022-02-14T02:14:07
$ TZ=America/Toronto jq -nr 'now | strflocaltime("%FT%T")'
2022-02-14T01:14:07
$ TZ=America/Vancouver jq -nr 'now | strflocaltime("%FT%T")'
2022-02-13T22:14:07
If you want to convert to different time zones in a single run of jq, you're out of luck: jq doesn't support converting to or from time zones other than UTC and the one indicated by TZ.
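The usual workaround is to drive one jq invocation per zone from the shell. A sketch (the zone list here is just an example):

```shell
# One jq run per zone, each with its own TZ:
for tz in Australia/Sydney America/Toronto UTC; do
  printf '%-20s %s\n' "$tz" "$(TZ=$tz jq -nr 'now | strflocaltime("%FT%T")')"
done
```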
Tested with both 1.5 and 1.6.
If you are using jq 1.6, use strflocaltime instead of strftime; it displays the time in your time zone.
jq -n 'now | strflocaltime("%Y-%m-%dT%H:%M:%S")'
From the manual:
The strftime(fmt) builtin formats a time (GMT) with the given format. The strflocaltime does the same, but using the local timezone setting.
If your time zone is different, set the shell's TZ variable accordingly before calling jq:
TZ=Australia/Sydney jq -n 'now | strflocaltime("%Y-%m-%dT%H:%M:%S")'
I have to distinguish between the following two paths.
shorter: https://www.example.com/
longer: https://www.example.com/foo/
In a Bash script, using Bash parameter expansion as follows returns something only for the longer one.
$ url1=https://www.example.com/
$ url2=https://www.example.com/foo/
$ cut -d/ -f4 <<<${url1%/*} # this prints an empty line

$ cut -d/ -f4 <<<${url2%/*} # this prints the last part of the path
foo
So the longer one can be identified in a Bash script, but now I have to define the same filter for a JSON value handled in jq.
If jq could be written like the following, my goal could be achieved...
jq '. | select( .url | (cut -d/ -f4 <<< ${url2%/*})!=null) )'
But that cannot be done. How can I do it?
jq has many string-handling functions -- one could do worse than checking the jq manual. For the task at hand, using a regex function would probably be best, but since you mentioned cut -d/ -f4, it might be of interest to note that much the same effect can be achieved by:
split("/")[3]
For the last non-trivial part you could consider:
sub("/ *$";"") | split("/")[-1]
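Putting that together for the original goal, a sketch (the sample input below is made up for illustration):

```shell
# Keep only objects whose URL has a non-empty fourth "/"-separated field,
# i.e. the "longer" URLs that carry a path component:
printf '%s\n' \
  '{"url":"https://www.example.com/"}' \
  '{"url":"https://www.example.com/foo/"}' |
jq -c 'select((.url | split("/")[3] // "") != "")'
# prints: {"url":"https://www.example.com/foo/"}
```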
Here is what my log files look like:
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:05","k":"7322","h":"178","s":-53.134575556764}
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:06","k":"2115","h":"178","s":-53.134575556764}
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:07","k":"1511","h":"178","s":-53.134575556764}
There are multiple log files with similar entries and they are updated every second.
Here "t":"20:50:05" is the time.
What I want to do is get all log entries between two specific times, reading from the end of each file.
I tried with tail files*.log | grep -e "20:50:07 | 20:50:05" but it does not return anything.
How do I get all log entries between the given times, starting from the end of every log file?
If you're looking for a range of records, and the format of the lines is consistent, the easiest way is probably to isolate the time field, strip out the colons, and leverage the power of arithmetic operators.
A one-liner awk solution, for example:
tail files*.log | awk -v from="205006" -v to="205007" -F"\"" '{ timeasint=$4; gsub(":","",timeasint); if (timeasint >= from && timeasint <= to) print $0 }'
would get you:
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:06","k":"2115","h":"178","s":-53.134575556764}
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:07","k":"1511","h":"178","s":-53.134575556764}
Of course you couldn't span across midnight (i.e., 23:59:59 - 00:00:01), but for that you'd need dates as well as times in your log anyway.
If you had dates, my suggestion would be converting them to epoch stamps (using date -d "string" or some other suitable method) and comparing the epoch stamps as integers.
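For example, with GNU date and made-up timestamps that include a date, the range check becomes plain integer comparison:

```shell
# Convert the range boundaries and a candidate timestamp to epoch seconds;
# -u avoids time-zone surprises. The timestamps here are hypothetical.
from=$(date -u -d "2022-04-12 20:50:05" +%s)
to=$(date -u -d "2022-04-12 20:50:07" +%s)
ts=$(date -u -d "2022-04-12 20:50:06" +%s)
[ "$ts" -ge "$from" ] && [ "$ts" -le "$to" ] && echo "in range"
# prints: in range
```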
I have the following CHECK that works fine on Linux (and Unix) to grep the last 5 minutes of logs for a specific error, "java.lang.OutOfMemoryError":
CHECK=$(awk -v d1="$(date --date="-5 min" "+[%-m/%-d/%y %H:%M:%S:%3N EEST]")" '$0 > d1' trace.log | grep "java.lang.OutOfMemoryError")
The logs have always the following timestamp format at each line (example):
[4/18/16 12:23:57:998 EEST]
I am trying to do the same on Solaris, but I get the following error response:
date: illegal option -- date=-5 min
usage: date [-u] mmddHHMM[[cc]yy][.SS]
date [-u] [+format]
date -a [-]sss[.fff]
I have been looking for a solution but cannot find a convenient one yet. I have found some information on date and time manipulation in Perl and Python, but it is not easy to adapt to what I need above (Perl in particular appears quite tricky; I am not familiar with it). So maybe Perl and Python are not the answer here.
Can you please help me to do the job?
Thank you.
How to add 1 hour in unix timestamp
date +%Y%m%d_%H%M
I need to add 1 hour to the time in the above format.
A literal answer to your question is to use
date --date="next hour" +%Y%m%d_%H%M
but I'm guessing that you actually want to display the time in another timezone. To display in UTC:
date --utc +%Y%m%d_%H%M
and in another timezone, e.g.
TZ="Europe/Stockholm" date +%Y%m%d_%H%M
assuming of course that the system clock is set up correctly.
GNU date makes life easy. Without GNU date you can manipulate your time zone (note the reversed sign: in POSIX TZ notation, GMT-1 is one hour ahead of UTC):
echo "$(TZ=GMT-1 date +%Y%m%d_%H%M)"
Be careful with Daylight Saving Time.
You can remember this trick when you want to get the date (without the time) of yesterday. Just adding 24 hours to the time zone offset can give problems during Daylight Saving Time transitions. You can use a trick to find yesterday:
you can be sure that yesterday was either 20 or 30 hours ago. Which one? Well, the most recent one that is not today.
echo "$(TZ=GMT+30 date +%Y-%m-%d)\n$(TZ=GMT+20 date +%Y-%m-%d)" |
grep -v $(date +%Y-%m-%d) | tail -1
The command above is for ksh. When you use bash, you want echo -e.
Or use printf:
printf "%s\n%s\n" "$(TZ=GMT+30 date +%Y-%m-%d)" "$(TZ=GMT+20 date +%Y-%m-%d)" |
grep -v $(date +%Y-%m-%d) | tail -1