I fetched a timestamp from sysibm.sysdummy1. If the timestamp is 2015-08-21.23.35.45.45287, I need to truncate it to the minutes only, i.e. 2015-08-21.23.35. Can anyone tell me how to do this using Unix commands?
When you want to cut it, use
echo '2015-08-21.23.35.45.45287' | cut -d"." -f1-3
cut might not be the best solution here; consider the DB2 expression (see @data_henrik's answer) or the shell built-in substring expansion:
a="2015-08-21.23.35.45.45287"; echo ${a:0:16}
The DB2 expression values(substr(timestamp(current timestamp),1,16)) would return the timestamp up to the minutes. It would be delivered by DB2, not by Unix.
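For completeness, a quick sketch comparing the cut approach with two pure-shell trims (bash/ksh substring expansion and suffix-stripping parameter expansion), all yielding the same minute-precision result:

```shell
# Trim a DB2-style timestamp string to minute precision, three ways.
ts="2015-08-21.23.35.45.45287"

# 1) cut on the dot delimiter: fields 1-3 are date, hours, minutes.
cut_result=$(printf '%s\n' "$ts" | cut -d'.' -f1-3)

# 2) Substring expansion: the first 16 characters (bash/ksh only).
sub_result=${ts:0:16}

# 3) Strip the two trailing dot-separated fields (seconds, fraction).
par_result=${ts%.*.*}

printf '%s\n%s\n%s\n' "$cut_result" "$sub_result" "$par_result"
# each line prints: 2015-08-21.23.35
```

The parameter-expansion forms avoid spawning a process, which matters if you do this in a loop.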
Here is what my log file looks like
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:05","k":"7322","h":"178","s":-53.134575556764}
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:06","k":"2115","h":"178","s":-53.134575556764}
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:07","k":"1511","h":"178","s":-53.134575556764}
There are multiple log files with similar entries and they are updated every second.
Here "t":"20:50:05" is the time.
What I want to do is get all log entries between two specific times, from the end of every file.
I tried tail files*.log | grep -e "20:50:07 | 20:50:05" but it does not return anything.
How do I get all log entries between the given times, starting from the end of each log file?
If you're looking for a range for records, and the format of the lines is consistent, the easiest way is probably to isolate the time field, strip out the colons, and leverage the power of arithmetic operators.
A one-liner awk solution, for example:
tail files*.log | awk -v from="205006" -v to="205007" -F"\"" '{ timeasint=$4; gsub(":","",timeasint); if (timeasint >= from && timeasint <= to) print $0 }'
would get you:
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:06","k":"2115","h":"178","s":-53.134575556764}
[BCDID::16 T::LIVE_ANALYZER GCDID::16] {"t":"20:50:07","k":"1511","h":"178","s":-53.134575556764}
Of course you couldn't span across midnight (i.e., 23:59:59 - 00:00:01), but for that you'd need dates as well as times in your log anyway.
If you had dates, my suggestion would be converting them to epoch stamps (using date -d "string" or some other suitable method) and comparing the epoch stamps as integers.
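As a sketch of that epoch-stamp approach (assuming GNU date with the -d option, and made-up dates, since the log in the question carries only times):

```shell
# Convert date+time strings to epoch seconds and compare them as integers.
# The range here deliberately spans midnight, which plain HH:MM:SS
# comparison cannot handle.
stamp=$(date -d "2023-05-02 00:00:00" +%s)
from=$(date -d "2023-05-01 23:59:59" +%s)
to=$(date -d "2023-05-02 00:00:01" +%s)

if [ "$stamp" -ge "$from" ] && [ "$stamp" -le "$to" ]; then
    echo "in range"        # prints: in range
else
    echo "out of range"
fi
```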
Using unix commands, how would I be able to take website information and place it inside a variable?
I have been practicing with curl -sS which allows me to strip out the download progress output and just print the downloaded data (or any possible error) in the console. If there is another method, I would be glad to hear it.
But so far I have a website and I want to get certain information out of it, so I am using curl and cut like so:
curl -sS "https://en.wikipedia.org/wiki/List_of_Olympic_medalists_in_judo?action=raw" | cut -c19-
How would I put this into a variable? My attempts have not been successful so far.
Wrap any command in $(...) to capture the output in the shell, which you could then assign to a variable (or do anything else you want with it):
var=$(curl -sS "https://en.wikipedia.org/wiki/List_of_Olympic_medalists_in_judo?action=raw" | cut -c19-)
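One caveat worth adding: with a pipeline inside $(...), the exit status the shell sees is cut's, not curl's, so a failed download goes unnoticed by default. A sketch using pipefail and curl -f (which makes curl fail on HTTP error responses) so you can react to failures:

```shell
set -o pipefail   # make the pipeline report curl's failure, not just cut's

if var=$(curl -sSf "https://en.wikipedia.org/wiki/List_of_Olympic_medalists_in_judo?action=raw" | cut -c19-); then
    printf '%s\n' "$var" | head -n 3   # do something with the captured data
else
    echo "download failed" >&2
fi
```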
I have a shell script that checks how many days old a file is. I used stat -f "%m%t%Sm %N" "$file", but I want to store this in a variable and then compare the current time against the file's timestamp.
Assuming you're using bash, you can capture the output of commands with something like:
fdate=$(stat -f "%m%t%Sm %N" "$file")
and then do whatever you will with the results:
echo ${fdate}
That's assuming the command itself works in the first place. If it does, you can ignore the text below.
The GNU stat program uses -f to specify that you want to query the filesystem rather than a file, and the other options you have don't seem to make sense in that context (the -f format-string form in your question is from BSD/macOS stat).
Using GNU stat, you can get the time since the last file update (1) as:
ageInSeconds=$(($(date -u +%s) - $(stat --printf "%Y" "$file")))
This subtracts the last modification time of the file from the current time (both expressed as seconds since the epoch) to give you the age in seconds.
To turn that into days, assuming you're not overly concerned about the possible error from leap seconds (an error of, at most, one part in about 15.7 million, or 0.000006%), you can just divide it by 86,400:
ageInDays=$((($(date -u +%s) - $(stat --printf "%Y" "$file")) / 86400))
(1) Note that, although stat purports to have a %W format specifier that gives the birth of the file, this doesn't always work (it returns zero). You could check that first if you're really interested in when the file was created rather than last updated but you may have to be prepared to accept the possibility the information is not available. I've used last modification time above since, frequently, it's used for things like detecting changes.
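Putting the pieces together for the original question, a sketch assuming GNU stat and an arbitrary 7-day threshold (the temp file just makes the example self-contained):

```shell
file=$(mktemp)                           # stand-in for the file to check

mtime=$(stat --printf '%Y' "$file")      # last modification, epoch seconds
now=$(date -u +%s)
ageInDays=$(( (now - mtime) / 86400 ))

if [ "$ageInDays" -gt 7 ]; then
    echo "older than a week"
else
    echo "modified within the last week" # prints this: the file is brand new
fi

rm -f "$file"
```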
I've been bashing my head into the wall for some time on this one.
I have a date string in my (ksh) script. It's not the current time, it's some arbitrary time.
How can I convert that date string into a Unix timestamp? I'm working on SunOS 5.10 and AIX, which don't have the date -d option, nor the MacOS date -j -f options.
I only need to do this conversion in one place in my code, so ideally I'd like to do it in one line, but if I have to create a function then so be it.
I've messed around with Python and Perl to achieve this in one line. Python came the closest, but I couldn't get it to account for time zone, which I would really like. Any help would be much appreciated.
I lost whatever I had been trying to do with Python earlier, but looking back at it I've found a solution:
python -c 'import datetime, time; print(int(time.mktime(datetime.datetime.strptime("08/22/2014/16", "%m/%d/%Y/%H").timetuple())))'
This particular command outputs 1408737600, which is 4pm on August 22 2014 on the east coast.
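Regarding the time-zone concern: mktime() interprets the broken-down time in the process's local zone, so you can pin the zone by setting TZ in the environment for just that one command. A sketch (it assumes python3 and the IANA zone database are installed, which won't be true on every SunOS/AIX box):

```shell
# TZ pins the zone mktime() uses; America/New_York matches the answer's
# "east coast" example, so the result is the same 1408737600.
TZ="America/New_York" python3 -c '
import datetime, time
t = datetime.datetime.strptime("08/22/2014/16", "%m/%d/%Y/%H")
print(int(time.mktime(t.timetuple())))
'
# prints: 1408737600
```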
aman@gmail.com,"08OCT2012"
abc@gmail.com,"11JUL2012"
def@gmail.com,"16DEC2010"
abc@gmail.com,"16MAR2011"
aman@gmail.com,"21APR2011"
abc@apple.com,"12DEC2010"
xyz@fb.com,"06MAR2011"
I want to sort the above CSV using the unix sort command, first by email address and then by date.
I have tried something like
sort -k1 -k2.12 -k2.3M -k2.6 file.csv
But it didn't work out. Does anybody have an idea how to sort this csv?
You may need sort -t, to indicate that the delimiter is a comma (,).
Then, something like this should work:
sort -t, -k1,1 -k2,2 file.csv
Anyway, to sort by date you should first do some date -> UNIX timestamp conversion on your date field.
You can't sort that date format directly. Always use ISO 8601 in tabular data, because it is the only date format that sorts correctly as plain text (big-endian: year first, then month, then day).
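One way to carry out that conversion advice without touching the stored file: decorate each line with an ISO-style (big-endian) key built from the DDMONYYYY field, sort on it, then strip the key again. A decorate-sort-undecorate sketch assuming the file.csv layout shown in the question:

```shell
awk -F, 'BEGIN {
    # month-name lookup for the DDMONYYYY format
    split("JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC", names, " ")
    for (i = 1; i <= 12; i++) m[names[i]] = sprintf("%02d", i)
}
{
    d = $2; gsub(/"/, "", d)                       # e.g. 08OCT2012
    iso = substr(d, 6, 4) "-" m[substr(d, 3, 3)] "-" substr(d, 1, 2)
    print $1 "," iso "," $0                        # decorate with sortable key
}' file.csv | LC_ALL=C sort -t, -k1,1 -k2,2 | cut -d, -f3-   # sort, undecorate
```

LC_ALL=C keeps the email comparison byte-wise; cut -d, -f3- drops the two decoration fields and leaves the original lines in sorted order.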