Informix FROM_UNIXTIME alternative - datetime

I was searching for a way to group data by interval (e.g., every 30 minutes) using the date stored in that table, so I need to convert that datetime to milliseconds so that I can divide it by the interval I need, as in this query:
SELECT FLOOR(UNIX_TIMESTAMP(timestamp)/(15 * 60 * 1000)) AS timekey
FROM table
GROUP BY timekey;
This query runs perfectly on SQL Server, but on Informix it gives me the error
Routine (unix_timestamp) can not be resolved.
because UNIX_TIMESTAMP is not defined in IBM Informix.
So I need a direct way to get the Unix epoch time from a timestamp column of type DATETIME YEAR TO FRACTION(3) in IBM Informix, like UNIX_TIMESTAMP in SQL Server.

If the timestamp column is of type DATETIME YEAR TO SECOND or similar, then you can convert it to a DECIMAL(18,5) number of seconds since the Unix Epoch, aka 1970-01-01 00:00:00Z (UTC; time zone offset +00:00) using a procedure such as this:
{
# "#(#)$Id: tounixtime.spl,v 1.6 2002/09/25 18:10:48 jleffler Exp $"
#
# Stored procedure TO_UNIX_TIME written by Jonathan Leffler (previously
# jleffler@informix.com and now jleffler@us.ibm.com). Includes fix for
# bug reported by Tsutomu Ogiwara <Tsutomu.Ogiwara@ctc-g.co.jp> on
# 2001-07-13. Previous version used DATETIME(0) SECOND TO SECOND
# instead of DATETIME(0:0:0) HOUR TO SECOND, and when the calculation
# extended the shorter constant to DATETIME HOUR TO SECOND, it added the
# current hour and minute fields, as documented in the Informix Guide to
# SQL: Syntax manual under EXTEND in the section on 'Expression'.
# Amended 2002-08-23 to handle 'eternity' and annotated more thoroughly.
# Amended 2002-09-25 to handle fractional seconds, as companion to the
# new stored procedure FROM_UNIX_TIME().
#
# If you run this procedure with no arguments (use the default), you
# need to worry about the time zone the database server is using because
# the value of CURRENT is determined by that, and you need to compensate
# for it if you are using a different time zone.
#
# Note that this version works for dates after 2001-09-09 when the
# interval between 1970-01-01 00:00:00+00:00 and current exceeds the
# range of INTERVAL SECOND(9) TO SECOND. Returning DECIMAL(18,5) allows
# it to work for all valid datetime values including fractional seconds.
# In the UTC time zone, the 'Unix time' of 9999-12-31 23:59:59 is
# 253402300799 (12 digits); the equivalent for 0001-01-01 00:00:00 is
# -62135596800 (11 digits). Both these values are unrepresentable in
# 32-bit integers, of course, so most Unix systems won't handle this
# range, and the so-called 'Proleptic Gregorian Calendar' used to
# calculate the dates ignores locale-dependent details such as the loss
# of days that occurred during the switch between the Julian and
# Gregorian calendar, but those are minutiae that most people can ignore
# most of the time.
}
CREATE PROCEDURE to_unix_time(d DATETIME YEAR TO FRACTION(5)
                                  DEFAULT CURRENT YEAR TO FRACTION(5))
    RETURNING DECIMAL(18,5);
    DEFINE n DECIMAL(18,5);
    DEFINE i1 INTERVAL DAY(9) TO DAY;
    DEFINE i2 INTERVAL SECOND(6) TO FRACTION(5);
    DEFINE s1 CHAR(15);
    DEFINE s2 CHAR(15);
    LET i1 = EXTEND(d, YEAR TO DAY) - DATETIME(1970-01-01) YEAR TO DAY;
    LET s1 = i1;
    LET i2 = EXTEND(d, HOUR TO FRACTION(5)) -
             DATETIME(00:00:00.00000) HOUR TO FRACTION(5);
    LET s2 = i2;
    LET n = s1 * (24 * 60 * 60) + s2;
    RETURN n;
END PROCEDURE;
Some of the commentary about email addresses is no longer valid – things have changed in the decade and a half since I wrote this.
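For intuition, here is a rough Python sketch (not Informix SPL) of the same two-part calculation the procedure performs: whole days since 1970-01-01 multiplied by 86,400, plus the seconds elapsed since midnight. It assumes a naive datetime that is already in UTC, mirroring the time-zone caveat in the comments above.
from datetime import datetime, date

def to_unix_time(d):
    days = (d.date() - date(1970, 1, 1)).days                       # plays the role of i1
    secs = (d - datetime(d.year, d.month, d.day)).total_seconds()   # plays the role of i2
    return days * 24 * 60 * 60 + secs

print(to_unix_time(datetime(2001, 9, 9, 1, 46, 40)))  # 1000000000.0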

Related

Manipulating timestamps without converting them to datetime in MySql 8

It seems MySQL 8 differentiates TIMESTAMP from DATETIME more than previous versions (at least more than 5.7), and Java drivers understand that a TIMESTAMP is universal (seconds since the epoch) while a DATETIME is like a textual representation of a date and time without time zone information; it is not an instant in time.
I need to work with timestamps, but I need to add 1 second to one of them in a SELECT query. Unfortunately I can't do it. If I try, the value gets converted to DATETIME with its side effects, as they are not the same.
This code can be used for testing without any other programming language:
CREATE TABLE z (
`id` bigint unsigned NOT NULL PRIMARY KEY AUTO_INCREMENT,
`myTs` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB;
insert into z (myTs) values (TIMESTAMP'2023-01-01 00:00:00+00:00');
create table x as select myTs + interval 1 second from z limit 1;
desc x;
drop table x;
drop table z;
You will get myTs + interval 1 second | datetime | ...
But if you remove + interval 1 second then you get myTs | timestamp | ...
I have read the documentation and tried different functions and I couldn't avoid or reverse this conversion.
Specifically I tried at least timestamp(myTs + interval 1 second), timestampadd(second, 1, myTs), date_add(myTs, interval 1 second) and timestamp(myTs + 1).
I need either a way to add 1 second to a timestamp without it being converted to DATETIME, or a way to convert a DATETIME to a TIMESTAMP type (providing a time zone if needed) in MySQL 8.
I'm using MySQL 8.0.30.
Thank you.

Timestamp field in a dbf file (dBase 7 format) is not making sense

I've looked at both [1] and [2] and I'm completely confused (and since the dbf file is a version
4 file, [1] should apply well). For one thing why does [1] state that the timestamp's date portion is the # of days since 1/1/4713 BC? That's just very puzzling. Secondly, assuming that it is the # of days since 4713 BC, I'm having some trouble with the value I am getting.
First off, my dbf file has a timestamp field which has an 8 byte long value. The actual
date is 2000/8/16 17:21:41. In the dbf file, the 8 byte sequence is as follows
0x42ccb20e0340df00.
From [1], it says the first 4 bytes are for the date, and 2nd 4 bytes for the time. If the original
byte sequence is actually little-endian (0x42ccb20e) then that should be 0x0eb2cc42 which
comes to the value of 246598722. So date is 0x0eb2cc42 (246598722) and time is 0x00df4003
(14630915).
I must be missing something here or calculating something wrong. 246598722 is equivalent to 675612 years (assuming 1 yr = 365 days, as adding leap years would confuse me... and shouldn't really be that much off).
From [2], I shouldn't use 01/01/4713 BC as the basis but 12/31/1899 (well, 1/1/1900). But then, the date value I have isn't even in the range of what [2] shows.
Now if I take the actual value (2000/8/16) and use [1] and [2], I get the following:
method [1]: 2450501 days : (2000 - -4713) * 365 + (8 * 30) + 16
method [2]: 36756 days : [100 * 365 + 8 * 30 + 16] (over counting the # of days)
The dbf file isn't corrupted (otherwise, if I look at the timestamp in dBase, it'd crap out
and display something crazy).
I've thought of using big-endian, but that makes even less sense as the values are even larger. I've even thought of the possibility that it's actually the # of seconds elapsed since either date, but that makes even less sense of the values. i.e. 246598722 = # of seconds elapsed (counting back from 2000/8/16) will make the base year as 1812. (calculations: 246898722 / (3600 * 365) = 187.8985, so 2000 - 187.8985 = 1812.1015)
Can someone point out where I'm doing this wrong?
Thanks!
[1] - https://www.dbase.com/Knowledgebase/INT/db7_file_fmt.htm
[2] - Convert dBase Timestamp
For any dBASE questions, I would recommend going to the dBASE newsgroups; they have a very helpful and knowledgeable community.
I've finally found the answer thanks to [3].
Basically, the timestamp 8 byte sequence is used as a whole with the following notes:
It's stored in big-endian.
The last byte is not used.
It's a Julian Day Number.
So in my case, it's 0x42ccb20e0340df00 and truncating the last byte,
I get 0x42ccb20e0340df.
Then the following python code gets the correct info:
import datetime

base = 0x42cc418ba99a00                  # value corresponding to 1970/1/1 (see below)
frm_date = int('42ccb20e0340df', 16)     # the 8-byte field with the unused last byte dropped
final_ts = (frm_date - base) / 500       # difference / 500 = seconds since the Unix epoch
final_date = datetime.datetime.utcfromtimestamp(final_ts)
print(final_date)
which outputs 2000-8-16 17:21:41 and some milliseconds, which I just ignore.
So I'm guessing the theory is that the above code moves the 'base' date to
1970/1/1 from 1/1/1, which helps since utcfromtimestamp() doesn't
work with any value prior to 1970/1/1.
My confusion stems from the fact it doesn't use 4713BC as the
base year, instead it uses 1/1/1, though I'm still trying to figure out how to get the value 0x42cc418ba99a00 for 1970/1/1.
[3] - https://stackoverflow.com/a/60424157/10860403
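For completeness, here is a hedged Python sketch that packages the recipe above into one function: read the 8 bytes big-endian, drop the unused last byte, subtract the constant identified above as 1970/1/1, divide by 500, and interpret the result as seconds since the Unix epoch. The constant and the division by 500 come from the answer above, not from any official dBase specification.
import datetime
import struct

BASE_1970 = 0x42cc418ba99a00  # value the answer above identifies as 1970/1/1

def dbf7_timestamp_to_datetime(raw8):
    (value,) = struct.unpack('>Q', raw8)  # all 8 bytes, big-endian
    value >>= 8                           # drop the unused last byte
    seconds = (value - BASE_1970) / 500   # per the recipe above
    return datetime.datetime.utcfromtimestamp(seconds)

print(dbf7_timestamp_to_datetime(bytes.fromhex('42ccb20e0340df00')))
# 2000-08-16 17:21:41.310000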

Converting a 19 digits time stamp to a real time (from .zvi file format)

After a long day of research,
does anybody know how to convert a 19-digit timestamp from the metadata of a .zvi file (produced by AxioVision, Zeiss) to a real time format? (The output probably includes milliseconds.)
An example time-stamp is: 4675873294709522577
Thanks !
Arnon
Matlab solution:
The main issue is not the x2mdate conversion (which simply adds the number of days between the year zero, when Matlab starts counting, and the year 1900, when Excel/zvi starts counting), but the same class issue as described above. This conversion to double can be done with typecast in Matlab:
myZVI = 4675946358764751269;
timestampDouble = typecast(int64(myZVI),'double');
myTime = datestr(timestampDouble + 693960, 'dd-mmm-yyyy HH:MM:SS.FFF');
693960 is the number of days between year zero and 1900; if you don't need an absolute date but just the difference between two timestamps, you don't even need this; for instance the interval between two of my video frames can be calculated like this:
myZVI2 = 4675946358764826427;
timestampDouble2 = typecast(int64(myZVI2),'double');
myTimeDifference = datestr(timestampDouble2 - timestampDouble,'SS.FFF');
hope this helps:-)
This is a Microsoft OLE Automation Date. But you've read it as a 64-bit long integer instead of the 64-bit double that it should be.
You didn't specify a language, so I will pick C#:
long l = 4675873294709522577L;
byte[] b = BitConverter.GetBytes(l);
double d = BitConverter.ToDouble(b, 0);
Debug.WriteLine(d); // 41039.901598693
DateTime dt = DateTime.FromOADate(d);
Debug.WriteLine(dt); // 5/10/2012 9:38:18 PM
More information in this thread.
An OLE Automation Date is basically the number of 24-hour days (including a fractional part for the time of day) since 30 December 1899, without any particular time zone reference.
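If you prefer Python over C# or Matlab, a minimal sketch of the same two steps (reinterpret the 64-bit integer's bytes as an IEEE-754 double, then treat that double as fractional days since the OLE Automation epoch, 30 December 1899) might look like this; the helper name is mine and the example value is the one from the question.
import struct
from datetime import datetime, timedelta

OA_EPOCH = datetime(1899, 12, 30)  # day 0 of an OLE Automation date

def zvi_timestamp_to_datetime(raw):
    days = struct.unpack('<d', struct.pack('<q', raw))[0]  # reinterpret the bits as a double
    return OA_EPOCH + timedelta(days=days)                 # fractional days since the OA epoch

print(zvi_timestamp_to_datetime(4675873294709522577))
# 2012-05-10 21:38:18 plus a fraction of a second, matching the C# output above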

Number of seconds since January 1, 1970 00:00:00 GMT Erlang

I am interacting with a Remote Server. This Remote Server is in a different Time Zone. Part of the Authentication requires me to produce the:
"The number of seconds since January 1, 1970 00:00:00 GMT
The server will only accept requests where the timestamp
is within 600s of the current time"
The documentation of erlang:now() reveals that it can get me the elapsed time since 00:00 GMT, January 1, 1970 (zero hour),
on the assumption that the underlying OS supports this. It returns a size-3 tuple, {MegaSecs, Secs, MicroSecs}. I tried using element(2, erlang:now()) but the remote server sends me this message:
Timestamp expired: Given timestamp (1970-01-07T14:44:42Z)
not within 600s of server time (2012-01-26T09:51:26Z)
Which of these 3 parameters is the required number of seconds since Jan 1, 1970? What am I not doing right? Is there something I have to do with the universal time as in calendar:universal_time()?
UPDATE: I managed to get rid of the time-expired problem by using this:
seconds_1970() ->
    T1 = {{1970,1,1},{0,0,0}},
    T2 = calendar:universal_time(),
    {Days, {HH, Mins, Secs}} = calendar:time_difference(T1, T2),
    (Days * 24 * 60 * 60) + (HH * 60 * 60) + (Mins * 60) + Secs.
However, the question still remains. There must be a way, a fundamental Erlang way of getting this, probably a BIF, right?
You have to calculate the UNIX time (seconds since 1970) from the results of now(), like this:
{MegaSecs, Secs, MicroSecs} = now().
UnixTime = MegaSecs * 1000000 + Secs.
Just using the second entry of the tuple will tell you the time in seconds since the last decimal trillionellium (in seconds since the UNIX epoch).
[2017 Edit]
now is deprecated, but erlang:timestamp() is not and returns the same format as now did.
Which of these 3 parameters is the required number of seconds since Jan 1, 1970 ?
All three of them, collectively. Look at the given timestamp. It's January 7, 1970. Presumably Secs will be between 0 (inclusive) and 1,000,000 (exclusive). One million seconds is only 11.574 days. You need to use the megaseconds as well as the seconds. Since the error tolerance is 600 seconds you can ignore the microseconds part of the response from erlang:now().
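To see why dropping MegaSecs lands you in early January 1970, here is a quick Python illustration; the tuple values are reconstructed from the error message in the question and are only approximate.
from datetime import datetime, timezone

# erlang:now() / erlang:timestamp() returns {MegaSecs, Secs, MicroSecs}
mega_secs, secs, micro_secs = 1327, 571482, 0

print(datetime.fromtimestamp(secs, tz=timezone.utc))
# 1970-01-07 14:44:42+00:00  (Secs alone is always < 1,000,000, i.e. < ~11.6 days)

print(datetime.fromtimestamp(mega_secs * 1000000 + secs, tz=timezone.utc))
# 2012-01-26 09:51:22+00:00  (within the server's 600 s window)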

Convert 64bit timestamp to a readable value

In my dataset I have two timestamp columns. The first is microseconds since the application was started - e.g., 1400805323. The second is described as a 64-bit timestamp which I'm hoping will indicate clock time, using the NTP format of number of seconds from 1/1/1901.
Example of '64bit' timestamps:
129518309081725000
129518309082059000
129518309082393000
129518309082727000
129518309083060000
129518309083394000
129518309083727000
Is there any matlab/python code that could convert this into a readable format?
Any help much appreciated,
Steve
Assuming that these values were generated today, June 6th 2011, they look like the number of 100-nanosecond intervals since Jan 1st of the year 1601. This is how Windows NT stores FILETIME. For more concentrated info on this, read this blog post by Raymond Chen. These articles also show how to convert it to anything else.
See edit below for updated answer:
For NTP time, the 64bits are broken in to seconds and fraction of seconds. The top 32 bits is the seconds. The bottom 32 bits is the fraction of seconds. You get the fraction by dividing the fraction part by 2^32.
So step one, convert to a double.
If you like Python, that's easy enough; I didn't add any bounds checking:
def to_seconds(h):
    return (h >> 32) + float(h & 0xffffffff) / 2**32
>>> to_seconds(129518309081725000)
30155831.26845886
The time module can convert that float to a readable time format.
import time
time.ctime(to_seconds(ntp_timestamp))
You'll need to worry about where the timestamp originated, though. time.ctime assumes seconds relative to Jan 1, 1970. So if your program is basing the NTP-format times on the time since the program started, you'd need to add to the seconds to normalize the timestamp for ctime.
>>> time.ctime(to_seconds(129518309081725000))
'Tue Dec 15 17:37:11 1970'
EDIT:
PyGuy is right: the original timestamps are not NTP time numbers, they are Windows 64-bit timestamps.
Here is the new to_seconds method to convert the 100 ns intervals based on 1/1/1601 to seconds since 1970:
def to_seconds(h):
    s = float(h) / 1e7        # convert 100 ns units to seconds
    return s - 11644473600    # number of seconds from 1601 to 1970
And the new output:
import time
time.ctime(to_seconds(129518309081725000))
'Mon Jun 6 04:48:28 2011'
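An equivalent way to express the same conversion, shown here as a small Python sketch, is to keep the value in 100-nanosecond units and add it to 1601-01-01 directly, which avoids remembering the 11644473600-second offset; the function name is mine.
from datetime import datetime, timedelta

def filetime_to_datetime(ft):
    # FILETIME counts 100-nanosecond intervals since 1601-01-01 (UTC)
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

print(filetime_to_datetime(129518309081725000))
# 2011-06-06 10:48:28.172500 (UTC; the ctime output above shows local time)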

Resources