SQLite Epoch time query - sqlite

Could use a bit of help on this. I have a table which stores records in JSON format in the acctinfo column. I can export the JSON content without issues, but the problem I'm running into is with the epoch time. I would like to be able to display my LastLoginTime in standard localtime format (not convert the column itself, but convert the output so it is readable). Any suggestion would be greatly appreciated.
SELECT name,
json_extract(table1.acctinfo, '$.LastloginTime'(1319017136629, 'unixepoch', 'localtime'))
from table1;
Here's an example of the JSON stored in the acctinfo column:
{
  "AcctCreateTime": 1518112456,
  "LastLoginTime": 1601055231,
  "LastModified": 1518112456
}

Use the functions datetime() and json_extract() together, like this:
SELECT datetime(json_extract(acctinfo, '$.LastLoginTime'), 'unixepoch', 'localtime')
FROM table1;
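For reference, with the sample JSON above json_extract() pulls out 1601055231 and datetime() converts it in place; the 'localtime' modifier shifts the result by whatever zone your machine is set to, so on a machine whose local zone is UTC this sketch would print 2020-09-25 17:33:51:
SELECT datetime(1601055231, 'unixepoch', 'localtime');  -- 2020-09-25 17:33:51 when the local zone is UTC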

Related

Using a query with a datetime cell in spreadsheets

I need to query a cell that contains a datetime, but when I import it, it's formatted by default to show only the date.
31/7/2020 19:18:58 (the actual underlying value)
31/7/2020 (what the cell displays)
So when I run this query:
=QUERY(A5:R10, "select K")
It returns only the date no matter what I do:
31/07/2020
I've tried:
Formatting the column as "yyyy-MM-dd HH:mm", but the time part comes back as 00:00
Filtering by datetime or timestamp
Using '"&TEXT(NOW(), "yyyy-mm-dd HH:mm")&"' or something like that
The question is:
Is there a way to do what I'm trying to do without reformatting the imported cells?
link to test it:
https://docs.google.com/spreadsheets/d/14rGPngGvFXGP8txMS2yFo1v4gPcI4K1oYWj_8yt2Uq8/edit?usp=sharing
When I select a cell and press F2, it shows the time.
Thanks a lot for your time!
Select column B and format it as date time.
Or, without reformatting it:
=ARRAYFORMULA(QUERY(1*(B2:B4&""), "format Col1 'yyyy-mm-dd hh:mm:ss'"))
or:
=ARRAYFORMULA(QUERY(TEXT(B2:B4, "yyyy-mm-dd hh:mm:ss"), "select *"))
Just make sure your cells are formatted as Automatic; if not, the query's format clause will be ignored even when you use it.

BigQuery - Get timezone offset from timezone name

Is there any way in BigQuery to get the current UTC timezone offset from a timezone name? For example using the input:
Australia/Victoria
How could I currently return:
+10:00
Below is an example for BigQuery Standard SQL:
#standardSQL
WITH `project.dataset.table` AS (
SELECT 'Australia/Victoria' tz_string
)
SELECT tz_string, DATETIME_DIFF(CURRENT_DATETIME(tz_string), CURRENT_DATETIME(), hour) tz_hours
FROM `project.dataset.table`
with the result:
Row  tz_string           tz_hours
1    Australia/Victoria  10
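The DATETIME_DIFF approach returns a bare integer of hours; if you want it rendered like the asker's +10:00, and want zones with 30-minute offsets (e.g. Australia/Adelaide at +09:30) handled too, one possible sketch, using only the standard FORMAT(), DIV() and MOD() functions and diffing by MINUTE instead of HOUR, is:
#standardSQL
SELECT FORMAT('%+03d:%02d',
  DIV(offset_minutes, 60),         -- signed whole hours, e.g. +10
  MOD(ABS(offset_minutes), 60))    -- leftover minutes, e.g. 30 for half-hour zones
FROM (
  SELECT DATETIME_DIFF(CURRENT_DATETIME('Australia/Victoria'),
                       CURRENT_DATETIME(), MINUTE) AS offset_minutes
);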
Another way to do this is to use the (at least, now) built-in FORMAT_TIMESTAMP() function and the %Ez format element.
SELECT FORMAT_TIMESTAMP('%Ez', CURRENT_TIMESTAMP(), 'Australia/Victoria');
Result (note this differs from the first answer's 10 because Australia/Victoria observes daylight saving time: the offset is +10:00 in winter and +11:00 in summer, so both outputs are correct for their respective dates):
+11:00

Date parameter mis-read in Delphi SQLite Query

What is wrong with my code:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate>= 2016-12-14');
This runs, but deletes ALL the rows in STLac for RegN, not just the rows with BegDate on or after 2016-12-14.
Originally I had:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate >= :myDate', [myDate]);
which had the advantage, I hoped, of not being tied to a particular date format. So I also tried the literal date in the format SQLite likes. Either way I get all rows deleted, not just those on or after the specified date.
Scott S.
Try quoting the date. Any non-numeric value must be provided in quotes; otherwise SQLite parses it as an expression, so the unquoted 2016-12-14 is evaluated as the subtraction 2016 - 12 - 14 = 1990, and (assuming BegDate is stored as TEXT) every row matches, because TEXT values compare greater than any number in SQLite.
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate >= "2016-12-14"');
SQLite does not have a dedicated datetime type, so you have to figure out how the date is actually represented in the table and change your query to supply the same format. First execute a SELECT statement in some kind of management tool,
select * from STLac where RegN = 99 and BegDate >= '2016-12-14' --(or '2016.12.04' or something else)
which displays the result in a grid; when you see the expected rows, change it to a DELETE query and copy it into your Delphi program.
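If you want to see for yourself why the unquoted literal matched every row, here is a minimal sketch (assuming BegDate is stored as TEXT, which SQLite orders above every numeric value):
SELECT 2016-12-14;            -- plain arithmetic: returns 1990
SELECT '2016-01-01' >= 1990;  -- returns 1, because TEXT compares greater than any number
DELETE FROM STLac WHERE RegN = 99 AND BegDate >= '2016-12-14';  -- single quotes are the standard form for string literals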

Sybase How to get dash separated date yyyy-mm-dd?

I want to get the date in the format yyyy-mm-dd, for example 2014-04-11, but it seems there is no way to do this in Sybase (ASE 12.5) with the convert function.
Currently I get the date with style 112 and insert the dashes between the digits myself. Is there a better way?
Take advantage of format 140: yyyy-mm-dd hh:mm:ss.ssssss
Use char(10) to make Sybase truncate the string to just the first 10 characters, i.e.
convert(char(10), col1, 140)
Try this:
select str_replace( convert( varchar, col1, 111 ), '/', '-')
from table
Look at the format table in the documentation shared by Doberon; it lists all the formats. I tried it and it works nicely:
SELECT convert(char(10),dateadd(month,-1, convert(date,getdate())),112) from table;
My query format is yyyymmdd.
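For a side-by-side sanity check of the two dash-separated approaches above, something like this should work (output values are illustrative, assuming styles 140 and 111 behave as described):
select convert(char(10), getdate(), 140)                        -- e.g. 2014-04-11 (style 140 truncated to 10 chars)
select str_replace(convert(varchar, getdate(), 111), '/', '-')  -- e.g. 2014-04-11 (style 111 with slashes swapped)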

PostgreSQL, R and timestamps with no time zone

I am reading a big csv (>1GB big for me!). It contains a timestamp field.
I read it (100 rows to start with) with fread from the excellent data.table package.
ddfr <- fread(input="~/file1.csv",nrows=100, header=T)
Problem 1 (RESOLVED): the timestamp fields (called "ts" and "update"), e.g. "02/12/2014 04:40:00 AM", are read in as strings. I convert the fields back to timestamps with the lubridate function mdy_hms(). Splendid.
ddfr$ts <- data.frame( mdy_hms(ddfr$ts))
Problem 2 (NOT RESOLVED): The timestamp is created with a time zone, as per POSIXlt.
How do I create a timestamp with NO TIME ZONE in R? Is it possible?
Now I use another (new) great package, PivotalR, to write the dataframe to PostgreSQL 9.3 using as.db.data.frame. It works like a charm.
x <- as.db.data.frame(ddfr, table.name= "tbl1", conn.id = 1)
Problem 3 (NOT RESOLVED): As the original dataframe timestamp fields had time zones, a table is created with the fields "timestamp with time zone". Ultimately the data needs to be stored in a table with fields configured as "timestamp without time zone".
But in my table in Postgres the data is stored as "2014-02-12 04:40:00.0", where the .0 at the end is the UTC offset. I think I need to have "2014-02-12 04:40:00".
I tried
ALTER TABLE tbl ALTER COLUMN ts type timestamp without time zone;
Then I copied across. While Postgres accepts the ALTER COLUMN command, when I try to copy (using INSERT INTO tbl2 SELECT ...) I get an error:
"column "ts" is of type timestamp without time zone but expression is of type text
Hint: You will need to rewrite or cast the expression."
Clearly the .0 at the end is not liked (but why then does Postgres accept the ALTER COLUMN? boh!).
I tried to do what the error suggested using CAST in the INSERT INTO query:
INSERT INTO tbl2 SELECT CAST(ts as timestamp without time zone) FROM tbl1
But I get the same error (including the suggestion to use CAST, aargh!).
The table directly created by PivotalR (based on the dataframe) has this CREATE script:
CREATE TABLE tbl2
(
businessid integer,
caseno text,
ts timestamp with time zone
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO mydb;
The table I'm inserting into has this CREATE script:
CREATE TABLE tbl1
(
id integer NOT NULL DEFAULT nextval('bus_seq'::regclass),
businessid character varying,
caseno character varying,
ts timestamp without time zone,
updated timestamp without time zone,
CONSTRAINT busid_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO postgres;
My apologies for the convoluted explanation, but potentially a solution could be found at any step in the chain, so I preferred to put all my steps in one question. I am sure there has to be a simpler method...
I think you're confused about copying data between tables.
INSERT INTO ... SELECT without a column list expects the columns of the source and destination to match. It doesn't magically match up columns by name; it just assigns columns from the SELECT to the INSERT from left to right until it runs out of columns, at which point any remaining columns are assumed to be null. So your query:
INSERT INTO tbl2 SELECT ts FROM tbl1;
isn't doing this:
INSERT INTO tbl2(ts) SELECT ts FROM tbl1;
it's actually picking the first column of tbl2, which is businessid, so it's really attempting to do:
INSERT INTO tbl2(businessid) SELECT ts FROM tbl1;
which is clearly nonsense, and no casting will fix that.
(Your error in the original question doesn't match your tables and queries, so the details might be different as you've clearly made a mistake in mangling/obfuscating your tables or posted a newer version of the tables than the error. The principle remains.)
It's generally a really bad idea to assume that your table definitions and column order will never change, so always be explicit about columns. In this case I think your intention was actually:
INSERT INTO tbl2(businessid, caseno, ts)
SELECT CAST(businessid AS integer), caseno, ts
FROM tbl1;
Note the cast, because the type of businessid is different between the two tables.
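To see the left-to-right assignment concretely, here is a minimal self-contained sketch (hypothetical table names, not the asker's schema):
-- one-column source, two-column destination
CREATE TABLE src (ts timestamp without time zone);
CREATE TABLE dst (id integer, ts timestamp without time zone);

INSERT INTO dst SELECT ts FROM src;       -- fails: ts is assigned to dst.id, the first column
INSERT INTO dst (ts) SELECT ts FROM src;  -- works: the explicit column list matches by name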
