Is there a conversion to go from an integer (e.g. 54600) to a date-time format? I am seeing these integers in a healthcare flowsheet, where they are used to record a time.
CONVERT(varchar, DATEADD(ms, b22.meas_value * 1000, 0), 114) AS 'START TIME' is the code that worked in MS SQL Server, but Snowflake does not recognize the function.
You can try to use TO_DATE. The data type of the returned value is DATE.
Syntax
TO_DATE( '<integer>' )
where
<integer>
An expression that evaluates to a string containing an integer, for example ‘15000000’. Depending upon the magnitude of the string, it can be interpreted as seconds, milliseconds, microseconds, or nanoseconds.
For details and other usage, see the Snowflake documentation for TO_DATE.
There are a few different ways:
select 1637804567::varchar::date as date;
+------------+
| DATE       |
|------------|
| 2021-11-25 |
+------------+
select to_date(1637804567::varchar) as date;
+------------+
| DATE       |
|------------|
| 2021-11-25 |
+------------+
TO_DATE and casting both require the value to be a string, so you need to convert the integer to a string before converting it to a date.
For your example, it would look like this in Snowflake:
select dateadd(ms, 54600*1000, to_date(1637804567::varchar)) as date;
+-------------------------------+
| DATE                          |
|-------------------------------|
| 2021-11-25 15:10:00.000000000 |
+-------------------------------+
TRY_TO_TIME() (there's a joke in there somewhere)
select try_to_time('54600') AS "START TIME"
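If the integer is seconds past midnight (54600 → 15:10:00), a minimal sketch applying this to the column from the question might look like the following (the flowsheet table name is a hypothetical stand-in; b22 and meas_value come from the original query):
select to_char(try_to_time(b22.meas_value::varchar), 'HH24:MI:SS') as "START TIME"
from flowsheet b22;  -- hypothetical table; TRY_TO_TIME returns NULL instead of failing on bad values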
I have a where condition that I want to run over a set of tables in my Azure Data Explorer DB. I found the "find in ()" operator in Kusto quite useful; it works fine when I pass the list of tables explicitly, as intended.
find withsource=DataType in (AppServiceFileAuditLogs,AzureDiagnostics)
where TimeGenerated > ago(31d)
project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId, DataType | sort by BillableDataBytes nulls last
However, in my scenario, I would like to decide the list of tables at run time using another query.
Usage
| where TimeGenerated > ago(32d)
| where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
| sort by BillableDataGB desc
| project DataType
find withsource=DataType in (<pass resulting table expression from above query here as comma separated list of tables>)
where TimeGenerated > ago(31d)
project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId, DataType | sort by BillableDataBytes nulls last
I found some examples of passing all tables in a database or cluster using wildcards, but that does not fit my scenario. Can somebody help me here?
Here is one way to achieve this:
let Tables = toscalar(Usage
    | where TimeGenerated > ago(32d)
    | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
    | where IsBillable == true
    | summarize make_set(DataType));
union withsource=T *
| where T in (Tables)
| count
Note that the toscalar expression is significant: it precalculates the list of tables once (make_set collects them into a single dynamic array), which lets the engine optimize the filter on the union expression. I also updated your query to avoid unnecessary work.
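Putting the precalculated list together with your original aggregation, a sketch (column names _ResourceId, _BilledSize, and _IsBillable are taken from your query) could look like:
let Tables = toscalar(Usage
    | where TimeGenerated > ago(32d)
    | where StartTime >= startofday(ago(31d)) and EndTime < startofday(now())
    | where IsBillable == true
    | summarize make_set(DataType));
union withsource=DataType *                  // DataType holds each row's source table name
| where DataType in (Tables)
| where TimeGenerated > ago(31d) and _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId, DataType
| sort by BillableDataBytes nulls last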
I have a table with emojis like this:
MariaDB> SELECT HEX(value), value FROM `emojis`;
+----------------------------+-------+
| HEX(value)                 | value |
+----------------------------+-------+
| F09F9AA9                   | 🚩    |
| F09F8FB4                   | 🏴    |
| E29C94EFB88F               | ✔️    |
| F09F928B                   | 💋    |
| F09F8FB4                   | 🏴    |
| F09FA79BE2808DE29980EFB88F | 🧛‍♀️   |
+----------------------------+-------+
But when I add a GROUP BY, some values are not returned:
MariaDB> SELECT value, HEX(value) FROM `emojis` GROUP BY value;
+-------+----------------------------+
| value | HEX(value)                 |
+-------+----------------------------+
| ✔️    | E29C94EFB88F               |
| 🚩    | F09F9AA9                   |
| 🧛‍♀️   | F09FA79BE2808DE29980EFB88F |
+-------+----------------------------+
The black flag and the kiss are missing. It looks like MariaDB cannot do a GROUP BY with utf8mb4 characters.
My column type is: varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL.
I tried with MariaDB 10.1 and 10.5 and got the same result.
Is this a bug, or am I missing something?
HEX is not an aggregate function, so you cannot use it as one with GROUP BY. If you wish to get the distinct emojis and their respective hex values, use DISTINCT.
You should (see Character Collating Weights) store the emojis as utf8mb4_bin instead of utf8mb4_unicode_ci. The reason is how each character's collating weight is determined; see how comparison of characters is handled by the WEIGHT_STRING function (a small demonstration follows the query below).
SELECT DISTINCT value, HEX(value)
FROM emojis;
See db-fiddle
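To see those collating weights directly, a quick sketch (the exact weight bytes can vary by server version):
SELECT HEX(WEIGHT_STRING(_utf8mb4'🚩' COLLATE utf8mb4_unicode_ci)) AS flag_weight,
       HEX(WEIGHT_STRING(_utf8mb4'💋' COLLATE utf8mb4_unicode_ci)) AS kiss_weight;
-- Both yield FFFD: under UCA 4.0.0 all supplementary characters share a single
-- weight, so these emojis compare (and therefore group) as equal.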
A query like this should not be used:
SELECT value
FROM emojis
GROUP BY value;
GROUP BY should be used only when you mix regular columns with aggregate functions. As said, use DISTINCT instead.
Thanks to your answers, I understand what difference it makes to use utf8mb4_bin or utf8mb4_unicode_ci with DISTINCT, GROUP BY, and other string comparisons.
In my case I could not change the collation of my column, but I learned that a collation can be specified per query, like this:
SELECT id, value
FROM `emojis`
GROUP BY value COLLATE utf8mb4_bin;
Warning: this does not work if sql_mode includes ONLY_FULL_GROUP_BY.
The collation of emoji changed at some point in the history of Unicode. The older collation utf8mb4_unicode_ci (based on UCA 4.0.0) treats them as equal; the newer utf8mb4_unicode_520_ci (UCA 5.2.0) treats them as distinct.
GROUP BY, ORDER BY, and < all depend on the collation.
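The difference is easy to verify directly; a small sketch:
SELECT _utf8mb4'🚩' = _utf8mb4'🏴' COLLATE utf8mb4_unicode_ci     AS eq_uca_400,
       _utf8mb4'🚩' = _utf8mb4'🏴' COLLATE utf8mb4_unicode_520_ci AS eq_uca_520;
-- Returns 1, 0: equal under the older collation, distinct under the newer one.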
I'm working on an app where I use SQLite to store data.
I created a column Date. Since I'm a beginner, I made the mistake of storing dates as %m/%d/%Y (for example: 2/20/2020).
Now I have a problem selecting rows between two dates.
I tried using this code:
SELECT * FROM Table WHERE Date BETWEEN strftime('%m/%d/%Y','2/5/2019') AND strftime('%m/%d/%Y','2/20/2020')
But that's not working.
Example table:
ID | Date
01 | 9/2/2019
02 | 2/20/2020
Thank you in advance for your help.
Update your dates to the only date format that SQLite's date and time functions recognize, which is YYYY-MM-DD:
update tablename
set date = substr(date, -4) || '-' ||                                       -- year: last 4 characters
           substr('00' || (date + 0), -2, 2) || '-' ||                      -- month: leading digits, zero-padded
           substr('00' || (substr(date, instr(date, '/') + 1) + 0), -2, 2); -- day: digits after the first '/', zero-padded
See the demo.
Results:
| ID  | Date       |
| --- | ---------- |
| 1   | 2019-09-02 |
| 2   | 2020-02-20 |
Now you can set the conditions like:
Date BETWEEN '2019-02-05' AND '2020-02-20'
If you make this change, you can then use the strftime() function in SELECT statements to return the dates in any format you want:
SELECT strftime('%m/%d/%Y', date) date FROM Table
If you don't change the format of the date column, then every time you need to compare dates you will have to transform the value with the expression used in the UPDATE statement, and that is the worst choice you could make.
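For comparison, this is what every date filter would look like without the one-time UPDATE (a sketch reusing the same conversion expression; tablename as above):
SELECT *
FROM tablename
WHERE substr(Date, -4) || '-' ||
      substr('00' || (Date + 0), -2, 2) || '-' ||
      substr('00' || (substr(Date, instr(Date, '/') + 1) + 0), -2, 2)
      BETWEEN '2019-02-05' AND '2020-02-20';  -- the conversion is repeated on every query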
I have 2 columns: time_stamp and time_offset. Both are of STRING data type.
How can we convert the time_stamp column's values to UTC with the help of the second column, which holds the offset from UTC? Is there any Hive or Unix solution for this conversion?
hive> select time_stamp from table1 limit 2;
OK
20170717-22:31:57.348
20170719-21:10:15.393
[yyyymmdd-hh:mm:ss.msc] this column is in local time
hive> select time_offset from table1 limit 2;
OK
-05:00
+05:00
['+hh:mm' or '-hh:mm'] this column is the offset from UTC
You can use the Hive Date Functions unix_timestamp and from_unixtime to perform the conversion.
Code
WITH table1 AS (
  SELECT '20170717-22:31:57.348' AS time_stamp, '-05:00' AS time_offset UNION ALL
  SELECT '20170719-21:10:15.393' AS time_stamp, '+05:00' AS time_offset
)
SELECT
  time_stamp,
  time_offset,
  unix_timestamp(concat(time_stamp, ' ', time_offset), 'yyyyMMdd-HH:mm:ss.SSS X') AS unix_timestamp_with_offset,
  from_unixtime(unix_timestamp(concat(time_stamp, ' ', time_offset), 'yyyyMMdd-HH:mm:ss.SSS X'), 'yyyyMMdd-HH:mm:ss.SSS') AS string_timestamp_with_offset
FROM table1;
Result Set
+------------------------+--------------+-----------------------------+-------------------------------+
| time_stamp             | time_offset  | unix_timestamp_with_offset  | string_timestamp_with_offset  |
+------------------------+--------------+-----------------------------+-------------------------------+
| 20170717-22:31:57.348  | -05:00       | 1500348717                  | 20170717-20:31:57.000         |
| 20170719-21:10:15.393  | +05:00       | 1500480615                  | 20170719-09:10:15.000         |
+------------------------+--------------+-----------------------------+-------------------------------+
Explanation
unix_timestamp can accept an optional format string using the same syntax as Java's SimpleDateFormat. I am guessing that your offsets use the ISO 8601 syntax, so let's use the X format specifier. We can then use the concat string function to combine time_stamp and time_offset before passing the result to unix_timestamp.
The unix_timestamp function returns a numeric timestamp, expressed as seconds since the epoch. To convert that back to a string representation, we pass the result through from_unixtime, this time supplying our original format pattern. Note that from_unixtime renders the string in the cluster's session time zone, which is why the values in the result set above are not themselves in UTC.
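If you need the string rendered specifically in UTC regardless of the session time zone, one option is to_utc_timestamp (a sketch; 'America/Los_Angeles' is a hypothetical session zone that you would replace with your cluster's actual one):
SELECT to_utc_timestamp(
         from_unixtime(unix_timestamp('20170717-22:31:57.348 -05:00',
                                      'yyyyMMdd-HH:mm:ss.SSS X')),
         'America/Los_Angeles'   -- must match the zone in which from_unixtime rendered the string
       ) AS utc_ts;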
(Please do test thoroughly to make sure the results are making sense in your environment. Time zone math can be tricky.)
I have a table with timestamps in milliseconds stored in it. I want to convert those timestamps into a human-readable form.
Here is a sample output of my table:
SELECT date raw, strftime('%d-%m-%Y', (date/1000)) as_string
FROM my_table
+-----------------+--------------+
| raw             | as_string    |
+-----------------+--------------+
| 1444687200000   | 06-47-3950   |
+-----------------+--------------+
| ...             | ...          |
+-----------------+--------------+
As you can see, the date as string is quite strange (06-47-3950).
How can I obtain 12-10-2015?
Try this:
SELECT date raw, strftime('%d-%m-%Y', datetime(date/1000, 'unixepoch')) as_string
FROM my_table
You need to convert the timestamp to a date first.
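If the time of day matters too, strftime() accepts the 'unixepoch' modifier directly, and %f keeps the fractional seconds (a sketch against the same my_table; dividing by 1000.0 rather than 1000 preserves the milliseconds):
SELECT date AS raw,
       strftime('%d-%m-%Y %H:%M:%f', date / 1000.0, 'unixepoch') AS as_string
FROM my_table;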
You need to convert the timestamp to a datetime first. There was an answer on one forum; I quote it here.
Here you are: try those queries to see why and how.
select julianday('1899-12-30 00:00:00');
-- that gives 2415018.5 (remember Julian dates start at noon)
select datetime('40660.9454658044', '+2415018 days', '+12 hours', 'localtime');
-- gets you 2011-04-28 00:41:28 (depending on your local time)