I'm using a SQLite3 database, and I have a table called DATA with a QUOTE_DATETIME column.
The database is quite big and running queries is very slow. I'm trying to speed up the process by indexing some of the columns. One of the columns that I want to index is the QUOTE_DATETIME column.
Problem: I want to index by date (YYYY-MM-DD) only, not by date and time (YYYY-MM-DD HH:MM:SS), which is the data I currently have in QUOTE_DATETIME.
Question: How can I use CREATE INDEX to create an index that uses only dates in the format YYYY-MM-DD? Should I split QUOTE_DATETIME into 2 columns: QUOTE_DATE and QUOTE_TIME? If so, how can I do that? Is there an easier solution?
Thanks for helping! :D
Attempt 1: I tried running CREATE INDEX id ON DATA (date(QUOTE_DATETIME)) but I got the error Error: non-deterministic functions prohibited in index expressions.
Attempt 2: I ran ALTER TABLE data ADD COLUMN QUOTE_DATE TEXT to create a new column to hold the date only, and then INSERT INTO data(QUOTE_DATE) SELECT date(QUOTE_DATETIME) FROM data. The date(QUOTE_DATETIME) should convert the date + time to date only, and the INSERT INTO should add the new values to QUOTE_DATE. However, it doesn't work and I don't know why: the new column ends up with nothing added to it.
Expression indexes must not use functions that might change their return value based on data not mentioned in the function call itself. The date() function is such a function because it might use the current time zone setting.
However, in SQLite 3.20 or later, you can use date() in indexes as long as you are not using any time zone modifiers.
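So on 3.20 or later, the expression index from attempt 1 should work exactly as written; a minimal sketch (the index name is arbitrary):
CREATE INDEX idx_quote_date ON DATA (date(QUOTE_DATETIME));
-- queries must repeat the same expression for the index to be usable:
SELECT * FROM DATA WHERE date(QUOTE_DATETIME) = '2014-02-12';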
INSERT adds new rows. To modify existing rows, use UPDATE:
UPDATE Data SET Quote_Date = date(Quote_DateTime);
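On older versions, the separate-column approach from attempt 2 then works end to end once the INSERT is replaced by the UPDATE above; the full sequence is:
ALTER TABLE DATA ADD COLUMN QUOTE_DATE TEXT;
UPDATE DATA SET QUOTE_DATE = date(QUOTE_DATETIME);
CREATE INDEX idx_quote_date ON DATA (QUOTE_DATE);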
What is wrong with my code:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate>= 2016-12-14');
This runs, but deletes ALL the rows in STLac for RegN, not just the rows with BegDate on or after 2016-12-14.
Originally I had:
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate >= :myDate', [myDate]);
which has the advantage, I hoped, of not being tied to a particular date format. So I also tried a literal date in the format SQLite likes. Either way I get all rows deleted, not just those on or after the specified date.
Scott S.
Try putting the date in double quotes: any value must be provided in quotes unless the column is an int.
ExecSql('DELETE FROM STLac WHERE RegN=99 AND BegDate>= "2016-12-14"');
SQLite does not have a datetime type as such, so you have to figure out how the date is actually represented in the table and change your query to provide the same format. First execute the SELECT statement in some kind of management tool,
select * from STLac where RegN = 99 and BegDate >= '2016-12-14' --(or '2016.12.04' or something else)
which displays the result in the grid; when you see the expected rows, change it to a DELETE query and copy it into your Delphi program.
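Assuming BegDate is stored as ISO-8601 text (YYYY-MM-DD), the DELETE then works with a single-quoted literal, since ISO dates compare correctly as strings:
DELETE FROM STLac WHERE RegN = 99 AND BegDate >= '2016-12-14';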
I need to use cast function with length of column in teradata.
Say I have a table with the following data:
id | name
---+--------
1  | dhawal
2  | bhaskar
I need to use cast operation something like
select cast(name as CHAR(<length of column>)) from table
How can I do that?
Thanks,
Dhawal
You have to find the length by looking at the table definition - either manually (show table) or by writing dynamic SQL that queries dbc.ColumnsV.
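For the dynamic-SQL route, a sketch of the dictionary query ('mydb', 'mytable' and 'name' are placeholders):
SELECT ColumnLength
FROM dbc.ColumnsV
WHERE DatabaseName = 'mydb'
  AND TableName = 'mytable'
  AND ColumnName = 'name';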
Update:
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport I think casting as varchar(large-enough-value) and postprocessing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() results in a fixed-length output file with lots of spaces in it).
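For example, with a hypothetical generous length ('mytable' is a placeholder):
select cast(name as varchar(1000)) from mytable;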
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.
I am reading a big CSV (>1 GB is big for me!). It contains a timestamp field.
I read it (100 rows to start with) with fread from the excellent data.table package.
ddfr <- fread(input="~/file1.csv",nrows=100, header=T)
Problem 1 (RESOLVED): the timestamp fields (called "ts" and "update"), e.g. "02/12/2014 04:40:00 AM", are converted to strings. I convert the fields back to timestamps with the lubridate function mdy_hms. Splendid.
ddfr$ts <- mdy_hms(ddfr$ts)
Problem 2 (NOT RESOLVED): The timestamp is created with time zone as per POSIXlt.
How do I create in R a timestamp with NO TIME ZONE? is it possible??
Now I use another (new) great package, PivotalR, to write the data frame to PostgreSQL 9.3 using as.db.data.frame. It works like a charm.
x <- as.db.data.frame(ddfr, table.name= "tbl1", conn.id = 1)
Problem 3 (NOT RESOLVED): As the original dataframe timestamp fields had time zones, a table is created with the fields "timestamp with time zone". Ultimately the data needs to be stored in a table with fields configured as "timestamp without time zone".
But in my table in Postgres the data is stored as "2014-02-12 04:40:00.0", where the .0 at the end looks like a fractional-seconds part. I think I need to have "2014-02-12 04:40:00".
I tried
ALTER TABLE tbl ALTER COLUMN ts type timestamp without time zone;
Then I copied across. While Postgres accepts the ALTER COLUMN command, when I try to copy (using INSERT INTO tbls SELECT ...) I get an error:
"column "ts" is of type timestamp without time zone but expression is of type text
Hint: You will need to rewrite or cast the expression."
Clearly the .0 at the end is not liked (but then why does Postgres accept the ALTER COLUMN? Who knows!).
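For reference, when the column really is text, Postgres can also convert it in place rather than copying into a second table; a minimal sketch with a USING clause, assuming the text values parse as timestamps:
ALTER TABLE tbl ALTER COLUMN ts TYPE timestamp without time zone
USING ts::timestamp without time zone;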
I tried to do what the error suggested using CAST in the INSERT INTO query:
INSERT INTO tbl2 SELECT CAST(ts as timestamp without time zone) FROM tbl1
But I get the same error (including the suggestion to use CAST aargh!)
The table directly created by PivotalR (based on the dataframe) has this CREATE script:
CREATE TABLE tbl2
(
businessid integer,
caseno text,
ts timestamp with time zone
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO mydb;
The table I'm inserting into has this CREATE script:
CREATE TABLE tbl1
(
id integer NOT NULL DEFAULT nextval('bus_seq'::regclass),
businessid character varying,
caseno character varying,
ts timestamp without time zone,
updated timestamp without time zone,
CONSTRAINT busid_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO postgres;
My apologies for the convoluted explanation, but potentially a solution could be found at any step in the chain, so I preferred to put all my steps in one question. I am sure there has to be a simpler method...
I think you're confused about copying data between tables.
INSERT INTO ... SELECT without a column list expects the columns from source and destination to be the same. It doesn't magically match up columns by name, it'll just assign columns from the SELECT to the INSERT from left to right until it runs out of columns, at which point any remaining cols are assumed to be null. So your query:
INSERT INTO tbl2 SELECT ts FROM tbl1;
isn't doing this:
INSERT INTO tbl2(ts) SELECT ts FROM tbl1;
it's actually picking the first column of tbl2, which is businessid, so it's really attempting to do:
INSERT INTO tbl2(businessid) SELECT ts FROM tbl1;
which is clearly nonsense, and no casting will fix that.
(Your error in the original question doesn't match your tables and queries, so the details might be different as you've clearly made a mistake in mangling/obfuscating your tables or posted a newer version of the tables than the error. The principle remains.)
It's generally a really bad idea to assume your table definitions and column order won't change anyway. So always be explicit about columns. In this case I think your intention might actually have been:
INSERT INTO tbl2(businessid, caseno, ts)
SELECT CAST(businessid AS integer), caseno, ts
FROM tbl1;
Note the cast, because the type of businessid is different between the two tables.
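Before copying, it can also help to double-check both tables' column order and types; a quick sketch against the standard catalog:
SELECT table_name, ordinal_position, column_name, data_type
FROM information_schema.columns
WHERE table_name IN ('tbl1', 'tbl2')
ORDER BY table_name, ordinal_position;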
So, I have created a variable "batch" with datatype datetime. My OLE DB source has a column "addDate", e.g. 2012-05-18 11:11:17.470, and so does the empty destination which is to be populated.
Now this column addDate has many dates, and I want to copy all rows whose addDate is "2012-05-18 11:11:17.470".
When I set the value of the variable to this date, it automatically changes to mm/dd/yyyy hh:mm AM format, so my conditional split transformation can't match the date against the variable, and no records get copied to the destination!
Where exactly is the problem?
Thanks!
I had this issue and the best solution I found is not “pretty”.
Basically you need to change the "Expression" of the variable and set "EvaluateAsExpression" to true (otherwise it will ignore the value from the expression).
The secret is (and kind of the reason I said it is not a pretty solution) to create a second variable to evaluate the expression of the first variable, because you can't change the value of a variable based on an expression.
So let's say your variable is called "DateVariable" and its value is 23/05/2012. Create a second variable called "DateVar2", for example, and set its expression to
(DT_WSTR,4)YEAR(@[User::DateVariable]) + "/" + RIGHT("0" +
(DT_WSTR,2)MONTH(@[User::DateVariable]), 2) + "/" + RIGHT("0" +
(DT_WSTR,2)DAY(@[User::DateVariable]), 2)
That will give you 2012/05/23
Just keep going to get the date in the format you want.
I found an easier solution: select the datatype as string and put in any desired value.
Before the conditional split, you need a data conversion transformation.
Convert it into DT_DBTIMESTAMP, then run the package.
It works!
I'm trying to do a query like this on a table with a DATETIME column.
SELECT * FROM table WHERE the_date =
2011-03-06T15:53:34.890-05:00
I have the following as an string input from an external source:
2011-03-06T15:53:34.890-05:00
I need to perform a query on my database table and extract the row which contains this same date. In my database it gets stored as a DATETIME and looks like the following:
2011-03-06 15:53:34.89
I can probably manipulate the outside input slightly ( like strip off the -5:00 ). But I can't figure out how to do a simple select with the datetime column.
I found the convert function, and style 123 seems to match my needs, but I can't get it to work. Here is the reference link for style 123:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.blocks/html/blocks/blocks125.htm
I think convert is documented slightly wrongly in that version of the docs.
Because this format always has the century, I think you only need to use 23. Normally the 100 range for convert adds the century to the year format.
What's more, that format only goes down to seconds.
If you want more you'll need to paste together two converts. That is, paste a ymd part onto a convert(varchar, datetime-column, 14) and compare with your trimmed string. A milliseconds comparison is likely to be a problem depending on where you got your big time string, though: the Sybase binary stored form has a granularity of 1/300 of a second I think, so if your source string is from somewhere else it's not likely to compare equal. In other words: strip the milliseconds and compare as strings.
So maybe:
SELECT * FROM table WHERE convert(varchar,the_date,23) =
'2011-03-06T15:53:34'
But the convert on the column would prevent the use of an index, if that's a problem.
If you compare as datetimes then the convert is on the rhs, but you have to know what the milliseconds in the_date are. Then an index can be used.
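A sketch of that rhs form (assuming the exact stored milliseconds are known):
-- the_date stays bare on the left, so an index on it can be used
SELECT * FROM table WHERE the_date = convert(datetime, '2011-03-06 15:53:34.890')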