SQL Server 2012: how to change the data type of a column from bit to datetime?

I have a table Person with a column called onvacation.
This column is of data type bit since it's a boolean in the code. It has values null, 0 and 1.
I would like to change the data type of this column from bit to datetime so that all values that are 1 are converted to a new date (could be the current date), and 0 and null values both become just null.
I tried following the w3schools tutorial and ran this query:
ALTER TABLE Person ALTER COLUMN onvacation datetime
But that gives an error: the object 'DF____Person__onvac__59062A42' is dependent on column 'onvacation'.

You get this error because the SQL object DF____Person__onvac__59062A42 depends on the onvacation column (the DF prefix indicates an auto-named default constraint).
You can find the dependencies of the Person table by right-clicking it and choosing View Dependencies.
Remove that dependent object, then try to alter the column.
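For example, a hedged sketch of the whole conversion (the constraint name is the one from your error message, and onvacation_new is just a hypothetical temporary column; run each batch separately):
-- Drop the auto-named default constraint that blocks the ALTER
ALTER TABLE Person DROP CONSTRAINT DF____Person__onvac__59062A42;
GO
-- Add a temporary datetime column (hypothetical name)
ALTER TABLE Person ADD onvacation_new datetime NULL;
GO
-- 1 becomes the current date/time; 0 and NULL both become NULL
UPDATE Person
SET onvacation_new = CASE WHEN onvacation = 1 THEN GETDATE() ELSE NULL END;
GO
-- Swap the old column for the new one
ALTER TABLE Person DROP COLUMN onvacation;
GO
EXEC sp_rename 'Person.onvacation_new', 'onvacation', 'COLUMN';
Going through a temporary column avoids relying on an implicit bit-to-datetime conversion and lets you control exactly how 1, 0 and NULL are mapped.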


Issue with replacing NULL sqlite3 database column values with other types in Python 3?

I've run into a problem with the sqlite3 module in Python 3, where I can't seem to figure out how to replace NULL values from the database with other ones, mainly strings and integers.
This command doesn't do the job, but also raises no exceptions:
UPDATE table SET animal='cat' WHERE animal=NULL AND id=32
The database table column "animal" is of type TEXT and gets filled with NULLs where no other value has been specified.
The column "id" is primary keyed and thus features only unique integer row indices.
If the column "animal" is defined, not NULL, the above command works flawlessly.
I can replace existing strings, integers, and floats with it.
What am I overlooking here?
Thanks.
The NULL value in SQL is special, and to compare values against it you need to use the IS and IS NOT operators. So your query should be this:
UPDATE table
SET animal = 'cat'
WHERE animal IS NULL AND id = 32;
NULL by definition means "unknown" in SQL, and so comparing a column directly against it with = also produces an unknown result.
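A quick way to see this (illustrative only; works in SQLite and most other engines):
-- NULL = NULL evaluates to unknown, so the first CASE falls through to ELSE;
-- IS NULL is the reliable test
SELECT CASE WHEN NULL = NULL  THEN 'matched' ELSE 'not matched' END AS equals_test,
       CASE WHEN NULL IS NULL THEN 'matched' ELSE 'not matched' END AS is_null_test;
-- returns: equals_test = 'not matched', is_null_test = 'matched'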

How can I get my table filled automatically when I give an id from another table in AX 2009

I have two tables. In the first table I filled in the values with a name and an id.
In the second table, if I give the id, the name needs to be filled in automatically. How can I do this? Please help.
You commented that the error is in the following line:
axsl.TransDate = DateTimeUtil::utcNow();
This is logical, because axsl.TransDate is a Date while DateTimeUtil::utcNow() returns a UtcDateTime, so when you compile you get the error Operand types are not compatible with the operator.
There are many ways to fix this error.
Try this:
axsl.TransDate = DateTimeUtil::date(DateTimeUtil::utcNow());
DateTimeUtil::date() converts a UtcDateTime into a Date.
Or you can use the today() method to return the current date.

Merging two tables and returning value through r script

I am attempting to add a dynamic column to a table in Spotfire that is updated using an R script/data function in order to handle different variable types. When you just insert columns, Spotfire does not allow you to change the column from a text value to a string value.
The basic code structure is: create a new table by merging the base table with the information table, select a column header to populate the new column from, and return the calculated column values to the base table. The parameters are as follows:
Input Parameters:
Name Type
columnMatch Value
baseTable Table
infoTable Table
Output Parameters (to be added to baseTable)
Name Type
outputColumn Column
Script
newTable <- merge(baseTable,infoTable, by = "uniqueIdentifier")
cnames <- colnames(newTable)
outputColumn <- newTable[,match(colorSelection, cnames, nomatch=1)]
outputColumn
The issue that I am having is as follows:
The code is not returning the correct value for the correct uniqueIdentifier. Is there a way that I can make the values line up, or sort the table in order to return the correct value for the correct uniqueIdentifier?
Thanks!
Jordan
EDIT: I found out how to dynamically refer to the column number using the match function.

Parametric recursive looped SQLite insert - do all columns have to be supplied?

I added a new column to my table, so there are now 4 instead of 3, and am now getting the following error when doing a parametric insert (looped):
table 'test' has 4 columns but 3 values were supplied
Does this mean that you have to code your query for EVERY column the table has (as opposed to just the columns you want populated) when doing inserts, and that SQLite won't just add a default value if a column is missing from the query?
My query is:
"INSERT OR IGNORE INTO test VALUES (NULL, #col2, #col3)"
And this is the code that controls what's inserted in the recursive loop:
sqlStatement.clearParameters();
var _currentRow:Object = _dataArray.shift();
sqlStatement.parameters["#col2"] = _currentRow.val2;
sqlStatement.parameters["#col3"] = _currentRow.val3;
sqlStatement.execute();
Ideally, I'd like column 4 to be left blank, without having to code it into the query.
Thanks for taking a look.
If you're inserting fewer values than there are columns, you need to explicitly specify the columns you are inserting into. For example:
INSERT INTO test(firstcolumn,secondcolumn) VALUES(1,2);
Those columns that are not specified will get the default value, or NULL if there is no default value.
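Applied to your statement, a sketch might look like this (the column names id, col2 and col3 are assumptions based on your parameter names; substitute your actual ones). The new fourth column is simply left out of the list, so it receives its default value or NULL:
-- only the listed columns are populated; the unlisted fourth column gets its default (or NULL)
INSERT OR IGNORE INTO test (id, col2, col3) VALUES (NULL, #col2, #col3)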

PostgreSQL, R and timestamps with no time zone

I am reading a big CSV (>1 GB, big for me!). It contains a timestamp field.
I read it (100 rows to start with) with fread from the excellent data.table package.
ddfr <- fread(input="~/file1.csv",nrows=100, header=T)
Problem 1 (RESOLVED): the timestamp fields (called "ts" and "update"), e.g. "02/12/2014 04:40:00 AM", are converted to strings. I convert the fields back to timestamps with the lubridate package's mdy_hms. Splendid.
ddfr$ts <- data.frame( mdy_hms(ddfr$ts))
Problem 2 (NOT RESOLVED): The timestamp is created with time zone as per POSIXlt.
How do I create a timestamp with NO TIME ZONE in R? Is it possible?
Now I use another (new) great package, PivotalR, to write the data frame to PostgreSQL 9.3 using as.db.data.frame. It works like a charm.
x <- as.db.data.frame(ddfr, table.name= "tbl1", conn.id = 1)
Problem 3 (NOT RESOLVED): As the original dataframe timestamp fields had time zones, a table is created with the fields "timestamp with time zone". Ultimately the data needs to be stored in a table with fields configured as "timestamp without time zone".
But in my table in Postgres the data is stored as "2014-02-12 04:40:00.0", where the .0 at the end is the UTC offset. I think I need to have "2014-02-12 04:40:00".
I tried
ALTER TABLE tbl ALTER COLUMN ts type timestamp without time zone;
Then I copied across. While Postgres accepts the ALTER COLUMN command, when I try to copy (using INSERT INTO tbls SELECT ...) I get an error:
"column "ts" is of type timestamp without time zone but expression is of type text
Hint: You will need to rewrite or cast the expression."
Clearly the .0 at the end is not liked (but why then does Postgres accept the ALTER COLUMN? boh!).
I tried to do what the error suggested using CAST in the INSERT INTO query:
INSERT INTO tbl2 SELECT CAST(ts as timestamp without time zone) FROM tbl1
But I get the same error (including the suggestion to use CAST, aargh!).
The table directly created by PivotalR (based on the dataframe) has this CREATE script:
CREATE TABLE tbl2
(
businessid integer,
caseno text,
ts timestamp with time zone
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO mydb;
The table I'm inserting into has this CREATE script:
CREATE TABLE tbl1
(
id integer NOT NULL DEFAULT nextval('bus_seq'::regclass),
businessid character varying,
caseno character varying,
ts timestamp without time zone,
updated timestamp without time zone,
CONSTRAINT busid_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE tbl1
OWNER TO postgres;
My apologies for the convoluted explanation, but potentially a solution could be found at any step in the chain, so I preferred to put all my steps in one question. I am sure there has to be a simpler method...
I think you're confused about copying data between tables.
INSERT INTO ... SELECT without a column list expects the columns from source and destination to be the same. It doesn't magically match up columns by name; it just assigns columns from the SELECT to the INSERT from left to right until it runs out of columns, at which point any remaining columns are assumed to be null. So your query:
INSERT INTO tbl2 SELECT ts FROM tbl1;
isn't doing this:
INSERT INTO tbl2(ts) SELECT ts FROM tbl1;
it's actually picking the first column of tbl2, which is businessid, so it's really attempting to do:
INSERT INTO tbl2(businessid) SELECT ts FROM tbl1;
which is clearly nonsense, and no casting will fix that.
(Your error in the original question doesn't match your tables and queries, so the details might be different as you've clearly made a mistake in mangling/obfuscating your tables or posted a newer version of the tables than the error. The principle remains.)
It's generally a really bad idea to assume your table definitions won't change and column order won't change anyway. So always be explicit about columns. In this case I think your intention might have actually been:
INSERT INTO tbl2(businessid, caseno, ts)
SELECT CAST(businessid AS integer), caseno, ts
FROM tbl1;
Note the cast, because the type of businessid is different between the two tables.
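If the copy you ultimately want goes the other way, into the timestamp without time zone table (tbl1 in the definitions you posted), the same idea applies. A hedged sketch based on those column definitions; a timestamptz value casts to timestamp using the session's TimeZone setting:
-- explicit column list again; businessid is integer in tbl2 but character varying in tbl1,
-- and the cast on ts drops the time zone
INSERT INTO tbl1 (businessid, caseno, ts)
SELECT CAST(businessid AS character varying),
       caseno,
       CAST(ts AS timestamp without time zone)
FROM tbl2;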
