I have a table with this:
MyField VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC DEFAULT ''
I want to drop the default empty-string value so that the column defaults to NULL. I've discovered that I can set the default value to NULL, but then the DDL actually says DEFAULT NULL. This interferes with the scripts I use to compare DDL diffs between multiple database environments, so I want it to look simply like this:
MyField VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC
Edit: I want this to behave the same as if I were changing a column from NOT NULL to nullable.
Non-nullable column:
n INTEGER NOT NULL
Nullable column:
n INTEGER
Notice how the latter doesn't say:
n INTEGER NULL
You need to drop and re-create the column, as below:
alter table DBC.TEST
drop MY_FIELD;

alter table DBC.TEST
ADD MY_FIELD VARCHAR(50);
I have tested it and it's working.
Related
I have this simple table:
create table Customers
(
Id bigint not null primary key auto_increment,
Name varchar(100) not null,
IsVip boolean null
)
Now I want to set a default value for the IsVip column. I tried:
alter table Customers
modify IsVip set default 0
But it doesn't work. How should I do it?
According to the ALTER TABLE documentation, you use either the syntax
| ALTER [COLUMN] col_name SET DEFAULT literal | (expression)
or the syntax
| MODIFY [COLUMN] [IF EXISTS] col_name column_definition
In your case that would be:
alter table Customers ALTER COLUMN IsVip set default 0
I have created a table as below:
CREATE TABLE case_status(data_entry_timestamp DATETIME DEFAULT (datetime('now','localtime')) NOT NULL,
case_number TEXT PRIMARY KEY NOT NULL,
case_name TEXT DEFAULT MISSING,
death_reportdate DATE CONSTRAINT death_reportdate_chk CHECK (death_reportdate==strftime('%Y-%m-%d',death_reportdate)),
);
The column death_reportdate needs to hold a date in a pre-defined format (e.g. 2000-12-31). I created the table, inserted some rows of data, and then tried to modify the data in death_reportdate; the check rule seems to be bypassed when I enter some random string.
What have I done wrong?
You had an extra comma at the end. Correct code:
CREATE TABLE case_status(data_entry_timestamp DATETIME DEFAULT (datetime('now','localtime')) NOT NULL,
case_number TEXT PRIMARY KEY NOT NULL,
case_name TEXT DEFAULT MISSING,
death_reportdate DATE CONSTRAINT death_reportdate_chk CHECK (death_reportdate==strftime('%Y-%m-%d',death_reportdate))
)
This is an old topic, but I had the same problem. If the strftime function fails to format the string (a bad input), it returns NULL, so you have to add an IS NOT NULL check at the end.
Here is another solution which works like a charm:
`date` DATE CHECK(date IS strftime('%Y-%m-%d', date))
This also works with the time:
`time` TIME CHECK(time IS strftime('%H:%M:%S', time))
Use this to define your column. I think it is a more elegant solution than checking for a NULL value.
First, two small notes.
I'm using the TEXT type since SQLite does not have "real types." It has five column "affinities": INTEGER, TEXT, BLOB, REAL, and NUMERIC. If you say DATE, the column gets NUMERIC affinity, which can behave a little weirdly in my opinion. I find it best to explicitly use one of the five affinities.
I'm using date(...) instead of strftime('%Y-%m-%d', ...) because they are the same thing.
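The NUMERIC-affinity quirk is easy to see from Python with the stdlib sqlite3 module. This is a minimal sketch (the table name is made up for illustration): a date-shaped string is stored as text, but a number-shaped string is silently converted to an integer.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d DATE)")  # DATE is not a real type; it gets NUMERIC affinity

# A value that doesn't look like a number stays text...
con.execute("INSERT INTO t VALUES ('2000-12-31')")
# ...but a value that looks like a number is silently converted to an integer.
con.execute("INSERT INTO t VALUES ('123')")

print([row[0] for row in con.execute("SELECT typeof(d) FROM t")])
# → ['text', 'integer']
```

So two strings inserted the same way end up with different storage types, which is the weird behavior mentioned above.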
Let's break down why the original question did not work.
DROP TABLE IF EXISTS TEMP.example;
CREATE TEMPORARY TABLE example (
deathdate TEXT CHECK (deathdate == date(deathdate))
);
INSERT INTO TEMP.example (deathdate) VALUES ('2020-01-01');
INSERT INTO TEMP.example (deathdate) VALUES ('a');
INSERT INTO TEMP.example (deathdate) VALUES (NULL);
SELECT * FROM TEMP.example;
Running this lets all three values get into the database. Why? Let's check the documentation for CHECK constraints.
If the result is zero (integer value 0 or real value 0.0), then a constraint violation has occurred. If the CHECK expression evaluates to NULL, or any other non-zero value, it is not a constraint violation.
If you run SELECT 'a' == date('a'); you'll see it is NULL. Why? Check SELECT date('a'); and you'll see it is also NULL. Huh, maybe the documentation for == can help?
Note that there are two variations of the equals and not equals operators. Equals can be either = or ==. [...]
The IS and IS NOT operators work like = and != except when one or both of the operands are NULL. In this case, if both operands are NULL, then the IS operator evaluates to 1 (true) and the IS NOT operator evaluates to 0 (false). If one operand is NULL and the other is not, then the IS operator evaluates to 0 (false) and the IS NOT operator is 1 (true). It is not possible for an IS or IS NOT expression to evaluate to NULL.
We need to use IS, not ==, and trying that we see that 'a' no longer gets in.
DROP TABLE IF EXISTS TEMP.example;
CREATE TEMPORARY TABLE example (
deathdate TEXT CHECK (deathdate IS date(deathdate))
);
INSERT INTO TEMP.example (deathdate) VALUES ('2020-01-01');
INSERT INTO TEMP.example (deathdate) VALUES ('a');
INSERT INTO TEMP.example (deathdate) VALUES (NULL);
SELECT * FROM TEMP.example;
If you don't want NULL to get in, simply change it to deathdate TEXT NOT NULL CHECK (deathdate IS date(deathdate))
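The whole argument can be reproduced from Python with the stdlib sqlite3 module. A minimal sketch, assuming an in-memory database and made-up table names: the == version lets a bad value through (the CHECK evaluates to NULL, which is not a violation), while the IS version raises an error.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# With ==, 'a' == date('a') evaluates to NULL, and NULL is not a
# constraint violation, so the bad value gets in.
con.execute("CREATE TABLE eq (d TEXT CHECK (d == date(d)))")
con.execute("INSERT INTO eq VALUES ('a')")  # succeeds, no error

# With IS, 'a' IS NULL evaluates to 0, which IS a violation.
con.execute("CREATE TABLE isx (d TEXT CHECK (d IS date(d)))")
con.execute("INSERT INTO isx VALUES ('2020-01-01')")  # valid date, accepted
try:
    con.execute("INSERT INTO isx VALUES ('a')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```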
In Lasso 8 with MySQL connector, the field() method seemed to always return a string type, regardless of what data was in the column, or the column's data type. The exception might've been BLOB columns, which might've returned a bytes type. (I don't recall at the moment.)
In Lasso 9 I see that the field() method returns an integer type for integer columns. This is causing some issue with conditionals where I tested for '1' instead of 1.
Is Lasso really using the MySQL data type, or is Lasso just interpreting the results?
Is there any documentation as to what column types are cast to what Lasso data types?
Lasso is using the information MySQL gives it about the column type to return the data as a corresponding Lasso type. Not sure of all the mechanics underneath. Lasso 8 may have done the same thing for integers, but Lasso 8 also allowed you to compare integers and strings with integer values. (In fact, Lasso 8 even allowed for array->get('1') - that's right, a string for the index!).
I don't know of any documentation about what fields are what. Anecdotally, I can tell you that while MySQL decimal and float fields are treated as Lasso decimals, MySQL doubles are not. (I also don't believe MySQL date(time) fields come over as Lasso dates, though that would be awesome.)
It's easy to find out what column types Lasso reports.
With a table looking like this:
CREATE TABLE `addressb` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` enum('commercial','residential') COLLATE utf8_swedish_ci DEFAULT NULL,
`street` varchar(50) COLLATE utf8_swedish_ci NOT NULL DEFAULT NULL,
`city` varchar(25) COLLATE utf8_swedish_ci NOT NULL DEFAULT NULL,
`state` char(2) COLLATE utf8_swedish_ci NOT NULL DEFAULT NULL,
`zip` char(10) COLLATE utf8_swedish_ci NOT NULL DEFAULT NULL,
`image` blob,
`created` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci;
You can get the reported column types like this:
inline(-database = 'testdb', -sql = "SELECT * FROM address") => {^
with col in column_names do {^
column(#col) -> type
'<br />'
^}
^}
Result:
id integer
type string
street string
city string
state string
zip null
image bytes
created string
As you can see, everything is reported as a string except the integer and blob fields. Plus the zip field, which in this record happens to contain NULL and is reported as such.
When doing comparisons, it is always a good idea to make sure you're comparing apples with apples; that is, make sure you're comparing values of the same type. Also, to check whether there's content, I always go for size:
string(column('zip')) -> size > 0 ?
In addition, NULLs are returned as NULL in 9, whereas in 8 and earlier they were returned as empty strings. Beware of comparisons like:
field('icanhaznull') == ''
If the field contains a NULL value, the above evaluates as TRUE in 8, and FALSE in 9.
That may mean altering your schema for columns toggling between NOT NULL and not. Or you may prefer to cast the field to string:
string(field('icanhaznull')) == ''
Testing a field or var against the empty string
I have one auto-increment field; the rest are integer, text, and datetime fields. How do I fix it?
The Table Structure is given below:
CREATE TABLE "q1" (
"sb_id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"sb_title" text(100,0) NOT NULL,
"sb_details" text(300,0) NOT NULL,
"sb_image" text(30,0) NOT NULL,
"sb_type" integer(4,0) NOT NULL DEFAULT '1',
"sb_date" datetime NOT NULL
)
It could be because in your insert command
connection.execute("INSERT INTO q1(sb_title,sb_details)VALUES(?,?)",a,b);
you didn't insert any values for sb_image or sb_date, both of which are NOT NULL and have no default defined. SQLite doesn't know what to put in there. You should either take away the NOT NULL constraint on those columns, define a default for the columns, or insert something explicitly.
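This is easy to reproduce from Python with the stdlib sqlite3 module. A sketch using the table from the question (the inserted values are made up): the insert fails precisely because sb_image has no value, no default, and a NOT NULL constraint.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE q1 (
    sb_id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    sb_title TEXT NOT NULL,
    sb_details TEXT NOT NULL,
    sb_image TEXT NOT NULL,
    sb_type INTEGER NOT NULL DEFAULT '1',
    sb_date DATETIME NOT NULL)""")

try:
    # sb_image and sb_date are NOT NULL with no default, so this fails.
    con.execute("INSERT INTO q1 (sb_title, sb_details) VALUES (?, ?)",
                ("a title", "some details"))
except sqlite3.IntegrityError as e:
    err = str(e)
    print(err)  # NOT NULL constraint failed: q1.sb_image
```

Note that sb_type is fine to omit because it has a default, and sb_id is fine because AUTOINCREMENT fills it in.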
Using SQLite3, if you create a table like this:
CREATE TABLE MyTable (
id int primary key,
--define other columns here--
)
it turns out sqlite3_column_type(0) always returns SQLITE_NULL.
If I read on a bit, this may well be by design because this column is actually an alias to the internal rowid field.
Still, what is the programmatic way to determine that a certain column is an/the alias for the rowid field?
(Perhaps related, can I use sqlite3_column_type(x)==SQLITE_NULL to determine if the field of the current record holds NULL?)
According to http://www.sqlite.org/draft/lang_createtable.html#rowid
A PRIMARY KEY column only becomes an integer primary key if the declared type name is exactly "INTEGER". Other integer type names like "INT" or "BIGINT" or "SHORT INTEGER" or "UNSIGNED INTEGER" causes the primary key column to behave as an ordinary table column with integer affinity and a unique index, not as an alias for the rowid.
So in your case the declared type is "int", so the column is not an alias for the rowid.
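The difference is easy to see from Python with the stdlib sqlite3 module (table names are made up). A column declared exactly INTEGER PRIMARY KEY is the rowid alias and gets auto-assigned; an int primary key column is an ordinary column, and because of SQLite's historical quirk it even accepts NULL. The detection at the end is only a heuristic, an assumption on my part: it ignores WITHOUT ROWID tables and the PRIMARY KEY DESC edge case.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id int primary key, x)")      # NOT a rowid alias
con.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, x)")  # rowid alias

con.execute("INSERT INTO a (x) VALUES (1)")  # id is left NULL (legacy quirk)
con.execute("INSERT INTO b (x) VALUES (1)")  # id is auto-assigned from rowid

print(con.execute("SELECT typeof(id) FROM a").fetchone()[0])  # null
print(con.execute("SELECT typeof(id) FROM b").fetchone()[0])  # integer

# Heuristic detection: a single primary-key column whose declared type
# is exactly INTEGER, as reported by PRAGMA table_info.
for row in con.execute("PRAGMA table_info(b)"):
    name, decl, pk = row[1], row[2], row[5]
    if pk == 1 and decl.upper() == "INTEGER":
        print(name, "is (very likely) an alias for rowid")
```

The typeof(id) result of 'null' for table a is exactly why sqlite3_column_type(0) returned SQLITE_NULL in the question.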