MySQL Connector field() Auto-Conversion to Lasso Types? - lasso-lang

In Lasso 8 with the MySQL connector, the field() method seemed to always return a string type, regardless of the data in the column or the column's data type. The exception might have been BLOB columns, which might have returned a bytes type. (I don't recall at the moment.)
In Lasso 9 I see that the field() method returns an integer type for integer columns. This is causing some issues with conditionals where I tested for '1' instead of 1.
Is Lasso really using the MySQL data type, or is Lasso just interpreting the results?
Is there any documentation as to what column types are cast to what Lasso data types?

Lasso is using the information MySQL gives it about the column type to return the data as a corresponding Lasso type. Not sure of all the mechanics underneath. Lasso 8 may have done the same thing for integers, but Lasso 8 also allowed you to compare integers and strings with integer values. (In fact, Lasso 8 even allowed for array->get('1') - that's right, a string for the index!).
I don't know of any documentation about what fields are what. Anecdotally, I can tell you that while MySQL decimal and float fields are treated as Lasso decimals, MySQL doubles are not. (I also don't believe MySQL date(time) fields come over as Lasso dates, though that would be awesome.)
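If you want a column to come back as a particular Lasso type regardless of the mapping, one workaround is to coerce the type in the SQL itself. This is only a sketch, using a hypothetical orders table with an integer id and a DOUBLE price column; how the cast result maps to a Lasso type may still depend on the connector version:
SELECT
CAST(id AS CHAR) AS id_str,               -- force the integer over as a string, like Lasso 8 behaved
CAST(price AS DECIMAL(10,2)) AS price_dec -- DECIMAL columns are treated as Lasso decimals, DOUBLEs are not
FROM orders;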

It's easy to find out what column types Lasso reports.
With a table looking like this:
CREATE TABLE `address` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` enum('commercial','residential') COLLATE utf8_swedish_ci DEFAULT NULL,
`street` varchar(50) COLLATE utf8_swedish_ci DEFAULT NULL,
`city` varchar(25) COLLATE utf8_swedish_ci DEFAULT NULL,
`state` char(2) COLLATE utf8_swedish_ci DEFAULT NULL,
`zip` char(10) COLLATE utf8_swedish_ci DEFAULT NULL,
`image` blob,
`created` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci;
You can get the reported column types like this:
inline(-database = 'testdb', -sql = "SELECT * FROM address") => {^
    with col in column_names do {^
        #col
        ' '
        column(#col) -> type
        '<br />'
    ^}
^}
Result:
id integer
type string
street string
city string
state string
zip null
image bytes
created string
As you can see, everything is reported as string except the integer and blob fields, plus the zip field, which in this record happens to contain NULL and is reported as such.
When doing comparisons it is always a good idea to make sure you're comparing apples with apples, that is, values of the same type. Also, to check whether there's content I always go for size:
string(column('zip')) -> size > 0 ?

In addition, NULLs are returned as null in 9, whereas in 8 and earlier they came back as empty strings. Beware of comparisons like:
field('icanhaznull') == ''
If the field contains a NULL value, the above evaluates as TRUE in 8 and FALSE in 9.
That may mean altering your schema for columns that toggle between NOT NULL and nullable, or you may prefer to cast the field to string:
string(field('icanhaznull')) == ''
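Another option is to normalize NULLs on the SQL side so the field never arrives as null in the first place. A minimal sketch in MySQL (mytable is a placeholder table name; icanhaznull is the column from the example above):
-- COALESCE returns the first non-NULL argument, so NULL comes back as ''
SELECT COALESCE(icanhaznull, '') AS icanhaznull FROM mytable;
With the alias kept the same, field('icanhaznull') == '' should then behave in 9 the way it did in 8.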
See also: Testing a field or var against the empty string

Related

Processing results from multiple tables - is there a faster way?

I'm just trying to improve my database structure in order to avoid getting duplicated results.
My tables sit in one database and look like this:
CREATE TABLE IF NOT EXISTS `Players`(
`ID` INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
`UserID` INTEGER NOT NULL UNIQUE,
`SteamID` TEXT NOT NULL UNIQUE,
`Nick` TEXT NOT NULL,
`KlanID` INTEGER NOT NULL,
`Money` INTEGER NOT NULL,
`Kills` INTEGER NOT NULL,
`Deaths` INTEGER NOT NULL,
`Score` INTEGER NOT NULL,
`PayOrNot` INTEGER NOT NULL,
`IsNewOne` INTEGER NOT NULL,
`HUDMode` INTEGER NOT NULL,
`HUDColors` INTEGER NOT NULL,
`HUDPoz1` INTEGER NOT NULL,
`HUDPoz2` INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS `Classes` (
`ClassID` INTEGER NOT NULL PRIMARY KEY UNIQUE,
`UserID` INTEGER NOT NULL,
`Level` INTEGER NOT NULL,
`Health` INTEGER NOT NULL,
`Intelligence` INTEGER NOT NULL,
`Stamina` INTEGER NOT NULL,
`Durability` INTEGER NOT NULL
);
CREATE TABLE `Equipment` (
`ID` INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
`UserID` INTEGER NOT NULL,
`Name` TEXT NOT NULL,
`Type` INTEGER NOT NULL,
`Time` INTEGER NOT NULL,
`NumberOfUses` INTEGER NOT NULL
);
It is a plugin designed for a Counter-Strike server. It introduces some extra functionality such as choosing your own class, receiving equipment which gives extra powers, a leveling system, etc. I need to get almost all of the data stored there. The problem is that the amount of Equipment for each player is not known. Therefore, I have to figure out how to fetch this information as optimally as I can.
For now, my query looks like this:
SELECT DISTINCT Players.SteamID,Players.Nick,Players.KlanID,Players.Money,Players.Kills,Players.Deaths,Players.Score,Players.PayOrNot,Players.IsNewOne,Players.HUDMode,Players.HUDColors,Players.HUDPoz1,Players.HUDPoz2,
Classes.Name,Classes.Level,Classes.Health,Classes.Intelligence,Classes.Stamina,Classes.Durability,
Equipment.Name,Equipment.Type,Equipment.Time,Equipment.NumberOfUses
FROM Players
INNER JOIN Equipment ON Players.UserID = Equipment.UserID
INNER JOIN Classes ON Equipment.UserID = Classes.UserID
WHERE Players.UserID=%d
Here are some sample results (don't be scared by some of the words in there; I'm originally from Poland, and the parts that matter have been translated and highlighted).
Every time the map changes, each player has to receive this data, and there might be some new players joining during the map.
As you can see, some fields are duplicated because of the nature of the INNER JOIN. Luckily, I am able to process the results because I know the number of classes, so I take the Equipment from the [row % number_of_classes] rows and the Classes from the [row < number_of_classes] rows. However, I feel it could be done better. I was thinking about getting the results from each table separately, but that would triple the number of queries. Maybe I have to rebuild the whole SQL structure? Or maybe this is the best way to do it?
The output of a single query has the form of a table, i.e., it has a fixed number of columns and a certain number of rows.
If you want to get your data with a single query, you have no choice but to arrange it so that it has tabular form. When joining, this often results in duplicates.
But there is no reason to use a single query. SQLite has no client/server communication overhead, so many small queries are just as efficient.
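A minimal sketch of that per-table approach, using the schema above (%d stands for the player's UserID, as in the original query):
SELECT SteamID, Nick, KlanID, Money, Kills, Deaths, Score, PayOrNot, IsNewOne, HUDMode, HUDColors, HUDPoz1, HUDPoz2
FROM Players WHERE UserID = %d;

SELECT Level, Health, Intelligence, Stamina, Durability
FROM Classes WHERE UserID = %d;

SELECT Name, Type, Time, NumberOfUses
FROM Equipment WHERE UserID = %d;
Each result set then maps directly onto one in-game concept, with no duplicated rows to filter out.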

How to drop a column's default value?

I have a table with this:
MyField VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC DEFAULT ''
I want to drop the default empty string value so that it defaults to null. I've discovered that I can set the default value to null, but then it actually says DEFAULT NULL in the DDL. This trips up the scripts I use to compare DDL diffs between multiple database environments, so I want it to look simply like this:
MyField VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC
Edit: I want this to behave the same as if I were changing a column from NOT NULL to nullable.
Non-nullable column:
n INTEGER NOT NULL
Nullable column:
n INTEGER
Notice how the latter doesn't say:
n INTEGER NULL
You need to drop and re-create the column, as below:
ALTER TABLE DBC.TEST
DROP MY_FIELD;
ALTER TABLE DBC.TEST
ADD MY_FIELD VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC;
I have tested it and it's working.
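Note that dropping the column also discards its data. If the column already holds values you need to keep, one possible workaround is to copy the data aside first. This is only a sketch, using the same Teradata-style ALTER statements as above and a hypothetical staging column MY_FIELD_TMP:
ALTER TABLE DBC.TEST ADD MY_FIELD_TMP VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC;
UPDATE DBC.TEST SET MY_FIELD_TMP = MY_FIELD;
ALTER TABLE DBC.TEST DROP MY_FIELD;
ALTER TABLE DBC.TEST ADD MY_FIELD VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC;
UPDATE DBC.TEST SET MY_FIELD = MY_FIELD_TMP;
ALTER TABLE DBC.TEST DROP MY_FIELD_TMP;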

Use SQLite CHECK to validate whether a date with the proper format is entered in the column

I have created a table as below:
CREATE TABLE case_status(data_entry_timestamp DATETIME DEFAULT (datetime('now','localtime')) NOT NULL,
case_number TEXT PRIMARY KEY NOT NULL,
case_name TEXT DEFAULT MISSING,
death_reportdate DATE CONSTRAINT death_reportdate_chk CHECK (death_reportdate==strftime('%Y-%m-%d',death_reportdate)),
);
The column death_reportdate needs to hold a date with a pre-defined format (e.g. 2000-12-31). I created the table, inserted some rows of data, and then tried to modify the data in death_reportdate; the check rule seems to be bypassed when I enter some random string into it.
What have I done wrong?
You had an extra comma at the end. Correct code:
CREATE TABLE case_status(data_entry_timestamp DATETIME DEFAULT (datetime('now','localtime')) NOT NULL,
case_number TEXT PRIMARY KEY NOT NULL,
case_name TEXT DEFAULT MISSING,
death_reportdate DATE CONSTRAINT death_reportdate_chk CHECK (death_reportdate==strftime('%Y-%m-%d',death_reportdate))
)
This is an old topic, but I had the same problem. If strftime() fails to format the string (a bad input), it returns NULL, so you also have to check for NOT NULL at the end.
Here is another solution which works like a charm:
`date` DATE CHECK(date IS strftime('%Y-%m-%d', date))
This also works with the time:
`time` TIME CHECK(time IS strftime('%H:%M:%S', time))
Use this to define your columns. I think that is a more elegant solution than checking for a null value.
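For instance, a quick sketch of how those definitions behave (demo_checks is just an illustrative table name):
CREATE TABLE demo_checks (
`date` DATE CHECK(date IS strftime('%Y-%m-%d', date)),
`time` TIME CHECK(time IS strftime('%H:%M:%S', time))
);
INSERT INTO demo_checks (`date`, `time`) VALUES ('2000-12-31', '12:34:56'); -- passes both CHECKs
INSERT INTO demo_checks (`date`, `time`) VALUES ('31/12/2000', 'noonish');  -- fails: strftime() returns NULL for both, so IS evaluates to 0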
First, two small notes.
I'm using the TEXT type since SQLite does not have "real types." It has 5 column "affinities": INTEGER, TEXT, BLOB, REAL, and NUMERIC. If you say DATE then it uses NUMERIC, which can behave a little weirdly in my opinion. I find it best to explicitly use one of the 5 affinities.
I'm using date(...) instead of strftime('%Y-%m-%d', ...) because they are the same thing.
Let's break down why the original question did not work.
DROP TABLE IF EXISTS TEMP.example;
CREATE TEMPORARY TABLE example (
deathdate TEXT CHECK (deathdate == date(deathdate))
);
INSERT INTO TEMP.example (deathdate) VALUES ('2020-01-01');
INSERT INTO TEMP.example (deathdate) VALUES ('a');
INSERT INTO TEMP.example (deathdate) VALUES (NULL);
SELECT * FROM TEMP.example;
Running this lets all three values get into the database. Why? Let's check the documentation for CHECK constraints.
If the result is zero (integer value 0 or real value 0.0), then a constraint violation has occurred. If the CHECK expression evaluates to NULL, or any other non-zero value, it is not a constraint violation.
If you run SELECT 'a' == date('a'); you'll see it is NULL. Why? Check SELECT date('a'); and you'll see it is also NULL. Huh, maybe the documentation for == can help?
Note that there are two variations of the equals and not equals operators. Equals can be either = or ==. [...]
The IS and IS NOT operators work like = and != except when one or both of the operands are NULL. In this case, if both operands are NULL, then the IS operator evaluates to 1 (true) and the IS NOT operator evaluates to 0 (false). If one operand is NULL and the other is not, then the IS operator evaluates to 0 (false) and the IS NOT operator is 1 (true). It is not possible for an IS or IS NOT expression to evaluate to NULL.
We need to use IS, not ==, and trying that we see that 'a' no longer gets in.
DROP TABLE IF EXISTS TEMP.example;
CREATE TEMPORARY TABLE example (
deathdate TEXT CHECK (deathdate IS date(deathdate))
);
INSERT INTO TEMP.example (deathdate) VALUES ('2020-01-01');
INSERT INTO TEMP.example (deathdate) VALUES ('a');
INSERT INTO TEMP.example (deathdate) VALUES (NULL);
SELECT * FROM TEMP.example;
If you don't want NULL to get in, simply change it to deathdate TEXT NOT NULL CHECK (deathdate IS date(deathdate))

How to set the ABORT option in SQLite from Tcl when violating NOT NULL constraint

I have a Tcl script where I generate a SQLite table:
DB eval {CREATE TABLE StressDat2( LC int NOT NULL, EID int NOT NULL, Xtens float, Ytens float ) }
When I try to write NULL values they get accepted anyhow. How can I, from Tcl, when generating my table, set the ABORT option so that attempts to write NULL values are rejected?
The Xtens and Ytens columns do not have a NOT NULL constraint.
(The default conflict resolution algorithm is ABORT; you don't need to set it.)
It depends on how you insert data into the table. If you also do it with Tcl, you need to be aware that Tcl doesn't have the concept of a NULL value.
Therefore, this is WRONG:
set lc ""
set eid ""
set xtens ""
set ytens ""
DB eval {INSERT INTO StressDat2 (LC, EID, Xtens, Ytens) VALUES ($lc, $eid, $xtens, $ytens)}
You're obviously inserting empty strings, not null values. To insert a null, use the NULL keyword in the SQL statement.
This is CORRECT:
DB eval {INSERT INTO StressDat2 (LC, EID, Xtens, Ytens) VALUES (NULL, NULL, NULL, NULL)}
Finally, a word about the "ABORT" conflict resolution on the NOT NULL constraint that you mentioned. You said it shows up blank in SQLiteStudio and not as "ABORT", as you would like. Well, "ABORT" is the default algorithm used by SQLite when no algorithm is defined, so even if it is blank (the default), it means "ABORT". Read this for more details: http://sqlite.org/lang_createtable.html
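If you do want the ABORT behaviour spelled out explicitly in the DDL (so a tool like SQLiteStudio displays it), SQLite lets you attach a conflict clause to the NOT NULL constraint. A minimal sketch, which also adds NOT NULL to Xtens and Ytens since the original table left them nullable:
CREATE TABLE StressDat2(
LC int NOT NULL ON CONFLICT ABORT,
EID int NOT NULL ON CONFLICT ABORT,
Xtens float NOT NULL ON CONFLICT ABORT,
Ytens float NOT NULL ON CONFLICT ABORT
);
Inserting an explicit NULL into any of these columns now aborts with a constraint violation. Remember, though, that an empty string bound from Tcl is still '' rather than NULL, so NOT NULL will not catch it.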

How do I use a boolean field in a where clause in SQLite?

It seems like a dumb question, and yet. It could be my IDE that's goofing me up. Here's the code (this is generated from DbLinq):
SELECT pics$.Caption, pics$.Id, pics$.Path, pics$.Public, pics$.Active, portpics$.PortfolioID
FROM main.Pictures pics$
inner join main.PortfolioPictures portpics$ on pics$.Id = portpics$.PictureId
WHERE portpics$.PortfolioId = 1 AND pics$.Id > 0
--AND pics$.Active = 1 AND pics$.Public = 1
ORDER BY pics$.Id
If I run this query I get three rows back, with two boolean fields called Active and Public. Adding in the commented out line returns no rows. Changing the line to any of the following:
pics$.Active = 'TRUE'
pics$.Active = 't'
pics$.Active = boolean(1)
None of these work; they either error out or return no results. I've googled for this and found a dearth of actual SQL queries out there. And here we are.
So: how do I use a boolean field in a where clause in SQLite?
IDE is SQLite Administrator.
Update: Well, I found the answer. SQLite Administrator will let you make up your own types apparently; the create SQL that gets generated looks like this:
CREATE TABLE [Pictures] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
[Path] VARCHAR(50) UNIQUE NOT NULL,[Caption] VARCHAR(50) NULL,
[Public] BOOLEAN DEFAULT '0' NOT NULL,[Active] BOOLEAN DEFAULT '1' NOT NULL)
The fix for the query is
AND pics$.Active = 'Y' AND pics$.Public = 'Y'
The real issue here is, as the first answerer pointed out, that there is no boolean type in SQLite. Not a problem, but something to be aware of. I'm using DbLinq to generate my data layer; maybe it shouldn't allow mapping of types that SQLite doesn't support. Or it should map all types that aren't native to SQLite to a string type.
You don't need to use any comparison operator in order to compare a boolean value in your where clause.
If your 'boolean' column is named is_selectable, your where clause would simply be:
WHERE is_selectable
SQLite does not have a boolean type (see: What datatypes does SQLite support?).
The commented-out line should work as written; just use integer values of 1 and 0 in your data to represent a boolean.
SQLite has no built-in boolean type - you have to use an integer instead. Also, when you're comparing the value to 'TRUE' and 't', you're comparing it to those values as strings, not as booleans or integers, and therefore the comparison will always fail.
Source: http://www.sqlite.org/datatype3.html
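To make the integer convention concrete, here's a minimal sketch (the Pictures2 table and its columns are purely illustrative):
CREATE TABLE Pictures2 (
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Caption TEXT,
Active INTEGER NOT NULL DEFAULT 1 -- 1 = true, 0 = false
);
INSERT INTO Pictures2 (Caption, Active) VALUES ('shown', 1), ('hidden', 0);
SELECT * FROM Pictures2 WHERE Active = 1; -- returns only 'shown'
SELECT * FROM Pictures2 WHERE Active;     -- same result: any non-zero value is treated as true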
-- This will give you the rows where is_online is false:
select * from device_master where is_online!=1
-- This will give you the rows where is_online is true:
select * from device_master where is_online=1
