Why doesn't tSQLt support NOT NULL columns?

After looking at the code, it seems overly simple:
in [tSQLt].[Private_CreateFakeOfTable]
change
CASE WHEN cc.IsComputedColumn = 1 OR id.IsIdentityColumn = 1
to
CASE WHEN cc.IsComputedColumn = 1 OR id.IsIdentityColumn = 1 OR c.is_nullable = 0
It's so simple, even to make it conditional on an additional parameter, that it makes me wonder what the reasoning is behind it not being supported out of the box. I'm currently tempted to change it before starting to use tSQLt, but I thought I'd find out why first, in case the ramifications are important.

tSQLt does make all columns nullable in any faked table. That is kind of the purpose of faking the table in the first place.
However, it seems you are looking for a temporary solution to make a column not-nullable so you can catch an error. In that case, I'd manually alter the column in question to be NOT NULL after faking the table.
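For example, a minimal sketch of that approach (the table and column names here are hypothetical):
EXEC tSQLt.FakeTable @TableName = 'dbo.Orders';
-- Re-apply the constraint you want to exercise, keeping the column's original datatype,
-- so an INSERT with NULL in this column raises the expected error:
ALTER TABLE dbo.Orders ALTER COLUMN CustomerId INT NOT NULL;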
And to answer your second question, there is no intrinsic technical reason not to have a switch that preserves nullability. There just hasn't been a need for it so far.

Related

How to insert an element into the middle of an array (json) in SQLite?

I found a method json_insert in the JSON section of the SQLite documentation, but it doesn't seem to work the way I expected.
e.g. select json_insert('[3,2,1]', '$[3]', 4) as result;
The result column returns '[3,2,1,4]', which is correct.
But for select json_insert('[3,2,1]', '$[1]', 4) as result;
I am expecting something like '[3,2,4,1]' to be returned, instead of '[3,2,1]'.
Am I missing something? I don't see an alternative method to json_insert.
P.S. I am playing it on https://sqlime.org/#demo.db, the SQLite version is 3.37.2.
The documentation states that json_insert() will not overwrite values ("Overwrite if already exists? - No"). That means you can't insert elements in the middle of the array.
My interpretation: The function is primarily meant to insert keys into an object, where this kind of behavior makes more sense - not changing the length of an array is a sacrifice for consistency.
You could shoehorn it into SQLite by turning the JSON array into a table, appending your element, sorting the result, and turning it all back into a JSON array:
select json_group_array(x.value)
from (
  select key, value from json_each('[3,2,1]')
  union
  select 1.5, 4 -- 1.5 = after key 1, before key 2
  order by 1
) x;
This will produce '[3,2,4,1]'.
But you can probably see that this won't scale, and even if there were a built-in function that did this for you, it wouldn't scale either. String manipulation is slow. It might work well enough for one-offs, or when done infrequently.
In the long run, I would recommend properly normalizing your database structure instead of storing "non-blob" data in JSON blobs. Manipulating normalized data is much easier than manipulating JSON, not to mention faster by probably orders of magnitude.
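As a rough illustration of that normalized approach (the table and column names here are made up), elements become rows with an explicit position, and a mid-list insert is just another row:
create table items (
  list_id  integer not null,
  position real    not null, -- fractional positions make mid-list inserts cheap
  value    integer not null
);

insert into items (list_id, position, value) values (1, 1, 3), (1, 2, 2), (1, 3, 1);
-- insert 4 between the 2nd and 3rd elements
insert into items (list_id, position, value) values (1, 2.5, 4);

-- a JSON array can still be produced on demand: '[3,2,4,1]'
select json_group_array(value)
from (select value from items where list_id = 1 order by position);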

Easy, computationally cheap way to add a row number to a table using SQLite?

Googled this quite a bit and found answers asking similar but different questions, but not what I am looking to do.
I have a table, and I want to add a row number to it. So:
ID || value
91 || valueA
11 || valueB
71 || valueC
becomes
Row# || ID || value
1 || 91 || valueA
2 || 11 || valueB
3 || 71 || valueC
Found this answer, which is a bit more complex than my use case. Also, I was warned against using the answers there as they are computationally expensive (n^2-ish).
Also found a few other answers like this one where the user wanted the row number returned for a query, but that is a different use case. I just want to append a row number to all the rows in the table.
Based on your question, it seemed like you were asking for another column in your database. If that's not the case please comment.
In your database creation class (or wherever you create your database), use a CREATE TABLE statement along these lines (note that SQLite only allows AUTOINCREMENT on the INTEGER PRIMARY KEY column):
CREATE TABLE table_name(
    RowNum INTEGER PRIMARY KEY AUTOINCREMENT,
    _ID INTEGER,
    value TEXT
);
Increment the database version by 1 and it'll be good to go.
As I think you know, in SQL tables have no inherent order, so any "row number" is based on some implicit order. If you make use of any kind of "row id" in the DBMS, the implicit order is likely to be insertion order. That's cheap, so if it suits your needs, that's what you want.
Any other "row number" you create requires a sort; if supported by an index, that sort will be O(N log N), else, yes, O(N^2). The math has a way of being very insistent about that.
After answering this question many times in different guises, I wrote a simple example. In SQLite I've had good experience with under a million rows. Larger sorts take longer, YMMV.
FWIW, I never store derived order. Because the system can't feasibly enforce its correctness, it's impossible to know if it's correct. Better to keep a covering index on the interesting order, and rely on a view to supply rank ordinals when needed.
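By way of illustration (table and column names assumed, and this needs SQLite 3.25+ for window functions), a view can supply the ordinals on demand, here using rowid as a stand-in for insertion order:
CREATE VIEW numbered AS
SELECT ROW_NUMBER() OVER (ORDER BY rowid) AS RowNum, ID, value
FROM my_table;

-- SELECT * FROM numbered; would return 1|91|valueA, 2|11|valueB, 3|71|valueC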

Change "Expiration Date" font color if within 30 days or already expired?

Not sure of the best approach to do this; the application is older, which is why I'm having so much trouble generating this. I read about doing a CASE statement, but I don't have much SQL experience. Any help is appreciated and answers will be respected. Thanks. Also, there's no design to this: the people who wrote the application used placeholders, and all the data comes from this huge file, which is beyond me. I don't know why, because I've never seen anything like this. It's a monster.
'-
Dim TemplateColumnCDLExpiration As System.Web.UI.WebControls.TemplateColumn
TemplateColumnCDLExpiration = New System.Web.UI.WebControls.TemplateColumn
If Me.AllowSorting Then
    TemplateColumnCDLExpiration.SortExpression = "CDLExpiration"
End If
TemplateColumnCDLExpiration.ItemTemplate = _
    New JAG.WebUI.Controls.IEditGridTemplate(ListItemType.Item, _
                                             "CDLExpiration", _
                                             JAG.WebUI.Controls.tEditGridItemControlType.Label)
TemplateColumnCDLExpiration.HeaderText = "CDL Expiration"
MyBase.Columns.Add(TemplateColumnCDLExpiration)
'-
OK, I'll give you the answer to your CASE question, but you have to promise that you'll read the considerations below. :)
I'm using Oracle SQL; I don't know if the syntax is different for other SQL implementations. Here's an example of a dummy query to show the syntax:
SELECT
    CASE
        WHEN (sysdate - TO_DATE('04/09/2013', 'mm/dd/yyyy') > 30) THEN 'red'
        ELSE 'black'
    END text_color
FROM dual;
The code in the parenthesis after the WHEN is the test. It compares the current date to April 9th and asks, "Is April 9th more than 30 days ago?" If so, it returns 'red' as the value of text_color. If that condition is false, it returns 'black'. Here's a more generalized form:
SELECT
    CASE
        WHEN (sysdate - :date_to_check > :expiration_days) THEN 'red'
        ELSE 'black'
    END text_color
FROM :my_table;
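Adapted to your actual case (column and table names assumed, same Oracle syntax), where the color should change when the expiration is within 30 days or has already passed, the test looks forward instead of backward:
SELECT
    CASE
        WHEN CDLExpiration - sysdate <= 30 THEN 'red'  -- also covers already-expired dates
        ELSE 'black'
    END text_color
FROM drivers; -- placeholder table name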
Considerations
You don't need this nasty piece of logic in SQL. Checking whether X number of days have passed since a given date is not database logic. Fetching a date is database logic; deciding how many days have elapsed from that date until today could be argued as either DB logic or business logic; but deciding the text color is definitely display logic, meaning you should be modifying your .NET code, not your SQL. What happens if you need to change the display colors? The date check remains the same, but you have to modify... your SQL? SQL modifications should only be needed when the data being retrieved or stored changes. The bottom line is that this is not a clean separation of concerns.
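If it helps, a rough sketch of that separation on the SQL side (the names are placeholders): return only the raw data, such as a day count, and let the .NET code map it to a color.
SELECT CDLExpiration,
       TRUNC(CDLExpiration) - TRUNC(sysdate) AS days_until_expiration
FROM drivers; -- the UI decides that days_until_expiration <= 30 means 'red'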

Dataset column always returns -1

I have a SQL stored proc that returns a dataset to ASP.NET v3.5 dataset. One of the columns in the dataset is called Attend and is a nullable bit column in the SQL table. The SELECT for that column is this:
CASE WHEN Attend IS NULL THEN -1 ELSE Attend END AS Attend
When I execute the SP in Query Analyzer the row values are returned as they should be - the value for Attend is -1 in some rows, 0 in others, and 1 in others. However, when I debug the C# code and examine the dataset, the Attend column always contains -1.
If I SELECT any other columns or constant values for Attend, the results are always correct. It is only the above SELECT of the bit field that behaves strangely. I suspect the bit type is causing this, so to test it I instead selected "CONVERT(int, Attend)", but the behavior is the same.
I have tried using ExecuteDataset to retrieve the data and I have also created a .NET Dataset schema with TableAdapter and DataTable. Still no luck.
Does anyone know what is the problem here?
Like you, I suspect the data type. If you can change the data type of Attend, change it to smallint, which supports negative numbers. If not, try changing the name of the alias from Attend to IsAttending (or whatever suits the column).
Also, you can make your query more concise by using this instead of CASE:
ISNULL(Attend, -1)
You've suggested that the Attend field is a bit, yet it contains three values (-1, 0, 1). A bit, however, can only hold two values: often (-1, 0) when converted to an integer, but possibly (0, 1), depending on whether the BIT is considered signed (two's complement) or unsigned (one's complement).
If your client (the ASP code) is converting all values for that field to a BIT type then both -1 and 1 will likely show as the same value. So, I would ensure two things:
- The SQL returns an INTEGER
- The Client isn't converting that to a BIT
[Though this doesn't explain the absence of 0's]
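A small sketch of what that could look like on the SQL side (the alias and table name are only illustrative); returning an explicit INT under a different name makes it harder for client-side type mapping to collapse the values back into a bit:
SELECT CONVERT(INT, ISNULL(Attend, -1)) AS AttendStatus
FROM dbo.ClassAttendance; -- table name assumed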
One needs to be careful with implicit conversion of types. When you don't specify types explicitly, double-check the precedence. Or, to be certain, explicitly specify every type...
Just out of interest, what do you get when using the following?
-- NULL has to be tested with IS NULL; a simple "CASE attend WHEN NULL" never matches
CASE
    WHEN [table].attend IS NULL THEN -2
    WHEN [table].attend = 0 THEN 0
    ELSE 2
END

Fun with Database Triggers and Recursion in RDB

I had a problem this week (which thankfully I've since solved in a much better way): I needed to keep a couple of fields in a database constant.
So I knocked up a script to place a trigger on the table that would set the values back to preset numbers whenever an insert or update took place.
The database is RDB running on VMS (but I'd be interested to know the similarities for SQL Server).
Here are the triggers:
drop trigger my_ins_trig;
drop trigger my_upd_trig;
!
!++ Create triggers on MY_TABLE
CREATE TRIGGER my_ins_trig AFTER INSERT ON my_table
    WHEN somefield = 2
        (UPDATE my_table table1
            SET table1.field1 = 0.1,
                table1.field2 = 1.2
            WHERE my_table.dbkey = table1.dbkey)
    FOR EACH ROW;

CREATE TRIGGER my_upd_trig AFTER UPDATE ON my_table
    WHEN somefield = 2
        (UPDATE my_table table1
            SET table1.field1 = 0.1,
                table1.field2 = 1.2
            WHERE my_table.dbkey = table1.dbkey)
    FOR EACH ROW;
Question Time
I would expect this to form an infinite recursion, but it doesn't seem to?
Can anyone explain how RDB deals with this, one way or another... or how other databases deal with it?
[NOTE: I know this is an awful approach but various problems and complexities meant that even though this is simple in the code - it couldn't be done the best/easiest way. Thankfully I haven't implemented it in this way but I wanted to ask the SO community for its thoughts on this. ]
Thanks in advance
edit: It seems Oracle RDB just plain doesn't execute nested triggers that would result in recursion. From the paper: 'A trigger can nest other triggers as long as recursion does not take place.' I'll leave the rest of the answer here for anyone else wondering about recursive triggers in other DBs.
Well, firstly, to answer your question: it depends on the database. It's entirely possible that trigger recursion is turned off on the instance you are working on. As you can imagine, trigger recursion could cause all kinds of chaos if handled incorrectly, so SQL Server allows you to disable it altogether.
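For reference, in SQL Server direct recursion is a per-database option and indirect (nested) recursion is a server-wide setting; the database name below is just a placeholder:
-- Stop a trigger from directly firing itself again:
ALTER DATABASE MyDatabase SET RECURSIVE_TRIGGERS OFF;

-- Stop triggers from firing other triggers (indirect recursion / nesting):
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;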
Secondly, I would suggest that there may be a better way to get this functionality without triggers. You can get view-based row-level security with SQL Server, and the same outcome can be achieved with Oracle VPDs.
Alternatively, if it's configuration values you are trying to protect, I would group them all into a single table and apply permissions on that (simpler than row-based security).
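A rough sketch of that idea in SQL Server terms (the object and role names are made up):
CREATE TABLE app_config (
    name  VARCHAR(100) NOT NULL PRIMARY KEY,
    value VARCHAR(255) NOT NULL
);

GRANT SELECT ON app_config TO app_users;
DENY INSERT, UPDATE, DELETE ON app_config TO app_users;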
