T-SQL Server ORDER BY date and nulls last - case

I am studying for exam 70-761, and there is a challenge asking to place NULLs at the end when using ORDER BY. I know the answer is this query:
select
orderid,
shippeddate
from Sales.Orders
where custid = 20
order by case when shippeddate is null then 1 else 0 end, shippeddate
What I don't know is why the 1 and 0 are there and how they affect the result. Can anyone clarify?
Best Regards,
Daniel

There are two sort keys in your ORDER BY clause; the first splits the rows into two groups, and the second then sorts the rows within each group.
First, because 0 is less than 1, all the orders without a shippeddate are pushed to the end.
Then, within each group, the rows are ordered by shippeddate.
Example:
orderID | shippeddate
        | null
        | today
        | null
        | yesterday
        | tomorrow
Sorting first by case when shippeddate is null then 1 else 0 end, we get:
orderID | shippeddate
        | today
        | yesterday
        | tomorrow
        | null
        | null
Then, continuing the sort with shippeddate, we get:
orderID | shippeddate
        | yesterday
        | today
        | tomorrow
        | null
        | null
Hope this is useful to you.
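As a side note, the same effect can be had by computing the NULL flag once in the select list and sorting on the alias; a minimal sketch (assuming SQL Server and the question's Sales.Orders table; null_flag is a name introduced here for illustration):
SELECT orderid,
       shippeddate,
       CASE WHEN shippeddate IS NULL THEN 1 ELSE 0 END AS null_flag
FROM Sales.Orders
WHERE custid = 20
ORDER BY null_flag, shippeddate;  -- rows with NULL shippeddate sort last because their flag is 1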

Related

SQLite Count() of true boolean values in GROUP

I want to get messages from a view (it is a view because it contains messages from multiple sources) in an SQLite database. If there are multiple messages from the same user, I want only the newest one. The view is already sorted by Date DESC.
If I use the query SELECT *, COUNT(IsRead = false) FROM Messages GROUP BY User, I get the newest message from each user and the number of messages for each user.
However, instead of the total number of messages, I only want the number of unread messages (IsRead = false).
For the following table:
+-------+--------+------------+
| User | IsRead | Date |
+-------+--------+------------+
| User1 | false | 2020-01-05 |
| User2 | false | 2020-01-04 |
| User1 | false | 2020-01-03 |
| User3 | true | 2020-01-02 |
| User2 | true | 2020-01-01 |
| User3 | true | 2020-01-01 |
+-------+--------+------------+
I would like to get the following result
+-------+--------+------------+---------+
| User | IsRead | Date | notRead |
+-------+--------+------------+---------+
| User1 | false | 2020-01-05 | 2 |
| User2 | false | 2020-01-04 | 1 |
| User3 | true | 2020-01-02 | 0 |
+-------+--------+------------+---------+
How can I achieve that? The COUNT(IsRead = false) part in my query is just the way I imagined it could work. I could not find anything about how to do that in SQLite.
Also: Am I correct about always getting the most recent message from each user, if the output from the view is already sorted by Date descending? That seemed to be the case in my tests but I just want to make sure that was not a fluke.
From SELECT/The ORDER BY clause:
If a SELECT statement that returns more than one row does not have an ORDER BY clause, the order in which the rows are returned is undefined.
This means that even if the view is defined to return sorted rows, selecting from the view is not guaranteed to maintain that sort order.
Also, with this query:
SELECT *, COUNT(IsRead = false) FROM Messages GROUP BY User
even if you get the newest message from each user, this is not guaranteed, because it is not a documented feature.
So, for your question:
Am I correct about always getting the most recent message from each user, if the output from the view is already sorted by Date descending?
the answer is no.
You can't rely on coincidental results, but you can rely on documented features.
A documented feature is the use of Bare columns in an aggregate query, which can solve your problem with a simple aggregation query:
SELECT User,
       IsRead,
       MAX(Date) Date,
       SUM(NOT IsRead) notRead  -- NOT IsRead evaluates to 1 when IsRead is 0 (false), so this counts the unread rows
FROM Messages
GROUP BY User;
Or, use window functions:
SELECT DISTINCT User,
       FIRST_VALUE(IsRead) OVER (PARTITION BY User ORDER BY Date DESC) IsRead,
       MAX(Date) OVER (PARTITION BY User) Date,
       SUM(NOT IsRead) OVER (PARTITION BY User) notRead
FROM Messages;
See the demo.
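Incidentally, COUNT(IsRead = false) returned the total row count because COUNT(expr) counts every row where expr is not NULL, and the comparison yields 0 or 1 here, never NULL. If you prefer to state the condition explicitly, a minimal sketch of the same aggregation (assuming SQLite 3.30 or later, which added the FILTER clause on aggregates):
SELECT User,
       IsRead,                                  -- bare column: taken from the row supplying MAX(Date)
       MAX(Date) Date,
       COUNT(*) FILTER (WHERE NOT IsRead) notRead
FROM Messages
GROUP BY User;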

Application Insights query to get time between 2 custom events

I am trying to write a query that will get me the average time between 2 custom events, sorted by user session. I have added custom tracking events throughout this application and I want to query the time it takes the user from 'Setup' event to 'Process' event.
let allEvents=customEvents
| where timestamp between (datetime(2019-09-25T15:57:18.327Z)..datetime(2019-09-25T16:57:18.327Z))
| extend SourceType = 5;
let allPageViews=pageViews
| take 0;
let all = allEvents
| union allPageViews;
let step1 = materialize(all
| where name == "Setup" and SourceType == 5
| summarize arg_min(timestamp, *) by user_Id
| project user_Id, step1_time = timestamp);
let step2 = materialize(step1
| join
hint.strategy=broadcast (all
| where name == "Process" and SourceType == 5
| project user_Id, step2_time=timestamp
)
on user_Id
| where step1_time < step2_time
| summarize arg_min(step2_time, *) by user_Id
| project user_Id, step1_time,step2_time);
let 1Id=step1_time;
let 2Id=step2_time;
1Id
| union 2Id
| summarize AverageTimeBetween=avg(step2_time - step1_time)
| project AverageTimeBetween
When I run this query it produces this error message:
'' operator: Failed to resolve table or column or scalar expression named 'step1_time'
I am relatively new to writing queries with AI and have not found many resources to assist with this problem. Thank you in advance for your help!
I'm not sure what the let 1Id=step1_time lines are intended to do.
Those lines are trying to declare a new value, but step1_time isn't in scope there; it was a column inside another query.
I'm also not sure why you're doing that pageViews | take 0 and unioning it with the events?
let allEvents=customEvents
| where timestamp between (datetime(2019-09-25T15:57:18.327Z)..datetime(2019-09-25T16:57:18.327Z))
| extend SourceType = 5;
let step1 = materialize(allEvents
| where name == "Setup" and SourceType == 5
| summarize arg_min(timestamp, *) by user_Id
| project user_Id, step1_time = timestamp);
let step2 = materialize(step1
| join
hint.strategy=broadcast (allEvents
| where name == "Process" and SourceType == 5
| project user_Id, step2_time=timestamp
)
on user_Id
| where step1_time < step2_time
| summarize arg_min(step2_time, *) by user_Id
| project user_Id, step1_time,step2_time);
step2
| summarize AverageTimeBetween=avg(step2_time - step1_time)
| project AverageTimeBetween
If I remove the things I don't understand (like the union with zero pageViews, and the lets), I get a result. But I don't have your data, so I had to use values other than "Setup" and "Process", and I don't know if it is what you expect.
You might want to look at the results of the step2 query without the summarize, just to check that what you're getting matches what you expect.
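For instance, a minimal sketch of that check, reusing the step1/step2 definitions above (delta is a name introduced here just for illustration):
step2
| project user_Id, step1_time, step2_time, delta = step2_time - step1_time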

Add serial number for each id based on dates

I have a dataset like the one shown below (except for Ser_No, which is the field I want to create).
+--------+------------+--------+
| CaseID | Order_Date | Ser_No |
+--------+------------+--------+
| 44     | 22-01-2018 | 1      |
| 44     | 24-02-2018 | 3      |
| 44     | 12-02-2018 | 2      |
| 100    | 24-01-2018 | 1      |
| 100    | 26-01-2018 | 2      |
| 100    | 27-01-2018 | 3      |
+--------+------------+--------+
How can I achieve a serial number for each CaseID based on my dates? So the first date in a specific CaseID gets number 1, the second date in this CaseID gets number 2, and so on.
I'm working with T-SQL, btw.
I've tried a few things:
CASE
WHEN COUNT(CaseID) > 1
THEN ORDER BY (Order_Date)
AND Ser_no +1
END
Thanks in advance.
First of all, although I don't understand the query you tried, your sample output shows what you want: the serial number is assigned in date order. The only problem I can see is that the result shows the rows in the wrong order (1, 3, 2 instead of 1, 2, 3).
To get them in that order, you can try this:
SELECT *, ROW_NUMBER() OVER (PARTITION BY caseid ORDER BY caseid, order_date) AS ser_no
FROM [Table]
Thanks for your reply.
Sorry for the misunderstanding: the ser_no is not yet in my table; that is the field I want to calculate.
I finished it myself this morning, and it looks almost the same as your version:
RANK() OVER (PARTITION BY CaseID ORDER BY CaseID, Order_Date ASC)
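One caveat: RANK() and ROW_NUMBER() differ when two rows in the same CaseID share an Order_Date. RANK() gives tied rows the same number and then skips ahead, while ROW_NUMBER() always numbers 1, 2, 3, ... A minimal sketch comparing the two (assuming the question's table is named Orders; substitute your own name):
SELECT CaseID,
       Order_Date,
       ROW_NUMBER() OVER (PARTITION BY CaseID ORDER BY Order_Date) AS rn,   -- 1, 2, 3 even on ties
       RANK()       OVER (PARTITION BY CaseID ORDER BY Order_Date) AS rnk   -- ties share a number; the next one is skipped
FROM Orders;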

Creating a view in a non-relational database

I have an issue and I hope that someone can help me out. I work on a poorly designed database and I have no control to change things in it. I have a table "Books", and each book can have one or more authors. Unfortunately the database is not fully relational (please don't ask me why; I have been asking the same question from the beginning). In the table "Books" there are fields called "Author_ID" and "Author_Name", so when a book was written by 2 or 3 authors, their IDs and their names are concatenated in the same record, separated by an asterisk. Here is a demonstration:
ID_BOOK | ID_AUTHOR | NAME AUTHOR             | Adress         | Country
--------+-----------+-------------------------+----------------+----------------
001     | 01        | AuthorU                 | AdrU           | CtryU
002     | 02*03*04  | AuthorX*AuthorY*AuthorZ | AdrX*NULL*AdrZ | NULL*NULL*CtryZ
I need to create a view against this table that would give me this result:
ID_BOOK | ID_AUTHOR | NAME AUTHOR | Adress | Country
--------+-----------+-------------+--------+--------
001     | 01        | AuthorU     | AdrU   | CtryU
002     | 02        | AuthorX     | AdrX   | NULL
002     | 03        | AuthorY     | NULL   | NULL
002     | 04        | AuthorZ     | AdrZ   | CtryZ
I will continue trying to do it myself, and I hope that someone can help me with at least some hints. Many thanks, guys.
After I applied the solution you gave, I ran into this problem. I am trying to solve it and hopefully you can help me. When the SQL query runs, the CLOB fields get disorganized when some of them contain a NULL value. The result should be like the above, but I get the result below:
ID_BOOK | ID_AUTHOR | NAME AUTHOR | Adress | Country
--------+-----------+-------------+--------+--------
001     | 01        | AuthorU     | AdrU   | CtryU
002     | 02        | AuthorX     | AdrX   | CtryZ
002     | 03        | AuthorY     | AdrZ   | NULL
002     | 04        | AuthorZ     | NULL   | NULL
Why does it put the NULL values at the end? Thank you.
In 11g you can use a factored recursive subquery (a recursive CTE) for this:
with data (id_book, id_author, name, item_author, item_name, i) as (
  select id_book, id_author, name,
         regexp_substr(id_author, '[^\*]+', 1, 1) item_author,
         regexp_substr(name, '[^\*]+', 1, 1) item_name,
         2 i
  from books
  union all
  select id_book, id_author, name,
         regexp_substr(id_author, '[^\*]+', 1, i) item_author,
         regexp_substr(name, '[^\*]+', 1, i) item_name,
         i + 1
  from data
  where regexp_substr(id_author, '[^\*]+', 1, i) is not null)
select id_book, item_author, item_name
from data;
fiddle
A couple of weeks ago I answered a similar question here. That answer has an explanation (I hope) of the general approach, so I'll skip the explanation here. This query will do the trick; it uses REGEXP_SUBSTR and leverages its "occurrence" parameter to pick the individual author IDs and names:
SELECT
  ID_Book,
  REGEXP_SUBSTR(ID_Author, '[^*]+', 1, Counter) AS AuthID,
  REGEXP_SUBSTR(Name_Author, '[^*]+', 1, Counter) AS AuthName
FROM Books
CROSS JOIN (
  SELECT LEVEL Counter
  FROM DUAL
  CONNECT BY LEVEL <= (
    SELECT MAX(REGEXP_COUNT(ID_Author, '[^*]+'))
    FROM Books))
WHERE REGEXP_SUBSTR(Name_Author, '[^*]+', 1, Counter) IS NOT NULL
ORDER BY 1, 2
There's a Fiddle with your data plus another row here.
Addendum: the OP has Oracle 9, not 11, so the regular expression functions won't work. Following are instructions for doing the same task without regexes...
Without REGEXP_COUNT, the best way to count authors is to count the asterisks and add one. To count asterisks, take the length of the string, then subtract its length with all the asterisks removed: LENGTH(ID_Author) - LENGTH(REPLACE(ID_Author, '*')).
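For example, a quick sanity check of that arithmetic against a hypothetical literal:
SELECT LENGTH('02*03*04') - LENGTH(REPLACE('02*03*04', '*')) + 1 AS auth_count
FROM DUAL;  -- 8 - 6 + 1 = 3 authors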
Without REGEXP_SUBSTR, you need to use INSTR to find the positions of the asterisks, and then SUBSTR to pull out the author IDs and names. This gets a little complicated - consider these author columns from your original post:
AuthorU
AuthorX*AuthorY*AuthorZ
AuthorX lies between the beginning of the string and the first asterisk.
AuthorY is surrounded by asterisks.
AuthorZ lies between the last asterisk and the end of the string.
AuthorU is all alone and not surrounded by anything.
Because of this, the opening piece (WITH AuthorInfo AS... below) adds an asterisk to the beginning and the end so every author name (and ID) is surrounded by asterisks. It also grabs the author count for each row. For the sample data in your original post, the opening piece will yield this:
ID_Book AuthCount ID_Author Name_Author
------- --------- ---------- -------------------------
001 1 *01* *AuthorU*
002 3 *02*03*04* *AuthorX*AuthorY*AuthorZ*
Then comes the join with the "Counter" table and the SUBSTR machinations to pull out the individual names and IDs. The final query looks like this:
WITH AuthorInfo AS (
  SELECT
    ID_Book,
    LENGTH(ID_Author) - LENGTH(REPLACE(ID_Author, '*')) + 1 AS AuthCount,
    '*' || ID_Author || '*' AS ID_Author,
    '*' || Name_Author || '*' AS Name_Author
  FROM Books
)
SELECT
  ID_Book,
  SUBSTR(ID_Author,
         INSTR(ID_Author, '*', 1, Counter) + 1,
         INSTR(ID_Author, '*', 1, Counter + 1) - INSTR(ID_Author, '*', 1, Counter) - 1) AS AuthID,
  SUBSTR(Name_Author,
         INSTR(Name_Author, '*', 1, Counter) + 1,
         INSTR(Name_Author, '*', 1, Counter + 1) - INSTR(Name_Author, '*', 1, Counter) - 1) AS AuthName
FROM AuthorInfo
CROSS JOIN (
  SELECT LEVEL Counter
  FROM DUAL
  CONNECT BY LEVEL <= (SELECT MAX(AuthCount) FROM AuthorInfo))
WHERE AuthCount >= Counter
ORDER BY ID_Book, Counter
The Fiddle is here
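To see how the INSTR arithmetic pulls out a single token, here is a quick check against a hypothetical padded literal (asterisk occurrences 2 and 3 bracket the second author ID):
SELECT SUBSTR('*02*03*04*',
              INSTR('*02*03*04*', '*', 1, 2) + 1,
              INSTR('*02*03*04*', '*', 1, 3) - INSTR('*02*03*04*', '*', 1, 2) - 1) AS second_token
FROM DUAL;  -- asterisks sit at positions 4 and 7, so this is SUBSTR(..., 5, 2) = '03'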
If you have an authors table, you can do:
select b.id_book, a.id_author, a.NameAuthor
from books b left outer join
     authors a
     on '*' || b.NameAuthor || '*' like '%*' || a.NameAuthor || '*%'
In addition:
SELECT DISTINCT id_book
     , trim(regexp_substr(id_author, '[^*]+', 1, LEVEL)) id_author
     , trim(regexp_substr(author_name, '[^*]+', 1, LEVEL)) author_name
FROM yourtable
CONNECT BY LEVEL <= regexp_count(id_author, '[^*]+')
ORDER BY id_book, id_author
/
ID_BOOK ID_AUTHOR AUTHOR_NAME
------------------------------------
001 01 AuthorU
002 02 AuthorX
002 03 AuthorY
002 04 AuthorZ
003 123 Jane Austen
003 456 David Foster Wallace
003 789 Richard Wright
No REGEXP:
SELECT str, SUBSTR(str, substr_start_pos, substr_end_pos) final_str
FROM
(
SELECT str, substr_start_pos
, (CASE WHEN substr_end_pos <= 0 THEN (Instr(str, '*', 1)-1)
ELSE substr_end_pos END) substr_end_pos
FROM
(
SELECT distinct '02*03*04' AS str
, (Instr('02*03*04', '*', LEVEL)+1) substr_start_pos
, (Instr('02*03*04', '*', LEVEL)-1) substr_end_pos
FROM dual
CONNECT BY LEVEL <= length('02*03*04')
)
ORDER BY substr_start_pos
)
/
STR FINAL_STR
---------------------
02*03*04 02
02*03*04 03
02*03*04 04

Pull a row from a SQL database when the value of a column is changed

I need to pull a row in a select statement from a SQL database if a certain value in a table is changed.
For example, I have a column called price in a Price table. If the user changes the value of price (through an ASP.NET app), I want to select that entire row. This is going to be done in a workflow, and an email is sent to the user containing the row that was changed, AFTER it was changed.
Does this make sense? Can someone point me in the right direction of a procedure or function to use? Thanks.
You could use an SQL trigger to accomplish this.
There is a tutorial (using Price as you described) that shows how to accomplish this here: http://benreichelt.net/blog/2005/12/13/making-a-trigger-fire-on-column-change/
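A minimal sketch of the trigger approach (assuming SQL Server; the audit table and all names here are hypothetical, and the tutorial's own code may differ):
CREATE TABLE dbo.PriceChangeLog (
    ID        int,
    OldPrice  money,
    NewPrice  money,
    ChangedAt datetime2 DEFAULT SYSDATETIME());
GO
CREATE TRIGGER trg_Price_Change
ON dbo.Price
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted/deleted hold the new and old versions of the updated rows;
    -- keep only the rows whose Price actually changed
    INSERT INTO dbo.PriceChangeLog (ID, OldPrice, NewPrice)
    SELECT d.ID, d.Price, i.Price
    FROM inserted i
    JOIN deleted d ON d.ID = i.ID
    WHERE i.Price <> d.Price;
END;
GO
The workflow can then read PriceChangeLog to email the full changed row after the update.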
Well, in order to update a row, you'll have to update that row "WHERE uniqueID = [someid]". Can't you simply run a select immediately after that? (SELECT * FROM [table] WHERE uniqueID = [someid])
Without knowing what your data looks like (or which database this is), it's a little difficult, but assuming you have a history table with a date and an ID that stays the same, like this...
+----+-------+------------+
| ID | PRICE | CHNG_DATE |
+----+-------+------------+
| 1 | 2.5 | 2001-01-01 |
| 1 | 42 | 2001-01-01 |
| 2 | 4 | 2001-01-01 |
| 2 | 4 | 2001-01-01 |
| 3 | 4 | 2001-01-01 |
| 3 | 3 | 2001-01-01 |
| 3 | 2 | 2001-01-01 |
+----+-------+------------+
and your database supports WITH and ROW_NUMBER(), you could write the following:
WITH data AS (
    SELECT id,
           price,
           chng_date,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY chng_date) rn
    FROM price)
SELECT data.id,
       data.price new,
       data_prv.price old,
       data.chng_date
FROM data
INNER JOIN data data_prv
        ON data.id = data_prv.id
       AND data.rn = data_prv.rn + 1
WHERE data.price <> data_prv.price
That would produce this
+----+-----+-----+------------+
| ID | NEW | OLD | CHNG_DATE |
+----+-----+-----+------------+
| 1 | 42 | 2.5 | 2001-01-01 |
| 3 | 3 | 4 | 2001-01-01 |
| 3 | 2 | 3 | 2001-01-01 |
+----+-----+-----+------------+
Demo
If your database supports LAG(), it's even easier (note that the first row per id has old = NULL, and new <> NULL is never true, so that row is filtered out just as in the ROW_NUMBER version):
WITH data AS (
    SELECT id,
           price new,
           chng_date,
           LAG(price) OVER (PARTITION BY id ORDER BY chng_date) old
    FROM price)
SELECT id,
       new,
       old,
       chng_date
FROM data
WHERE new <> old
Demo
