I have 6 columns, each containing one of the values 0, 1, 2, or 3. I want to display the result such that 0 represents SUCCESS, 1 or 2 represents FAILURE, and 3 represents NOT APPLICABLE. So if the values in the DB are:
col A | col B | col C | col D | col E | col F
0 | 1 | 2 | 0 | 3 | 2
Output should be :
col A | col B | col C | col D | col E | col F
S | F | F | S | NA | F
Is it possible to do it through decode by selecting all the columns at once rather than selecting them individually?
If I understand your question correctly, it sounds like you just need a case expression (or decode, if you prefer, but that's less self-documenting than a case expression), along the lines of:
case when some_col = 0 then 'S'
when some_col in (1, 2) then 'F'
...
else to_char(some_col) -- replace with whatever you want the output to be if none of the above conditions are met (to_char avoids a datatype mismatch with the string branches)
end
or maybe:
case some_col
when 0 then 'S'
when 1 then 'F'
...
else to_char(some_col) -- replace with whatever you want the output to be if none of the above conditions are met (to_char avoids a datatype mismatch with the string branches)
end
So your query would look something like:
select case ...
end col_a,
...
case ...
end col_f
from your_table;
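The question is Oracle-flavoured, but the same CASE pattern can be sanity-checked in any engine; here is a sketch using SQLite through Python's sqlite3 (table and column names taken from the example above, the ELSE fallback is my own choice):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (col_a, col_b, col_c, col_d, col_e, col_f)")
conn.execute("INSERT INTO your_table VALUES (0, 1, 2, 0, 3, 2)")

# Build one CASE expression per column: 0 -> S, 1/2 -> F, 3 -> NA
cols = ["col_a", "col_b", "col_c", "col_d", "col_e", "col_f"]
case = ("CASE WHEN {c} = 0 THEN 'S' "
        "WHEN {c} IN (1, 2) THEN 'F' "
        "WHEN {c} = 3 THEN 'NA' "
        "ELSE CAST({c} AS TEXT) END AS {c}")
sql = "SELECT " + ", ".join(case.format(c=c) for c in cols) + " FROM your_table"
row = conn.execute(sql).fetchone()
print(row)  # ('S', 'F', 'F', 'S', 'NA', 'F')
```

Generating the six CASE expressions in a loop keeps the query honest without writing each branch out by hand.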
Is it possible to do it through decode by selecting all the columns at once rather than selecting them individually?
No
However, besides using pivot, the only solution I see would be using PL/SQL:
1. This is how I simulated your table:
SELECT *
FROM (WITH tb1 (col_a, col_b, col_c, col_d, col_e, col_f) AS
(SELECT 0, 1, 2, 0, 3, 2 FROM DUAL)
SELECT *
FROM tb1)
2. I would append the columns together with a comma between them and save them into a table of strings:
SELECT col_a || ',' || col_b || ',' || col_c || ',' || col_d || ',' || col_e || ',' || col_f
FROM (WITH tb1 (col_a, col_b, col_c, col_d, col_e, col_f) AS (SELECT 0, 1, 2, 0, 3, 2 FROM DUAL)
SELECT *
FROM tb1)
3. Then I would use REPLACE and REGEXP_REPLACE to replace your values, one row at a time:
SELECT REPLACE (REGEXP_REPLACE (REPLACE ('0,1,2,0,3,2', '0', 'S'), '[1-2]', 'F'), '3', 'NA') COL_STR
FROM DUAL
4. Using dynamic SQL, I would update the table using rowid (or whatever you intend to do). I made this SQL, which separates the string back into columns:
SELECT REGEXP_SUBSTR (COL_STR, '[^,]+', 1, 1) AS COL_A,
REGEXP_SUBSTR (COL_STR, '[^,]+', 1, 2) AS COL_B,
REGEXP_SUBSTR (COL_STR, '[^,]+', 1, 3) AS COL_C,
REGEXP_SUBSTR (COL_STR, '[^,]+', 1, 4) AS COL_D,
REGEXP_SUBSTR (COL_STR, '[^,]+', 1, 5) AS COL_E,
REGEXP_SUBSTR (COL_STR, '[^,]+', 1, 6) AS COL_F
FROM tst1
All of this is very tedious and it could take some time. Using DECODE or CASE would be easier to look at and interpret and thus easier to maintain.
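The replace chain in step 3 can be checked quickly outside the database; here is a Python equivalent of that nested REPLACE/REGEXP_REPLACE, using the sample string from the answer:

```python
import re

col_str = "0,1,2,0,3,2"
step1 = col_str.replace("0", "S")     # REPLACE(..., '0', 'S')
step2 = re.sub(r"[1-2]", "F", step1)  # REGEXP_REPLACE(..., '[1-2]', 'F')
step3 = step2.replace("3", "NA")      # REPLACE(..., '3', 'NA')
print(step3)  # S,F,F,S,NA,F
```

Note that the order matters: the 0s must be replaced before [1-2], and 3 last, or the substitutions would clobber each other.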
Related
I have some data that looks like this:
UserID Category
------ --------
1 a
1 b
2 c
3 b
3 a
3 c
I'd like to binary-encode this, grouped by UserID: three different values exist in Category, so a binary encoding would be something like:
UserID encoding
------ --------
1 "1, 1, 0"
2 "0, 0, 1"
3 "1, 1, 1"
i.e., all three values are present for UserID = 3, so the corresponding vector is "1, 1, 1".
Is there a way to do this without doing a bunch of CASE WHEN statements? There may be dozens of possible values in Category.
Cross join the distinct users to the distinct categories and left join the result to the table.
Then use the GROUP_CONCAT() window function, which supports an ORDER BY clause, to collect the 0s and 1s:
WITH
users AS (SELECT DISTINCT UserID FROM tablename),
categories AS (
SELECT DISTINCT Category, DENSE_RANK() OVER (ORDER BY Category) rn
FROM tablename
),
cte AS (
SELECT u.UserID, c.rn,
'"' || GROUP_CONCAT(t.UserID IS NOT NULL)
OVER (PARTITION BY u.UserID ORDER BY c.rn) || '"' encoding
FROM users u CROSS JOIN categories c
LEFT JOIN tablename t
ON t.UserID = u.UserID AND t.Category = c.Category
)
SELECT DISTINCT userID,
FIRST_VALUE(encoding) OVER (PARTITION BY UserID ORDER BY rn DESC) encoding
FROM cte
ORDER BY userID
This will work for any number of categories.
Results:
UserID encoding
------ --------
1      "1,1,0"
2      "0,0,1"
3      "1,1,1"
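The query above runs unchanged on SQLite 3.25+ (window functions required); here is a quick harness through Python's sqlite3, with the sample data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablename (UserID INTEGER, Category TEXT)")
conn.executemany("INSERT INTO tablename VALUES (?, ?)",
                 [(1, 'a'), (1, 'b'), (2, 'c'), (3, 'b'), (3, 'a'), (3, 'c')])

rows = conn.execute("""
    WITH
    users AS (SELECT DISTINCT UserID FROM tablename),
    categories AS (
      SELECT DISTINCT Category, DENSE_RANK() OVER (ORDER BY Category) rn
      FROM tablename
    ),
    cte AS (
      SELECT u.UserID, c.rn,
             '"' || GROUP_CONCAT(t.UserID IS NOT NULL)
                    OVER (PARTITION BY u.UserID ORDER BY c.rn) || '"' encoding
      FROM users u CROSS JOIN categories c
      LEFT JOIN tablename t
        ON t.UserID = u.UserID AND t.Category = c.Category
    )
    SELECT DISTINCT UserID,
           FIRST_VALUE(encoding) OVER (PARTITION BY UserID ORDER BY rn DESC) encoding
    FROM cte
    ORDER BY UserID
""").fetchall()
print(rows)  # [(1, '"1,1,0"'), (2, '"0,0,1"'), (3, '"1,1,1"')]
```

The running GROUP_CONCAT frame means only the last row per user holds the full vector, which is why the outer query picks the row with the highest rn.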
First create an encoding table to explicitly establish the order of the categories in the bitmap:
create table e (Category text, Encoding int);
insert into e values ('a', 1), ('b', 2), ('c', 4);
Then generate a list of users u (cross) joined with the encoding table e to get a fully populated (UserId, Category, Encoding) table. Then left join the fully populated table with the user-supplied data t. The right-hand side t can now be used to decide whether we need to set a bit or not:
select
u.UserId,
'"' ||
group_concat(case when t.UserId is null then 0 else 1 end, ', ')
|| '"' 'encoding'
from
(select distinct UserID from t) u
join e
left natural join t
group by 1
order by e.Encoding
and it gives the expected result:
1|"1, 1, 0"
2|"0, 0, 1"
3|"1, 1, 1"
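Because the Encoding values are powers of two, the same two tables can also produce a single integer bitmask per user instead of a string. This variant is my own query, not part of the answer above; run here on SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (UserID INTEGER, Category TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'a'), (1, 'b'), (2, 'c'), (3, 'b'), (3, 'a'), (3, 'c')])
conn.execute("CREATE TABLE e (Category TEXT, Encoding INTEGER)")
conn.executemany("INSERT INTO e VALUES (?, ?)", [('a', 1), ('b', 2), ('c', 4)])

# Summing the power-of-two encodings yields a bitmask of the categories present
rows = conn.execute("""
    SELECT t.UserID, SUM(e.Encoding) AS bitmask
    FROM t JOIN e ON e.Category = t.Category
    GROUP BY t.UserID
    ORDER BY t.UserID
""").fetchall()
print(rows)  # [(1, 3), (2, 4), (3, 7)]
```

User 3 has all three categories, so its mask is 1 + 2 + 4 = 7.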
I am trying to achieve these things:
Get the most recent data for certain fields (based on timestamp) -> call this latestRequest
Get the previous data for these fields (basically timestamp < latestRequest.timestamp) -> call this previousRequest
Compute the difference between latestRequest and previousRequest
This is what I have come up with so far:
let LatestRequest=requests
| where operation_Name == "SearchServiceFieldMonitor"
| extend Mismatch = split(tostring(customDimensions.IndexerMismatch), " in ")
| extend difference = toint(Mismatch[0])
, field = tostring(Mismatch[1])
, indexer = tostring(Mismatch[2])
, index = tostring(Mismatch[3])
, service = tostring(Mismatch[4])
| summarize MaxTime=todatetime(max(timestamp)) by service,index,indexer;
let previousRequest = requests
| where operation_Name == "SearchServiceFieldMonitor"
| extend Mismatch = split(tostring(customDimensions.IndexerMismatch), " in ")
| extend difference = toint(Mismatch[0])
, field = tostring(Mismatch[1])
, indexer = tostring(Mismatch[2])
, index = tostring(Mismatch[3])
, service = tostring(Mismatch[4])
|join (LatestRequest) on indexer, index,service
|where timestamp <LatestRequest.MaxTime
However, I get this error from this query:
Ensure that expression: LatestRequest.MaxTime is indeed a simple name
I tried to use toDateTime(LatestRequest.MaxTime), but it doesn't make any difference. What am I doing wrong?
The error you get is because you can't refer to a column in a table using dot notation; you should simply use the column name, since the result of a join operator is a table with the applicable columns from both sides of the join.
An alternative to join might be using the row_number() and prev() functions. You can find the last record and the one before it by ordering the rows by key and timestamp, and then calculate the difference between the current row and the row before it.
Here is an example:
datatable(timestamp:datetime, requestId:int, val:int)
[datetime(2021-02-20 10:00), 1, 5,
datetime(2021-02-20 11:00), 1, 6,
datetime(2021-02-20 12:00), 1, 8,
datetime(2021-02-20 10:00), 2, 10,
datetime(2021-02-20 11:00), 2, 20,
datetime(2021-02-20 12:00), 2, 30,
datetime(2021-02-20 13:00), 2, 40,
datetime(2021-02-20 13:00), 3, 100
]
| order by requestId asc, timestamp desc
| extend rn = row_number(0, requestId !=prev(requestId))
| where rn <= 1
| order by requestId, rn desc
| extend diff = iif(prev(rn) == 1, val - prev(val), val)
| where rn == 0
| project-away rn
The results are:
timestamp        | requestId | val | diff
2021-02-20 12:00 | 1         | 8   | 2
2021-02-20 13:00 | 2         | 40  | 10
2021-02-20 13:00 | 3         | 100 | 100
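Kusto isn't runnable here, but the latest-minus-previous logic is easy to mimic in plain Python against the same sample rows, which confirms the per-request differences (2, 10, and 100 for the single-record request):

```python
from collections import defaultdict

rows = [  # (timestamp, requestId, val) from the datatable above
    ("2021-02-20 10:00", 1, 5), ("2021-02-20 11:00", 1, 6), ("2021-02-20 12:00", 1, 8),
    ("2021-02-20 10:00", 2, 10), ("2021-02-20 11:00", 2, 20),
    ("2021-02-20 12:00", 2, 30), ("2021-02-20 13:00", 2, 40),
    ("2021-02-20 13:00", 3, 100),
]

by_request = defaultdict(list)
for ts, rid, val in rows:
    by_request[rid].append((ts, val))

diffs = {}
for rid, vals in by_request.items():
    vals.sort()                # ascending by timestamp
    if len(vals) >= 2:         # latest minus previous
        diffs[rid] = vals[-1][1] - vals[-2][1]
    else:                      # only one record: keep its value, like the iif() fallback
        diffs[rid] = vals[-1][1]
print(diffs)  # {1: 2, 2: 10, 3: 100}
```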
SQLite
I want to convert a single row value, separated by ',', into multiple rows.
Example :
Single_Row
6,7,8,9,10,11,12,13,14,15,16
Result must be :
MultipleRows
6
7
8
9
10
11
12
13
14
15
16
I tried doing it with the substr function, but I'm getting unexpected results:
select
numbers.n,
substr(CbahiHSSpecialtyUnits.units,numbers.n,1)
from
numbers inner join CbahiHSSpecialtyUnits
on LENGTH(CbahiHSSpecialtyUnits.units)
- LENGTH(REPLACE(CbahiHSSpecialtyUnits.units, ',', ''))>=numbers.n-1
WHERE HsSubStandardID=22 and SpecialtyID=2 and numbers.n>0
order by numbers.n;
One good thing is that I'm getting the number of rows correct, but the values are split incorrectly.
Please note the numbers table is one I created for indexing purposes, with the help of this post:
SQL split values to multiple rows
You can do it with a recursive CTE:
WITH cte AS (
SELECT SUBSTR(Units, 1, INSTR(Units || ',', ',') - 1) col,
SUBSTR(Units, INSTR(Units || ',', ',') + 1) value
FROM CbahiHSSpecialtyUnits
WHERE HsSubStandardID=22 AND SpecialtyID = 2
UNION ALL
SELECT SUBSTR(value, 1, INSTR(value || ',', ',') - 1),
SUBSTR(value, INSTR(value || ',', ',') + 1)
FROM cte
WHERE LENGTH(value) > 0
)
SELECT col
FROM cte
WHERE col + 0 > 0
Or, if you know the upper limit of the numbers (say, 20) and there are no duplicates among them:
WITH cte AS (SELECT 1 col UNION ALL SELECT col + 1 FROM cte WHERE col < 20)
SELECT c.col
FROM cte c INNER JOIN CbahiHSSpecialtyUnits u
ON ',' || u.Units || ',' LIKE '%,' || c.col || ',%'
WHERE HsSubStandardID=22 AND SpecialtyID = 2
Results:
col
6
7
8
9
10
11
12
13
14
15
16
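Both queries above run on stock SQLite; here is the recursive-CTE version exercised end to end through Python's sqlite3 (the table layout is inferred from the question: Units plus the two filter columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE CbahiHSSpecialtyUnits
                (HsSubStandardID INTEGER, SpecialtyID INTEGER, Units TEXT)""")
conn.execute("INSERT INTO CbahiHSSpecialtyUnits "
             "VALUES (22, 2, '6,7,8,9,10,11,12,13,14,15,16')")

rows = conn.execute("""
    WITH cte AS (
      SELECT SUBSTR(Units, 1, INSTR(Units || ',', ',') - 1) col,
             SUBSTR(Units, INSTR(Units || ',', ',') + 1) value
      FROM CbahiHSSpecialtyUnits
      WHERE HsSubStandardID = 22 AND SpecialtyID = 2
      UNION ALL
      SELECT SUBSTR(value, 1, INSTR(value || ',', ',') - 1),
             SUBSTR(value, INSTR(value || ',', ',') + 1)
      FROM cte
      WHERE LENGTH(value) > 0
    )
    SELECT col FROM cte WHERE col + 0 > 0
""").fetchall()
print([r[0] for r in rows])
# ['6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16']
```

The values come back as text (SUBSTR returns strings); the `col + 0 > 0` filter relies on SQLite's implicit numeric conversion.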
I got the solution from this post:
http://www.samuelbosch.com/2018/02/split-into-rows-sqlite.html
WITH RECURSIVE split(predictorset_id, predictor_name, rest) AS (
SELECT CbahiHSSpecialtyUnits.SpclUnitSerial, '', units || ',' FROM CbahiHSSpecialtyUnits WHERE HsSubStandardID=22 and SpecialtyID=2
UNION ALL
SELECT predictorset_id,
substr(rest, 0, instr(rest, ',')),
substr(rest, instr(rest, ',')+1)
FROM split
WHERE rest <> ''
)
SELECT predictorset_id, predictor_name FROM split WHERE predictor_name <> '';
I'm struggling to convert
a | a1,a2,a3
b | b1,b3
c | c2,c1
to:
a | a1
a | a2
a | a3
b | b1
b | b2
c | c2
c | c1
Here are data in sql format:
CREATE TABLE data(
"one" TEXT,
"many" TEXT
);
INSERT INTO "data" VALUES('a','a1,a2,a3');
INSERT INTO "data" VALUES('b','b1,b3');
INSERT INTO "data" VALUES('c','c2,c1');
The solution is probably a recursive Common Table Expression.
Here's an example which does something similar for a single row:
WITH RECURSIVE list( element, remainder ) AS (
SELECT NULL AS element, '1,2,3,4,5' AS remainder
UNION ALL
SELECT
CASE
WHEN INSTR( remainder, ',' )>0 THEN
SUBSTR( remainder, 0, INSTR( remainder, ',' ) )
ELSE
remainder
END AS element,
CASE
WHEN INSTR( remainder, ',' )>0 THEN
SUBSTR( remainder, INSTR( remainder, ',' )+1 )
ELSE
NULL
END AS remainder
FROM list
WHERE remainder IS NOT NULL
)
SELECT * FROM list;
(originally from this blog post: https://blog.expensify.com/2015/09/25/the-simplest-sqlite-common-table-expression-tutorial)
It produces:
element | remainder
-------------------
NULL | 1,2,3,4,5
1 | 2,3,4,5
2 | 3,4,5
3 | 4,5
4 | 5
5 | NULL
The problem is thus to apply this to each row in the table.
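That single-row expansion can be verified as-is, for instance through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE list(element, remainder) AS (
      SELECT NULL AS element, '1,2,3,4,5' AS remainder
      UNION ALL
      SELECT CASE WHEN INSTR(remainder, ',') > 0
                  THEN SUBSTR(remainder, 0, INSTR(remainder, ','))
                  ELSE remainder END,
             CASE WHEN INSTR(remainder, ',') > 0
                  THEN SUBSTR(remainder, INSTR(remainder, ',') + 1)
                  ELSE NULL END
      FROM list
      WHERE remainder IS NOT NULL
    )
    SELECT * FROM list
""").fetchall()
print(rows)
# [(None, '1,2,3,4,5'), ('1', '2,3,4,5'), ('2', '3,4,5'),
#  ('3', '4,5'), ('4', '5'), ('5', None)]
```

Each recursion step peels off the element before the first comma and carries the rest forward, matching the table shown above.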
Yes, a recursive common table expression is the solution:
with x(one, firstone, rest) as (
select one, substr(many, 1, instr(many, ',')-1) as firstone, substr(many, instr(many, ',')+1) as rest
from data where many like '%,%'
UNION ALL
select one, substr(rest, 1, instr(rest, ',')-1) as firstone, substr(rest, instr(rest, ',')+1) as rest
from x where rest like '%,%' LIMIT 200
)
select one, firstone from x
UNION ALL select one, rest from x where rest not like '%,%'
ORDER by one;
Output:
a|a1
a|a2
a|a3
b|b1
b|b3
c|c2
c|c1
Check my answer in How to split comma-separated value in SQLite?.
This will give you the transformation in a single query rather than having to apply to each row.
-- using your data table, assuming that b3 is supposed to be b2
WITH split(one, many, str) AS (
SELECT one, '', many||',' FROM data
UNION ALL SELECT one,
substr(str, 0, instr(str, ',')),
substr(str, instr(str, ',')+1)
FROM split WHERE str !=''
) SELECT one, many FROM split WHERE many!='' ORDER BY one;
a|a1
a|a2
a|a3
b|b1
b|b2
c|c2
c|c1
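Running that query against the data exactly as inserted in the question (so b3 stays b3, rather than the assumed b2) looks like this through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE data("one" TEXT, "many" TEXT)')
conn.executemany("INSERT INTO data VALUES (?, ?)",
                 [('a', 'a1,a2,a3'), ('b', 'b1,b3'), ('c', 'c2,c1')])

rows = conn.execute("""
    WITH split(one, many, str) AS (
      SELECT one, '', many || ',' FROM data
      UNION ALL
      SELECT one,
             substr(str, 0, instr(str, ',')),
             substr(str, instr(str, ',') + 1)
      FROM split WHERE str != ''
    )
    SELECT one, many FROM split WHERE many != '' ORDER BY one
""").fetchall()
print(sorted(rows))
# [('a', 'a1'), ('a', 'a2'), ('a', 'a3'), ('b', 'b1'),
#  ('b', 'b3'), ('c', 'c1'), ('c', 'c2')]
```

The seed row contributes an empty `many`, which the final `WHERE many != ''` filters back out.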
I need to select all rows (within a range) which share a common value in a column.
For example (starting from the last row),
I want to select all of the rows where _user_id == 1 until _user_id != 1.
In this case that selects rows [4, 5, 6]:
+------------------------+
| _id _user_id amount |
+------------------------+
| 1 1 777 |
| 2 2 1 |
| 3 2 11 |
| 4 1 10 |
| 5 1 100 |
| 6 1 101 |
+------------------------+
/*Create the table*/
CREATE TABLE IF NOT EXISTS t1 (
_id INTEGER PRIMARY KEY AUTOINCREMENT,
_user_id INTEGER,
amount INTEGER);
/*Add the datas*/
INSERT INTO t1 VALUES(1, 1, 777);
INSERT INTO t1 VALUES(2, 2, 1);
INSERT INTO t1 VALUES(3, 2, 11);
INSERT INTO t1 VALUES(4, 1, 10);
INSERT INTO t1 VALUES(5, 1, 100);
INSERT INTO t1 VALUES(6, 1, 101);
/*Check the datas*/
SELECT * FROM t1;
1|1|777
2|2|1
3|2|11
4|1|10
5|1|100
6|1|101
In my attempt I use a Common Table Expression to group the results by _user_id. This gives the _id of the last row containing each unique value (e.g. SELECT _id FROM t1 GROUP BY _user_id LIMIT 2; will produce: [6, 3]).
I then use those two values to select a range, where LIMIT 1 OFFSET 1 is the lower end (3) and LIMIT 1 is the upper end (6):
WITH test AS (
SELECT _id FROM t1 GROUP BY _user_id LIMIT 2
) SELECT * FROM t1 WHERE _id BETWEEN 1+ (
SELECT * FROM test LIMIT 1 OFFSET 1
) and (
SELECT * FROM test LIMIT 1
);
Output:
4|1|10
5|1|100
6|1|101
This appears to work OK at selecting the last "island", but what I really need is a way to select the n-th island.
Is there a way to generate a query capable of producing outputs like these when provided a parameter n?:
island (n=1):
4|1|10
5|1|100
6|1|101
island (n=2):
2|2|1
3|2|11
island (n=3):
1|1|777
Thanks!
SQL tables are unordered, so the only way to search for islands is to search for consecutive _id values:
WITH RECURSIVE t1_with_islands(_id, _user_id, amount, island_number) AS (
SELECT _id,
_user_id,
amount,
1
FROM t1
WHERE _id = (SELECT max(_id)
FROM t1)
UNION ALL
SELECT t1._id,
t1._user_id,
t1.amount,
CASE WHEN t1._user_id = t1_with_islands._user_id
THEN island_number
ELSE island_number + 1
END
FROM t1
JOIN t1_with_islands ON t1._id = (SELECT max(_id)
FROM t1
WHERE _id < t1_with_islands._id)
)
SELECT *
FROM t1_with_islands
ORDER BY _id;
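To pull out the n-th island, the recursive query can be wrapped and filtered on island_number; here is a quick check with the question's data via Python's sqlite3 (the `?` parameter and the filtering wrapper are my own additions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t1 (
    _id INTEGER PRIMARY KEY AUTOINCREMENT,
    _user_id INTEGER,
    amount INTEGER)""")
conn.executemany("INSERT INTO t1 VALUES (?, ?, ?)",
                 [(1, 1, 777), (2, 2, 1), (3, 2, 11),
                  (4, 1, 10), (5, 1, 100), (6, 1, 101)])

# Same recursive walk as the answer, plus a WHERE on island_number
island_query = """
    WITH RECURSIVE t1_with_islands(_id, _user_id, amount, island_number) AS (
      SELECT _id, _user_id, amount, 1
      FROM t1
      WHERE _id = (SELECT max(_id) FROM t1)
      UNION ALL
      SELECT t1._id, t1._user_id, t1.amount,
             CASE WHEN t1._user_id = t1_with_islands._user_id
                  THEN island_number
                  ELSE island_number + 1 END
      FROM t1
      JOIN t1_with_islands
        ON t1._id = (SELECT max(_id) FROM t1
                     WHERE _id < t1_with_islands._id)
    )
    SELECT _id, _user_id, amount
    FROM t1_with_islands
    WHERE island_number = ?
    ORDER BY _id
"""
print(conn.execute(island_query, (2,)).fetchall())  # [(2, 2, 1), (3, 2, 11)]
```

Islands are numbered from the last row backwards, matching the question's n=1, n=2, n=3 examples.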