I have a Teradata table like this:
Group_categ id
A 1
A 2
A 3
A 5
A 8
A 9
B 11
C 1
C 2
C 3
C 4
I need to transform it like this:
Group_categ min_id max_id
A 1 2
A 3 5
A 8 9
B 11 11
C 1 2
C 3 4
It seems you want to combine each pair of consecutive rows into a single row:
SELECT Group_categ, Min(id) AS min_id, Max(id) AS max_id
FROM
(
SELECT
Group_categ, id,
-- assign the same value to two consecutive rows: 0,0,1,1,2,2,..
-- -> used in the outer GROUP BY
(Row_Number() Over (PARTITION BY Group_categ ORDER BY id)-1) / 2 AS grp
FROM mytab
) AS dt
GROUP BY Group_categ, grp
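To see how the pairing works, here is the inner derived table on its own (just an illustration, using the same assumed table name mytab):
SELECT
    Group_categ, id,
    (Row_Number() Over (PARTITION BY Group_categ ORDER BY id) - 1) / 2 AS grp
FROM mytab
-- For Group_categ = 'A' (ids 1,2,3,5,8,9) this produces grp = 0,0,1,1,2,2,
-- so the outer GROUP BY Group_categ, grp collapses each pair into one MIN/MAX row.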
My table structure for referrals is as follows; the field ref is unique:
ID pid ref ref_by
1 1 k NAN
2 2 l k
3 3 m k
4 4 n l
And the user table is:
id name
1 john
2 Bob
3 Tim
4 Rob
I need to get the id, pid, ref, and the count of referrals in the next column. Based on the number of referrals, each user will be assigned points at a constant 100 per referral. The result should look like this:
pid name ref number_of_referals points_earned
1 john k 2 200
2 Bob l 1 100
3 Tim m 0 0
4 Rob n 0 0
You need 2 joins:
the first from the users table to referrals, and the second to a query that groups and counts the referrals:
select
    r.pid, u.name, r.ref,
    case when c.counter is null then 0 else c.counter end number_of_referals,
    case when c.counter is null then 0 else c.counter end * 100 points_earned
from users u
inner join referrals r on r.pid = u.id
-- count how many referrals each ref value has brought in
left join (select ref_by, count(*) counter from referrals group by ref_by) c
    on c.ref_by = r.ref
order by r.id
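If you want to try this locally, a minimal setup could look like the following (the DDL and data types are my assumptions from the sample data in the question; NAN in the referrals table is treated as NULL):
-- Assumed minimal schema, built from the sample data above.
CREATE TABLE users (id INT, name VARCHAR(20));
CREATE TABLE referrals (ID INT, pid INT, ref VARCHAR(10), ref_by VARCHAR(10));

INSERT INTO users VALUES (1, 'john'), (2, 'Bob'), (3, 'Tim'), (4, 'Rob');
-- 'NAN' in the question is assumed to mean "no referrer", i.e. NULL
INSERT INTO referrals VALUES (1, 1, 'k', NULL), (2, 2, 'l', 'k'), (3, 3, 'm', 'k'), (4, 4, 'n', 'l');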
I would like to delete old data from a database table, keeping only the last 2 records per ID. For example, I have a table with the following records:
ID TIME DATA
1 2 3
1 3 4
1 4 5
2 2 3
2 3 4
2 4 5
2 5 6
The result I would like to get is this (it must be sorted by TIME):
ID TIME DATA
1 3 4
1 4 5
2 4 5
2 5 6
Thank you for your help.
A solution could be:
select * from tab
where (
    -- number of rows with the same ID and a TIME at least as recent
    select count(*) from tab as t
    where t.ID = tab.ID and t.TIME >= tab.TIME
) <= 2;
For more details, see:
http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/
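Since the goal is to delete the old rows rather than just select the ones to keep, the same correlated count can drive a DELETE. This is only a sketch: the extra derived table works around the restriction in some engines (e.g. MySQL) against reading the table being deleted from, and the (ID, TIME) row-value syntax is an assumption that not every database supports.
-- Delete everything that is NOT among the 2 newest rows per ID.
DELETE FROM tab
WHERE (ID, TIME) IN (
    SELECT ID, TIME FROM (
        SELECT t1.ID, t1.TIME
        FROM tab AS t1
        WHERE (
            SELECT COUNT(*) FROM tab AS t2
            WHERE t2.ID = t1.ID AND t2.TIME >= t1.TIME
        ) > 2
    ) AS old_rows
);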
I have two columns that together can be a primary key:
SID DEFECT_ID
1 1
1 2
1 3
1 4
1 5
DEFECT_ID is auto-increment; SID is static and can take another value like 1, 2, or 3.
I need DEFECT_ID to reseed whenever SID has another value, like this:
SID DEFECT_ID
1 1
1 2
1 3
1 4
1 5
2 1
2 2
2 3
3 1
3 2
1 6
1 7
Can this be set up in SQL Server?
You can use the OVER clause (T-SQL) with ROW_NUMBER to get a sequence number per SID in your table:
SELECT SID
      ,ROW_NUMBER() OVER (
           PARTITION BY SID ORDER BY DEFECT_ID ASC
       ) AS DEFECT_SEQ
FROM table_name
There's probably a way to do what you're asking, but I have a feeling you might be making it more complicated than it needs to be.
Typically you don't try to "re-seed" like that for identity fields. Instead you end up with output like this:
SID DEFECT_ID
1 1
1 2
1 3
1 4
1 5
2 6
2 7
2 8
3 9
3 10
1 11
1 12
This is how relational databases generally do it, and trying to "go against the grain" simply to have the numbers restart for each unique SID is going to be difficult and error-prone.
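If you do want to see a per-SID numbering, a common compromise is to keep the single identity column and derive the restarting number at query time, for example in a view. This is only a sketch; the table and view names are my assumptions:
-- Assumed table: Defects(SID, DEFECT_ID), DEFECT_ID being the global IDENTITY.
CREATE VIEW DefectsPerSid AS
SELECT SID,
       DEFECT_ID,                      -- the real identity value
       ROW_NUMBER() OVER (
           PARTITION BY SID
           ORDER BY DEFECT_ID
       ) AS DEFECT_SEQ                 -- restarts at 1 for every SID
FROM Defects;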
I have a problem which is a bit beyond me (I'm really awfully glad I'm a Beta) involving duplicates (so GROUP BY, HAVING, COUNT), compounded by keeping the solution within the standard functions that came with SQLite. I am using the sqlite3 module from Python.
Example table Workers, columns:
* ID: integer, auto-incrementing
* ColA: integer
* ColB: varchar(20)
* UserType: varchar(20)
* LoadMe: Boolean
(Yes, SQLite's datatypes are nominal)
My data table, Workers, at start looks like:
ID ColA ColB UserType LoadMe
1 1 a Alpha 0
2 1 b Beta 0
3 2 a Alpha 0
4 2 a Beta 0
5 2 b Delta 0
6 2 b Alpha 0
7 1 a Delta 0
8 1 b Epsilon 0
9 1 c Gamma 0
10 4 b Delta 0
11 5 a Alpha 0
12 5 a Beta 0
13 5 b Gamma 0
14 5 a Alpha 0
I would like to enable, for Loading onto trucks at a new factory, all workers who have unique combinations of ColA and ColB. For those combinations of ColA and ColB shared by more than one worker (twins, triplets, etc., perhaps via Bokanovsky's Process), I would like to select only one from each set of duplicates. To make the problem harder, I would additionally like to be able to make that selection on the basis of UserType, in some form of ORDER BY. I may wish to select the first "duplicate" with a UserType of "Alpha," to work on a frightfully clever problem, or ORDER BY UserType DESC, that I may issue an order for black tunics for the lowest of the workers.
You can see that IDs 9, 10, and 13 have unique combinations of ColA and ColB and are most easily identified. The 1-a, 1-b, 2-a, 2-b, and 5-a combinations, however, have duplicates within them.
My current process, as it stands so far:
0) Everyone comes with a unique ID number. This is done at birth.
1) SET all Workers to LoadMe = 1.
UPDATE Workers
SET LoadMe = 1
2) Find my duplicates based on their similarity in two columns (GROUP BY ColA, ColB):
SELECT Wk1.*
FROM Workers AS Wk1
INNER JOIN (
SELECT ColA, ColB
FROM Workers
GROUP BY ColA, ColB
HAVING COUNT(*) > 1
) AS Wk2
ON Wk1.ColA = Wk2.ColA
AND Wk1.ColB = Wk2.ColB
ORDER BY ColA, ColB
3) SET all of my duplicates to LoadMe = 0.
UPDATE Workers
SET LoadMe = 0
WHERE ID IN (
SELECT Wk1.ID
FROM Workers AS Wk1
INNER JOIN (
SELECT ColA, ColB
FROM Workers
GROUP BY ColA, ColB
HAVING COUNT(*) > 1
) AS Wk2
ON Wk1.ColA = Wk2.ColA
AND Wk1.ColB = Wk2.ColB
)
4) For each set of duplicates in my GROUP BY, ORDERed BY UserType, SELECT only one, the first in the list, to have LoadMe SET to 1.
This table would look like:
ID ColA ColB UserType LoadMe
1 1 a Alpha 1
2 1 b Beta 1
3 2 a Alpha 1
4 2 a Beta 0
5 2 b Delta 0
6 2 b Alpha 1
7 1 a Delta 0
8 1 b Epsilon 0
9 1 c Gamma 1
10 4 b Delta 1
11 5 a Alpha 1
12 5 a Beta 0
13 5 b Gamma 1
14 5 a Alpha 0
ORDERed BY ColA, ColB, UserType, then ID, and broken out by the GROUP BY columns, (and finally spaced for clarity) that same data might look like:
ID ColA ColB UserType LoadMe
1 1 a Alpha 1
7 1 a Delta 0
2 1 b Beta 1
8 1 b Epsilon 0
9 1 c Gamma 1
3 2 a Alpha 1
4 2 a Beta 0
6 2 b Alpha 1
5 2 b Delta 0
10 4 b Delta 1
11 5 a Alpha 1
14 5 a Alpha 0
12 5 a Beta 0
13 5 b Gamma 1
I am confounded on the last step and feel like an Epsilon-minus semi-moron. I had previously been pulling the duplicates out of the database into program space and working within Python, but this situation arises not infrequently and I would like to more permanently solve this.
I like to break a problem like this up a bit. The first step is to identify the unique ColA,ColB pairs:
SELECT ColA,ColB FROM Workers GROUP BY ColA,ColB
Now for each of these pairs you want to find the highest-priority record. A join won't work because you'll end up with multiple records for each unique pair, but a subquery will:
SELECT ColA,ColB,
(SELECT id FROM Workers w1
WHERE w1.ColA=w2.ColA AND w1.ColB=w2.ColB
ORDER BY UserType LIMIT 1) AS id
FROM Workers w2 GROUP BY ColA,ColB;
You can change the ORDER BY clause in the subquery to control the priority. LIMIT 1 ensures that the subquery returns only one record (otherwise SQLite will take one of the records matching the WHERE clause, although I'm not sure which one is guaranteed).
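For instance, to prefer the "Alpha" workers explicitly (rather than relying on alphabetical order happening to put "Alpha" first), the subquery's ORDER BY can be extended; the CASE ranking and the ID tie-breaker here are my own additions, not part of the original query:
SELECT ColA, ColB,
       (SELECT id FROM Workers w1
        WHERE w1.ColA = w2.ColA AND w1.ColB = w2.ColB
        ORDER BY CASE WHEN UserType = 'Alpha' THEN 0 ELSE 1 END,  -- Alphas first
                 UserType,                                        -- then alphabetical
                 id                                               -- then lowest ID as tie-breaker
        LIMIT 1) AS id
FROM Workers w2
GROUP BY ColA, ColB;
With your sample data, the tie between the two Alphas for 5-a (IDs 11 and 14) then falls to the lower ID, 11, which matches your expected table.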
Either way, the result is a list of records to be loaded, with ColA, ColB, and id. I would probably work directly from that and get rid of LoadMe, but if you want to keep it you could do this:
BEGIN TRANSACTION;
UPDATE Workers SET LoadMe=0;
UPDATE Workers SET LoadMe=1
WHERE id IN (SELECT
(SELECT id FROM Workers w1
WHERE w1.ColA=w2.ColA AND w1.ColB=w2.ColB
ORDER BY UserType LIMIT 1) AS id
FROM Workers w2 GROUP BY ColA,ColB);
COMMIT;
That clears the LoadMe flag and then sets it to 1 for each of the records returned by our last query. The transaction guarantees that this all takes place or fails as one step and never leaves your LoadMe fields in an inconsistent state.
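Afterwards you can sanity-check the result (just a verification query against the same Workers table):
SELECT ID, ColA, ColB, UserType, LoadMe
FROM Workers
ORDER BY ColA, ColB, UserType, ID;
-- Each distinct ColA/ColB pair should now have exactly one row with LoadMe = 1.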