add and subtract by type - sqlite

I have a SQLite table payments:
+------+--------+-------+
| user | amount | type  |
+------+--------+-------+
| AAA  |    100 | plus  |
| AAA  |    200 | plus  |
| AAA  |     50 | minus |
| BBB  |    100 | plus  |
| BBB  |     20 | minus |
| BBB  |      5 | minus |
| CCC  |    200 | plus  |
| CCC  |    300 | plus  |
| CCC  |     25 | minus |
+------+--------+-------+
I need to calculate the sum with type 'plus' and subtract from it the sum with type 'minus' for each user.
The result table should look like this:
+------+--------+
| user | total  |
+------+--------+
| AAA  |    250 |
| BBB  |     75 |
| CCC  |    475 |
+------+--------+
I think that my query is terrible, and I need help to improve it:
select user,
       (select sum(amount) from payments as TABLE1
        WHERE TABLE1.type = 'plus'  AND TABLE1.user = TABLE3.user) -
       (select sum(amount) from payments as TABLE2
        WHERE TABLE2.type = 'minus' AND TABLE2.user = TABLE3.user) as total
from payments as TABLE3
group by user
order by user asc

The type is more easily handled with a CASE expression, and then you can merge the aggregation into the outer query:
SELECT user,
       SUM(CASE type
               WHEN 'plus'  THEN amount
               WHEN 'minus' THEN -amount
           END) AS total
FROM payments
GROUP BY user
ORDER BY user;
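For reference, here is a minimal sketch that reproduces the expected totals in SQLite. The CREATE TABLE and INSERT statements are assumed, since the question does not show them; only the final SELECT is the answer's query:

-- assumed schema and sample data, matching the question's table
CREATE TABLE payments (user TEXT, amount INTEGER, type TEXT);
INSERT INTO payments VALUES
  ('AAA', 100, 'plus'), ('AAA', 200, 'plus'), ('AAA', 50, 'minus'),
  ('BBB', 100, 'plus'), ('BBB', 20, 'minus'), ('BBB', 5, 'minus'),
  ('CCC', 200, 'plus'), ('CCC', 300, 'plus'), ('CCC', 25, 'minus');

-- conditional aggregation: one pass over the table, one row per user
SELECT user,
       SUM(CASE type WHEN 'plus' THEN amount WHEN 'minus' THEN -amount END) AS total
FROM payments
GROUP BY user;
-- returns AAA|250, BBB|75, CCC|475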

Related

Computing Balance Sheet

With the following query:
select b1.Name DrBook, c1.Name DrControl, b2.Name CrBook, c2.Name CrControl, tn.Amount
from Transactions tn
left join Books b1 on b1.Id = tn.DrBook
left join Books b2 on b2.Id = tn.CrBook
left join ControlLedgers c1 on c1.Id = tn.DrControl
left join ControlLedgers c2 on c2.Id = tn.CrControl
I get this result set for Balance Sheet:
+---------------------+-----------+---------------------+-----------+--------+
| DrBook              | DrControl | CrBook              | CrControl | Amount |
+---------------------+-----------+---------------------+-----------+--------+
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Expenses            | Foods     | Current Liabilities | Payables  | 10000  |
| Current Liabilities | Payables  | Current Assets      | Cash      | 5000   |
+---------------------+-----------+---------------------+-----------+--------+
To present the Balance Sheet in my application, what I'm doing right now is issuing two queries and getting two result sets like these:
query 1:
select b1.Name DrBook, c1.Name DrControl, SUM(tn.Amount) Amount
from Transactions tn
left join Books b1 on b1.Id = tn.DrBook
left join ControlLedgers c1 on c1.Id = tn.DrControl
group by DrBook, DrControl
result set 1:
+---------------------+-----------+--------+
| DrBook              | DrControl | Amount |
+---------------------+-----------+--------+
| Current Assets      | Cash      | 500000 |
| Expenses            | Foods     | 10000  |
| Current Liabilities | Payables  | 5000   |
+---------------------+-----------+--------+
query 2:
select b1.Name CrBook, c1.Name CrControl, SUM(tn.Amount) Amount
from Transactions tn
left join Books b1 on b1.Id = tn.CrBook
left join ControlLedgers c1 on c1.Id = tn.CrControl
group by CrBook, CrControl
result set 2:
+---------------------+-----------+--------+
| CrBook              | CrControl | Amount |
+---------------------+-----------+--------+
| Current Assets      | Cash      | 5000   |
| Current Liabilities | Payables  | 10000  |
| Fund                | Initial   | 500000 |
+---------------------+-----------+--------+
Then I subtract result set 2 from result set 1 when the book is Assets or Expenses (here, Current Assets and Expenses), and result set 1 from result set 2 when it is Liabilities, Incomes or Fund (here, Current Liabilities and Fund), to get a final result set like this:
+---------------------+---------------+---------+
| Book                | ControlLedger | Balance |
+---------------------+---------------+---------+
| Current Assets      | Cash          | 495000  |
| Expenses            | Food          | 10000   |
| Current Liabilities | Payables      | 5000    |
| Fund                | Initial       | 500000  |
+---------------------+---------------+---------+
I've tried some CASE expressions to get the final result set with a single SQL query instead of computing it manually in application code, but those didn't work!
EDIT
Here's the definition for the Table:
CREATE TABLE "Transactions"(
"Id" INTEGER NOT NULL,
"Date" TEXT NOT NULL,
"DrBook" INTEGER NOT NULL,
"CrBook" INTEGER NOT NULL,
"DrControl" INTEGER NOT NULL,
"CrControl" INTEGER NOT NULL,
"DrLedger" INTEGER,
"CrLedger" INTEGER,
"DrSubLedger" INTEGER,
"CrSubLedger" INTEGER,
"DrPartyGroup" INTEGER,
"CrPartyGroup" INTEGER,
"DrParty" INTEGER,
"CrParty" INTEGER,
"DrMember" INTEGER,
"CrMember" INTEGER,
"Amount" INTEGER NOT NULL,
"Narration" TEXT,
FOREIGN KEY("DrBook") REFERENCES "Books"("Id"),
FOREIGN KEY("CrBook") REFERENCES "Books"("Id"),
FOREIGN KEY("DrControl") REFERENCES "ControlLedgers"("Id"),
FOREIGN KEY("CrControl") REFERENCES "ControlLedgers"("Id"),
FOREIGN KEY("DrLedger") REFERENCES "Ledgers"("Id"),
FOREIGN KEY("CrLedger") REFERENCES "Ledgers"("Id"),
FOREIGN KEY("DrSubLedger") REFERENCES "SubLedgers"("Id"),
FOREIGN KEY("CrSubLedger") REFERENCES "SubLedgers"("Id"),
FOREIGN KEY("DrPartyGroup") REFERENCES PartyGroups("Id"),
FOREIGN KEY("CrPartyGroup") REFERENCES PartyGroups("Id"),
FOREIGN KEY("DrParty") REFERENCES "Parties"("Id"),
FOREIGN KEY("CrParty") REFERENCES "Parties"("Id"),
FOREIGN KEY("DrMember") REFERENCES "Members"("Id"),
FOREIGN KEY("CrMember") REFERENCES "Members"("Id")
);
For each journal entry I insert one row, and it contains both the debit and credit sides along with the amount. I don't have Dr/CrProduct or Dr/CrServices columns since this has been designed for individuals' and families' bookkeeping and accounting.
For a purchase of food from Mr. A, for example, I pass (1):
Expenses -> Food -> Rice -> Fine Rice A/c Dr. 10000
Current Liabilities -> Payables A/C Cr. 10000
if it's on credit, and when the purchase amount is paid in cash, I pass (2):
Current Liabilities -> Payables A/C Dr. 10000
Current Assets -> Cash -> In Hand -> Emon A/c Cr. 10000
In the table it becomes:
+----+------------+--------+--------+-----------+-----------+----------+----------+-------------+-------------+--------------+--------------+---------+---------+----------+----------+--------+----------------+
| Id | Date       | DrBook | CrBook | DrControl | CrControl | DrLedger | CrLedger | DrSubLedger | CrSubLedger | DrPartyGroup | CrPartyGroup | DrParty | CrParty | DrMember | CrMember | Amount | Narration      |
+----+------------+--------+--------+-----------+-----------+----------+----------+-------------+-------------+--------------+--------------+---------+---------+----------+----------+--------+----------------+
| 3  | 2020-06-15 | 3      | 5      | 9         | 18        | 2        |          | 2           |             |              | 4            |         | 1       |          |          | 10000  | Some Narration |
| 3  | 2020-06-15 | 5      | 2      | 18        | 7         |          | 1        |             | 1           | 4            |              | 1       |         |          |          | 10000  |                |
+----+------------+--------+--------+-----------+-----------+----------+----------+-------------+-------------+--------------+--------------+---------+---------+----------+----------+--------+----------------+
and here's a quick dissection of the first row:
+--------------+----------------+---------------------+
| Columns      | Values         | Mappings            |
+--------------+----------------+---------------------+
| DrBook       | 3              | Expenses            |
| CrBook       | 5              | Current Liabilities |
| DrControl    | 9              | Food                |
| CrControl    | 18             | Payables            |
| DrLedger     | 2              | Rice                |
| DrSubLedger  | 2              | Fine Rice           |
| CrPartyGroup | 4              | Groceries           |
| CrParty      | 1              | Mr. A               |
| Amount       | 10000          |                     |
| Narration    | Some Narration |                     |
+--------------+----------------+---------------------+
Not sure whether the following is the only way:
with t1(Id, Book, Control, Amount) as (
    -- debit legs, summed per book/control
    select tn.DrBook, b1.Name, c1.Name, sum(tn.Amount)
    from Transactions tn
    left join Books b1 on b1.Id = tn.DrBook
    left join ControlLedgers c1 on c1.Id = tn.DrControl
    group by DrBook, DrControl
),
t2 as (
    -- credit legs, summed per book/control and negated
    select tn.CrBook, b1.Name, c1.Name, -1*sum(tn.Amount)
    from Transactions tn
    left join Books b1 on b1.Id = tn.CrBook
    left join ControlLedgers c1 on c1.Id = tn.CrControl
    group by CrBook, CrControl
),
t3 as (
    select * from t1 union all select * from t2
)
select Book, Control,
       -- books with Id <= 3 (assets/expenses in this chart of accounts) keep
       -- their sign; the rest (liabilities/incomes/fund) are flipped
       case when Id <= 3 then sum(Amount)
            else -1*sum(Amount) end Balance
from t3 group by Book, Control order by Id
and it produces exactly what I expected:
+---------------------+----------+---------+
| Book                | Control  | Balance |
+---------------------+----------+---------+
| Current Assets      | Cash     | 495000  |
| Current Liabilities | Payables | 5000    |
| Expenses            | Foods    | 10000   |
| Fund                | Initial  | 500000  |
+---------------------+----------+---------+
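For comparison, here is a sketch of the same idea that flips the sign by book name instead of relying on the Id <= 3 range; it assumes the book names shown above and should produce the same balances:

with legs(Book, Control, Amount) as (
    -- debit legs as positive amounts
    select b.Name, c.Name, tn.Amount
    from Transactions tn
    join Books b on b.Id = tn.DrBook
    join ControlLedgers c on c.Id = tn.DrControl
    union all
    -- credit legs as negative amounts
    select b.Name, c.Name, -tn.Amount
    from Transactions tn
    join Books b on b.Id = tn.CrBook
    join ControlLedgers c on c.Id = tn.CrControl
)
select Book, Control,
       case when Book in ('Current Assets', 'Expenses') then sum(Amount)
            else -sum(Amount) end as Balance
from legs
group by Book, Control;

The trade-off is a hard-coded list of book names rather than a hard-coded Id range; either way, the sign flip encodes which side of the balance sheet a book sits on.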

How to merge records based on consecutive fields in Teradata

I have a source table like the one below:
+---------+--------+---------+------+
| ID      | SEQ_NO | UNIT_ID | D_ID |
+---------+--------+---------+------+
| 7979092 | 1      | 99      | 759  |
| 7979092 | 2      | -1      | 869  |
| 7979092 | 3      | -1      | 927  |
| 7979092 | 4      | -1      | 812  |
| 7979092 | 5      | 99      | 900  |
| 7979092 | 6      | 99      | 891  |
| 7979092 | 7      | -1      | 785  |
| 7979092 | 8      | -1      | 762  |
| 7979092 | 9      | -1      | 923  |
+---------+--------+---------+------+
I have to merge the rows when consecutive UNIT_ID values are the same. We should take MAX(D_ID) when we consolidate the rows. Expected output is:
+---------+---------+------+
| ID      | UNIT_ID | D_ID |
+---------+---------+------+
| 7979092 | 99      | 759  |
| 7979092 | -1      | 927  |
| 7979092 | 99      | 900  |
| 7979092 | -1      | 923  |
+---------+---------+------+
I have tried to find a solution using Teradata ordered analytical functions, but did not find one. I use Teradata 16.
Thank You.
This logic is a bit quirky; it's based on two sequences created by different sort orders:
SELECT
   ID
  ,UNIT_ID
  ,Max(D_ID)
FROM
 (
   SELECT
      ID
     ,SEQ_NO
     ,UNIT_ID
     ,D_ID
      -- assign the same value to consecutive UNIT_IDs
     ,SEQ_NO -
      Row_Number()
      Over(PARTITION BY ID, UNIT_ID
           ORDER BY SEQ_NO) AS grp
   FROM tab
 ) AS dt
GROUP BY 1,2,grp
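To see how the grouping falls out, here are the values the inner query derives for the sample rows above (worked out by hand): SEQ_NO minus the per-UNIT_ID row number stays constant within each run of equal UNIT_IDs.
+--------+---------+------------+-----+
| SEQ_NO | UNIT_ID | Row_Number | grp |
+--------+---------+------------+-----+
| 1      | 99      | 1          | 0   |
| 2      | -1      | 1          | 1   |
| 3      | -1      | 2          | 1   |
| 4      | -1      | 3          | 1   |
| 5      | 99      | 2          | 3   |
| 6      | 99      | 3          | 3   |
| 7      | -1      | 4          | 3   |
| 8      | -1      | 5          | 3   |
| 9      | -1      | 6          | 3   |
+--------+---------+------------+-----+
Grouping by ID, UNIT_ID and grp then yields the four islands (99/0, -1/1, 99/3, -1/3), and MAX(D_ID) over them gives 759, 927, 900 and 923, matching the expected output.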
You can use RESET WHEN to dynamically create groups within the window. Here's one way to do it:
select ID, UNIT_ID,
       max(D_ID) over(
           partition by ID order by SEQ_NO
           reset when UNIT_ID <> UNIT_ID_prev -- Create new group for new value
       ) as D_ID
from (
    select ID, SEQ_NO, UNIT_ID, D_ID,
           lag(UNIT_ID) over(partition by ID order by SEQ_NO) as UNIT_ID_prev -- Previous value
    from MY_TABLE
) src
qualify row_number() over(
    partition by ID order by SEQ_NO
    reset when UNIT_ID <> UNIT_ID_prev -- Match original max() window
) = 1 -- One row per group (similar to DISTINCT)

SQLite - Update a column based on values from two other tables' columns

I am trying to update Data1's ID to Record2's ID when:
Record1's and Record2's Name are the same, and
Weight is greater in Record2.
Record1
| ID | Weight | Name |
|----|--------|------|
| 1  | 10     | a    |
| 2  | 10     | b    |
| 3  | 10     | c    |
Record2
| ID | Weight | Name |
|----|--------|------|
| 4  | 20     | a    |
| 5  | 20     | b    |
| 6  | 20     | c    |
Data1
| ID | Weight |
|----|--------|
| 4  | 40     |
| 5  | 40     |
I have tried the following SQLite query:
update data1
set id =
(select record2.id
from record2,record1
where record1.name=record2.name
and record1.weight<record2.weight)
where id in
(select record1.id
from record1, record2
where record1.name=record2.name
and record1.weight<record2.weight)
Using the above query, Data1's ID is updated to 4 for all records.
NOTE: Record1's ID is the foreign key for Data1.
For the given data set, the following seems to serve the cause (the inner subquery is now correlated with the row of data1 being updated):
update data1
set id =
    (select record2.id
     from record2, record1
     where data1.id = record1.id          -- correlate with the row being updated
       and record1.name = record2.name
       and record1.weight < record2.weight)
where id in
    (select record1.id
     from record1, record2
     where record1.id in (select id from data1)
       and record1.name = record2.name
       and record1.weight < record2.weight)
;
See it in action: SQL Fiddle.
Please comment if and as this requires adjustment / further detail.
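If the SQLite version in use is 3.33.0 or newer (which added UPDATE ... FROM), the same correlation can be written more directly; a sketch under that version assumption, using the same tables:

update data1
set id = record2.id
from record1
join record2 on record2.name = record1.name
            and record2.weight > record1.weight
where data1.id = record1.id;  -- only data1 rows matching a heavier record2 row are touched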

Oracle 11g r2: strange behavior on index

I have the query:
SELECT count(*)
FROM
(
SELECT
TBELENCO.DATA_PROC, TBELENCO.POD, TBELENCO.DESCRIZIONE, TBELENCO.ERROR, TBELENCO.STATO,
TBELENCO.SEZIONE, TBELENCO.NOME_FILE, TBELENCO.ID_CARICAMENTO, TBELENCO.ESITO_OPERAZIONE,
TBELENCO.DES_TIPO_MISURA,
--TBELENCO.RAGIONE_SOCIALE,
--ROW_NUMBER() OVER (ORDER BY TBELENCO.DATA_PROC DESC) R
ROWNUM R
FROM(
SELECT
LOG.DATA_PROC, LOG.POD, LOG.DESCRIZIONE, LOG.ERROR, LOG.STATO,
LOG.SEZIONE, LOG.NOME_FILE, LOG.ID_CARICAMENTO, LOG.ESITO_OPERAZIONE, TM.DES_TIPO_MISURA
--,C.RAGIONE_SOCIALE
--ROW_NUMBER() OVER (ORDER BY LOG.DATA_PROC DESC) R
FROM
MS042_LOADING_LOGS LOG JOIN MS116_MEASURE_TYPES TM ON
TM.ID_TIPO_MISURA=LOG.SEZIONE
-- LEFT JOIN(
-- SELECT CUST.RAGIONE_SOCIALE,STR.POD,RSC.DATA_DA, RSC.DATA_A
-- FROM
-- MS038_METERS STR JOIN MS036_REL_SITES_CUSTOMERS RSC ON
-- STR.ID_SITO=RSC.ID_SITO
-- JOIN MS030_CUSTOMERS CUST ON
-- CUST.ID_CLIENTE=RSC.ID_CLIENTE
-- ) C ON
-- C.POD=LOG.POD
--AND LOG.DATA_PROC BETWEEN C.DATA_DA AND C.DATA_A
WHERE
1=1
--AND LOG.DATA_PROC>=TRUNC(SYSDATE)
AND LOG.DATA_PROC>=TRUNC(SYSDATE)-3
--TO_DATE('01/11/2014', 'DD/MM/YYYY')
) TBELENCO
)
WHERE
R BETWEEN 1 AND 200;
If I execute the query with AND LOG.DATA_PROC>=TRUNC(SYSDATE)-3, Oracle uses the index on the DATA_PROC field of the MS042_LOADING_LOGS (LOG) table; if I instead use AND LOG.DATA_PROC>=TRUNC(SYSDATE)-4, or -5, -6, etc., it uses a full table scan. Why this behavior?
I also executed:
ALTER INDEX MS042_DATA_PROC_IDX REBUILD;
but with no changes.
Thanks,
Igor
--***********************************************************
SELECT count(*)
FROM
(
SELECT
TBELENCO.DATA_PROC, TBELENCO.POD, TBELENCO.DESCRIZIONE, TBELENCO.ERROR, TBELENCO.STATO,
TBELENCO.SEZIONE, TBELENCO.NOME_FILE, TBELENCO.ID_CARICAMENTO, TBELENCO.ESITO_OPERAZIONE,
TBELENCO.DES_TIPO_MISURA,
ROWNUM R
FROM(
SELECT
LOG.DATA_PROC, LOG.POD, LOG.DESCRIZIONE, LOG.ERROR, LOG.STATO,
LOG.SEZIONE, LOG.NOME_FILE, LOG.ID_CARICAMENTO, LOG.ESITO_OPERAZIONE, TM.DES_TIPO_MISURA
FROM
MS042_LOADING_LOGS LOG JOIN MS116_MEASURE_TYPES TM ON
TM.ID_TIPO_MISURA=LOG.SEZIONE
WHERE
1=1
AND LOG.DATA_PROC>=TRUNC(SYSDATE)-1
) TBELENCO
)
WHERE
R BETWEEN 1 AND 200;
Plan hash value: 2191058229
--------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                      |     1 |    13 | 30866   (2)| 00:06:11 |
|   1 |  SORT AGGREGATE                   |                      |     1 |    13 |            |          |
|*  2 |   VIEW                            |                      | 94236 | 1196K | 30866   (2)| 00:06:11 |
|   3 |    COUNT                          |                      |       |       |            |          |
|*  4 |     HASH JOIN                     |                      | 94236 | 1104K | 30866   (2)| 00:06:11 |
|   5 |      INDEX FULL SCAN              | P087_TIPI_MISURE_PK  |    15 |    30 |     1   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID  | MS042_LOADING_LOGS   | 94236 |  920K | 30864   (2)| 00:06:11 |
|*  7 |       INDEX RANGE SCAN            | MS042_DATA_PROC_IDX  | 94236 |       | 25742   (2)| 00:05:09 |
--------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("R"<=200 AND "R">=1)
4 - access("TM"."ID_TIPO_MISURA"="LOG"."SEZIONE")
7 - access(SYS_OP_DESCEND("DATA_PROC")<=SYS_OP_DESCEND(TRUNC(SYSDATE#!)-1))
filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("DATA_PROC"))>=TRUNC(SYSDATE#!)-1)
With AND LOG.DATA_PROC>=TRUNC(SYSDATE)-4 instead, the optimizer switches to a full table scan:
Plan hash value: 69930686
----------------------------------------------------------------------------------------------
| Id  | Operation               | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                      |     1 |    13 | 95921   (1)| 00:19:12 |
|   1 |  SORT AGGREGATE         |                      |     1 |    13 |            |          |
|*  2 |   VIEW                  |                      | 1467K |   18M | 95921   (1)| 00:19:12 |
|   3 |    COUNT                |                      |       |       |            |          |
|*  4 |     HASH JOIN           |                      | 1467K |   16M | 95921   (1)| 00:19:12 |
|   5 |      INDEX FULL SCAN    | P087_TIPI_MISURE_PK  |    15 |    30 |     1   (0)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL  | MS042_LOADING_LOGS   | 1467K |   13M | 95912   (1)| 00:19:11 |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("R"<=200 AND "R">=1)
4 - access("TM"."ID_TIPO_MISURA"="LOG"."SEZIONE")
6 - filter("LOG"."DATA_PROC">=TRUNC(SYSDATE#!)-4)
The larger the fraction of rows a query returns, the more efficient a full table scan is and the less efficient an index becomes. Apparently, Oracle expects that inflection point to come when the query returns more than three days of data. If that estimate is off, the statistics on your table or indexes are probably stale.
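If stale statistics turn out to be the cause, regathering them is the usual first step. A sketch (assuming the table lives in the current schema):

BEGIN
  -- refresh table and index statistics so the optimizer's row estimate
  -- for the DATA_PROC predicate reflects the current data
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'MS042_LOADING_LOGS',
    cascade => TRUE);
END;
/

For a one-off comparison, an index hint such as /*+ INDEX(LOG MS042_DATA_PROC_IDX) */ in the inner query can be used to request the range-scan plan, so the cost and elapsed time of the two plans can be compared directly.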

Select single row per unique field value with SQL Developer

I have thousands of rows of data, a segment of which looks like:
+-------------+-----------+-------+
| Customer ID | Company   | Sales |
+-------------+-----------+-------+
| 45678293    | Sears     | 45    |
| 01928573    | Walmart   | 6     |
| 29385068    | Fortinoes | 2     |
| 49582015    | Walmart   | 1     |
| 49582015    | Joe's     | 1     |
| 19285740    | Target    | 56    |
| 39506783    | Target    | 4     |
| 39506783    | H&M       | 4     |
+-------------+-----------+-------+
In every case where a customer ID occurs more than once, the value in 'Sales' is the same but the value in 'Company' is different (this is true throughout the entire table). I need each value in 'Customer ID' to appear only once, so I need a single row for each customer ID.
In other words, I'd like for the above table to look like:
+-------------+-----------+-------+
| Customer ID | Company   | Sales |
+-------------+-----------+-------+
| 45678293    | Sears     | 45    |
| 01928573    | Walmart   | 6     |
| 29385068    | Fortinoes | 2     |
| 49582015    | Walmart   | 1     |
| 19285740    | Target    | 56    |
| 39506783    | Target    | 4     |
+-------------+-----------+-------+
If anyone knows how I can go about doing this, I'd much appreciate some help.
Thanks!
Well, it would have been helpful if you had posted the SQL that generates that data,
but it might go something like:
SELECT customer_id, MAX(company) AS company, COUNT(*) AS sales_count
FROM Customers <your joins and where clause>
GROUP BY customer_id
This assumes there can be several companies per customer (MAX simply picks one of them) and that the sales data is in a different table.
Hope this helps.
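Since the question states that Sales is identical across a customer's duplicate rows, another option in Oracle (the database behind SQL Developer) is to keep exactly one full row per customer with ROW_NUMBER. A sketch, where sales_table is a placeholder for the actual table and the column names are simplified to customer_id, company and sales:

SELECT customer_id, company, sales
FROM (
    SELECT t.*,
           -- number each customer's rows; which one survives is arbitrary here
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY company) AS rn
    FROM sales_table t
)
WHERE rn = 1;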
