Computing Balance Sheet - SQLite

With the following query:
select b1.Name DrBook, c1.Name DrControl, b2.Name CrBook, c2.Name CrControl, tn.Amount
from Transactions tn
left join Books b1 on b1.Id = tn.DrBook
left join Books b2 on b2.Id = tn.CrBook
left join ControlLedgers c1 on c1.Id = tn.DrControl
left join ControlLedgers c2 on c2.Id = tn.CrControl
I get this result set for Balance Sheet:
+---------------------+-----------+---------------------+-----------+--------+
| DrBook              | DrControl | CrBook              | CrControl | Amount |
+---------------------+-----------+---------------------+-----------+--------+
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Current Assets      | Cash      | Fund                | Initial   | 100000 |
| Expenses            | Foods     | Current Liabilities | Payables  | 10000  |
| Current Liabilities | Payables  | Current Assets      | Cash      | 5000   |
+---------------------+-----------+---------------------+-----------+--------+
To present the Balance Sheet in my application, what I'm doing right now is issuing two queries and getting two result sets like these:
query1:
select b1.Name DrBook, c1.Name DrControl, SUM(tn.Amount) Amount
from Transactions tn
left join Books b1 on b1.Id = tn.DrBook
left join ControlLedgers c1 on c1.Id = tn.DrControl
group by DrBook, DrControl
result set 1:
+---------------------+-----------+--------+
| DrBook              | DrControl | Amount |
+---------------------+-----------+--------+
| Current Assets      | Cash      | 500000 |
| Expenses            | Foods     | 10000  |
| Current Liabilities | Payables  | 5000   |
+---------------------+-----------+--------+
query 2:
select b1.Name CrBook, c1.Name CrControl, SUM(tn.Amount) Amount
from Transactions tn
left join Books b1 on b1.Id = tn.CrBook
left join ControlLedgers c1 on c1.Id = tn.CrControl
group by CrBook, CrControl
result set 2:
+---------------------+-----------+--------+
| CrBook              | CrControl | Amount |
+---------------------+-----------+--------+
| Current Assets      | Cash      | 5000   |
| Current Liabilities | Payables  | 10000  |
| Fund                | Initial   | 500000 |
+---------------------+-----------+--------+
and subtract result set 2 from result set 1 when the book is Assets or Expenses (in this case Current Assets and Expenses), or result set 1 from result set 2 when it is Liabilities, Incomes or Fund (in this case Current Liabilities and Fund), to get a final result set like this:
+---------------------+---------------+---------+
| Book                | ControlLedger | Balance |
+---------------------+---------------+---------+
| Current Assets      | Cash          | 495000  |
| Expenses            | Foods         | 10000   |
| Current Liabilities | Payables      | 5000    |
| Fund                | Initial       | 500000  |
+---------------------+---------------+---------+
I've tried some CASE expressions to get the final result set through a single SQL query instead of computing it manually in application code, but those didn't work!
EDIT
Here's the definition for the Table:
CREATE TABLE "Transactions"(
"Id" INTEGER NOT NULL,
"Date" TEXT NOT NULL,
"DrBook" INTEGER NOT NULL,
"CrBook" INTEGER NOT NULL,
"DrControl" INTEGER NOT NULL,
"CrControl" INTEGER NOT NULL,
"DrLedger" INTEGER,
"CrLedger" INTEGER,
"DrSubLedger" INTEGER,
"CrSubLedger" INTEGER,
"DrPartyGroup" INTEGER,
"CrPartyGroup" INTEGER,
"DrParty" INTEGER,
"CrParty" INTEGER,
"DrMember" INTEGER,
"CrMember" INTEGER,
"Amount" INTEGER NOT NULL,
"Narration" TEXT,
FOREIGN KEY("DrBook") REFERENCES "Books"("Id"),
FOREIGN KEY("CrBook") REFERENCES "Books"("Id"),
FOREIGN KEY("DrControl") REFERENCES "ControlLedgers"("Id"),
FOREIGN KEY("CrControl") REFERENCES "ControlLedgers"("Id"),
FOREIGN KEY("DrLedger") REFERENCES "Ledgers"("Id"),
FOREIGN KEY("CrLedger") REFERENCES "Ledgers"("Id"),
FOREIGN KEY("DrSubLedger") REFERENCES "SubLedgers"("Id"),
FOREIGN KEY("CrSubLedger") REFERENCES "SubLedgers"("Id"),
FOREIGN KEY("DrPartyGroup") REFERENCES PartyGroups("Id"),
FOREIGN KEY("CrPartyGroup") REFERENCES PartyGroups("Id"),
FOREIGN KEY("DrParty") REFERENCES "Parties"("Id"),
FOREIGN KEY("CrParty") REFERENCES "Parties"("Id"),
FOREIGN KEY("DrMember") REFERENCES "Members"("Id"),
FOREIGN KEY("CrMember") REFERENCES "Members"("Id")
);
For each journal entry I insert one row, which contains the debit, the credit and the amount. I don't have Dr/CrProduct or Dr/CrService columns, since this has been designed for individuals' and families' bookkeeping and accounting.
For a purchase of food from Mr. A on credit, for example, I pass (1):
Expenses -> Foods -> Rice -> Fine Rice A/c Dr. 10000
Current Liabilities -> Payables A/c Cr. 10000
and when the amount of the purchase is paid in cash, I pass (2):
Current Liabilities -> Payables A/c Dr. 10000
Current Assets -> Cash -> In Hand -> Emon A/c Cr. 10000
In the table it becomes:
+----+------------+--------+--------+-----------+-----------+----------+----------+-------------+-------------+--------------+--------------+---------+---------+----------+----------+--------+----------------+
| Id | Date       | DrBook | CrBook | DrControl | CrControl | DrLedger | CrLedger | DrSubLedger | CrSubLedger | DrPartyGroup | CrPartyGroup | DrParty | CrParty | DrMember | CrMember | Amount | Narration      |
+----+------------+--------+--------+-----------+-----------+----------+----------+-------------+-------------+--------------+--------------+---------+---------+----------+----------+--------+----------------+
| 3  | 2020-06-15 | 3      | 5      | 9         | 18        | 2        |          | 2           |             |              | 4            |         | 1       |          |          | 10000  | Some Narration |
| 3  | 2020-06-15 | 5      | 2      | 18        | 7         |          | 1        |             | 1           | 4            |              | 1       |         |          |          | 10000  |                |
+----+------------+--------+--------+-----------+-----------+----------+----------+-------------+-------------+--------------+--------------+---------+---------+----------+----------+--------+----------------+
and here's a quick dissection of the first row:
+--------------+----------------+---------------------+
| Columns      | Values         | Mappings            |
+--------------+----------------+---------------------+
| DrBook       | 3              | Expenses            |
| CrBook       | 5              | Current Liabilities |
| DrControl    | 9              | Foods               |
| CrControl    | 18             | Payables            |
| DrLedger     | 2              | Rice                |
| DrSubLedger  | 2              | Fine Rice           |
| CrPartyGroup | 4              | Groceries           |
| CrParty      | 1              | Mr. A               |
| Amount       | 10000          |                     |
| Narration    | Some Narration |                     |
+--------------+----------------+---------------------+
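For reference, the same first row can be written back as a single INSERT; a quick sketch using the Id values from the dissection above:
INSERT INTO Transactions
    (Id, Date, DrBook, CrBook, DrControl, CrControl,
     DrLedger, DrSubLedger, CrPartyGroup, CrParty, Amount, Narration)
VALUES
    (3, '2020-06-15',
     3, 5,      -- DrBook = Expenses, CrBook = Current Liabilities
     9, 18,     -- DrControl = Foods, CrControl = Payables
     2, 2,      -- DrLedger = Rice, DrSubLedger = Fine Rice
     4, 1,      -- CrPartyGroup = Groceries, CrParty = Mr. A
     10000, 'Some Narration');
All the remaining Dr/Cr columns are nullable, so they can simply be omitted.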

Not sure whether the following is the only way:
with t1(Id, Book, Control, Amount) as (
    -- debit side: each amount increases the debited Book/Control
    select tn.DrBook, b1.Name, c1.Name, sum(tn.Amount)
    from Transactions tn
    left join Books b1 on b1.Id = tn.DrBook
    left join ControlLedgers c1 on c1.Id = tn.DrControl
    group by DrBook, DrControl
),
t2 as (
    -- credit side: amounts are negated so the union acts as a subtraction
    select tn.CrBook, b1.Name, c1.Name, -1 * sum(tn.Amount)
    from Transactions tn
    left join Books b1 on b1.Id = tn.CrBook
    left join ControlLedgers c1 on c1.Id = tn.CrControl
    group by CrBook, CrControl
),
t3 as (
    select * from t1 union all select * from t2
)
select Book, Control,
       -- Books with Id <= 3 (here Assets and Expenses) carry a debit balance;
       -- the others (Liabilities, Incomes, Fund) carry a credit balance
       case when Id <= 3 then sum(Amount)
            else -1 * sum(Amount) end Balance
from t3
group by Book, Control
order by Id;
and it produces exactly what I expected:
+---------------------+----------+---------+
| Book                | Control  | Balance |
+---------------------+----------+---------+
| Current Assets      | Cash     | 495000  |
| Current Liabilities | Payables | 5000    |
| Expenses            | Foods    | 10000   |
| Fund                | Initial  | 500000  |
+---------------------+----------+---------+
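The Id <= 3 test hard-codes which books are debit-normal. If you'd rather make that rule explicit, here's a sketch of a variant; it assumes a hypothetical DrBalance flag column added to Books (1 for Assets/Expenses, 0 for Liabilities/Incomes/Fund), which is not part of the schema above:
with flows(BookId, Book, Control, Amount) as (
    select tn.DrBook, b.Name, c.Name, tn.Amount      -- debits increase the balance
    from Transactions tn
    join Books b on b.Id = tn.DrBook
    join ControlLedgers c on c.Id = tn.DrControl
    union all
    select tn.CrBook, b.Name, c.Name, -tn.Amount     -- credits decrease it
    from Transactions tn
    join Books b on b.Id = tn.CrBook
    join ControlLedgers c on c.Id = tn.CrControl
)
select f.Book, f.Control,
       case when b.DrBalance = 1 then sum(f.Amount)  -- debit-normal: Assets, Expenses
            else -sum(f.Amount) end Balance          -- credit-normal: Liabilities, Incomes, Fund
from flows f
join Books b on b.Id = f.BookId
group by f.Book, f.Control, b.DrBalance
order by f.Book;
New books then get the right sign from their flag alone, without editing the query.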

Related

Default value for LAG function in MariaDB

I'm trying to build a view which allows me to track the difference between paid values at two consecutive month_ids. When a figure is missing, however, that is because it's the first entry and therefore has a paid amount of 0. At present I'm using the expression below to represent the previous figure, since the [,default] argument of LAG has not been implemented in MariaDB.
CASE WHEN (NOT (policy_agent_month.policy_agent_month_id IS NOT NULL
                AND LAG(days_paid, 1) OVER (PARTITION BY claim_id ORDER BY month_id) IS NULL))
    THEN LAG(days_paid, 1) OVER (PARTITION BY claim_id ORDER BY month_id)
    ELSE 0
END
The problem I have with this is that I have about 30 variables which this function needs to be applied over and it makes my code unreadable and very clunky. Is there a better solution?
Why use WITH? COALESCE around LAG supplies the default directly:
SELECT province, tot_pop,
tot_pop - COALESCE(
(LAG(tot_pop) OVER (ORDER BY tot_pop ASC)),
0) AS delta
FROM provinces
ORDER BY tot_pop asc;
+---------------------------+----------+---------+
| province                  | tot_pop  | delta   |
+---------------------------+----------+---------+
| Nunavut                   |    14585 |   14585 |
| Yukon                     |    21304 |    6719 |
| Northwest Territories     |    24571 |    3267 |
| Prince Edward Island      |    63071 |   38500 |
| Newfoundland and Labrador |   100761 |   37690 |
| New Brunswick             |   332715 |  231954 |
| Nova Scotia               |   471284 |  138569 |
| Saskatchewan              |   622467 |  151183 |
| Manitoba                  |   772672 |  150205 |
| Alberta                   |  2481213 | 1708541 |
| British Columbia          |  3287519 |  806306 |
| Quebec                    |  5321098 | 2033579 |
| Ontario                   | 10071458 | 4750360 |
+---------------------------+----------+---------+
13 rows in set (0.00 sec)
However, it is not cheap (at least in MySQL 8.0); the table has 13 rows, yet:
FLUSH STATUS;
SELECT ...
SHOW SESSION STATUS LIKE 'Handler%';
MySQL 8.0:
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Handler_read_rnd           | 89    |
| Handler_read_rnd_next      | 52    |
| Handler_write              | 26    |
(and others)
MariaDB 10.3:
| Handler_read_rnd           | 77    |
| Handler_read_rnd_next      | 42    |
| Handler_tmp_write          | 13    |
| Handler_update             | 13    |
You can use a CTE (Common Table Expression) in MariaDB 10.2+ to pre-compute frequently used expressions and name them for later use:
with
x as ( -- first we compute the CTE that we name "x"
select
*,
coalesce(
LAG(days_paid, 1) OVER (PARTITION BY claim_id ORDER BY month_id),
123456
) as prev_month -- this expression gets the name "prev_month"
from my_table -- or a simple/complex join here
)
select -- now the main query
prev_month
from x
... -- rest of your query here where "prev_month" is computed.
In the main query prev_month has the lag value, or the default value 123456 when it's null.
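Applied back to the original question, the clunky CASE collapses to one COALESCE(LAG(...), 0) per column. A sketch using the same my_table, claim_id and month_id names (amount_paid is a made-up stand-in for the other ~30 columns):
with x as (
    select
        *,
        -- default of 0 for the first entry of each claim
        coalesce(lag(days_paid)   over (partition by claim_id order by month_id), 0) as prev_days_paid,
        coalesce(lag(amount_paid) over (partition by claim_id order by month_id), 0) as prev_amount_paid
    from my_table
)
select claim_id, month_id,
       days_paid   - prev_days_paid   as days_paid_delta,
       amount_paid - prev_amount_paid as amount_paid_delta
from x;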

Optimizing query that looks at a specific time window each day

This is a followup to my previous question
Optimizing query to get entire row where one field is the maximum for a group
I'll change the names from what I used there to make them a little more memorable, but these don't represent my actual use-case (so don't estimate the number of records from them).
I have a table with a schema like this:
OrderTime DATETIME(6),
Customer VARCHAR(50),
DrinkPrice DECIMAL,
Bartender VARCHAR(50),
TimeToPrepareDrink TIME(6),
...
I'd like to extract the rows from the table representing each customer's most expensive drink order during happy hour (3 PM - 6 PM) each day. So for instance I'd want results like
Date   | Customer | OrderTime   | MaxPrice   | Bartender | ...
-------+----------+-------------+------------+-----------+-----
1/1/18 | Alice    | 1/1/18 3:45 | 13.15      | Jane      | ...
1/1/18 | Bob      | 1/1/18 5:12 | 9.08       | Jane      | ...
1/1/18 | Carol    | 1/1/18 4:45 | 20.00      | Tarzan    | ...
1/2/18 | Alice    | 1/2/18 3:45 | 13.15      | Jane      | ...
1/2/18 | Bob      | 1/2/18 5:57 | 6.00       | Tarzan    | ...
1/2/18 | Carol    | 1/2/18 3:13 | 6.00       | Tarzan    | ...
...
The table has an index on OrderTime, and contains tens of billions of records. (My customers are heavy drinkers).
Thanks to the previous question I'm able to extract this for a specific day pretty easily. I can do something like:
SELECT * FROM orders b
INNER JOIN (
SELECT Customer, MAX(DrinkPrice) as MaxPrice
FROM orders
WHERE OrderTime >= '2018-01-01 15:00'
AND OrderTime <= '2018-01-01 18:00'
GROUP BY Customer
) AS a
ON a.Customer = b.Customer
AND a.MaxPrice = b.DrinkPrice
WHERE b.OrderTime >= '2018-01-01 15:00'
AND b.OrderTime <= '2018-01-01 18:00';
This query runs in less than a second. The explain plan looks like this:
+---+-------------+------------+-------+---------------+------------+--------------------+--------------------------------------------------------+
| id| select_type | table      | type  | possible_keys | key        | ref                | Extra                                                  |
+---+-------------+------------+-------+---------------+------------+--------------------+--------------------------------------------------------+
| 1 | PRIMARY     | b          | range | OrderTime     | OrderTime  | NULL               | Using index condition                                  |
| 1 | PRIMARY     | <derived2> | ref   | key0          | key0       | b.Customer,b.Price |                                                        |
| 2 | DERIVED     | orders     | range | OrderTime     | OrderTime  | NULL               | Using index condition; Using temporary; Using filesort |
+---+-------------+------------+-------+---------------+------------+--------------------+--------------------------------------------------------+
I can also get the information about the relevant rows for my query:
SELECT Date, Customer, MAX(DrinkPrice) AS MaxPrice
FROM
orders
INNER JOIN
(SELECT '2018-01-01' AS Date
UNION
SELECT '2018-01-02' AS Date) dates
WHERE OrderTime >= TIMESTAMP(Date, '15:00:00')
AND OrderTime <= TIMESTAMP(Date, '18:00:00')
GROUP BY Date, Customer
HAVING MaxPrice > 0;
This query also runs in less than a second. Here's how its explain plan looks:
+------+--------------+------------+------+---------------+------+------+------------------------------------------------+
| id   | select_type  | table      | type | possible_keys | key  | ref  | Extra                                          |
+------+--------------+------------+------+---------------+------+------+------------------------------------------------+
| 1    | PRIMARY      | <derived2> | ALL  | NULL          | NULL | NULL | Using temporary; Using filesort                |
| 1    | PRIMARY      | orders     | ALL  | OrderTime     | NULL | NULL | Range checked for each record (index map: 0x1) |
| 2    | DERIVED      | NULL       | NULL | NULL          | NULL | NULL | No tables used                                 |
| 3    | UNION        | NULL       | NULL | NULL          | NULL | NULL | No tables used                                 |
| NULL | UNION RESULT | <union2,3> | ALL  | NULL          | NULL | NULL |                                                |
+------+--------------+------------+------+---------------+------+------+------------------------------------------------+
The problem now is retrieving the remaining fields from the table. I tried adapting the trick from before, like so:
SELECT * FROM
orders a
INNER JOIN
(SELECT Date, Customer, MAX(DrinkPrice) AS MaxPrice
FROM
orders
INNER JOIN
(SELECT '2018-01-01' AS Date
UNION
SELECT '2018-01-02' AS Date) dates
WHERE OrderTime >= TIMESTAMP(Date, '15:00:00')
AND OrderTime <= TIMESTAMP(Date, '18:00:00')
GROUP BY Date, Customer
HAVING MaxPrice > 0) b
ON a.OrderTime >= TIMESTAMP(b.Date, '15:00:00')
AND a.OrderTime <= TIMESTAMP(b.Date, '18:00:00')
AND a.Customer = b.Customer;
However, for reasons I don't understand, the database chooses to execute this in a way that takes forever. Explain plan:
+------+--------------+------------+------+---------------+------+------------+------------------------------------------------+
| id   | select_type  | table      | type | possible_keys | key  | ref        | Extra                                          |
+------+--------------+------------+------+---------------+------+------------+------------------------------------------------+
| 1    | PRIMARY      | a          | ALL  | OrderTime     | NULL | NULL       |                                                |
| 1    | PRIMARY      | <derived2> | ref  | key0          | key0 | a.Customer | Using where                                    |
| 2    | DERIVED      | <derived3> | ALL  | NULL          | NULL | NULL       | Using temporary; Using filesort                |
| 2    | DERIVED      | orders     | ALL  | OrderTime     | NULL | NULL       | Range checked for each record (index map: 0x1) |
| 3    | DERIVED      | NULL       | NULL | NULL          | NULL | NULL       | No tables used                                 |
| 4    | UNION        | NULL       | NULL | NULL          | NULL | NULL       | No tables used                                 |
| NULL | UNION RESULT | <union3,4> | ALL  | NULL          | NULL | NULL       |                                                |
+------+--------------+------------+------+---------------+------+------------+------------------------------------------------+
Questions:
What is going on here?
How can I fix it?
To extract the rows from the table representing each customer's most expensive drink order during happy hour (3 PM - 6 PM) each day, I would use row_number() over() within a case expression evaluating the hour of the day, like this:
CREATE TABLE mytable(
Date DATE
,Customer VARCHAR(10)
,OrderTime DATETIME
,MaxPrice NUMERIC(12,2)
,Bartender VARCHAR(11)
);
Note: the OrderTime values below were changed from the question's sample data.
INSERT INTO mytable(Date,Customer,OrderTime,MaxPrice,Bartender)
VALUES
('1/1/18','Alice','1/1/18 13:45',13.15,'Jane')
, ('1/1/18','Bob' ,'1/1/18 15:12', 9.08,'Jane')
, ('1/2/18','Alice','1/2/18 13:45',13.15,'Jane')
, ('1/2/18','Bob' ,'1/2/18 15:57', 6.00,'Tarzan')
, ('1/2/18','Carol','1/2/18 13:13', 6.00,'Tarzan')
;
The suggested query is this:
select
*
from (
select
*
, case when hour(OrderTime) between 15 and 18 then
row_number() over(partition by `Date`, customer
order by MaxPrice DESC)
else null
end rn
from mytable
) d
where rn = 1
;
and the result will give access to all columns you include in the derived table.
Date       | Customer | OrderTime           | MaxPrice | Bartender | rn
:--------- | :------- | :------------------ | -------: | :-------- | -:
0001-01-18 | Bob      | 0001-01-18 15:12:00 |     9.08 | Jane      |  1
0001-02-18 | Bob      | 0001-02-18 15:57:00 |     6.00 | Tarzan    |  1
To help display how this works, running the derived table subquery:
select
*
, case when hour(OrderTime) between 15 and 18 then
row_number() over(partition by `Date`, customer order by MaxPrice DESC)
else null
end rn
from mytable
;
produces this interim resultset:
Date       | Customer | OrderTime           | MaxPrice | Bartender |   rn
:--------- | :------- | :------------------ | -------: | :-------- | ---:
0001-01-18 | Alice    | 0001-01-18 13:45:00 |    13.15 | Jane      | null
0001-01-18 | Bob      | 0001-01-18 15:12:00 |     9.08 | Jane      |    1
0001-02-18 | Alice    | 0001-02-18 13:45:00 |    13.15 | Jane      | null
0001-02-18 | Bob      | 0001-02-18 15:57:00 |     6.00 | Tarzan    |    1
0001-02-18 | Carol    | 0001-02-18 13:13:00 |     6.00 | Tarzan    | null
db<>fiddle here
The task seems to be a "groupwise-max" problem. Here's one approach, involving only 2 'queries' (the inner one is called a "derived table").
SELECT x.OrderDate, x.Customer, b.OrderTime,
       x.MaxPrice, b.Bartender
FROM
(
    SELECT DATE(OrderTime) AS OrderDate,
           Customer,
           MAX(DrinkPrice) AS MaxPrice
    FROM tbl
    WHERE TIME(OrderTime) BETWEEN '15:00' AND '18:00'
    GROUP BY OrderDate, Customer
) AS x
JOIN tbl AS b
    ON DATE(b.OrderTime) = x.OrderDate   -- tbl has no OrderDate column, so derive it
    AND b.Customer = x.Customer
    AND b.DrinkPrice = x.MaxPrice
WHERE TIME(b.OrderTime) BETWEEN '15:00' AND '18:00'
ORDER BY x.OrderDate, x.Customer;
Desirable index:
INDEX(Customer, DrinkPrice)
(There's no good reason to be using MyISAM.)
Billions of new rows per day
This adds new wrinkles. That's upwards of a terabyte of additional disk space needed each and every day?
Is it possible to summarize the data? The goal here is to add summary info as the new data comes in, and never have to re-scan the billions of old data. This may also let you remove all the secondary indexes on the Fact table.
Normalization will help shrink the table size, hence speeding up the queries. Bartender and Customer are prime candidates for such -- perhaps a SMALLINT UNSIGNED (2 bytes; 65K values) for the former and MEDIUMINT UNSIGNED (3 bytes, 16M) for the latter. That would probably shrink by 50% the 5 columns you currently show. You may get a 2x speedup on many operations after normalizing.
Normalization is best done by 'staging' the data -- Load the data into a temporary table, normalize within it, summarize it, then copy into the main Fact table.
See http://mysql.rjweb.org/doc.php/summarytables
and http://mysql.rjweb.org/doc.php/staging_table
Before getting back to the question of optimizing the one query, we need to see the schema, the data flow, whether things can be normalized, whether summary tables can be effective, etc. I would hope to have the 'answer' for the query to be mostly digested in a summary table. Sometimes this leads to a 10x speedup.
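As a rough sketch of what a summary table could look like for the happy-hour query (all names here are made up, not from the question): one row per (day, customer) holding the happy-hour maximum, refreshed from the staging table as data arrives:
CREATE TABLE happyhour_daily_max (
    OrderDate DATE NOT NULL,
    Customer MEDIUMINT UNSIGNED NOT NULL,   -- assumes Customer has been normalized to an id
    MaxPrice DECIMAL(8,2) NOT NULL,
    PRIMARY KEY (OrderDate, Customer)
);

-- refresh from newly staged rows; safe to re-run
INSERT INTO happyhour_daily_max (OrderDate, Customer, MaxPrice)
SELECT DATE(OrderTime), Customer, MAX(DrinkPrice)
    FROM staging_orders                     -- hypothetical staging table
    WHERE TIME(OrderTime) BETWEEN '15:00' AND '18:00'
    GROUP BY DATE(OrderTime), Customer
ON DUPLICATE KEY UPDATE MaxPrice = GREATEST(MaxPrice, VALUES(MaxPrice));
Report queries would then scan this small table and touch the Fact table only to fetch the remaining columns of the matching rows.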

add and subtract by type

I have a SQLite table payments:
+------+--------+-------+
| user | amount | type  |
+------+--------+-------+
| AAA  | 100    | plus  |
| AAA  | 200    | plus  |
| AAA  | 50     | minus |
| BBB  | 100    | plus  |
| BBB  | 20     | minus |
| BBB  | 5      | minus |
| CCC  | 200    | plus  |
| CCC  | 300    | plus  |
| CCC  | 25     | minus |
+------+--------+-------+
I need to calculate the sum with type 'plus' and subtract from it the sum with type 'minus' for each user.
The result table should look like this:
+------+--------+
| user | total  |
+------+--------+
| AAA  | 250    |
| BBB  | 75     |
| CCC  | 475    |
+------+--------+
I think that my query is terrible, and I need help to improve it:
select user,
(select sum(amount) from payments as TABLE1 WHERE TABLE1.type = 'plus' AND
TABLE1.user= TABLE3.user) -
(select sum(amount) from payments as TABLE2 WHERE TABLE2.type = 'minus' AND
TABLE2.user= TABLE3.user) as total
from payments as TABLE3
group by client
order by id asc
The type is more easily handled with a CASE expression. And then you can merge the aggregation into the outer query:
SELECT user,
       SUM(CASE type
             WHEN 'plus'  THEN amount
             WHEN 'minus' THEN -amount
           END) AS total
FROM payments
GROUP BY user
ORDER BY user;
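On SQLite 3.30+ the same logic can also be written with a FILTER clause on the aggregates; the COALESCE calls guard against a user having rows of only one type (a sketch equivalent to the CASE version above):
SELECT user,
       COALESCE(SUM(amount) FILTER (WHERE type = 'plus'), 0)
     - COALESCE(SUM(amount) FILTER (WHERE type = 'minus'), 0) AS total
FROM payments
GROUP BY user
ORDER BY user;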

Oracle 11g r2: strange behavior on index

I have the query:
SELECT count(*)
FROM
(
SELECT
TBELENCO.DATA_PROC, TBELENCO.POD, TBELENCO.DESCRIZIONE, TBELENCO.ERROR, TBELENCO.STATO,
TBELENCO.SEZIONE, TBELENCO.NOME_FILE, TBELENCO.ID_CARICAMENTO, TBELENCO.ESITO_OPERAZIONE,
TBELENCO.DES_TIPO_MISURA,
--TBELENCO.RAGIONE_SOCIALE,
--ROW_NUMBER() OVER (ORDER BY TBELENCO.DATA_PROC DESC) R
ROWNUM R
FROM(
SELECT
LOG.DATA_PROC, LOG.POD, LOG.DESCRIZIONE, LOG.ERROR, LOG.STATO,
LOG.SEZIONE, LOG.NOME_FILE, LOG.ID_CARICAMENTO, LOG.ESITO_OPERAZIONE, TM.DES_TIPO_MISURA
--,C.RAGIONE_SOCIALE
--ROW_NUMBER() OVER (ORDER BY LOG.DATA_PROC DESC) R
FROM
MS042_LOADING_LOGS LOG JOIN MS116_MEASURE_TYPES TM ON
TM.ID_TIPO_MISURA=LOG.SEZIONE
-- LEFT JOIN(
-- SELECT CUST.RAGIONE_SOCIALE,STR.POD,RSC.DATA_DA, RSC.DATA_A
-- FROM
-- MS038_METERS STR JOIN MS036_REL_SITES_CUSTOMERS RSC ON
-- STR.ID_SITO=RSC.ID_SITO
-- JOIN MS030_CUSTOMERS CUST ON
-- CUST.ID_CLIENTE=RSC.ID_CLIENTE
-- ) C ON
-- C.POD=LOG.POD
--AND LOG.DATA_PROC BETWEEN C.DATA_DA AND C.DATA_A
WHERE
1=1
--AND LOG.DATA_PROC>=TRUNC(SYSDATE)
AND LOG.DATA_PROC>=TRUNC(SYSDATE)-3
--TO_DATE('01/11/2014', 'DD/MM/YYYY')
) TBELENCO
)
WHERE
R BETWEEN 1 AND 200;
If I execute the query with AND LOG.DATA_PROC>=TRUNC(SYSDATE)-3, Oracle uses the index on the DATA_PROC field of the MS042_LOADING_LOGS (LOG) table; if I instead use AND LOG.DATA_PROC>=TRUNC(SYSDATE)-4, or -5, or -6, etc., it uses a full table access. Why this behavior?
I also executed:
ALTER INDEX MS042_DATA_PROC_IDX REBUILD;
but nothing changed.
Thanks,
Igor
--***********************************************************
SELECT count(*)
FROM
(
SELECT
TBELENCO.DATA_PROC, TBELENCO.POD, TBELENCO.DESCRIZIONE, TBELENCO.ERROR, TBELENCO.STATO,
TBELENCO.SEZIONE, TBELENCO.NOME_FILE, TBELENCO.ID_CARICAMENTO, TBELENCO.ESITO_OPERAZIONE,
TBELENCO.DES_TIPO_MISURA,
ROWNUM R
FROM(
SELECT
LOG.DATA_PROC, LOG.POD, LOG.DESCRIZIONE, LOG.ERROR, LOG.STATO,
LOG.SEZIONE, LOG.NOME_FILE, LOG.ID_CARICAMENTO, LOG.ESITO_OPERAZIONE, TM.DES_TIPO_MISURA
FROM
MS042_LOADING_LOGS LOG JOIN MS116_MEASURE_TYPES TM ON
TM.ID_TIPO_MISURA=LOG.SEZIONE
WHERE
1=1
AND LOG.DATA_PROC>=TRUNC(SYSDATE)-1
) TBELENCO
)
WHERE
R BETWEEN 1 AND 200;
Plan with AND LOG.DATA_PROC>=TRUNC(SYSDATE)-1 (index range scan):
Plan hash value: 2191058229
--------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                     |     1 |    13 | 30866   (2)| 00:06:11 |
|   1 |  SORT AGGREGATE                  |                     |     1 |    13 |            |          |
|*  2 |   VIEW                           |                     | 94236 |  1196K| 30866   (2)| 00:06:11 |
|   3 |    COUNT                         |                     |       |       |            |          |
|*  4 |     HASH JOIN                    |                     | 94236 |  1104K| 30866   (2)| 00:06:11 |
|   5 |      INDEX FULL SCAN             | P087_TIPI_MISURE_PK |    15 |    30 |     1   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID | MS042_LOADING_LOGS  | 94236 |   920K| 30864   (2)| 00:06:11 |
|*  7 |       INDEX RANGE SCAN           | MS042_DATA_PROC_IDX | 94236 |       | 25742   (2)| 00:05:09 |
--------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("R"<=200 AND "R">=1)
4 - access("TM"."ID_TIPO_MISURA"="LOG"."SEZIONE")
7 - access(SYS_OP_DESCEND("DATA_PROC")<=SYS_OP_DESCEND(TRUNC(SYSDATE#!)-1))
filter(SYS_OP_UNDESCEND(SYS_OP_DESCEND("DATA_PROC"))>=TRUNC(SYSDATE#!)-1)
And with AND LOG.DATA_PROC>=TRUNC(SYSDATE)-4 the optimizer switches to a full table scan:
Plan hash value: 69930686
----------------------------------------------------------------------------------------------
| Id  | Operation              | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                     |     1 |    13 | 95921   (1)| 00:19:12 |
|   1 |  SORT AGGREGATE        |                     |     1 |    13 |            |          |
|*  2 |   VIEW                 |                     |  1467K|    18M| 95921   (1)| 00:19:12 |
|   3 |    COUNT               |                     |       |       |            |          |
|*  4 |     HASH JOIN          |                     |  1467K|    16M| 95921   (1)| 00:19:12 |
|   5 |      INDEX FULL SCAN   | P087_TIPI_MISURE_PK |    15 |    30 |     1   (0)| 00:00:01 |
|*  6 |      TABLE ACCESS FULL | MS042_LOADING_LOGS  |  1467K|    13M| 95912   (1)| 00:19:11 |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("R"<=200 AND "R">=1)
4 - access("TM"."ID_TIPO_MISURA"="LOG"."SEZIONE")
6 - filter("LOG"."DATA_PROC">=TRUNC(SYSDATE#!)-4)
The larger the fraction of rows a query will return, the more efficient a table scan is and the less efficient index access becomes. Apparently, Oracle expects that inflection point to come when the query returns more than 3 days of data. If that estimate is wrong, the statistics on your table or indexes are probably inaccurate.
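If stale statistics are indeed the cause, regathering them is the usual first step (rebuilding the index does not refresh the table statistics); a sketch, assuming the table is in your own schema:
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,                    -- current schema
        tabname => 'MS042_LOADING_LOGS',
        cascade => TRUE);                   -- gather index statistics as well
END;
/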

Select single row per unique field value with SQL Developer

I have thousands of rows of data, a segment of which looks like:
+-------------+-----------+-------+
| Customer ID | Company   | Sales |
+-------------+-----------+-------+
| 45678293    | Sears     | 45    |
| 01928573    | Walmart   | 6     |
| 29385068    | Fortinoes | 2     |
| 49582015    | Walmart   | 1     |
| 49582015    | Joe's     | 1     |
| 19285740    | Target    | 56    |
| 39506783    | Target    | 4     |
| 39506783    | H&M       | 4     |
+-------------+-----------+-------+
In every case where a customer ID occurs more than once, the value in 'Sales' is the same but the value in 'Company' is different (this is true throughout the entire table). I need each value in 'Customer ID' to appear only once, so I need a single row per customer ID.
In other words, I'd like for the above table to look like:
+-------------+-----------+-------+
| Customer ID | Company   | Sales |
+-------------+-----------+-------+
| 45678293    | Sears     | 45    |
| 01928573    | Walmart   | 6     |
| 29385068    | Fortinoes | 2     |
| 49582015    | Walmart   | 1     |
| 19285740    | Target    | 56    |
| 39506783    | Target    | 4     |
+-------------+-----------+-------+
If anyone knows how I can go about doing this, I'd much appreciate some help.
Thanks!
Well, it would have been helpful if you had included the SQL that generates that data,
but it might go something like:
SELECT customer_id, MAX(Company) AS company, MAX(Sales) AS sales FROM Customers <your joins and where clause> GROUP BY customer_id
This assumes a customer can have several companies and arbitrarily keeps the alphabetically greatest one; since Sales is the same on every row for a given customer, MAX(Sales) simply returns that shared value.
Hope this helps.
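Since SQL Developer implies Oracle, an analytic alternative that keeps exactly one row per customer without aggregating every column; a sketch assuming the data sits in a single table, here called customer_sales with columns customer_id, company, sales (made-up names):
SELECT customer_id, company, sales
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY company) AS rn  -- keep the alphabetically first company
    FROM customer_sales t
)
WHERE rn = 1;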
