Problem with a DECLARE block that contains a commented-out dynamic input variable - plsql

I am new to PL/SQL.
I wrote a block to calculate the circumference and area of a circle, like below:
SET SERVEROUTPUT ON;
CREATE OR REPLACE PROCEDURE cal_circle AS
-- DECLARE
    pi CONSTANT NUMBER := 3.1415926;
    radius NUMBER := 3;
    -- to make it more dynamic I can set
    -- radius NUMBER := &enter_value;
    circumference DECIMAL(4,2) := radius * pi * 2;
    area DECIMAL(4,2) := pi * radius ** 2;
BEGIN
    -- DBMS_OUTPUT.PUT_LINE('Enter a value of radius: ' || radius);
    dbms_output.put_line('For a circle with radius '
        || radius
        || ', the circumference is '
        || circumference
        || ' and the area is '
        || area
        || '.');
END;
/
Here you can see that I've commented out the declaration radius NUMBER := &enter_value;. However, when I run my script in SQL*Plus or SQL Developer, I always get a popup message like "please enter the value for enter_value".
Conversely, if I delete this comment from the declaration section and instead put it outside the procedure, there is no prompt anymore.
SET SERVEROUTPUT ON;
/* to make it more dynamic, I can set
radius NUMBER := &enter_value;
*/
CREATE OR REPLACE PROCEDURE cal_circle AS
-- DECLARE
pi CONSTANT NUMBER := 3.1415926;
radius NUMBER := 3;
circumference DECIMAL(4,2) := radius * pi * 2;
area DECIMAL(4,2) := pi * radius ** 2;
BEGIN
......
Here I want to clarify: does this mean the DECLARE section cannot accept a comment when I try to comment out a dynamic variable?
Thanks.

You can use a parameter instead of a substitution variable to allow different callers to invoke the procedure with different values of pi.
I'd recommend using a FUNCTION instead of a PROCEDURE for this (a sketch of that follows below), but here's an example, also using a parameter for radius:
CREATE OR REPLACE PROCEDURE CAL_CIRCLE(P_RADIUS IN NUMBER, P_PI IN NUMBER) AS
    CIRCUMFERENCE DECIMAL(4, 2) := P_RADIUS * P_PI * 2;
    AREA          DECIMAL(4, 2) := P_PI * P_RADIUS ** 2;
BEGIN
    DBMS_OUTPUT.put_line('For a circle with radius '
        || P_RADIUS
        || ', the circumference is '
        || CIRCUMFERENCE
        || ' and the area is '
        || AREA
        || '. Calculated with Pi = ' || P_PI);
END;
/
Then try it out:
BEGIN
CAL_CIRCLE(3, 3.14);
END;
/
For a circle with radius 3, the circumference is 18.84 and the area is
28.26. Calculated with Pi = 3.14
BEGIN
CAL_CIRCLE(3, 3.14159);
END;
/
For a circle with radius 3, the circumference is 18.85 and the area is
28.27. Calculated with Pi = 3.14159
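As for the FUNCTION variant recommended above, here is a minimal sketch (the name CIRCLE_AREA, and the choice to return just the area, are illustrative, not from the original answer):
CREATE OR REPLACE FUNCTION CIRCLE_AREA(P_RADIUS IN NUMBER, P_PI IN NUMBER)
    RETURN NUMBER AS
BEGIN
    -- same formula as the procedure, but the result is returned
    -- instead of printed, so it can be used directly in SQL
    RETURN P_PI * P_RADIUS ** 2;
END;
/
SELECT CIRCLE_AREA(3, 3.14159) AS AREA FROM DUAL;
A function composes better here because the caller decides what to do with the value rather than being limited to DBMS_OUTPUT.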
If you really need to actually COMPILE the procedure with different values for its constants (not recommended) using a substituted value of pi, you can set the substitution variable first with DEFINE, e.g. DEFINE pi_value = 3.1415;, then reference &pi_value later.
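For instance, a minimal sketch (the variable name pi_value is illustrative):
DEFINE pi_value = 3.1415
CREATE OR REPLACE PROCEDURE cal_circle AS
    -- &pi_value is replaced by the client before the DDL reaches the server
    pi CONSTANT NUMBER := &pi_value;
    radius NUMBER := 3;
BEGIN
    dbms_output.put_line('Circumference: ' || (radius * pi * 2));
END;
/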
Update: Why do SQLPlus and SQL Developer detect the Substitution Variable and request a value for it, even when it is in a comment?
TL;DR: SQL clients must deliver comments to the server. Preprocessing substitutions even in comments gives greater flexibility and keeps the SQL clients simpler. The clients have good support for controlling substitution behavior, and there is not much reason to have orphan substitution variables in finalized code.
Longer-Version:
These tools are database clients. They have lots of features, but first and foremost their job is to gather input SQL, deliver it to the database server, and handle the fetched data.
Comments need to be delivered to the database server with their accompanying SQL statements. There are reasons for this -- so users can save comments on their compiled SQL code in the database, of course, but also for compiler hints.
Substitution Variables, unlike comments, are not delivered with the SQL to the server. Instead they are evaluated first, and the resultant SQL text is sent to the server. (You can see that the SQL arriving at the server has its Substitution Variables replaced with real values; see V$SQLTEXT.)
Since the server "makes use" of the comments, it is more flexible, and simpler for SQL*Plus, to replace the Substitution Variables even in comments. (If needed, this can be overridden; I'll show that below.) SQL*Plus, SQL Developer, etc. could have been designed to ignore Substitution Variables in comments, but that would make them less flexible and perhaps require more code, since they would need to recognize comments and change their behavior accordingly, line by line. I'll show an example of this flexibility further below.
There is not much of a drawback to the tools working this way.
Suppose one just wants to ignore a chunk of code for a minute during development and quickly run everything else. It would be annoying to have to DEFINE everything even though it isn't used, or to delete all the commented code just so the script could run. So these tools instead allow you to SET DEFINE OFF; and ignore the variables.
For example, this runs fine:
SET DEFINE OFF;
--SELECT '&MY_FIRST_IGNORED_VAR' FROM DUAL;
-- SELECT '&MY_SECOND_IGNORED_VAR' FROM DUAL;
SELECT 1919 FROM DUAL;
If one needs to use '&' itself in a query, SQLPlus lets you choose another character as the substitution marker. There are lots of options to control things.
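For example, a quick sketch (the choice of '^' as the new marker is arbitrary):
SET DEFINE '^'
SELECT 'AT&T' FROM DUAL;    -- '&' is now a literal character, no prompt
SELECT '^my_var' FROM DUAL; -- '^' now triggers the substitution instead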
Once one has finished developing the final query or procedure, it isn't a valid situation to have leftover "orphan" comments with undefined substitutions. When development is complete, orphan substitutions should all be removed, and anything remaining should reference valid DEFINEd variables.
Here's an example that makes use of processing substitutions in comments.
Suppose you want to tune some poorly performing SQL. You could use a substitution variable in the HINTs (in a comment) to quickly change which index is used, or the execution mode, etc., without needing to actually change the query script.
CREATE TABLE TEST_TABLE(TEST_KEY NUMBER PRIMARY KEY,
                        TEST_VALUE VARCHAR2(128) NOT NULL,
                        CONSTRAINT TEST_VALUE_UNQ UNIQUE (TEST_VALUE));
INSERT INTO TEST_TABLE
SELECT LEVEL, 'VALUE-'||LEVEL
  FROM DUAL CONNECT BY LEVEL <= 5000;
Normally, a query predicating against TEST_VALUE here would use its UNIQUE INDEX when fetching the data.
SELECT TEST_VALUE FROM TEST_TABLE WHERE TEST_VALUE = 'VALUE-1919';
X-Plan:
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 66 | 1 (0)| 00:00:01 |
|* 1 | INDEX UNIQUE SCAN| TEST_VALUE_UNQ | 1 | 66 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------
But one can force a full-scan via a hint. By using a substitution variable in the hint (in a comment), one can allow the values of the Substitution Variable to direct query-execution:
DEFINE V_WHICH_FULL_SCAN = 'TEST_TABLE';
SELECT /*+ FULL(&V_WHICH_FULL_SCAN) */ TEST_VALUE FROM TEST_TABLE WHERE TEST_VALUE = 'VALUE-1919';
Here the Substitution Variable (in its comment) has changed the query-execution.
X-Plan:
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23 | 1518 | 9 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TEST_TABLE | 23 | 1518 | 9 (0)| 00:00:01 |
--------------------------------------------------------------------------------
If there were a bunch of tables here instead of one, a person could DEFINE different targets to full-scan, and evaluate each impact on the query quickly.

Related

How to SELECT a single record in table X with the largest value for X.a WHERE values for fields X.b & X.c are specified

I am using the following query to obtain the current component serial number (tr_sim_sn) installed on the host device (tr_host_sn) from the most recent record in a transaction history table (PUB.tr_hist):
SELECT tr_sim_sn FROM PUB.tr_hist
WHERE tr_trnsactn_nbr = (SELECT max(tr_trnsactn_nbr)
FROM PUB.tr_hist
WHERE tr_domain = 'vattal_us'
AND tr_lot = '99524136'
AND tr_part = '6684112-001')
The actual table has ~190 million records. The excerpt below contains only a few sample records, and only fields relevant to the search to illustrate the query above:
tr_sim_sn |tr_host_sn* |tr_host_pn |tr_domain |tr_trnsactn_nbr |tr_qty_loc
_______________|____________|_______________|___________|________________|___________
... |
356136072015140|99524135 |6684112-000 |vattal_us |178415271 |-1.0000000000
356136072015458|99524136 |6684112-001 |vattal_us |178424418 |-1.0000000000
356136072015458|99524136 |6684112-001 |vattal_us |178628048 |1.0000000000
356136072015050|99524136 |6684112-001 |vattal_us |178628051 |-1.0000000000
356136072015836|99524137 |6684112-005 |vattal_us |178645337 |-1.0000000000
...
* = key field
The excerpt illustrates multiple occurrences of tr_trnsactn_nbr for a single value of tr_host_sn. The largest value for tr_trnsactn_nbr corresponds to the current tr_sim_sn installed within tr_host_sn.
This query works, but it is very slow (~8 minutes).
I would appreciate suggestions to improve or refactor it to increase its speed.
Check with your admins to determine when they last updated the SQL statistics. If the answer is "we don't know" or "never", then you might want to ask them to run the following 4GL program, which will create a SQL script to accomplish that:
/* genUpdateSQL.p
*
* mpro dbName -p util/genUpdateSQL.p -param "tmp/updSQLstats.sql"
*
* sqlexp -user userName -password passWord -db dbName -S servicePort -infile tmp/updSQLstats.sql -outfile tmp/updSQLstats.log
*
*/
output to value( ( if session:parameter <> "" then session:parameter else "updSQLstats.sql" )).
for each _file no-lock where _hidden = no:
  put unformatted
    "UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS FOR PUB."
    '"' _file._file-name '"' ";"
    skip
    .
  put unformatted "commit work;" skip.
end.
output close.
return.
This will generate a script that updates statistics for all tables and all indexes. You could edit the output to only update the tables and indexes that are part of this query if you want.
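For the table in this question, the generated script would contain lines like the following (the table name is taken from the question; the exact output depends on your schema):
UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS FOR PUB."tr_hist";
commit work;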
Also, if the admins are nervous they could, of course, try this on a test db or a restored backup before implementing in a production environment.
I am posting this as a response to my own request for an improved query.
As it turns out, the query below incorporates two distinct changes that greatly improved its speed. The first is to include the tr_domain criterion in both the main and nested portions of the query. The second is to narrow the search by increasing the number of search criteria, which in the following are all included in the nested section:
SELECT tr_sim_sn
  FROM PUB.tr_hist
 WHERE tr_domain = 'vattal_us'
   AND tr_trnsactn_nbr IN (
       SELECT MAX(tr_trnsactn_nbr)
         FROM PUB.tr_hist
        WHERE tr_domain = 'vattal_us'
          AND tr_part = '6684112-001'
          AND tr_lot = '99524136'
          AND tr_type = 'ISS-WO'
          AND tr_qty_loc < 0)
This query results in ~0.5 s response time (credit to my colleague, Daniel V.).
To be fair, this query uses criteria beyond the parameters stated in the original post, making it difficult, if not impossible, for others to attempt a reasonable answer. That omission was not on purpose, of course; it was due to my being fairly new to the fundamentals of good query design. This query is partly the result of learning that when too few, or non-indexed, fields are used as search criteria against a large table, it sometimes helps to narrow the search by increasing the number of criteria. The original had 3; this one has 5.

How to understand an Int-Proc entry mentioned in a client.mon file?

I'm dealing with a list of errors while trying to open a *.w file in the AppBuilder. I managed to find a previous version of that file, which opens fine, and I see the following differences between the two files:
Per procedure segment information
---------------------------------
File Segment #Segments Total-Size
---- ------- --------- ----------
Good_version.w
...
Int-Proc: 19 1 26232
...
Bad_version.w
...
Int-Proc: 19 1 32712
As you can see, "Int-Proc" number 19 seems to be the one exceeding the segment size limit (above 32K), and hence the one causing the problem.
Now the obvious question: how can I find out which procedure "Int-Proc" number 19 refers to? I have some procedures inside my code, but the numbering does not correspond to the total number of "Int-Proc" entries (very naïvely: I have 38 "Int-Proc" entries in client.mon but only 21 End procedure. entries in my source code).
Edit
The action to take when exceeding the 32K limit is to split the procedure that has grown too large into smaller pieces. However, between Bad_version.w and Good_version.w it seems that 5 procedures in total have grown, and I'd like to know which one I need to split.
Disclaimer: I have never used the AppBuilder.
client.mon is for r-code statistics, so I think that instead of .w there should be a .r there. The AppBuilder has a 32000-byte (= the maximum size of a character variable) limit for internal procedures. 32000 newlines will also break the AppBuilder view, but compile to 0 bytes (or so).
I /thought/ the AppBuilder would complain about an internal procedure being too large upon selecting that procedure. If not, you will need to get the /text/ content size of each block of your .w between procedure and end procedure, and then you will know which ones are your problem.
Something like:
def var lcw    as longchar  no-undo.
def var iprocs as integer   no-undo.
def var lcproc as longchar  no-undo.
def var cc     as character no-undo.
def var ic     as integer   no-undo.

cc = chr(1).

copy-lob from file "my.w" to lcw.

/* mark every procedure boundary with a separator character,
   then count the resulting chunks */
assign
  lcw    = replace( lcw, 'procedure ', cc )
  lcw    = replace( lcw, 'end procedure', cc )
  iprocs = num-entries( lcw, cc )
  .

/* any chunk near the 32000-byte limit is a splitting candidate */
do ic = 1 to iprocs:
  lcproc = entry( ic, lcw, cc ).
  if length( lcproc ) > 31000 then
    message substring( lcproc, 1, 100 ) view-as alert-box.
end.
Intrigued by how the AppBuilder really complains:
started the AppBuilder
created a Smart Window
opened the first procedure section (it was a trigger)
added // some comment
saved the .w
opened the .w with Notepad++ and blew up // some comment to be larger than 32000 bytes
Opened .w with AppBuilder, endless errors.
Quit.
-> Added -debugalert to my shortcut.
On first error started debugger.
Debugger tries to start, but does not (remember the hidden procedures post)
-> Added -zn to my shortcut.
On first error started debugger.
It starts. While I cannot see any source code, since I have not extracted the source code from the .pl archives, I can see and view all variables and buffers.
Since I had blown up a trigger, the error reported _trg. Viewing _trg in the debugger showed its contents. [Debugger screenshots not reproduced here.]

Reversing the order of a MariaDB recursive CTE

I have a MariaDB 10.2.8 database which I am using to store the results of a crawl of all files beneath a particular root directory. So a file (in the file table) has a parent directory (in the directory table). This parent directory may have its own parents and so on up to the original point at which the directory crawl began.
So if I did a crawl from /home, the file /home/tim/projects/foo/bar.py would have a parent directory foo, which would have a parent directory projects and so on. /home (the root of the crawl) would have a null parent.
I've got the following recursive CTE:
with recursive tree as (
select id, name, parent from directory where id =
(select parent from file where id = #FileID)
union
select d.id, d.name, d.parent from directory d, tree t
where t.parent = d.id
) select name from tree;
which (as expected) returns the results in leaf-to-root order, where #FileID is the primary key of the file, e.g.:
MariaDB [(none)]> use inventory;
Database changed
MariaDB [inventory]> with recursive tree as (
-> select id, name, parent from directory where id =
-> (select parent from file where id = 3790)
-> union
-> select d.id, d.name, d.parent from directory d, tree t
-> where t.parent = d.id
-> ) select name from tree;
+----------+
| name |
+----------+
| b8 |
| objects |
| .git |
| fresnel |
| Projects |
| metatron |
+----------+
6 rows in set (0.00 sec)
MariaDB [inventory]> Bye
tim@merlin:~$
So in this case, file ID 3790 corresponds to a file in the directory /metatron/Projects/fresnel/.git/objects/b8 (/metatron is, of course, the root of the crawl).
Is there a reliable way of reversing the order of the output (as I want to concatenate it together to produce the full path)? I can ORDER BY id, but this doesn't feel reliable: even though I know that, in this case, children will have a higher ID than their parents, I can't guarantee this will always hold in every circumstance where I want to use a CTE.
You can wrap your current query and sort the result:
(
    your-current-query
) ORDER BY ...;
(If you have trouble with that, then stick a SELECT ... in front, too.)
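For an ordering that doesn't depend on how IDs were assigned, a hedged sketch (same schema as the question): carry an explicit depth counter in the CTE and sort on it. Since the walk up a tree cannot revisit rows, UNION ALL is safe here even with the extra column:
with recursive tree as (
    select id, name, parent, 1 as depth
      from directory
     where id = (select parent from file where id = 3790)
    union all
    select d.id, d.name, d.parent, t.depth + 1
      from directory d join tree t on t.parent = d.id
)
select name from tree order by depth desc;
With the rows in root-to-leaf order, GROUP_CONCAT(name ORDER BY depth DESC SEPARATOR '/') can then assemble the full path as a single value.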

How to write a program to print greetings based on the system time?

In a PL/SQL block, I would like to print 'Good Morning', 'Good Noon', or 'Good Eve' based on the system time that I have given as input.
If the time is between 6 AM and 12 PM it has to print GOOD MORNING; if it lies between 12 PM and 2 PM it has to print GOOD NOON; otherwise it has to print GOOD EVE. Can anybody give me an idea?
Thanks in advance to everyone who gives me guidance.
I think you know what an anonymous block looks like (reminder below):
DECLARE
-- variables' / constants' / types' / etc. declarations
BEGIN
-- logic
END;
/
You can create DATE variables or constants as follows:
l_in_date <CONSTANT> DATE := TO_DATE(<date>,<date_mask>);
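For example (the date literal and mask here are purely illustrative):
l_in_date CONSTANT DATE := TO_DATE('2017-09-15 13:30', 'YYYY-MM-DD HH24:MI');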
Then you can use an IF..THEN statement and print out the result according to the conditions (http://www.techonthenet.com/oracle/loops/if_then.php):
IF <condition> THEN
-- logic
ELSIF <condition> THEN
-- logic
ELSE
-- logic
END IF;
I believe, you should be able to create your anonymous block easily using the information above.
You could make a procedure or function that looks at what time it is in 24-hour format.
SELECT TO_CHAR(SYSDATE, 'HH24:MI:SS') INTO l_time FROM dual;
Here l_time is a VARCHAR2 variable holding the current time in 24-hour format.
Then look if a time is within a certain timespan.
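Putting these pieces together, here is a minimal sketch of the whole block (the hour boundaries follow the question; SET SERVEROUTPUT ON is needed to see the text):
DECLARE
    -- current hour of day, 0..23
    l_hour PLS_INTEGER := TO_NUMBER(TO_CHAR(SYSDATE, 'HH24'));
BEGIN
    IF l_hour >= 6 AND l_hour < 12 THEN
        DBMS_OUTPUT.PUT_LINE('GOOD MORNING');
    ELSIF l_hour >= 12 AND l_hour < 14 THEN
        DBMS_OUTPUT.PUT_LINE('GOOD NOON');
    ELSE
        DBMS_OUTPUT.PUT_LINE('GOOD EVE');
    END IF;
END;
/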

Postgresql partitions: Abnormally high seq scan cost on master table

I have a little database of a few hundred million rows for storing call detail records. I set up partitioning as per:
http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
and it seemed to work pretty well until now. I have a master table "acmecdr" which has rules for inserting into the correct partition, and check constraints to make sure the correct table is used when selecting data. Here is an example of one of the partitions:
cdrs=> \d acmecdr_20130811
Table "public.acmecdr_20130811"
Column | Type | Modifiers
-------------------------------+---------+------------------------------------------------------
acmecdr_id | bigint | not null default
...snip...
h323setuptime | bigint |
acmesipstatus | integer |
acctuniquesessionid | text |
customers_id | integer |
Indexes:
"acmecdr_20130811_acmesessionegressrealm_idx" btree (acmesessionegressrealm)
"acmecdr_20130811_acmesessioningressrealm_idx" btree (acmesessioningressrealm)
"acmecdr_20130811_calledstationid_idx" btree (calledstationid)
"acmecdr_20130811_callingstationid_idx" btree (callingstationid)
"acmecdr_20130811_h323setuptime_idx" btree (h323setuptime)
Check constraints:
"acmecdr_20130811_h323setuptime_check" CHECK (h323setuptime >= 1376179200 AND h323setuptime < 1376265600)
Inherits: acmecdr
Now, as one would expect, with SET constraint_exclusion = on the correct partition should automatically be preferred, and since there is an index on it, there should only be one index scan.
However:
cdrs=> explain analyze select * from acmecdr where h323setuptime > 1376179210 and h323setuptime < 1376179400;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Result (cost=0.00..1435884.93 rows=94 width=1130) (actual time=138857.660..138858.778 rows=112 loops=1)
-> Append (cost=0.00..1435884.93 rows=94 width=1130) (actual time=138857.628..138858.189 rows=112 loops=1)
-> Seq Scan on acmecdr (cost=0.00..1435863.60 rows=1 width=1137) (actual time=138857.584..138857.584 rows=0 loops=1)
Filter: ((h323setuptime > 1376179210) AND (h323setuptime < 1376179400))
-> Index Scan using acmecdr_20130811_h323setuptime_idx on acmecdr_20130811 acmecdr (cost=0.00..21.33 rows=93 width=1130) (actual time=0.037..0.283 rows=112 loops=1)
Index Cond: ((h323setuptime > 1376179210) AND (h323setuptime < 1376179400))
Total runtime: 138859.240 ms
(7 rows)
So I can see it's not scanning all the partitions, only the relevant one (via an index scan, which is pretty quick) plus the master table (which seems to be normal from the examples I've seen). But the high cost of the seq scan on the master table seems abnormal. I would love for that to come down, and I see no reason for it, especially since the master table does not have any records in it:
cdrs=> select count(*) from only acmecdr;
count
-------
0
(1 row)
Unless I'm missing something obvious, this query should be quick. But it's not: it takes about 2 minutes, which does not seem normal at all (even for a slow server).
I'm out of ideas of what to try next, so if anyone has any suggestions or pointers in the right direction, it would be very much appreciated.
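A hedged diagnostic sketch, not a confirmed fix: the planner costs a sequential scan from pg_class.relpages, so a master table that is empty of live rows but bloated with dead ones can still look enormous to the planner. Checking, and then vacuuming only the master table, would confirm or rule that out:
-- what the planner believes about the (supposedly empty) master table
SELECT relpages, reltuples FROM pg_class WHERE relname = 'acmecdr';
-- reclaim dead space in the master table and refresh its statistics
VACUUM ANALYZE acmecdr;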
