How to avoid DoS attack in the below case?
CREATE OR REPLACE FUNCTION slow_function (
  p_in INT) RETURN NUMBER
AS
BEGIN
  DBMS_LOCK.sleep(p_in);
  RETURN 1;
END;
/
DBMS_LOCK.SLEEP does not use any significant resources. There is nothing specific to that function that would enable a denial of service attack. On the contrary, that is probably one of the safest functions anybody could execute.
To demonstrate this, I ran the below test case on a PC to simulate 100 sessions calling the function at the same time. Even with 100 sessions, oracle.exe used less than 1% of the CPU.
CREATE OR REPLACE FUNCTION slow_function (
p_in INT) RETURN NUMBER
AS
BEGIN
DBMS_LOCK.sleep(p_in);
RETURN 1;
END;
/
select /*+ parallel(100) */ slow_function(60) from dba_tables;
--In another session, check that statement is really using 100 sessions.
select * from v$px_process;
I have an SQL script that is called from within a shell script and takes a long time to run. It currently contains dbms_output.put_line statements at various points. The output from these print statements appears in the log files, but only once the script has completed.
Is there any way to ensure that the output appears in the log file as the script is running?
Not really. The way DBMS_OUTPUT works is this: your PL/SQL block executes on the database server with no interaction with the client. So when you call PUT_LINE, it is just putting that text into a buffer in memory on the server. When your PL/SQL block completes, control is returned to the client (I'm assuming SQL*Plus in this case); at that point the client gets the text out of the buffer by calling GET_LINE, and displays it.
So the only way you can make the output appear in the log file more frequently is to break up a large PL/SQL block into multiple smaller blocks, so control is returned to the client more often. This may not be practical depending on what your code is doing.
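For example (a sketch, where do_step is a placeholder for whatever work each iteration does): instead of one block that loops for the whole run, split the loop so control, and therefore the buffer, returns to SQL*Plus between blocks:
BEGIN
  FOR i IN 1 .. 50 LOOP
    do_step(i);                          -- placeholder for your real work
    DBMS_OUTPUT.put_line('iter=' || i);
  END LOOP;
END;
/
BEGIN
  FOR i IN 51 .. 100 LOOP
    do_step(i);
    DBMS_OUTPUT.put_line('iter=' || i);
  END LOOP;
END;
/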
Other alternatives are to use UTL_FILE to write to a text file, which can be flushed whenever you like, or use an autonomous-transaction procedure to insert debug statements into a database table and commit after each one.
If it is possible for you, you should replace the calls to dbms_output.put_line with your own procedure.
Here is the code for this WRITE_LOG procedure, if you want the ability to choose between two logging solutions:
write logs to a table in an autonomous transaction
CREATE OR REPLACE PROCEDURE to_dbg_table(p_log varchar2)
-- table mode:
-- requires
-- CREATE TABLE dbg (u varchar2(200) --- username
-- , d timestamp --- date
-- , l varchar2(4000) --- log
-- );
AS
pragma autonomous_transaction;
BEGIN
insert into dbg(u, d, l) values (user, sysdate, p_log);
commit;
END to_dbg_table;
/
or write directly to a file on the DB server that hosts your database
This uses the Oracle directory TMP_DIR:
CREATE OR REPLACE PROCEDURE to_dbg_file(p_fname varchar2, p_log varchar2)
-- file mode:
-- requires
--- CREATE OR REPLACE DIRECTORY TMP_DIR as '/directory/where/oracle/can/write/on/DB_server/';
AS
l_file utl_file.file_type;
BEGIN
l_file := utl_file.fopen('TMP_DIR', p_fname, 'A');
utl_file.put_line(l_file, p_log);
utl_file.fflush(l_file);
utl_file.fclose(l_file);
END to_dbg_file;
/
WRITE_LOG
Then the WRITE_LOG procedure, which can switch between the two methods, or be deactivated to avoid performance loss (g_DEBUG := FALSE).
CREATE OR REPLACE PROCEDURE write_log(p_log varchar2) AS
-- g_DEBUG can be set as a package variable defaulted to FALSE
-- then change it when debugging is required
g_DEBUG boolean := true;
-- the log file name can be set with several methods...
g_logfname varchar2(32767) := 'my_output.log';
-- choose between 2 logging solutions:
-- file mode:
g_TYPE varchar2(7):= 'file';
-- table mode:
--g_TYPE varchar2(7):= 'table';
-----------------------------------------------------------------
BEGIN
if g_DEBUG then
if g_TYPE='file' then
to_dbg_file(g_logfname, p_log);
elsif g_TYPE='table' then
to_dbg_table(p_log);
end if;
end if;
END write_log;
/
And here is how to test the above:
1) Launch this (file mode) from SQL*Plus:
BEGIN
write_log('this is a test');
for i in 1..100 loop
DBMS_LOCK.sleep(1);
write_log('iter=' || i);
end loop;
write_log('test complete');
END;
/
2) On the database server, open a shell and run:
tail -f -n500 /directory/where/oracle/can/write/on/DB_server/my_output.log
Two alternatives:
You can insert your logging details into a logging table using an autonomous transaction, and query that table from another session (SQL*Plus, Toad, SQL Developer, etc.) while the script runs. You have to use an autonomous transaction so that you can commit your logging without interfering with the transaction handling in your main SQL script.
Another alternative is to use a pipelined function that returns your logging information. See here for an example: http://berxblog.blogspot.com/2009/01/pipelined-function-vs-dbmsoutput.html With a pipelined function you don't need a second session.
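A minimal sketch of that idea (the collection type and function here are made up for illustration; they are not the linked post's exact code). Each PIPE ROW becomes available to the client as it fetches, so the log lines appear while the task is still running (in SQL*Plus, set arraysize 1 to reduce fetch buffering):
CREATE OR REPLACE TYPE t_log_tab AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION long_task RETURN t_log_tab PIPELINED AS
BEGIN
  FOR i IN 1 .. 100 LOOP
    DBMS_LOCK.sleep(1);           -- stands in for the slow work
    PIPE ROW ('iter=' || i);      -- row is handed to the client as it fetches
  END LOOP;
  RETURN;
END;
/
SELECT * FROM TABLE(long_task);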
The buffer of DBMS_OUTPUT is only read when the procedure DBMS_OUTPUT.GET_LINE is called. If your client application is SQL*Plus, that means the buffer only gets flushed once the procedure finishes.
You can apply the method described in this SO answer to write the DBMS_OUTPUT buffer to a file.
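For reference, draining the buffer yourself looks roughly like this (a sketch; it must run in the same session, after the block that filled the buffer, and GET_LINE sets its status argument to 1 once the buffer is empty):
DECLARE
  l_line   VARCHAR2(32767);
  l_status INTEGER;
BEGIN
  LOOP
    DBMS_OUTPUT.get_line(l_line, l_status);
    EXIT WHEN l_status <> 0;   -- 1 means no more lines in the buffer
    -- write l_line somewhere persistent here, e.g. with UTL_FILE
  END LOOP;
END;
/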
Set the session metadata MODULE and/or ACTION using the DBMS_APPLICATION_INFO package.
Monitor with OEM, for example:
Module: ArchiveData
Action: xxx of xxxx
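For example (a sketch; the loop body and the counts stand in for your real work):
BEGIN
  DBMS_APPLICATION_INFO.set_module(module_name => 'ArchiveData',
                                   action_name => 'starting');
  FOR i IN 1 .. 1000 LOOP
    -- ... do one unit of work here ...
    DBMS_APPLICATION_INFO.set_action(i || ' of 1000');
  END LOOP;
END;
/
-- then, from another session (or OEM):
SELECT module, action FROM v$session WHERE module = 'ArchiveData';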
If you have access to the system shell from your PL/SQL environment, you can call netcat:
BEGIN RUN_SHELL('echo "'||p_msg||'" | nc '||p_host||' '||p_port||' -w 5'); END;
p_msg is the log message.
p_host is a host running a Python script that reads data from a socket on port p_port.
I used this design when I wrote aplogr for real-time shell and PL/SQL log monitoring.
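The listener on the receiving host can be as simple as this sketch (the port number is arbitrary; the actual aplogr script is not shown here):
# Minimal TCP listener that prints whatever netcat sends it.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5000))
srv.listen(5)
while True:
    conn, addr = srv.accept()
    data = conn.recv(4096)
    if data:
        print(data.decode(errors="replace").rstrip())
    conn.close()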
I am developing a quizzing game in Delphi and I would like to have a timer so that players don't have unlimited time to answer the questions. I am using the function "Time" to get the current time, but I don't know how to convert it to something like an integer so that when, let's say, 10 seconds have passed, the player loses their chance. It would look something like this:
Var
CurrentTime,Aux:TDateTime;
Begin
CurrentTime:=Time; //Current Time is assigned to a global variable.
Aux:=CurrentTime;
While (Aux-10000<=CurrentTime) do
Begin
if (Answer:=True) then //If the answer is already given by the player we break the while loop
Break;
Aux:=Time; //We refresh the auxilary variable
if (Aux-10000>=CurrentTime) then //We check if 10 seconds have passed
Begin
showmessage('You have taken too much time, your turn is lost');
exit; //We leave the script
end;
end;
end;
The problem is I can't do arithmetic operations on TDateTimes, as far as I know, so I need a different method for comparing the two time instances. Any help would be appreciated, thanks!
TDate, TTime, and TDateTime are implemented using floating-point numbers, so you can perform arithmetic operations on them.
But you really shouldn't, in this case. The DateUtils unit has many functions for working with date/time values, eg:
uses
..., DateUtils;
var
StartTime, Aux: TDateTime;
begin
StartTime := Time();
Aux := StartTime;
...
while (not Answer) and (MillisecondsBetween(Aux, StartTime) < 10000) do
begin
Sleep(0);
Aux := Time();
end;
if (not Answer) then
begin
ShowMessage('You have taken too much time, your turn is lost');
Exit; //We leave the script
end;
...
end;
Note that this is not really a good use for TDateTime. Your calculations are relying on the system clock in local time being accurate and unchanging, but it can be changed dynamically while your code is running (user updates, network updates, daylight saving time change, etc), throwing off the results.
Consider using TStopWatch instead. It is intended for exactly this kind of use-case (determining elapsed time between actions), eg:
uses
..., System.Diagnostics;
var
SW: TStopWatch;
begin
SW := TStopWatch.StartNew;
...
while (not Answer) and (SW.ElapsedMilliseconds < 10000) do
Sleep(0);
if (not Answer) then
begin
ShowMessage('You have taken too much time, your turn is lost');
Exit; //We leave the script
end;
...
end;
Or, you could use TEvent instead, and have the answer signal the event when ready, eg:
uses
..., SyncObjs;
var
AnsweredEvent: TEvent;
...
// when the answer is submitted:
AnsweredEvent.SetEvent;
...
begin
AnsweredEvent.ResetEvent;
...
if AnsweredEvent.WaitFor(10000) <> wrSignaled then
begin
ShowMessage('You have taken too much time, your turn is lost');
Exit; //We leave the script
end;
end;
initialization
AnsweredEvent := TEvent.Create;
finalization
AnsweredEvent.Free;
I have written a few applications like this, using a TTimer. The timer's interval is set to 1000, which is equivalent to 1 second (you can use a different value); every time the timer's OnTimer event executes, a global variable is incremented which is then checked against the time limit (10 seconds?); if the variable equals this limit then first the timer is stopped, then the code performs whatever is necessary to transfer to the next person, or next question.
There should be similar code that executes when the person enters an answer as this code too needs to first save the answer then transfers to the next person. This portion should also stop the timer.
The 'show next question' part should restart the timer and reset the global variable only after the next question has been displayed, as it might take some time for it to be fetched.
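A minimal sketch of that approach (it assumes a TTimer named Timer1 with Interval = 1000, an integer field FElapsedSeconds on the form, and a NextTurn procedure; all of these names are illustrative):
procedure TMainForm.Timer1Timer(Sender: TObject);
begin
  Inc(FElapsedSeconds);            // fires once per second (Interval = 1000)
  if FElapsedSeconds >= 10 then
  begin
    Timer1.Enabled := False;       // stop the timer first
    ShowMessage('You have taken too much time, your turn is lost');
    NextTurn;                      // transfer to the next person/question
  end;
end;

procedure TMainForm.ShowNextQuestion;
begin
  // ... fetch and display the question first ...
  FElapsedSeconds := 0;            // reset only once the question is visible
  Timer1.Enabled := True;
end;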
I have some long-running (> 1 minute) tasks that used to run fine through an XQuery called via REST. However, we have now placed these servers behind an Amazon load balancer, and because of the way Amazon load balancers work, no single query can have a duration exceeding 29 seconds; Amazon will just time out the query.
NOTE: There is no control over this.
So, the solution we came up with is for the XQuery to just trigger a scheduled task to run, which works fine. I had thought this would work, and it does, with one exception. Using something like this:
declare function jobs:create-job ($xquery-resource as xs:string, $period as xs:integer, $job-name as xs:string, $job-parameters as element()?, $delay as xs:integer, $repeat as xs:integer) as xs:boolean {
    let $jobstatus := scheduler:schedule-xquery-periodic-job($xquery-resource, $period, $job-name, $job-parameters, $delay, $repeat)
    return $jobstatus
};
And setting $repeat to 0, it runs once, but the job named $job-name is still in the list of scheduled jobs as "COMPLETE". Trying to run the code again errors, apparently because another job with the same name cannot be created. If I change the job name it runs, so I know it is the name that is causing the error. The scheduler log shows:
<scheduler:job name="Create Vault">
<scheduler:trigger name="Create Vault Trigger">
<expression>30000</expression>
<state>COMPLETE</state>
<start>2019-08-22T20:07:14.775Z</start>
<end/>
<previous>2019-08-22T20:07:14.775Z</previous>
<next/>
<final/>
</scheduler:trigger>
</scheduler:job>
Now, is there a different way that we could execute an XQuery triggered only once, so that we do not have this job-name issue? Or a way to tell the scheduled task to self-destruct and remove itself? Otherwise we would need to write some complicated code to create another task to delete the job after it is run (or maybe the create-job code should delete any existing $job-name jobs), or leave them behind and use some GUID in the name.
Update I
The cleanest way we found is this (essentially if the job exists when we go to create it, delete it and then create another one):
declare function jobs:create-job ($xquery-resource as xs:string, $period as xs:integer, $job-name as xs:string, $job-parameters as element()?, $delay as xs:integer, $repeat as xs:integer) as xs:boolean {
let $cleanjob := if (count(scheduler:get-scheduled-jobs()//scheduler:job[@name = $job-name]) > 0) then scheduler:delete-scheduled-job($job-name) else true()
let $jobstatus := if($cleanjob) then scheduler:schedule-xquery-periodic-job($xquery-resource, $period, $job-name, $job-parameters, $delay, $repeat) else false()
return $jobstatus
};
I would say we should also check the status and not delete the job if it is running. However, these tasks are built to run on demand, and that is likely once a day at most. The longest task is formatting about 3,000 documents to PDFs, which takes maybe 20 minutes. It is not likely anyone would clash with another running task, but we could add that check to be sure.
Or should the answer be to examine util:eval-async()? That is confusing, as the documentation does not really say whether it just "shells out" the execution. If it spawns a thread and waits for that thread, then it will not work either.
This is a multiuser (multithreaded) application where various departments access their own database. The database is SQLite and I am using FireDAC. For each department I have assigned a separate ADConnection so I don't get any unexpected locks.
Which connection is activated depends solely on the number produced by ADQuery3. This is done in the MainForm's OnShow because it needs to be this way (the form is shown after a successful login). I would like to be able to close every connection on FormClose, but I run into some bad issues when multiple users use the same database and log in and out. So I would like to ask: is the programming logic I am using right, or could this be done in a better way?
Also, I have never used this many begin/end/else blocks and I am wondering how to proceed with this.
I mean, when I need to check if the number of another department came up, like
if DataModule1.ADQuery3.FieldByName('DEPARTMENT').AsString = '12', where does the next ELSE go?
procedure TMainForm.FormShow(Sender: TObject);
begin
if DataModule1.ADQuery3.FieldByName('DEPARTMENT').AsString = '13'
then begin
try
if DataModule1.1_CONNECTION.Connected = true then
DataModule1.1_CONNECTION.Connected := False
else
DataModule1.1_CONNECTION.DriverName:= 'SQLite';
DataModule1.1_CONNECTION.Params.Values['Database']:= ExtractFilePath(Application.ExeName)+ 'mydatabase.db';
DataModule1.1_CONNECTION.Connected := true;
DataModule1.ADTable1.TableName :='DEPT_13';
DataModule1.DEPT_13.Active:=True;
cxGrid1.ActiveLevel.GridView := DEPT_13;
except
on E: Exception do begin
ShowMessage('There was an error... : ' + E.Message);
end;
end;
end;
I want to measure the execution time of each sql statement in sqlite.
I understand that in the sqlite shell you can just do .timer on, but I am wondering how to do it in a programmable way, so that I can know how much time an SQL statement takes while applications are accessing the database in real time, not just by replaying that statement in a shell afterwards.
Could I just provide a callback function for sqlite3_profile?
Thanks a lot
After reading the code of shell.c, I found the following relevant information at the top of the file:
/* Saved resource information for the beginning of an operation */
static struct rusage sBegin;
/*
** Begin timing an operation
*/
static void beginTimer(void){
if( enableTimer ){
getrusage(RUSAGE_SELF, &sBegin);
}
}
/* Return the difference of two time_structs in seconds */
static double timeDiff(struct timeval *pStart, struct timeval *pEnd){
return (pEnd->tv_usec - pStart->tv_usec)*0.000001 +
(double)(pEnd->tv_sec - pStart->tv_sec);
}
/*
** Print the timing results.
*/
static void endTimer(void){
if( enableTimer ){
struct rusage sEnd;
getrusage(RUSAGE_SELF, &sEnd);
printf("CPU Time: user %f sys %f\n",
timeDiff(&sBegin.ru_utime, &sEnd.ru_utime),
timeDiff(&sBegin.ru_stime, &sEnd.ru_stime));
}
}
This means the shell's .timer does not rely on any profiling support inside the SQLite library; the shell simply measures resource usage around each statement with getrusage(). To get the same numbers from an application, you have to take the measurements yourself (e.g. end time minus start time around each statement).
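That said, the sqlite3_profile() hook the question mentions does exist in the C API (deprecated in newer releases in favor of sqlite3_trace_v2(), but still available): it invokes a callback with the SQL text and the wall-clock run time each time a statement finishes. A minimal sketch:
#include <stdio.h>
#include <sqlite3.h>

/* Invoked by SQLite each time a statement finishes; nanos is wall-clock time. */
static void profile_cb(void *ctx, const char *sql, sqlite3_uint64 nanos)
{
    (void)ctx;
    fprintf(stderr, "%.3f ms: %s\n", nanos / 1.0e6, sql);
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;

    sqlite3_profile(db, profile_cb, NULL);  /* register the timing callback */

    sqlite3_exec(db, "CREATE TABLE t(x)", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t VALUES (1)", 0, 0, 0);

    sqlite3_close(db);
    return 0;
}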
Sqliteman provides some basic profiling features; it shows query execution time.
Even though the query may execute on a different processor (i.e. x86 as compared to ARM), it will still help you write optimized raw queries for SQLite.
E.g. I just tested a SELECT query with a WHERE clause: without an index it took around 0.019 seconds, and after creating an index it only takes 0.008.
Here is the link for Sqliteman: http://sourceforge.net/projects/sqliteman/