I want to measure the execution time of each SQL statement in SQLite.
I understand that in the SQLite shell you can just do .timer on, but I am wondering how to do it programmatically, so that I can know how much time a SQL statement takes while applications are accessing the database in real time, not just by replaying that statement in a shell afterwards.
Could I just provide a callback function for sqlite3_profile?
Thanks a lot
After reading the code of shell.c, I found the following relevant code at the top of the file:
/* Saved resource information for the beginning of an operation */
static struct rusage sBegin;

/*
** Begin timing an operation
*/
static void beginTimer(void){
  if( enableTimer ){
    getrusage(RUSAGE_SELF, &sBegin);
  }
}

/* Return the difference of two time_structs in seconds */
static double timeDiff(struct timeval *pStart, struct timeval *pEnd){
  return (pEnd->tv_usec - pStart->tv_usec)*0.000001 +
         (double)(pEnd->tv_sec - pStart->tv_sec);
}

/*
** Print the timing results.
*/
static void endTimer(void){
  if( enableTimer ){
    struct rusage sEnd;
    getrusage(RUSAGE_SELF, &sEnd);
    printf("CPU Time: user %f sys %f\n",
       timeDiff(&sBegin.ru_utime, &sEnd.ru_utime),
       timeDiff(&sBegin.ru_stime, &sEnd.ru_stime));
  }
}
This means SQLite does not provide that kind of profiling by itself (the shell does its own timing with getrusage()), so you have to use your language's timing features (e.g. endTime - startTime) to do the job.
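For example, a minimal sketch of that endTime - startTime approach around the SQLite C API (the database file name and the statement are just placeholders, and the diff helper mirrors timeDiff() from shell.c):

#include <stdio.h>
#include <sys/time.h>
#include <sqlite3.h>

/* Same idea as timeDiff() in shell.c: difference of two timevals in seconds */
static double timeDiff(struct timeval *pStart, struct timeval *pEnd){
  return (pEnd->tv_usec - pStart->tv_usec)*0.000001 +
         (double)(pEnd->tv_sec - pStart->tv_sec);
}

int main(void){
  sqlite3 *db;
  char *errMsg = 0;
  struct timeval tStart, tEnd;

  if( sqlite3_open("app.db", &db) != SQLITE_OK ){   /* "app.db" is just a placeholder */
    fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
    return 1;
  }

  gettimeofday(&tStart, 0);                          /* start wall-clock timing */
  sqlite3_exec(db, "SELECT count(*) FROM sqlite_master;", 0, 0, &errMsg);
  gettimeofday(&tEnd, 0);                            /* stop wall-clock timing */

  printf("Statement took %f seconds\n", timeDiff(&tStart, &tEnd));

  sqlite3_free(errMsg);
  sqlite3_close(db);
  return 0;
}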
Well, Sqliteman provides some basic profiling features: it shows the query execution time.
Even though the query executes on a different processor (i.e. x86 as compared to ARM), it will still help you write optimized raw queries for SQLite.
E.g. I just tested a SELECT query with a WHERE clause: without indexes it took around 0.019 seconds, and after creating an index it takes only 0.008.
Here is the link for Sqliteman: http://sourceforge.net/projects/sqliteman/
I tried to make a query against the Realtime Database using equalTo():
database.getReference(verifiedProductsDb.dbPartVerifiedProducts).orderByChild(verifiedProductsDb.barcode).equalTo(b.toLong()).get().addOnCompleteListener {
but Android Studio gives:
None of the following functions can be called with the arguments supplied.
equalTo(Boolean) defined in com.google.firebase.database.Query
equalTo(Double) defined in com.google.firebase.database.Query
equalTo(String?) defined in com.google.firebase.database.Query
This is despite the fact that, using setValue(), Long values are written to the same database quite successfully and without problems.
The Realtime Database API on Android only has support for Double number types. The underlying wire protocol and database will interpret the long numbers correctly though, so you should be able to just do:
database.getReference("VerifiedProducts")
    .orderByChild("barcode")
    .equalTo(b.toLong().toDouble()) // 👈
    .get().addOnCompleteListener {
        ...
I can't find any documentation of this table on https://flywaydb.org or via Google. This doesn't look like seconds. I was leaning towards milliseconds, but logged execution-time entries of the form mm:ss.SSS are not lining up with the numbers in the SCHEMA_HISTORY table, although they are close.
The execution_time field is measured in milliseconds:
/**
 * The execution time (in millis) of this migration.
 */
private final int executionTime;
Source: flyway source code
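So, to read the timings in seconds you can query the history table directly. A rough sketch, assuming the default table name flyway_schema_history (older Flyway versions use schema_version, and your installation may use a custom name such as SCHEMA_HISTORY):

SELECT installed_rank,
       script,
       execution_time          AS execution_time_ms,
       execution_time / 1000.0 AS execution_time_sec
FROM   flyway_schema_history
ORDER  BY installed_rank;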
Is there any way to check which query is so CPU-intensive in the _sqlsrv2 process?
Something which gives me information about the query being executed in that process at that moment.
Is there any way to terminate that query without killing the _sqlsrv2 process?
I cannot find any official materials on that subject.
Thank you for any help.
You could look into client database-request caching.
Code examples below assume you have ABL access to the environment. If not, you will have to use SQL instead, but it shouldn't be too hard to "translate" the code below.
I haven't used this a lot myself but I wouldn't be surprised if it has some impact on performance.
You need to start caching in the active connection. This can be done in the connection itself or remotely via VST tables (as long as your remote session is connected to the same database) so you need to be able to identify your connections. This can be done via the process ID.
Generally how to enable the caching:
/* "_myconnection" is your current connection. You shouldn't do this */
FIND _myconnection NO-LOCK.
FIND _connect WHERE _connect-usr = _myconnection._MyConn-userid.
/* Start caching */
_connect._Connect-CachingType = 3.
DISPLAY _connect WITH FRAME x1 SIDE-LABELS WIDTH 100 1 COLUMN.
/* End caching */
_connect._Connect-CachingType = 0.
You need to identify your process first, via top or another program.
Then you can do something like:
/* Assuming pid 21966 */
FIND FIRST _connect NO-LOCK WHERE _Connect._Connect-Pid = 21966 NO-ERROR.
IF AVAILABLE _Connect THEN
DISPLAY _connect.
You could also look at the _Connect-Type. It should be 'SQLC' for SQL connections.
FOR EACH _Connect NO-LOCK WHERE _Connect._connect-type = "SQLC":
DISPLAY _connect._connect-type.
END.
Best of all would be to do this in a separate environment. If you can't, at least try it in a test environment first.
Here's a good guide.
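Putting the pieces above together, a rough sketch might look like the following (the pid 21966 and the frame name are just placeholders, and as noted you should try this in a test environment first):

/* Locate the SQL connection behind the busy _sqlsrv2 process */
FIND FIRST _Connect
     WHERE _Connect._Connect-Pid  = 21966
       AND _Connect._connect-type = "SQLC" NO-ERROR.

IF AVAILABLE _Connect THEN DO:
    /* Start caching that connection's database requests */
    _Connect._Connect-CachingType = 3.

    /* ...let the suspect query run for a while, then inspect the connection,
       including _Connect-CacheInfo, to see what it has been executing... */
    DISPLAY _Connect WITH FRAME x2 SIDE-LABELS WIDTH 100 1 COLUMN.

    /* Stop caching again */
    _Connect._Connect-CachingType = 0.
END.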
You can use a Select like this:
select
c."_Connect-type",
c."_Connect-PID" as 'PID',
c."_connect-ipaddress" as 'IP',
c."_Connect-CacheInfo"
from
pub."_connect" c
where
c."_Connect-CacheInfo" is not null
But first you need to enable the connection cache; follow the example above.
How to avoid DoS attack in the below case?
CREATE OR REPLACE FUNCTION slow_function (
  p_in INT
)
AS
BEGIN
  DBMS_LOCK.sleep(p_in);
END;
DBMS_LOCK.SLEEP does not use any significant resources. There is nothing specific to that function that would enable a denial of service attack. On the contrary, that is probably one of the safest functions anybody could execute.
To demonstrate this, I ran the below test case on a PC to simulate 100 sessions calling the function at the same time. Even with 100 sessions, oracle.exe used less than 1% of the CPU.
CREATE OR REPLACE FUNCTION slow_function (
  p_in INT) RETURN NUMBER
AS
BEGIN
  DBMS_LOCK.sleep(p_in);
  RETURN 1;
END;
/
select /*+ parallel(100) */ slow_function(60) from dba_tables;
--In another session, check that statement is really using 100 sessions.
select * from v$px_process;
I am using symfony2 and doctrine mongodb odm to import product data from CSV files. I created a console command to create the Product objects and then persist them and flush the DocumentManager. The flush is taking upwards of 30 seconds and I only have a couple thousand products. There will potentially be many more in the future.
I am wondering if there are any optimizations/best practices to make flushing a large quantity of new objects faster in doctrine. It seems like there wouldn't need to be that much processing on the objects since they are all new and just need to be added to the collection.
I experienced a similar problem (loading thousands of products from a CSV, as it happens). My problem revolved more around running out of memory, but the solution produced a significant increase in speed as well.
Essentially, I put a counter inside the loop and flushed and then cleared the manager every so often. I found that a batch size of 150 yielded the best results. I am sure it depends largely on how you are processing the data, as I had LOTS of number crunching going on to clean it before inserting.
For reference, it loads about 5,500 products with 100+ fields and does processing on them in about 20 seconds. It was taking 3+ minutes before the modification (if it even finished at all, due to running out of memory).
// LOOP over the CSV rows {
    // ... build and persist the Product for this row ...

    if ($count % $batchSize == 0) {
        $manager->flush();       // write the pending documents to MongoDB
        $manager->clear();       // detach them so memory can be reclaimed
        gc_collect_cycles();
        echo $count . ' | ' . number_format(memory_get_usage() / 1024, 4) . " KBs\n";
    }
    $count++;
// }
Don't forget to run the $manager->flush() at least one more time once the loop is complete to catch those 1-149 records that wouldn't trigger it in the loop.
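For example, a small sketch of the tail end of the command:

// after the loop: flush and clear the final, partial batch (the last 1-149 records)
$manager->flush();
$manager->clear();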
I have a very large database. I find it much more efficient to do a flush every time you insert, so the code better manages access to the database.
$dm->persist($object);
$dm->flush();
$dm->clear();