MariaDB ENGINE INNODB STATUS row operations reads/s calculation

I am trying to track down possible performance issues with our Galera cluster and had a look at the SHOW ENGINE INNODB STATUS output.
Under ROW OPERATIONS the counters for rows inserted, updated, deleted, and read are listed, along with per-second rates, but I cannot find any information about how these rates are calculated.
Before a server restart I saw this:
Number of rows inserted 4071508, updated 3162114, deleted 1655661, read 25711900253
20.46 inserts/s, 0.62 updates/s, 0.46 deletes/s, 51.38 reads/s
and after the server restart I see this:
Number of rows inserted 4112209, updated 3246564, deleted 1692361, read 26696845630
60.55 inserts/s, 1.48 updates/s, 0.55 deletes/s, 103388.50 reads/s
So the number of reads has increased, but not drastically, yet the reads per second have gone from about 50 to more than 100,000.
I assume the read counter is somehow divided by the server uptime, but I cannot find any confirmation of that.
Does anyone have insights into this?
Thanks in advance and greetings.
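For what it's worth, the per-second figures in SHOW ENGINE INNODB STATUS are averages over the window since the status was last generated (the output header reads "Per second averages calculated from the last N seconds"), not over server uptime, which is why a very short window right after a restart can produce an inflated rate. Below is a minimal sketch of computing a comparable rate yourself by sampling the cumulative Innodb_rows_read counter twice; the connection details and the 60-second window are placeholders, not anything from the original question.

# Sketch (not the server's internal code): sample the cumulative
# Innodb_rows_read counter twice and divide the delta by the window.
# Connection details and the 60 s window are placeholders.
import time
import pymysql  # assumption: PyMySQL; any MariaDB client works the same way

def innodb_rows_read(conn):
    """Return the cumulative Innodb_rows_read counter."""
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_rows_read'")
        return int(cur.fetchone()[1])

conn = pymysql.connect(host="127.0.0.1", user="monitor", password="secret")

window = 60  # seconds between samples
first = innodb_rows_read(conn)
time.sleep(window)
second = innodb_rows_read(conn)

# Rate = counter delta / elapsed time, the same shape of calculation the
# status output performs over the interval since it was last generated.
print(f"{(second - first) / window:.2f} reads/s")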

Related

Redis error: Timeout performing EVAL (5000ms) with ASP.NET session state

We are getting frequent errors like the one below after migrating ASP.NET session state to Redis:
Timeout performing EVAL(5000ms), inst: 0,qs:5,in:65536,serverEndpoint: Unspecified/xxxxx, mgr 10 of 10 available, clientName:xxxx, IOCP:(Busy=0,Free=200,Min=100,Max=200),WORKER:(Busy=11,Free=189,Min=100,Max=200), v:2.0.519.65453
The inst, qs, in, busy, and free values change with each error.
We are using an on-prem Redis instance (Tier 1, 1 GB) with no replication memory allocated.
With each person logging in we see 2 keys added (Internal, Data), and our overall data size is about 2 MB. We have approximately 70 keys. Most keys are very small, but 3 have very big values that make up approximately 1.7 MB of the 2 MB: one is about 1 MB on its own, the other two make up about 0.7 MB, and the remaining 67 keys/values total about 0.3 MB.
I have seen that this issue generally occurs when trying to fetch one of those 3 bigger values.
Is there any restriction in value size of a key?
Or could it be some other issue?
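For context, Redis string values can be up to 512 MB, so a ~1 MB value is nowhere near a hard limit; large payloads are more likely stressing the client's 5000 ms operation timeout. A small sketch (assuming the redis-py client and Redis 4.0+ for MEMORY USAGE; host and port are placeholders) to confirm which keys are the heavy ones:

# Sketch: report the largest keys so the oversized session entries are easy
# to spot. Assumes redis-py and Redis >= 4.0 (for MEMORY USAGE); host/port
# are placeholders.
import redis

r = redis.Redis(host="localhost", port=6379)

sizes = []
for key in r.scan_iter(count=100):      # non-blocking scan of the keyspace
    size = r.memory_usage(key) or 0     # approximate bytes for key + value
    sizes.append((size, key))

for size, key in sorted(sizes, reverse=True)[:10]:
    print(f"{size / 1024:.1f} KiB  {key.decode(errors='replace')}")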

Talend Oracle Update taking more time

I have an ETL process in Talend that extracts data from SQL Server and Oracle, compares the two, and then inserts the transformed data into Oracle.
The ETL process is:
tSQLInput -- Row1 -----
-Comparison--TMap --->tOracleOutput
tOracleInput -- Row2 ---
For the same process, inserting 213,058 rows takes only 3 seconds, but the update takes around 15 minutes.
Insertion runs at about 10,000 rows/s, while the update runs at only about 332 rows/s.
I want to improve the performance of the update step.
(Screenshots: tOracleOutput Basic Settings and tOracleOutput Advanced Settings)
Can anybody please shed some light on why the update alone takes so much longer?
Many thanks in advance.
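A common cause of this kind of gap is that the key column tOracleOutput uses in the UPDATE's WHERE clause is not indexed on the target table, so every updated row pays for a scan. One quick way to check is sketched below with the python-oracledb driver; the connection details and the TARGET_TABLE name are placeholders for your actual target, not anything taken from the question.

# Sketch: list the indexed columns on the Talend target table to confirm the
# update key is covered by an index. Connection details and TARGET_TABLE are
# placeholders.
import oracledb  # assumption: python-oracledb driver

conn = oracledb.connect(user="etl_user", password="secret", dsn="dbhost/orclpdb1")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT index_name, column_name, column_position
        FROM   all_ind_columns
        WHERE  table_name = :tab
        ORDER  BY index_name, column_position
        """,
        tab="TARGET_TABLE",
    )
    rows = cur.fetchall()

if not rows:
    print("No indexes on TARGET_TABLE - index the update key column(s).")
for index_name, column_name, position in rows:
    print(index_name, position, column_name)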

What is the expected runtime range of pragma quick_check?

I'm going to run PRAGMA quick_check on a very large SQLite database and would like to estimate the time it will take to complete. Is there a (ballpark) way to do that, assuming a reasonably fast HDD or SSD? Is it O(n) or worse?
I'm obviously not looking for an accurate prediction, just something like "1 to 5 hours per 10 GB".
quick_check looks for out-of-order records, missing pages, malformed records, and CHECK and NOT NULL constraint errors.
It can be very slow.
This is not intended as an answer, but as a reference point, until someone more knowledgeable than I am can help out with a more general answer.
A 90 GB SQLite3 database (1 table, 1 index, 20M rows) took 13 hours on my mid-grade SSD with 16 GB RAM, running Windows 7/NTFS. The process was clearly disk-bound.
Assuming a linear dependency, this comes out at roughly 5-10 minutes per gigabyte.
According to a few pages I found online, a full PRAGMA integrity_check takes roughly 8 times longer (about 1 h/GB).
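If you want your own figure rather than a ballpark, timing the pragma on a smaller database (or a copy) and extrapolating linearly is straightforward. A minimal sketch with Python's built-in sqlite3 module follows; the file path is a placeholder.

# Sketch: time PRAGMA quick_check and report seconds per gigabyte so the
# figure can be scaled up to the full database. The path is a placeholder.
import os
import sqlite3
import time

path = "sample.db"
size_gb = os.path.getsize(path) / 1024**3

conn = sqlite3.connect(path)
start = time.monotonic()
result = conn.execute("PRAGMA quick_check").fetchall()
elapsed = time.monotonic() - start

print(f"{size_gb:.2f} GB checked in {elapsed:.1f} s "
      f"({elapsed / max(size_gb, 1e-9):.1f} s/GB); first rows: {result[:3]}")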

SQL Azure - maxing DTU percentage querying an 'empty' table

I have been having trouble with a database for the last month or so... (it was fine in November).
(S0 Standard tier - not even the lowest tier.) - Fixed in update 5
Select statements are causing my database to throttle (timeout even).
To make sure it wasn't just a problem with my database, I've:
Copied the database... same problem on both (unless increasing the tier).
Deleted the database and created it again (as a blank database) from Entity Framework code-first.
The second one proved more interesting. Now my database has 'no' data, and it still peaks the DTU and makes things unresponsive.
Firstly... is this normal?
I do have more complicated databases at work that use at most about 10% of the DTU at the same level (S0), so I'm perplexed. This is just one user and one database, currently empty, and I can make it unresponsive.
Update 2:
From the copy (the one with ~10,000 records): I upgraded it to Standard S2 (potentially 5x more powerful than S0). No problems.
Downgraded it to S0 again and ran:
SET STATISTICS IO ON
SET STATISTICS TIME ON
select * from Competitions -- 6 records here...
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(6 row(s) affected)
Table 'Competitions'. Scan count 1, logical reads 3, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 407 ms, elapsed time = 21291 ms.
Am I misunderstanding Azure databases - do they need to keep warming up? If I run the same query again it is immediate. If I close the connection and run it again, it's back to ~20 seconds.
Update 3:
At the S1 level, the same first-time query above takes ~1 second.
Update 4:
S0 level again... first query:
(6 row(s) affected)
Table 'Competitions'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 35 ms.
Nothing is changing on these databases apart from the tier. After roaming around on one of my live sites (different database, schema and data) on S0... it peaked at 14.58% (it's a stats site).
It's not my best investigation, but I'm tired. :D
I can give more updates if anyone is curious.
Update 5 - fixed (sort of)
The first few 100% spikes were from the same table. After updating the schema and removing a geography field (the data in that column was always null), the load has moved to the later, smaller peaks of ~1-4% and the result time is back in the very low milliseconds.
Thanks for the help,
Matt
The cause of the crippling 100% DTU spikes was a GEOGRAPHY field:
http://msdn.microsoft.com/en-gb/library/cc280766.aspx
Removing this from my queries fixed the problem. Removing it from my EF models will hopefully make sure it never comes back.
I do want to use the geography field in Azure eventually (probably not for a few months), so if anyone knows why it was causing an unexpected amount of DTU to be spent on a (currently always-null) column, that would be very useful for future knowledge.
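Pending a real explanation, the workaround that matches the fix above is simply never selecting the geography column, i.e. projecting an explicit column list instead of SELECT * (which EF effectively issues for the full entity). A rough sketch with pyodbc against the Azure database follows; the connection string and the Id/Name column names are placeholders, not the actual schema.

# Sketch: query with an explicit column list so the (always-null) geography
# column never enters the result set. Connection string and column names
# (Id, Name) are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;Uid=appuser;Pwd=secret;Encrypt=yes;"
)
cur = conn.cursor()

# SELECT * would also pull the GEOGRAPHY column; naming the columns avoids it.
cur.execute("SELECT Id, Name FROM Competitions")
for row in cur.fetchall():
    print(row.Id, row.Name)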

SQLite3 WAL mode commit performance

I have a SQLite3 DB where, each second, I insert one row of data (500 bytes per row) into each table (around 100 tables). After several minutes, in order to keep the DB size small, I also remove the last row in each table. So in total I insert 50 KB of data per second and, at steady state, remove 50 KB of data. I wrap the inserts and deletions in a transaction and commit it once per second.
WAL mode is enabled, with sync mode = NORMAL. There is another process that occasionally performs read operation on the DB, but those are very fast.
I'm seeing strange behaviour. Every few minutes the commit itself takes several seconds, while at other times it takes a few milliseconds. I tried playing with wal_autocheckpoint, with no success.
It is worth mentioning that the filesystem sits on Linux software RAID inside a Linux VM. Without the RAID the performance is better, but those "hiccups" still occur.
Thanks!
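One plausible cause of periodic multi-second commits in WAL mode is the automatic checkpoint running as part of the committing connection's work once the WAL crosses its threshold, possibly slowed further when the occasional reader keeps the WAL pinned. One thing to try, sketched below with Python's sqlite3 module, is disabling automatic checkpoints on the writer and running an explicit checkpoint on a schedule you control; the file name, table shape, and 30-second interval are placeholders for illustration only.

# Sketch: move checkpointing out of the per-second commit by disabling the
# auto-checkpoint and running it explicitly on a schedule you control.
# File name, table shape, and the 30 s interval are placeholders.
import sqlite3
import time

conn = sqlite3.connect("data.db")
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("PRAGMA wal_autocheckpoint=0")   # commits no longer trigger checkpoints
conn.execute(
    "CREATE TABLE IF NOT EXISTS samples (id INTEGER PRIMARY KEY, payload BLOB)"
)

def write_cycle():
    # One transaction per second: insert the new row, drop the oldest one.
    with conn:
        conn.execute("INSERT INTO samples (payload) VALUES (?)", (b"x" * 500,))
        conn.execute("DELETE FROM samples WHERE id = (SELECT MIN(id) FROM samples)")

last_checkpoint = time.monotonic()
for _ in range(120):                           # run for ~2 minutes as a demo
    write_cycle()
    if time.monotonic() - last_checkpoint > 30:
        conn.execute("PRAGMA wal_checkpoint(PASSIVE)")  # non-blocking checkpoint
        last_checkpoint = time.monotonic()
    time.sleep(1)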
