Version: 10.4.8-MariaDB
engine: ROCKSDB
I have a table labor with 40 million rows and a table map with 200,000 rows, and I want to update some columns of labor with values from map. Since I ran into performance problems as the table grew, I decided to migrate from InnoDB to the ROCKSDB engine:
ALTER TABLE some.labor ENGINE=RocksDB;
When I then tried to partition the table:
ALTER TABLE some.labor PARTITION BY KEY() PARTITIONS 10;
I got this error:
SQL Error (1296): Got error 10 'Operation aborted: Failed to acquire lock due to rocksdb_max_row_locks limit' from ROCKSDB
I found this solution on SO:
SET session rocksdb_bulk_load=1;
After setting this variable the table could be partitioned.
After that I wanted to do an update:
UPDATE some.labor r
INNER JOIN some.map m
ON r.analysis_1 <=> m.analysis_1
AND r.analysis_2 <=> m.analysis_2
AND r.unit <=> m.unit
AND r.praxis_id <=> m.praxis_id
SET r.analysis = m.analysis_new
, r.unit = m.unit_new
;
<=> is needed since some fields contain NULL. All four join columns are indexed.
I got the same error as before:
SQL Error (1296): Got error 10 'Operation aborted: Failed to acquire lock due to rocksdb_max_row_locks limit' from ROCKSDB
even though the session variable was still set:
SET session rocksdb_bulk_load=1;
Any idea how I can cope with this?
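For completeness, here is a minimal sketch of what I am considering trying next. It assumes rocksdb_max_row_locks can be changed per session in this build, and the column id below is only an illustrative stand-in for labor's primary key:
-- Option 1: raise the per-session row-lock limit well above the number of rows the UPDATE touches.
SET SESSION rocksdb_max_row_locks = 50000000;
-- Option 2: split the UPDATE into primary-key ranges so that no single transaction
-- needs locks on all 40 million rows (repeat with successive id ranges).
UPDATE some.labor r
INNER JOIN some.map m
ON r.analysis_1 <=> m.analysis_1
AND r.analysis_2 <=> m.analysis_2
AND r.unit <=> m.unit
AND r.praxis_id <=> m.praxis_id
SET r.analysis = m.analysis_new
, r.unit = m.unit_new
WHERE r.id BETWEEN 1 AND 1000000;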
Related
I use CMake and the SQLite native library in my Android app, and I am getting error 10: disk I/O error after a simple SELECT on one table.
SELECT * FROM TABLE_3
After some experimenting, I found that I do not get this error if I limit the SELECT:
SELECT * FROM TABLE_3 LIMIT 100
This makes me wonder whether the issue has something to do with the length/size of the data returned by the query (100 seems to be the magic number/threshold; I have 200 records in the table).
All the columns are string/text; among them, column_4 is the largest (about 15,000 characters).
I am not familiar with SQL and wonder: is there any limit on the size of the data a query can return?
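To check whether row size really is the trigger, here is a small diagnostic sketch (TABLE_3 and column_4 are taken from the description above; adjust the names to your schema):
-- List the ten largest values of column_4; if the error only appears once these
-- rows are included in the result, the size of the returned data is likely involved.
SELECT rowid, LENGTH(column_4) AS col4_len
FROM TABLE_3
ORDER BY col4_len DESC
LIMIT 10;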
I use an app which creates this SQLite DB with this table:
CREATE TABLE expense_report (_id INTEGER PRIMARY KEY, ...)
And for some reason that _id (which is the ROWID) became invalid in that DB.
When I scan the table I see that the last rows got an _id which was already being used long ago:
1,2,3...,1137,1138,...,1147,1149,...,12263,12264,1138,...,1148
The ranges shown above are where the same _id appears on completely different rows (the rest of the column values do not match at all).
And querying this DB usually gets me inaccurate results due to that. For instance:
SELECT
(SELECT MAX(_ID) FROM expense_report) DirectMax
, (SELECT MAX(_ID) FROM (SELECT _ID FROM expense_report ORDER BY _ID DESC)) RealMax;
| DirectMax | RealMax |
| 1148 | 12264 |
And inserting a new row into this table via DB Browser for SQLite also generates an _id of 1149 (instead of 12265), so the problem becomes worse if I keep using this DB.
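To see exactly which values clash, a minimal sketch (it assumes nothing beyond the _id column and the expense_report table shown above):
-- List every _id that now occurs more than once, with its number of occurrences.
SELECT _id, COUNT(*) AS occurrences
FROM expense_report
GROUP BY _id
HAVING COUNT(*) > 1
ORDER BY _id;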
Running PRAGMA quick_check or PRAGMA integrity_check shows this error response:
*** in database main ***
On page 1598 at right child: Rowid 12268 out of order
And running VACUUM also detects the problem but doesn't seem to be able to fix it:
Execution finished with errors.
Result: UNIQUE constraint failed: expense_report._id
Anyone knows a way to fix these duplicate ROWID values?
This is a very general question:
Can you think of a reason why the following would break on very large tables (> 1 billion rows)?
sqlite3 sample_DB.db "CREATE INDEX IF NOT EXISTS sample_index ON sample_table(sample_row)"
I have tried this a couple of times, and it does not even give an error message; the processing simply stops at some point and no index shows up in .schema.
The hard disk is not filling up. There is essentially no memory consumption during the processing, and even if there were, plenty is available.
The database is more than 800 GB, but I thought the file-size limit on ext4 is 2 TB.
In the current state of the database:
PRAGMA page_size returns 4096
PRAGMA page_count returns 185974887
PRAGMA max_page_count returns 1073741823
PRAGMA freelist_count returns 0
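In case it is relevant, this is what I am planning to try next; the idea that the index build is spilling temporary files to a partition that is too small is only an assumption on my part:
-- Point temporary files at a volume with enough free space (the SQLITE_TMPDIR
-- environment variable is the supported alternative to this deprecated pragma)
-- and give the build a larger page cache before retrying.
PRAGMA temp_store = FILE;
PRAGMA temp_store_directory = '/path/with/enough/space';
PRAGMA cache_size = -2000000;  -- negative value = size in KiB, i.e. roughly 2 GB
CREATE INDEX IF NOT EXISTS sample_index ON sample_table(sample_row);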
I run into a 'stack depth limit exceeded' error when trying to store a row from R in PostgreSQL. To handle bulk upserts I have been using a query like this:
sql_query_data <- sprintf("BEGIN;
CREATE TEMPORARY TABLE
ts_updates(ts_key varchar, ts_data hstore, ts_frequency integer) ON COMMIT DROP;
INSERT INTO ts_updates(ts_key, ts_data) VALUES %s;
LOCK TABLE %s.timeseries_main IN EXCLUSIVE MODE;
UPDATE %s.timeseries_main
SET ts_data = ts_updates.ts_data,
ts_frequency = ts_updates.ts_frequency
FROM ts_updates
WHERE ts_updates.ts_key = %s.timeseries_main.ts_key;
INSERT INTO %s.timeseries_main
SELECT ts_updates.ts_key, ts_updates.ts_data, ts_updates.ts_frequency
FROM ts_updates
LEFT OUTER JOIN %s.timeseries_main ON (%s.timeseries_main.ts_key = ts_updates.ts_key)
WHERE %s.timeseries_main.ts_key IS NULL;
COMMIT;",
values, schema, schema, schema, schema, schema, schema, schema)
So far this query has worked quite well for updating millions of records while keeping the number of inserts low. Whenever I ran into stack size problems, I simply split my records into multiple chunks and went on from there.
However, this strategy is now running into trouble. I don't have a lot of records anymore, only a handful in which the hstore is a little bigger, but it is not really 'large' by any means. I read suggestions by Craig Ringer, who advises not to get near the 1 GB limit. So I assume the size of the hstore itself is not the problem, yet I receive this message:
Error in postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not Retrieve the result : ERROR: stack depth limit exceeded
HINT: Increase the configuration parameter "max_stack_depth" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.
)
EDIT: I did increase the limit to 7 MB and ran into the same error stating that 7 MB is not enough. This is really odd to me, because the query itself is only 1.7 MB (I checked by pasting it into a text file). Can anybody shed some light on this?
Increase the max_stack_depth as suggested by the hint. From the official documentation (http://www.postgresql.org/docs/9.1/static/runtime-config-resource.html):
The ideal setting for this parameter is the actual stack size limit enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so.
and
The default setting is two megabytes (2MB), which is conservatively small and unlikely to risk crashes.
Superusers can alter this setting per connection, or it can be set for all users through the postgresql.conf file (this requires a PostgreSQL server restart).
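A minimal sketch of both options (the 7MB value simply mirrors the edit in the question; check ulimit -s first so the value stays below the kernel's stack limit):
-- Per connection (superuser only):
SHOW max_stack_depth;
SET max_stack_depth = '7MB';
-- For all connections: in postgresql.conf set
--     max_stack_depth = 7MB
-- and restart the PostgreSQL server.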
I am new to Teradata. Can anyone tell me how exactly the AMPs are involved in the creation of a table in Teradata?
Let's take a scenario:
I have a Teradata database with 4 AMPs. I learned that the AMPs come into play when we insert data into a table: depending on the index, the data is distributed to the respective AMPs. But the CREATE TABLE command itself also has to execute through the AMPs, so I want to know which AMP is used at that time.
The actual creation of the table in the data dictionary is a RowHash-level operation involving a single AMP to store the record in DBC.TVM. Based on the other actions listed in the EXPLAIN, there may be other AMPs involved as well, but there is no single all-AMP operation. (This doesn't take into consideration the loading of the data and its distribution across the AMPs.)
Sample EXPLAIN:
1) First, we lock FUBAR.ABC for exclusive use.
2) Next, we lock a distinct DBC."pseudo table" for write on a RowHash
for deadlock prevention, we lock a distinct DBC."pseudo table" for
write on a RowHash for deadlock prevention, we lock a distinct
DBC."pseudo table" for read on a RowHash for deadlock prevention,
and we lock a distinct DBC."pseudo table" for write on a RowHash
for deadlock prevention.
3) We lock DBC.DBase for read on a RowHash, we lock DBC.Indexes for
write on a RowHash, we lock DBC.TVFields for write on a RowHash,
we lock DBC.TVM for write on a RowHash, and we lock
DBC.AccessRights for write on a RowHash.
4) We execute the following steps in parallel.
1) We do a single-AMP ABORT test from DBC.DBase by way of the
unique primary index.
2) We do a single-AMP ABORT test from DBC.TVM by way of the
unique primary index.
3) We do an INSERT into DBC.Indexes (no lock required).
4) We do an INSERT into DBC.TVFields (no lock required).
5) We do an INSERT into DBC.TVM (no lock required).
6) We INSERT default rights to DBC.AccessRights for FUBAR.ABC.
5) We create the table header.
6) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> No rows are returned to the user as the result of statement 1.
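If you want to see these dictionary-level steps for one of your own tables, here is a minimal sketch (the table and column names below are placeholders):
EXPLAIN
CREATE TABLE FUBAR.ABC (
    id    INTEGER NOT NULL,
    descr VARCHAR(100)
) PRIMARY INDEX (id);
The EXPLAIN modifier only reports the plan; the table itself is not created.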