I was trying to restore a MariaDB dump from a Moodle database when I got this error:
ERROR 1071 (42000) at line 10540: Specified key was too long; max key length is 767 bytes
After a bit of research I narrowed it down to the collation of the schema, which is utf8mb4_unicode_ci.
The error can be solved by changing the column size from 255 to 170. However, sometimes a size of 255 is not a problem at all, because those tables do get created.
Now,
1- Why does the dump file contain this configuration if restoring it does not work?
2- How does this work at all if that varchar size is not allowed?
3- Is there any easier way to make this work, besides changing from 255 to 170?
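For context, the arithmetic behind that limit: utf8mb4 needs up to 4 bytes per character, so under the old 767-byte index limit the widest indexable VARCHAR is 191 characters (191 * 4 = 764 bytes), while VARCHAR(255) would need 1020 bytes. A minimal illustration (table and column names are made up):
CREATE TABLE key_length_demo (
  long_col  VARCHAR(255) CHARACTER SET utf8mb4,  -- an index on this would need 1020 bytes and hit error 1071
  short_col VARCHAR(191) CHARACTER SET utf8mb4,  -- an index on this needs 764 bytes and fits
  KEY (short_col)
) ENGINE=InnoDB;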
As Rick James says in an edit, the solution was the following steps:
SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_large_prefix=1;
-- logout & login (to get the new global values)
ALTER TABLE tbl ROW_FORMAT=DYNAMIC; -- (or COMPRESSED)
In this case these steps were taken in order to reconfigure the database engine on the new server.
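For reference, a rough sketch of what becomes possible after those settings (table and column names are illustrative, not taken from the Moodle schema): with large prefixes enabled and a DYNAMIC row format, an index on a utf8mb4 VARCHAR(255) fits within the raised 3072-byte limit.
CREATE TABLE mdl_demo (
  name VARCHAR(255) COLLATE utf8mb4_unicode_ci,
  KEY (name)  -- 255 * 4 = 1020 bytes, allowed once innodb_large_prefix is on
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC DEFAULT CHARSET=utf8mb4;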
Related
Whenever I try to delete ANY field from ANY content type I get the following error:
Uncaught PHP Exception Drupal\\Core\\Database\\DatabaseExceptionWrapper: "SQLSTATE[42S02]: Base table or view not found: 1146 Table 'drupal.field_deleted_data_35ab99eaa1' doesn't exist: SELECT DISTINCT t.entity_id AS entity_id\nFROM \n{field_deleted_data_35ab99eaa1} t\nWHERE bundle = :db_condition_placeholder_0\nLIMIT 10 OFFSET 0; Array\n(\n [:db_condition_placeholder_0] => slider_images\n)\n" at /var/www/html/web/core/lib/Drupal/Core/Database/Connection.php line 685
The only difference is the table hash, i.e. deleted_data_xxxx; each field I try to delete references a different table. I've tried reinstalling Drupal and reimporting my configuration, but no luck.
Any suggestions?
UPDATE:
After checking the database there are many of these tables:
field_deleted_revision_df347fb61b
and
field_deleted_df347fb61b
If that makes any difference.
I experienced this too and dug into the code. Here is the reason why fields are not deleted, even after executing cron a gazillion times:
The entries in field_config and field_config_instance have probably a value of 1 in the deleted column.
This means they're marked for deletion, but won't actually be deleted until you run cron (deleted field data is purged in field_cron()).
As an alternative to running cron to remove deleted data, you can manually run field_purge_batch($batch_size).
The $batch_size to use will vary depending on your server environment and needs. I've used values as low as 5 and as high as 10000.
Here is more information about the field_purge_batch() function.
Here is a possible solution to resolve your issue, but back up your database first. Don't be lazy; it will save you if something goes wrong.
using drush:
drush eval "field_purge_batch(500)"
You might have to run it a few times or increase the $batch_size. Even after running cron, there might still be field_deleted and field_deleted_revision tables left over.
using SQL query:
SELECT * FROM `field_config` WHERE `deleted` = 1
SELECT * FROM `field_config_instance` WHERE `deleted` = 1
If both queries come up empty, you can safely delete those leftover tables.
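For clarity, a sketch of that check-and-clean-up sequence (the table hashes below are the ones mentioned in this question; substitute your own):
-- confirm nothing is still marked for purging
SELECT * FROM field_config WHERE deleted = 1;
SELECT * FROM field_config_instance WHERE deleted = 1;

-- only if both queries return no rows, drop the orphaned tables
DROP TABLE field_deleted_data_35ab99eaa1;
DROP TABLE field_deleted_df347fb61b;
DROP TABLE field_deleted_revision_df347fb61b;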
The only way I could resolve this was to create the missing table from the sql command line (using drush sql-cli), for example:
CREATE TABLE `field_deleted_data_XXX` (
`bundle` varchar(128) CHARACTER SET ascii NOT NULL DEFAULT '',
`deleted` tinyint(4) NOT NULL DEFAULT '0',
`entity_id` int(10) unsigned NOT NULL,
`revision_id` int(10) unsigned NOT NULL,
`langcode` varchar(32) CHARACTER SET ascii NOT NULL DEFAULT '',
`delta` int(10) unsigned NOT NULL,
`content_translation_source_value` varchar(12) CHARACTER SET ascii NOT NULL,
PRIMARY KEY (`entity_id`,`deleted`,`delta`,`langcode`),
KEY `bundle` (`bundle`),
KEY `revision_id` (`revision_id`) );
Replace XXX with the hash that follows field_deleted_data_ in the error message. Then run:
drush php-eval 'field_purge_batch(1000);'
This may generate the error with a new code. I had to go through the process 3 times, but eventually it resolved the error.
Our database backend business logic creates the following query:
CREATE INDEX TESTINDEX ON TESTTABLE (URI(1024));
In MariaDB 10.1.24, we get the following error:
Specified key was too long; max key length is 767 bytes
On the other hand, with MariaDB 10.2.6 everything works correctly. I would like to understand why this is. Within the MariaDB knowledge base, I found this article, which seems to describe the problem.
Does anybody know, if this is simply a configuration issue, or was there a code change which allows larger keys now?
I guess it could be due to the changed default storage engine (XtraDB => InnoDB, see here).
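For what it's worth, on 10.1 the same statement can usually be made to work by enabling large index prefixes explicitly; on 10.2 these are effectively the defaults. A sketch, assuming URI uses a single-byte character set such as latin1 so that the 1024-character prefix stays under the 3072-byte limit:
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = 1;
SET GLOBAL innodb_large_prefix = 1;

CREATE TABLE TESTTABLE (URI TEXT CHARACTER SET latin1) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
CREATE INDEX TESTINDEX ON TESTTABLE (URI(1024));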
I run into a "stack depth limit exceeded" error when trying to store rows from R to PostgreSQL. In order to handle bulk upserts I have been using a query like this:
sql_query_data <- sprintf("BEGIN;
CREATE TEMPORARY TABLE
ts_updates(ts_key varchar, ts_data hstore, ts_frequency integer) ON COMMIT DROP;
INSERT INTO ts_updates(ts_key, ts_data) VALUES %s;
LOCK TABLE %s.timeseries_main IN EXCLUSIVE MODE;
UPDATE %s.timeseries_main
SET ts_data = ts_updates.ts_data,
ts_frequency = ts_updates.ts_frequency
FROM ts_updates
WHERE ts_updates.ts_key = %s.timeseries_main.ts_key;
INSERT INTO %s.timeseries_main
SELECT ts_updates.ts_key, ts_updates.ts_data, ts_updates.ts_frequency
FROM ts_updates
LEFT OUTER JOIN %s.timeseries_main ON (%s.timeseries_main.ts_key = ts_updates.ts_key)
WHERE %s.timeseries_main.ts_key IS NULL;
COMMIT;",
values, schema, schema, schema, schema, schema, schema, schema)
So far this query has worked quite well for updating millions of records while keeping the number of inserts low. Whenever I ran into stack size problems, I simply split my records into multiple chunks and went on from there.
However, this strategy runs into trouble now. I don't have a lot of records anymore, just a handful in which the hstore is a little bigger, but it's not really 'large' by any means. I read suggestions by Craig Ringer, who advises not to go near the limit of 1 GB. So I assume the size of the hstore itself is not the problem, yet I receive this message:
Error in postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not Retrieve the result : ERROR: stack depth limit exceeded
HINT: Increase the configuration parameter "max_stack_depth" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.
)
EDIT: I did increase the limit to 7 MB and ran into the same error, stating that 7 MB is not enough. This is really odd to me, because the query itself is only 1.7 MB (I checked by pasting it into a text file). Can anybody shed some light on this?
Increase max_stack_depth as suggested by the hint. From the official documentation (http://www.postgresql.org/docs/9.1/static/runtime-config-resource.html):
The ideal setting for this parameter is the actual stack size limit enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so.
and
The default setting is two megabytes (2MB), which is conservatively small and unlikely to risk crashes.
Superusers can alter this setting per connection, or it can be set for all users through the postgresql.conf file (requires a postgres server restart).
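A minimal sketch of both approaches (the 7MB value is only an example; ALTER SYSTEM needs PostgreSQL 9.4 or later, on older versions edit postgresql.conf by hand):
-- per connection (superuser only)
SET max_stack_depth = '7MB';

-- for all connections (PostgreSQL 9.4+), then reload the configuration
ALTER SYSTEM SET max_stack_depth = '7MB';
SELECT pg_reload_conf();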
I used "drop table" sql command to delete a table from my database.
Strangely, the size of the db file does not change since then, even though I have been adding and deleting data from it.
Does that mean the dropped table is never erased, and if a new table is created it just overwrites wherever the dropped table was located?
I am using windows 7 64 bits and sqlite3, if relevant.
Yes, SQLite marks the space in the file as free and reuses it as needed.
You can execute the VACUUM command to shrink the file (this may be slow).
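A short sketch; the auto_vacuum part is optional and only takes effect on an existing database after a full VACUUM:
-- rebuild the database file, returning freed pages to the filesystem
VACUUM;

-- optionally, let the file shrink incrementally after future deletions
PRAGMA auto_vacuum = INCREMENTAL;
VACUUM;                          -- required once so the new auto_vacuum mode takes effect
PRAGMA incremental_vacuum(100);  -- free up to 100 pages on demand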
I have found my max number of cursors per database to be 300 from the following query:
select max(a.value) as highest_open_cur, p.value as max_open_cur
from v$sesstat a, v$statname b, v$parameter p
where a.statistic# = b.statistic#
and b.name = 'opened cursors current'
and p.name= 'open_cursors'
group by p.value;
I tried to update the amount to 1000 with this:
update v_$parameter
set value = 1000
where name = 'open_cursors';
But I am seeing this error:
SQL Error: ORA-02030: can only select from fixed tables/views
02030. 00000 - "can only select from fixed tables/views"
*Cause: An attempt is being made to perform an operation other than
a retrieval from a fixed table/view.
*Action: You may only select rows from fixed tables/views.
What is the proper way to update the open_cursor value? Thanks.
Assuming that you are using an spfile to start the database:
alter system set open_cursors = 1000 scope=both;
If you are using a pfile instead, you can change the setting for the running instance:
alter system set open_cursors = 1000
You would also then need to edit the parameter file to specify the new open_cursors setting. It would generally be a good idea to restart the database shortly thereafter to make sure that the parameter file change works as expected (it's highly annoying to discover months later, the next time you reboot the database, that some parameter file change no one remembers wasn't done correctly).
I'm also hoping that you are certain that you actually need more than 300 open cursors per session. A large fraction of the time, people that are adjusting this setting actually have a cursor leak and they are simply trying to paper over the bug rather than addressing the root cause.
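Before raising the limit, it may be worth checking which sessions are actually holding cursors open; a query along these lines (a sketch, adjust to your needs) can point to a leak:
SELECT s.sid, s.username, a.value AS open_cursors_current
FROM v$sesstat a
JOIN v$statname b ON b.statistic# = a.statistic#
JOIN v$session s ON s.sid = a.sid
WHERE b.name = 'opened cursors current'
ORDER BY a.value DESC;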
Run the following query to find out whether you are running an spfile or not:
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"
FROM sys.v_$parameter WHERE name = 'spfile';
If the result is "SPFILE", then use the following command:
alter system set open_cursors = 4000 scope=both; -- 4000 is the new limit for open cursors
if the result is "PFILE", then use the following command:
alter system set open_cursors = 1000 ;
You can read about SPFILE vs PFILE here: http://www.orafaq.com/node/5
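If you are currently on a pfile and want scope=both to work in the future, one option is to create an spfile from it and restart; a sketch, run from SQL*Plus as SYSDBA (not required for the fix above):
CREATE SPFILE FROM PFILE;
SHUTDOWN IMMEDIATE;
STARTUP;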
You can update the setting under init.ora in
oraclexe\app\oracle\product\11.2.0\server\config\scripts