Varchar PRIMARY_KEY cannot be longer than 191 - mariadb

I just tried to create the following table in a MariaDB using PHP7 with mysqli:
CREATE TABLE $tb_bad_log (
ip varchar(255) NOT NULL default '',
name varchar(255) default NULL,
nr_tries int(1) NOT NULL default '0',
last_try varchar(255) NOT NULL default '',
blocked enum('Y','N') NOT NULL default 'N',
enter_user varchar(255) NOT NULL default '',
PRIMARY KEY (enter_user),
KEY nr_tries (nr_tries),
KEY blocked (blocked)
);
This gives me the error “Specified key was too long; max key length is 767 bytes”, which persists if enter_user is 192 or more characters, but not if I restrict it to 191 characters or less. What is going on here?
(Yes, there is some strange stuff going on here. I'm trying to understand legacy code and get it to run.)

Your database character set is most likely utf8mb4, in which every character can take up to 4 bytes. InnoDB computes index key length from the worst case, so a varchar(192) key needs 192 × 4 = 768 bytes, one byte over the 767-byte limit, while varchar(191) needs only 764. If you don't need utf8mb4 on specific tables or columns, you can override the character set with the CHARACTER SET parameter at either the table level or the column level.
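For example, a minimal sketch of the column-level override (the table name is shortened for illustration, and it assumes enter_user only ever holds single-byte characters such as ASCII usernames):
CREATE TABLE bad_log (
-- latin1 stores 1 byte per character, so a 255-character
-- key needs only 255 bytes, well under the 767-byte limit
enter_user varchar(255) CHARACTER SET latin1 NOT NULL default '',
PRIMARY KEY (enter_user)
);
Alternatively, keep utf8mb4 and shorten the column to varchar(191), since 191 × 4 = 764 bytes still fits under the limit.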

Related

Deleting a 3GB table took over two hours on MariaDB

As posted on another forum, when upgrading XWiki from v7.0.1 to v13.10.9, a non-critical database table, xwikistatsvisit (user visit statistics), was blocking the post-upgrade migrations. It contained over seven million records and was 3 GB in total. As a workaround, we had to delete all records in the table, but the SQL command DELETE FROM xwikistatsvisit took over two hours.
I have verified from the ER diagram that the table is stand-alone, with no foreign keys referring to or from other tables. The database is MariaDB v10.9.2, installed on the same host.
The host under test is a medium virtual machine with an SSD, 4 Intel i9 CPUs, and 8 GB of RAM. The hypervisor also needs “PAE/NX” and “Nested VT-x/AMD-V” enabled for acceptable performance; otherwise the task gets stuck indefinitely.
My Questions:
Why did the SQL command take so long? Is there any way to make it faster, e.g. by disabling keys and constraints? I am unfamiliar with this area.
I would highly appreciate any hints or suggestions.
The definition of the table xwikistatsvisit:
--
-- Table structure for table `xwikistatsvisit`
--
DROP TABLE IF EXISTS `xwikistatsvisit`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `xwikistatsvisit` (
`XWV_ID` bigint(20) NOT NULL,
`XWV_NUMBER` int(11) DEFAULT NULL,
`XWV_NAME` varchar(255) NOT NULL,
`XWV_CLASSNAME` varchar(255) DEFAULT NULL,
`XWV_IP` varchar(255) NOT NULL,
`XWV_USER_AGENT` longtext NOT NULL,
`XWV_COOKIE` longtext NOT NULL,
`XWV_UNIQUE_ID` varchar(255) NOT NULL,
`XWV_PAGE_VIEWS` int(11) DEFAULT NULL,
`XWV_PAGE_SAVES` int(11) DEFAULT NULL,
`XWV_DOWNLOADS` int(11) DEFAULT NULL,
`XWV_START_DATE` datetime DEFAULT NULL,
`XWV_END_DATE` datetime DEFAULT NULL,
PRIMARY KEY (`XWV_ID`),
KEY `XWVS_END_DATE` (`XWV_END_DATE`),
KEY `XWVS_UNIQUE_ID` (`XWV_UNIQUE_ID`),
KEY `XWVS_PAGE_VIEWS` (`XWV_PAGE_VIEWS`),
KEY `XWVS_START_DATE` (`XWV_START_DATE`),
KEY `XWVS_NAME` (`XWV_NAME`),
KEY `XWVS_PAGE_SAVES` (`XWV_PAGE_SAVES`),
KEY `XWVS_DOWNLOADS` (`XWV_DOWNLOADS`),
KEY `XWVS_IP` (`XWV_IP`),
KEY `xwv_user_agent` (`XWV_USER_AGENT`(255)),
KEY `xwv_classname` (`XWV_CLASSNAME`),
KEY `xwv_number` (`XWV_NUMBER`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
The DELETE command removes the matching rows one at a time, maintaining every index and writing undo information for each deleted row, which is slow on a table with this many secondary indexes. If you want to remove all the records from such a big table, TRUNCATE should be a lot faster, since it empties the whole table at once:
TRUNCATE TABLE xwikistatsvisit;
Difference between DELETE and TRUNCATE.
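If you ever need to remove only a subset of rows (where TRUNCATE does not apply), deleting in batches keeps each transaction small; a rough sketch, with a made-up date cutoff:
-- Repeat until zero rows are affected; small batches bound
-- the undo logging and lock time on a table this large.
DELETE FROM xwikistatsvisit
WHERE XWV_END_DATE < '2020-01-01'
LIMIT 10000;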

sqlite integer primary key not null constraint failed

According to the SQLite documentation / FAQ, a column declared INTEGER PRIMARY KEY will, if omitted from an insert, automatically be assigned a value one greater than the largest value in that column.
Using SQLite version 3.22.0 2018-01-22 18:45:57
Creating a table as follows:
CREATE TABLE test (
demo_id INTEGER PRIMARY KEY NOT NULL,
ttt VARCHAR(40) NOT NULL,
basic VARCHAR(25) NOT NULL,
name VARCHAR(255) NOT NULL,
UNIQUE(ttt, basic) ON CONFLICT ROLLBACK
) WITHOUT ROWID;
Then inserting like this:
INSERT INTO test (ttt, basic, name) VALUES ('foo', 'bar', 'This is a test');
gives:
Error: NOT NULL constraint failed: test.demo_id
sqlite>
I expected this to create a record with a demo_id of 1. Even if the table already contains rows, inserting without explicitly specifying the id fails with the same error.
What am I doing wrong?
The documentation says that you get auto-incrementing values for the rowid. But you declared the table WITHOUT ROWID, so there is no rowid for demo_id to alias, and SQLite will not fill it in for you.
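A minimal sketch of the fix, assuming the WITHOUT ROWID clause can simply be dropped:
CREATE TABLE test (
-- demo_id is now an alias for the rowid, so SQLite assigns
-- it automatically when it is omitted from an INSERT
demo_id INTEGER PRIMARY KEY NOT NULL,
ttt VARCHAR(40) NOT NULL,
basic VARCHAR(25) NOT NULL,
name VARCHAR(255) NOT NULL,
UNIQUE(ttt, basic) ON CONFLICT ROLLBACK
);
INSERT INTO test (ttt, basic, name) VALUES ('foo', 'bar', 'This is a test');
-- demo_id is assigned 1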

How to drop a column's default value?

I have a table with this:
MyField VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC DEFAULT ''
I want to drop the default empty-string value so that it defaults to null. I've discovered that I can set the default value to null, but then the DDL actually says DEFAULT NULL. This interferes with the scripts I use to compare DDL diffs between multiple database environments, so I want it to look simply like this:
MyField VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC
Edit: I want this to behave the same way as changing a column from NOT NULL to nullable.
Non-nullable column:
n INTEGER NOT NULL
Nullable column:
n INTEGER
Notice how the latter doesn't say:
n INTEGER NULL
You need to drop and re-create the column, as below (note that dropping the column discards any data stored in it):
ALTER TABLE DBC.TEST DROP MY_FIELD;
ALTER TABLE DBC.TEST ADD MY_FIELD VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC;
I have tested it and it works.

MySQL Connector field() Auto-Conversion to Lasso Types?

In Lasso 8 with MySQL connector, the field() method seemed to always return a string type, regardless of what data was in the column, or the column's data type. The exception might've been BLOB columns, which might've returned a bytes type. (I don't recall at the moment.)
In Lasso 9 I see that the field() method returns an integer type for integer columns. This is causing some issues with conditionals where I tested for '1' instead of 1.
Is Lasso really using the MySQL data type, or is Lasso just interpreting the results?
Is there any documentation as to what column types are cast to what Lasso data types?
Lasso is using the information MySQL gives it about the column type to return the data as a corresponding Lasso type. Not sure of all the mechanics underneath. Lasso 8 may have done the same thing for integers, but Lasso 8 also allowed you to compare integers and strings with integer values. (In fact, Lasso 8 even allowed for array->get('1') - that's right, a string for the index!).
I don't know of any documentation about what fields are what. Anecdotally, I can tell you that while MySQL decimal and float fields are treated as Lasso decimals, MySQL doubles are not. (I also don't believe MySQL date(time) fields come over as Lasso dates, though that would be awesome.)
It's easy to find out what column types Lasso reports.
With a table looking like this:
CREATE TABLE `addressb` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` enum('commercial','residential') COLLATE utf8_swedish_ci DEFAULT NULL,
`street` varchar(50) COLLATE utf8_swedish_ci DEFAULT NULL,
`city` varchar(25) COLLATE utf8_swedish_ci DEFAULT NULL,
`state` char(2) COLLATE utf8_swedish_ci DEFAULT NULL,
`zip` char(10) COLLATE utf8_swedish_ci DEFAULT NULL,
`image` blob,
`created` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_swedish_ci;
You can get the reported column types like this:
inline(-database = 'testdb', -sql = "SELECT * FROM addressb") => {^
    with col in column_names do {^
        column(#col) -> type
        '<br />'
    ^}
^}
Result:
id integer
type string
street string
city string
state string
zip null
image bytes
created string
As you can see, everything is reported as string except the integer and blob fields, plus the zip field, which in this record happens to contain NULL and is reported as such.
When doing comparisons, it is always a good idea to make sure you're comparing apples with apples, that is, values of the same type. Also, to check whether there's content, I always go for size:
string(column('zip')) -> size > 0 ?
In addition, NULL values are returned as null in 9, whereas in 8 and earlier they came back as empty strings. Beware of comparisons like:
field('icanhaznull') == ''
If the field contains a NULL value, the above evaluates as TRUE in 8 and FALSE in 9.
That may mean altering your schema for columns that toggle between NOT NULL and nullable, or you may prefer to cast the field to string:
string(field('icanhaznull')) == ''
Testing a field or var against the empty string

Limit number of characters of a text field

I want a field "name" to be at most 20 characters long... is that possible in SQLite?
Yes, with a CHECK constraint. Here is an example enforcing the TEXT datatype with a length of at most 20 characters.
CREATE TABLE IF NOT EXISTS "test"
(
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"name" TEXT NOT NULL
CHECK(
typeof("name") = "text" AND
length("name") <= 20
)
);
INSERT INTO "test" ("name") VALUES ("longer than twenty characters");
Result:
Error: CHECK constraint failed: test
Probably too late to help the OP but maybe someone else will find this useful.
No. Per Datatypes In SQLite Version 3:
Note that numeric arguments in parentheses following the type name (ex: "VARCHAR(255)") are ignored by SQLite - SQLite does not impose any length restrictions (other than the large global SQLITE_MAX_LENGTH limit) on the length of strings, BLOBs or numeric values.
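A quick way to see this for yourself (a minimal sketch; the table and column names are made up):
-- The (20) is parsed but ignored, so this insert succeeds:
CREATE TABLE demo (name VARCHAR(20));
INSERT INTO demo (name) VALUES ('a string considerably longer than twenty characters');
SELECT length(name) FROM demo; -- returns 51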
