MySQL table collation change - MariaDB

I have an old table with several Spanish keywords. Its collation is latin1_swedish_ci.
The column with the keywords has a Primary index.
When I try to change the collation to utf8_general_ci, it is not possible because it finds duplicates.
With that index in place, the change fails.
What happens is that, for example, "cañada" is taken as "canada", which already exists, but they are different words.
That was using phpMyAdmin.
Another attempt was to export the table as file.sql, run
sed 's/STRING_SOURCE/STRING_REPLACE/'
on it, and re-import, but in the end mysql source gave me the same error (I did expect that :)).
I also tried that last approach with the entire database.
MySQL version 5.5.64-MariaDB
In phpMyAdmin, I selected the database/table, went to the Structure tab, clicked Change on the column with the keywords, and finally selected utf8_general_ci from the Collation drop-down.
How can I make this change keeping all the keywords?

Since you are focused on Spanish, use a Spanish collation, not a generic one: utf8_spanish_ci or utf8_spanish2_ci. They treat ñ as a separate letter between n and o; other collations treat ñ and n as the same.
Meanwhile, ç = c.
However, ll is treated as two l's by utf8_spanish_ci, while it is treated as coming after lz by utf8_spanish2_ci. (Something about dictionary versus phonebook ordering -- remember those artifacts from ancient history?)
Ref: http://mysql.rjweb.org/utf8_collations.html
Once you upgrade to 8.0, there will be two more choices: utf8mb4_es_0900_ai_ci and utf8mb4_es_trad_0900_ai_ci.
Ref: http://mysql.rjweb.org/utf8mb4_collations.html
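With a Spanish collation, "cañada" and "canada" no longer compare as equal, so the duplicate-key error should go away. A minimal sketch of the change, assuming the table is called keywords and the indexed column is keyword VARCHAR(64) (both names and the length are placeholders for your actual definition):
-- MODIFY must repeat the full column definition
ALTER TABLE keywords
  MODIFY keyword VARCHAR(64)
  CHARACTER SET utf8 COLLATE utf8_spanish_ci;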

Related

Pervasive database: Problem with spaces in column names

We use a Pervasive database with Zen Control-Center and want to perform the following SQL statement:
insert into MWlog
(Signatur,Datum,Uhrzeit,Auftrag,Probe,Parameter,Matrix,`Messwert neu`,Zugriff,`Messwert alt`,Aenderungsgrund)
values
('ML','2020-12-01','15:04:50','230176','230176','Bas. wirk.','TM','5.62','5','5.62','Neuer Import')
We get the following error message:
<<<<<<<<<<<<<<<<<<<<<<<<
insert into MWlog(Signatur,Datum,Uhrzeit,Auftrag,Probe,Parameter,Matrix,`Messwert neu`,Zugriff,`Messwert alt`,Aenderungsgrund) values ('ML','2020-12-01','15:04:50','230176','230176','Bas. wirk.','TM','5.62','5','5.62','Neuer Import')
[Zen][SQL Engine]
Syntax Error: insert into MWlog(Signatur,Datum,Uhrzeit,Auftrag,Probe,Parameter,Matrix,<< ??? >>`Messwert neu`,Zugriff,`Messwert alt`,Aenderungsgrund) values ('ML','2020-12-01','15:04:50','230176','230176','Bas. wirk.','TM','5.62','5',
>>>>>>>>>>>>>>>>>>>>>>>>
What is wrong in the SQL statement?
(We found out that the problem is the spaces in the field names.)
We are using:
Zen Control Center
Zen Install version 14.10.035
Java version 1.8.0_222
Gord Thompson is correct. You need to use double quotes around field names that contain spaces or that are reserved keywords. Your statement should be:
insert into MWlog
(Signatur,Datum,Uhrzeit,Auftrag,Probe,Parameter,Matrix,"Messwert neu",Zugriff,"Messwert alt",Aenderungsgrund)
values
('ML','2020-12-01','15:04:50','230176','230176','Bas. wirk.','TM','5.62','5','5.62','Neuer Import')

Missing foreign string characters in sqlExecute() queries

We needed to fetch data from our database into R directly, so we employed sqlExecute(). However, because our string columns contain characters such as "ş", "ö", "ğ" (Turkish characters which don't exist in ASCII), these characters are missing from my query outputs. Do you know any arguments for sqlExecute() to solve this problem?
You need to set your R locale at the very least, and possibly set your system locale, to allow the use of valid codes and fonts. Since you have provided none of the details of your system and applications, specific advice is not possible. Read ?locales, which does say that setting this in R should be honored by your system facilities, but that exceptions have been observed.
Here's further information from: https://docs.moodle.org/dev/Table_of_locales
> cat(hdr)
package_name lang_name locale localewin localewincharset
> cat(trk)
tr_utf8 Turkish tr_TR.UTF-8 Turkish_Turkey.1254 WINDOWS-1254
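As a concrete illustration, here is a minimal R sketch of applying that locale before querying; it assumes sqlExecute() comes from the RODBCext package, the connection object con and the query are hypothetical, and the locale string is the Windows name from the table above (use "tr_TR.UTF-8" on Linux):
# Set a Turkish locale so ş, ö, ğ survive the round trip (hypothetical setup)
Sys.setlocale(category = "LC_ALL", locale = "Turkish_Turkey.1254")
library(RODBCext)   # provides sqlExecute()
res <- sqlExecute(con, "SELECT name FROM customers", fetch = TRUE)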

Teradata: cast using the length of a column

I need to use the cast function with the length of a column in Teradata.
Say I have a table with the following data:
id | name
1  | dhawal
2  | bhaskar
I need to use a cast operation something like
select cast(name as CHAR(<length of column>)) from table
How can I do that?
Thanks,
Dhawal
You have to find the length by looking at the table definition - either manually (SHOW TABLE) or by writing dynamic SQL that queries dbc.ColumnsV.
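For the dictionary route, a hedged sketch of the lookup (the database, table, and column names are placeholders; note that dbc.ColumnsV reports ColumnLength in bytes):
SELECT ColumnLength
FROM dbc.ColumnsV
WHERE DatabaseName = 'mydb'
  AND TableName = 'mytable'
  AND ColumnName = 'name';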
Update:
You can find the maximum length of the actual data using
select max(length(cast(... as varchar(<large enough value>)))) from TABLE
But if this is for FastExport, I think casting as varchar(large-enough-value) and postprocessing to remove the 2-byte length info FastExport includes is a better solution (since exporting a CHAR() will result in a fixed-length output file with lots of spaces in it).
You may know this already, but just in case: Teradata usually recommends switching to TPT instead of the legacy fexp.

sqlite3 from the Windows command prompt

CREATE TABLE texhisowntable (age INTEGER, name VARCHAR(32));
Into this empty TABLE I write information: first the age, then a name of up to 32 characters (strings, numbers, or chars - I don't know which one you would try, but I write words).
So let's do it:
INSERT INTO texhisowntable (age, name) VALUES (100, 'TheJavaRockS>|<RwithTheGoldenAxE');
So now I'm TheJavaRockS>|
SELECT * FROM texhisowntable;
and my command prompt would say: "ACDC, let it play, you are old but you look fine"
// it would print: TheJavaRockS>|
But I'm old and I forget things, so who knows how I can see how I created the table?
I only want the command prompt to print
CREATE TABLE texhisowntable (age INTEGER, name VARCHAR(32));
Answering my own question: with the command .schema, sqlite3 shows every table in, for example, the file "thnkhsu.db", where the TABLE "texhisowntable" also lives.
If you have many TABLEs and want to see only the one you care about, for example "texhisowntable" in the file "thnkhsu.db", you only need to write
.schema texhisowntable
and the command shows how I created the TABLE.
;) It's very simple.
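A minimal transcript of those steps from the Windows command prompt, assuming the table was created as above in "thnkhsu.db" (output may vary slightly by sqlite3 version):
sqlite3 thnkhsu.db
sqlite> .schema texhisowntable
CREATE TABLE texhisowntable (age INTEGER, name VARCHAR(32));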

Can the LIKE statement be optimized to not do full table scans?

I want to get a subtree from a table by tree path.
The path column stores strings like:
foo/
foo/bar/
foo/bar/baz/
If I try to select all records that start with a certain path:
EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE "foo/%"
it tells me that the table is scanned, even though the path column is indexed :(
Is there any way I could make LIKE use the index and not scan the table?
I found a way to achieve what I want with a closure table, but it's harder to maintain and writes are extremely slow...
To be able to use an index for LIKE in SQLite,
the table column must have TEXT affinity, i.e., have a type of TEXT or VARCHAR or something like that; and
the index must be declared as COLLATE NOCASE (either directly, or because the column has been declared as COLLATE NOCASE):
> CREATE TABLE f(path TEXT);
> CREATE INDEX fi ON f(path COLLATE NOCASE);
> EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE 'foo/%';
0|0|0|SEARCH TABLE f USING COVERING INDEX fi (path>? AND path<?)
The second restriction could be removed with the case_sensitive_like PRAGMA, but this would change the behaviour of LIKE.
Alternatively, one could use a case-sensitive comparison, by replacing LIKE 'foo/%' with GLOB 'foo/*'.
LIKE has strict requirements to be optimizable with an index (ref).
If you can relax your requirements a little, you can use lexicographic ordering to get indexed lookups, e.g.
SELECT * FROM f WHERE PATH >= 'foo/' AND PATH < 'foo0'
where 0 (the digit zero) is lexicographically the next ASCII character after /.
This is essentially the same optimization the optimizer would do for LIKEs if the requirements for optimization are met.
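To verify, one can reuse the table from the transcript above; a plain (BINARY) index is created here because the range comparison uses the column's default collation, and the exact EXPLAIN QUERY PLAN output varies by SQLite version:
> CREATE INDEX fi2 ON f(path);
> EXPLAIN QUERY PLAN SELECT * FROM f WHERE path >= 'foo/' AND path < 'foo0';
0|0|0|SEARCH TABLE f USING COVERING INDEX fi2 (path>? AND path<?)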
