The MariaDB documentation for InnoDB Limitations states that:
A multicolumn index on InnoDB can use a maximum of 16 columns. If you
attempt to create a multicolumn index that uses more than 16 columns,
MariaDB returns an Error 1070.
Is there any way around this limitation, so that I may create a fulltext index on 17 columns?
I am using MariaDB 10.1.37 and Navicat 11.2.11 Standard. When I try to add a fulltext index on 17 columns, I get Error 1070.
The multicolumn limitation doesn't apply to FULLTEXT indexes, which have a maximum of 32 parts in InnoDB.
In such a situation, I would recommend adding an extra column that is the combination of all the text columns you want to search on, and putting the fulltext index on that single column.
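A minimal sketch of that approach, using hypothetical table and column names:
-- Hypothetical names: collapse the searchable columns into one,
-- then index that column instead of all 17 individually.
ALTER TABLE articles ADD COLUMN search_text TEXT;
UPDATE articles
SET search_text = CONCAT_WS(' ', title, summary, body); -- and so on for the rest
CREATE FULLTEXT INDEX ft_search ON articles (search_text);
You would also need to keep that column in sync (via triggers or application code) whenever the source columns change.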
I'm using sqlite-utils to load a CSV into SQLite, which will later be served via Datasette. I have two columns, likes and dislikes. I would like a third column, quality_score, computed by adding likes and dislikes together and then dividing likes by the total.
The sqlite-utils convert function should be my best bet, but all I see in the documentation is how to select a single column for conversion.
sqlite-utils convert content.db articles headline 'value.upper()'
From the example given, it looks like convert is followed by the db filename, the table name, then the column you want to operate on. Is it possible to simply add another column name, or is there a flag for selecting more than one column to operate on? I would be really surprised if this weren't possible; I just can't find any documentation to support it.
This isn't a perfect answer as it doesn't resolve whether sqlite-utils supports multiple column selection for transforms, but this is how I solved this particular problem.
Since my quality_score column would just be basic math, I was able to make use of SQLite's Generated Columns. I created a file called quality_score.sql that contained:
ALTER TABLE testtable
ADD COLUMN quality_score GENERATED ALWAYS AS (likes / (likes + dislikes));
and then implemented it by:
$ sqlite3 mydb.db < quality_score.sql
You do need to make sure you are using a compatible version of SQLite, as generated columns only work with version 3.31 or later.
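One quick way to check, from any SQLite connection:
-- Generated columns require SQLite 3.31.0 or later:
SELECT sqlite_version();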
Another consideration is to make sure you are performing math on numbers and not text, and to watch out for integer division: with two INTEGER columns, likes / (likes + dislikes) truncates to 0 or 1, so write 1.0 * likes / (likes + dislikes) if you want a fractional score.
I also attempted to create the table with the virtual generated column first and then fill it with my data, but that didn't work in my case: it threw an error saying the number of items provided didn't match the number of columns available. So I just stuck with the ALTER operation after the fact; a sketch of the up-front approach follows.
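For reference, this sketch (with hypothetical values) shows one way the up-front approach can be made to work: a generated column can't be assigned directly, so the INSERT has to name the real columns explicitly for the VALUES count to match.
CREATE TABLE testtable (
    likes INTEGER,
    dislikes INTEGER,
    quality_score REAL GENERATED ALWAYS AS (1.0 * likes / (likes + dislikes))
);
-- Name only the non-generated columns:
INSERT INTO testtable (likes, dislikes) VALUES (40, 10);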
I have a table with around 65 million rows that I'm trying to run a simple query on. The table and indexes look like this:
CREATE TABLE E(
x INTEGER,
t INTEGER,
e TEXT,
A,B,C,D,E,F,G,H,I,
PRIMARY KEY(x,t,e,I)
);
CREATE INDEX ET ON E(t);
CREATE INDEX EE ON E(e);
The query I'm running looks like this:
SELECT MAX(t), B, C FROM E WHERE e='G' AND t <= 9878901234;
I need to run these queries for thousands of different values of t and was expecting each query to run in a fraction of a second. However, the above query takes nearly 10 seconds to run!
I ran the query plan (EXPLAIN QUERY PLAN), but only get this:
0|0|0|SEARCH TABLE E USING INDEX EE (e=?)
So the query should be using the index. With a binary search I would expect, worst case, only 26 tests, which would be pretty quick.
Why is my query so slow?
Each table in a query can use at most one index. Since your WHERE clause looks at multiple columns, you can use a multi-column index. For these, every column used from the index except the last has to be tested with equality; only the last column used can be tested with greater than/less than.
So:
CREATE INDEX e_idx_e_t ON E(e, t);
should give you a boost.
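After creating the index, it's worth re-running the plan to confirm it is picked up; this is just the standard EXPLAIN QUERY PLAN, and the exact output format varies by SQLite version:
EXPLAIN QUERY PLAN
SELECT MAX(t), B, C FROM E WHERE e='G' AND t <= 9878901234;
-- The plan should now mention e_idx_e_t instead of EE.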
For further reading about how SQLite uses indexes, the Query Planner documentation is a good introduction.
You're also mixing an aggregate function (MAX(t)) with columns (B and C) that aren't part of a group. In SQLite's case, this means it will pick the values for B and C from the row with the maximum t value; other databases usually throw an error for this.
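A small demonstration of that bare-column behavior, on throwaway data:
CREATE TABLE demo (t INTEGER, B TEXT, C TEXT);
INSERT INTO demo VALUES (1, 'b1', 'c1'), (2, 'b2', 'c2');
-- B and C are taken from the row holding MAX(t),
-- so this returns 2, 'b2', 'c2':
SELECT MAX(t), B, C FROM demo;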
I have a table t with around 500,000 rows. One of the columns (stringtext) contains a very long string, and I have now discovered that there are in fact only 80 distinct strings. I'd like to declutter table t by moving the strings into a separate table, s, and merely referencing them in t.
I have created a separate table of the long strings, including what is effectively an explicit row-index number using:
CREATE TEMPORARY TABLE stmp AS
SELECT DISTINCT
stringtext
FROM t;
CREATE TABLE s AS
SELECT _ROWID_ AS stringindex, stringtext
FROM stmp;
(It was creating this table that showed me there were only a few distinct strings).
How can I now replace stringtext in t with the corresponding stringindex from s?
I would think about something like UPDATE t SET stringtext = (SELECT stringindex FROM s WHERE s.stringtext = t.stringtext), and would recommend first making an index on s(stringtext), as SQLite might not be smart enough to build a temporary index itself. A VACUUM afterwards would then be in order.
Untested.
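Spelled out as a full sequence (the index name is arbitrary, and this is just as untested):
-- Speed up the correlated subquery:
CREATE INDEX s_stringtext_idx ON s(stringtext);

-- Replace each long string with its index from s:
UPDATE t
SET stringtext = (SELECT stringindex
                  FROM s
                  WHERE s.stringtext = t.stringtext);

-- Reclaim the space freed by the shorter values:
VACUUM;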
Suppose you have a three-column table named scoreTotals. It has the weekly points totals for three players.
If you ran this query on scoreTotals:
select *
from scoreTotals;
You would get this:
Jones Smith Mills
50 70 60
How do you reconfigure the output to the end user so it's this way:
player points
Jones 50
Smith 70
Mills 60
The trick is to get the column titles to appear on the left-hand side as actual data values, rather than as the titles of the columns.
I saw some things on Stack Overflow about how to turn columns into rows, but none addressed this exact question, and my attempts to adapt those ideas to my circumstances did not work.
It needs to work in SQLite, which means the PIVOT and UNPIVOT keywords won't work. I'm looking to do this without storing a table in the database and then deleting it afterward.
The following code will generate the table I am trying to operate on:
create table scoreTotals(Jones int, Smith int, Mills int);
insert into scoreTotals values (50, 70, 60);
I had a similar problem, and my solution depends on which programming language you are using to process SQLite commands. In my case I am using Python to connect to SQLite. After a SELECT returns records, I store the result set in a "list of lists" (i.e. a table), which I can then transpose (i.e. unpivot) with the following single line of Python:
result = [[row[idx] for row in table] for idx in range(len(table[0]))]  # transpose logic using a list comprehension (range, not xrange, for Python 3)
SQLite does not have an UNPIVOT command, but this solution by bluefeet for MySQL may also work for SQLite:
MySQL - turn table into different table
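One common way that does work in plain SQLite is a UNION ALL per column; applied to the scoreTotals table above:
SELECT 'Jones' AS player, Jones AS points FROM scoreTotals
UNION ALL
SELECT 'Smith', Smith FROM scoreTotals
UNION ALL
SELECT 'Mills', Mills FROM scoreTotals;
The column names have to be written out by hand, but for a fixed three-column table this produces exactly the output asked for.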
I have a table which I access by two int fields all the time, so I want an index to help. There are never any writes. The int fields are not unique.
What is the optimal index?
Table
MyIntA
MyIntB
SomeTextValue
The queries always look like this:
SELECT SomeTextValue FROM MyTable WHERE MyIntA = 1 AND MyIntB = 3;
You could add an index on (MyIntA, MyIntB).
CREATE INDEX your_index_name ON MyTable (MyIntA, MyIntB);
Note: it might be preferable to make this pair of columns your primary key if the pair of columns (when considered together) contains only distinct values and there isn't another obvious choice for the primary key.
For example, if your table contains only data like this:
MyIntA MyIntB
1 1
1 2
2 1
2 2
Here neither MyIntA nor MyIntB is unique when considered on its own, so neither column individually could be used as a primary key. However, the pair (MyIntA, MyIntB) is unique, so this pair of columns could be used as a primary key.
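If that holds for your data, a sketch of the alternative schema (same hypothetical table name as above):
CREATE TABLE MyTable (
    MyIntA INTEGER NOT NULL,
    MyIntB INTEGER NOT NULL,
    SomeTextValue TEXT,
    PRIMARY KEY (MyIntA, MyIntB)
);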
The selectivity (the number of distinct values) of the data in the columns MyIntA and MyIntB should help you decide whether your index should be (MyIntA, MyIntB), (MyIntB, MyIntA), or just (MyIntA) or (MyIntB).
This link should help, albeit for a different RDBMS.