MariaDB provides the INFORMATION_SCHEMA.STATISTICS table. After the COMMENT column there is a column named INDEX_COMMENT, but its meaning is currently undocumented on their site.
Does anybody know the purpose of INDEX_COMMENT?
From https://mariadb.com/kb/en/library/create-index/
index_option:
KEY_BLOCK_SIZE [=] value
| index_type
| WITH PARSER parser_name
| COMMENT 'string' -- It probably comes from here
That existed at least as far back as MySQL 5.5.
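For example (table and index names here are made up), a comment attached at index-creation time surfaces in that column:

CREATE TABLE t (a INT);
CREATE INDEX idx_a ON t (a) COMMENT 'covers lookups by a';

SELECT INDEX_NAME, INDEX_COMMENT
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_NAME = 't';
-- INDEX_COMMENT for idx_a: 'covers lookups by a'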
I've seen older posts around this but I'm hoping to bring this topic up again. I have a table in DynamoDB that has a UUID for the primary key, and I created a global secondary index (GSI) for a more business-friendly key. For example:
| account_id  | email           | first_name | last_name |
|-------------|-----------------|------------|-----------|
| 4f9cb231... | linda@gmail.com | Linda      | James     |
| a0302e59... | bruce@gmail.com | Bruce      | Thomas    |
| 3e0c1dde... | harry@gmail.com | Harry      | Styles    |
If account_id is my primary key and email is my GSI, how do I query the table to get accounts with email in ('linda@gmail.com', 'harry@gmail.com')? I looked at the IN conditional expression but it doesn't appear to work with a GSI. I'm using the Go SDK v2 library but will take any guidance. Thanks.
Short answer, you can't.
DDB is designed to return a single item, via GetItem(), or a set of related items, via Query(). Related meaning that you're using a composite primary key (hash key & sort key) and the related items all have the same hash key (aka partition key).
Another way to think of it, you can't Query() a DDB Table/index. You can only Query() a specific partition in a table or index.
Scan() is the only operation that works across partitions in one shot. But scanning is very inefficient and costly since it reads the entire table every time.
You'll need to issue a separate read for every email you want returned. Note that GetItem() accepts only the table's primary key (account_id here), so a lookup by email actually means one Query() against the GSI per address.
Luckily, DDB offers BatchGetItem(), which will allow you to send multiple GetItem() requests, up to 100, in a single call. That saves a little network time and automatically runs the requests in parallel, but it is otherwise little different from what your application could do itself directly with GetItem(). Make no mistake, BatchGetItem() is making individual GetItem() requests behind the scenes. In fact, the requests in a BatchGetItem() don't even have to be against the same table. The cost for each request in a batch is the same as if you'd used GetItem() directly.
One difference to make note of: BatchGetItem() can only return 16 MB of data. So if your DDB items are large, you may not get as many items returned as you requested.
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
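For completeness, here's a minimal Go SDK v2 sketch of the per-email Query() approach. The table name accounts and index name email-index are assumptions for illustration, not something from the question:

package demo

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// accountsByEmails issues one Query per email against the GSI
// (hypothetical table "accounts", hypothetical index "email-index").
func accountsByEmails(ctx context.Context, client *dynamodb.Client, emails []string) ([]map[string]types.AttributeValue, error) {
	var items []map[string]types.AttributeValue
	for _, email := range emails {
		// Each email value is its own partition in the index, hence one Query per value.
		out, err := client.Query(ctx, &dynamodb.QueryInput{
			TableName:              aws.String("accounts"),
			IndexName:              aws.String("email-index"),
			KeyConditionExpression: aws.String("email = :e"),
			ExpressionAttributeValues: map[string]types.AttributeValue{
				":e": &types.AttributeValueMemberS{Value: email},
			},
		})
		if err != nil {
			return nil, err
		}
		items = append(items, out.Items...)
	}
	return items, nil
}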
Because you have a GSI with a PK of email (from what I understand), you can use PartiQL to get your batch of emails back. The API is called ExecuteStatement and you use a SQL-like syntax:
SELECT * FROM "mytable"."myindex" WHERE email IN ['email@email.com','email1@email.com']
I have an existing sqlite database with a table in it something like this:
+-------+--------+----------+----------------+
| LogID | UsedOn | UserID   | Other fields() |
+-------+--------+----------+----------------+
| 1     |        | soemid03 | SomeDataHere   |
+-------+--------+----------+----------------+
Etc....
The UsedOn field is currently blank, because when I made the table I accidentally forgot to set the field type to a timestamp type, so the application was just inserting the other columns and leaving this one blank.
Because I would like to use a comparison at some point later using the timestamp, I would like to update this column for all rows in the table with the current timestamp. I assume I can use datetime() in SQLite to do this. It does not matter too much that some of the dates and times will be out by a few days, but the field cannot be empty or my comparison code would not work.
I tried using:
UPDATE tablename SET UsedOn=datetime()
And this was accepted as a valid query, but it seems to do nothing: this column is still empty. Perhaps I'm doing this wrong in some way?
I can only edit the database/table via either manual queries or by using 'SQLite Administrator' app (from http://sqliteadmin.orbmu2k.de/). I can't use anything else because that is what is available and I'm not allowed to install any other database management tools. When I try to edit any row in the table to add a datetime manually, it does not get accepted, but I just assume this is because the app is trying to insert what I type as a string (even though the format is correct) and it's not a string field type.
I tried your code in SQLite Administrator and it does not work, while it should.
What does work is:
UPDATE tablename
SET UsedOn = CURRENT_TIMESTAMP
This does not mean that your code is wrong.
If you use any other tool like DB Browser for SQLite, both solutions would work.
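One possible explanation, offered as a guess rather than a confirmed diagnosis: older SQLite builds return NULL when datetime() is called with no arguments, so the UPDATE runs without error but writes nothing visible. Passing 'now' explicitly sidesteps that:

UPDATE tablename
SET UsedOn = datetime('now')   -- same UTC text format as CURRENT_TIMESTAMP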
I'm not a database expert and I'm simply building a prototype app, so nothing really important.
Anyway, the app is about a subway: this subway has many lines and sometimes some stops are shared between lines (so, for example, stops 3 and 4 are stops of lines 2, 7 and 9).
So, I made up a SQLite stops table:
+---------+-------------+------+
| Field | Type | Auto |
+---------+-------------+------+
| id | integer | YES |
| name | varchar(20) | NO |
| lines | ? | NO |
+---------+-------------+------+
What's the best way to deal with shared stops? My idea was to create a lines table and then, in the lines field of the stops table, put a comma-separated list of lines.id values. I don't know why, but I feel there could be a better way.
Any suggestion is appreciated, and sorry for the really noob question.
I would keep it simple and use a table lines which has an ID (primary key) along with other metadata for a line (such as name):
lines
(id, name)
Then, create a table for the stops:
stops
(id, name)
Finally, you can create a bridge table which will connect lines with stops:
bridge
(lineId, stopId)
Each record in the bridge table represents one line having a given stop.
Note that using CSV to represent a line having multiple stops is totally not the way to go here, as it renders the powers of your relational database useless.
Update:
If you want to record the position of a stop in a given line (and assuming that positions would differ across lines), you could use the following table:
stopNumbers
(lineId, stopId, stopPosition)
The stop position can be obtained knowing the line's ID and the stop's ID.
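Putting the pieces together, a minimal SQLite sketch of this design (folding the position column from the update into the bridge table) might look like:

CREATE TABLE lines (
    id   INTEGER PRIMARY KEY,
    name VARCHAR(20) NOT NULL
);

CREATE TABLE stops (
    id   INTEGER PRIMARY KEY,
    name VARCHAR(20) NOT NULL
);

-- One row per (line, stop) pair; stopPosition orders the stops within a line.
CREATE TABLE bridge (
    lineId       INTEGER NOT NULL REFERENCES lines(id),
    stopId       INTEGER NOT NULL REFERENCES stops(id),
    stopPosition INTEGER NOT NULL,
    PRIMARY KEY (lineId, stopId)
);

-- All stops of line 2, in riding order:
SELECT s.name
FROM bridge b
JOIN stops s ON s.id = b.stopId
WHERE b.lineId = 2
ORDER BY b.stopPosition;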
You need a many-to-many relation, which is stored in a separate table like this:
table lines_to_stops
line_fk
stop_fk
That's the relational world ...
Note that records in the database are not in any specific order. If you need to put the stops into any specific order (which you most probably do), you have to store this order to the database as well:
table lines_to_stops
line_fk
stop_fk
order_in_line
I am a software engineer, but I am very new to databases and I am trying to hack up a tool to show some demo.
I have an Apache server which serves a simple web page full of tables. Each row in the table has a proposal id and a link to a web page where the proposal is explained. So just two columns.
+----+----------+
| id | proposal |
+----+----------+
| 1  | foo.html |
| 2  | bar.html |
+----+----------+
Now, I want to add a third column titled Comments where a user can leave comments.
+----+----------+-------------------------+
| id | proposal | Comments                |
+----+----------+-------------------------+
| 1  | foo.html | x: great idea !         |
|    |          | y: +1                   |
| 2  | bar.html | z: not for this release |
+----+----------+-------------------------+
I just want to quickly hack up something to show this as a demo and get feedback. I am planning to use SQLite to create a table per id and store the userid and comments in the table. People can add comments at the same time, so I am planning to use a lock for operations on the SQLite database. I am not worried about scaling, I just want to show it and get feedback. Are there any major flaws in this implementation?
There are similar questions, but I am looking for the simplest possible implementation.
A table per ID? Why would you want to do that? If you get a large number of proposals, the number of tables can get out of hand very quickly. You just need to keep an id column in the table to keep track of things, which keeps the number of tables at a sane figure.
The other drawback of using a table for each proposal is that you will not be able to use prepared statements for those, because table names cannot be bound as a parameter.
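A single-table sketch along those lines (all names here are illustrative):

CREATE TABLE proposals (
    id       INTEGER PRIMARY KEY,
    proposal TEXT NOT NULL
);

CREATE TABLE comments (
    id          INTEGER PRIMARY KEY,
    proposal_id INTEGER NOT NULL REFERENCES proposals(id),
    userid      TEXT NOT NULL,
    comment     TEXT NOT NULL
);

-- All comments for proposal 1, in insertion order:
SELECT userid, comment FROM comments WHERE proposal_id = 1 ORDER BY id;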
Assuming the table name is 'a':
Add column
alter table a add column Comments text;
Insert comment
insert into a values (4, 'hello.html', 'New Comment');
You need to provide values for the other two columns along with the new comment.
I read enough to know that this error occurs when a string contains some characters that Postgres doesn't like. However, I cannot figure out whether there is a way to validate strings before writing them. In particular, I'm doing batch inserts:
insert into foo(col1,col2,col3) values ('a',2,3),('b',4,0),....
My DB is setup like this:
Name | Owner | Encoding | Collate | Ctype | Access privileges
------------+--------+----------+---------+-------+-------------------
stats | me | UTF8 | C | C |
Periodically, some bad string will get in and the whole insert will fail (e.g. change���=). I batch up quite a few values in a single insert, so ideally I'd like to validate the string rather than bomb the whole insert. Is there a list of which characters are not allowed in a Postgres insert?
Using postgresql-jdbc 9.1-901.jdbc4
This message means that your string data has a null character "\0" in it.
I can't find an authoritative cite for this (let me know if you have one).
It is discussed at https://www.postgresql.org/message-id/alpine.BSO.2.00.0906031639270.2432%40leary.csoft.net
It is mentioned in passing in the official docs at https://www.postgresql.org/docs/9.3/static/functions-string.html
All other characters are allowed.
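If you're binding parameters through the JDBC driver (as your version string suggests), one common client-side workaround, offered as a sketch rather than an official driver feature, is to strip the character before binding, e.g. value.replace("\u0000", "") in Java, since Postgres text columns cannot store a null byte regardless of encoding.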
This can also happen when a data type does not match the target type, for example int4 -> int8.
In my case, I was loading a query from an SQL file. The problem was due to the file's encoding. I changed it to UTF-8 and it works. Hope that helps!