I have a database test.db with the first column containing values like 123456;abcdef;ghijk. Is it possible to split the data into separate columns?
123456 never changes length.
abcdef and ghijk change length and also may contain nothing.
I have tried the query below, but the ; appears in either t2 or t3 depending on the lengths of abcdef and ghijk.
select substr(column,1,6) AS "t1",
       substr(column,8,6) AS "t2",
       substr(column,15,10) AS "t3"
from test;
Is the ; separator causing the issue?
Or can I export the database to .sql, reformat the text, then import it into a new database?
There is no built-in SQLite function that can split strings like this.
If you are using the SQLite C API or a wrapper like APSW, you could create your own function (C, APSW).
If you want to do nothing more than a one-time conversion, export/import through a text file would be the simplest solution.
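For example, a one-time conversion from the sqlite3 shell could look roughly like this (t_raw, c1, t_split and data.txt are hypothetical names; the trick is that .import splits each exported line at the ; separators):

create table t_split (t1 TEXT, t2 TEXT, t3 TEXT);
.separator ;
.output data.txt
select c1 from t_raw;
.output stdout
.import data.txt t_split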
You can split your rows into columns with:
create table t1 as
select substr(c1, 1, instr(c1, ';') - 1) as column1,
       substr(c1, instr(c1, ';') + 1,
              instr(substr(c1, instr(c1, ';') + 1), ';') - 1) as column2,
       substr(c1, instr(c1, ';') + 1 + instr(substr(c1, instr(c1, ';') + 1), ';')) as column3
from table_test;
where c1 is the column you are selecting from.
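To sanity-check the expressions before creating the table, you can run the select against a literal sample value:

select substr(c1, 1, instr(c1, ';') - 1),
       substr(c1, instr(c1, ';') + 1, instr(substr(c1, instr(c1, ';') + 1), ';') - 1),
       substr(c1, instr(c1, ';') + 1 + instr(substr(c1, instr(c1, ';') + 1), ';'))
from (select '123456;abcdef;ghijk' as c1);

which should return 123456, abcdef and ghijk.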
I have a database A.db, which contains tables t1, t2 and t3.
Now I want to create a new database B.db, which contains t1 and some chosen columns col1 and col4 from t2.
With .import I get hundreds of errors and it seems to work only for full tables.
.output sounds like I just save the output as it would be printed.
Basically, I need an insert into foo select ... across different files. How can I do this?
First you must attach A.db to your current database and give it an alias like adb.
Then write the insert statement just like you would if all the tables existed in the same database, qualifying the column names with the database alias.
It's good practice to list, in parentheses after insert into foo, all the columns of foo that you will populate from the other two tables, and to make sure their order matches the order of the columns in the select list:
attach database 'pathtoAdatabase/A.db' as adb;
insert into foo (column1, column2, .......)
select adb.t1.column1, adb.t1.column2, ...., adb.t2.col1, adb.t2.col4
from adb.t1 inner join adb.t2
on <join condition>
Replace <join condition> with the condition on which you will join the two tables to build the rows that you will insert into foo, something like:
adb.t1.id = adb.t2.id
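If you also need the whole of t1 copied into B.db (assuming B.db is the database you are currently connected to), a single statement handles it; note that create table ... as select copies the data but not any indexes or constraints:

create table t1 as select * from adb.t1;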
I have a column C of type REAL in table F in SQLite. I want to join table F to another table wherever that table contains the negative value of C (along with some other fields).
However, -C, 0-C etc. all return the rounded value of C, e.g. when C contains "123,456" then -C returns "-123".
Should I cast this via a string first, or is the syntax different?
Looks like the , in 123,456 is meant to be a decimal separator, but SQLite treats the whole thing as a string (i.e. '123,456' rather than 123.456). Keep in mind that SQLite's type system is a little different from standard SQL's: values have types, but columns don't:
[...] In SQLite, the datatype of a value is associated with the value itself, not with its container. [...]
So you can quietly put a string (that looks like a real number in some locales) into a real column and nothing bad happens until later.
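You can see this with typeof():

sqlite> select typeof('123,456'), typeof(123.456);
text|real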
You could fix the import process to interpret the decimal separator as desired before the data gets into SQLite, or you could use replace to fix the values up as needed:
sqlite> select -'123,45';
-123
sqlite> select -replace('123,45', ',', '.');
-123.45
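If you would rather repair the stored values once instead of wrapping every query in replace, an in-place fix along these lines should work (assuming every affected value in C uses a comma decimal separator):

update F set C = cast(replace(C, ',', '.') as real);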
I am new to RSQLite.
I have an input document in text format in which values are separated by '|'.
I created a table with the required variables (dummy code as follows)
db <- dbConnect(SQLite(), dbname = "test.sqlite")
dbSendQuery(conn = db,
            "CREATE TABLE TABLE1(
             MARKS INTEGER,
             ROLLNUM INTEGER,
             NAME CHAR(25),
             DATED DATE)"
)
However, I am stuck on how to import values into the created table.
I cannot use an INSERT INTO ... VALUES command, as there are thousands of rows and more than 20 columns in the original data file, and it is impossible to manually type in each data point.
Can someone suggest an alternative efficient way to do so?
You are using a scripting language. The whole point of that is to avoid typing each data point by hand. Sorry.
You have two routes:
1: You have correctly created a database connection and an empty table in your SQLite database. Nice!
To load data into the table, load your text file into R using e.g. df <- read.table('textfile.txt', sep='|') (modify the arguments to fit your text file).
To have a 'dynamic' INSERT statement, you can use placeholders. RSQLite allows both named and positional placeholders. To insert a single row, you can do:
dbSendQuery(db, 'INSERT INTO table1 (MARKS, ROLLNUM, NAME) VALUES (?, ?, ?);', list(1, 16, 'Big fellow'))
You see? The first ? got value 1, the second ? got value 16, and the last ? got the string Big fellow. Also note that you do not enclose placeholders for text in quotation marks (' or ")!
Now, you have thousands of rows. Or just more than one. Either way, you can send in your data frame. dbSendQuery has two requirements: 1) each vector must have the same number of entries (not an issue when providing a data.frame), and 2) you may only submit as many vectors as you have placeholders.
I assume your data frame df contains columns mark, roll, and name, corresponding to the table's columns. Then you may run:
dbSendQuery(db, 'INSERT INTO table1 (MARKS, ROLLNUM, NAME) VALUES (:mark, :roll, :name);', df)
This will execute an INSERT statement for each row in df!
TIP! Because an INSERT statement is executed for each row, inserting thousands of rows can take a long time: after each insert, data is written to file and indices are updated. Instead, enclose the inserts in a transaction:
dbBegin(db)
res <- dbSendQuery(db, 'INSERT ...;', df)
dbClearResult(res)
dbCommit(db)
and SQLite will write the data to a journal file, only committing the result when you execute dbCommit(db). Try both methods and compare the speed!
2: Ah, yes. The second way. This can be done entirely in SQLite.
With the SQLite command-line utility (sqlite3 from your command line, not R), you can import a text file straight into a table, or read the text file into a temporary table and run an INSERT INTO ... SELECT ...; from it, as sketched below.
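A minimal sketch of the temporary-table route (file and table names are hypothetical; the temp table is created first so that .import treats every line of the file as data):

create table temp_import (MARKS, ROLLNUM, NAME, DATED);
.separator |
.import input.txt temp_import
insert into TABLE1 select * from temp_import;
drop table temp_import;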
Useful site to remember: http://www.sqlite.com/lang.html
A little late to the party, but DBI provides dbAppendTable() which will write the contents of a dataframe to an SQL table. Column names in the dataframe must match the field names in the database. For your example, the following code would insert the contents of my random dataframe into your newly created table.
library(DBI)
db <- dbConnect(RSQLite::SQLite(), dbname = ":memory:")
dbExecute(db,
"CREATE TABLE TABLE1(
MARKS INTEGER,
ROLLNUM INTEGER,
NAME TEXT
)"
)
df <- data.frame(MARKS = sample(1:100, 10),
ROLLNUM = sample(1:100, 10),
NAME = stringi::stri_rand_strings(10, 10))
dbAppendTable(db, "TABLE1", df)
I don't think there is a nice way to do a large number of inserts directly from R. SQLite does have a bulk insert functionality, but the RSQLite package does not appear to expose it.
From the command line you may try the following:
.separator |
.import your_file.csv your_table
where your_file.csv is the CSV (or pipe-delimited) file containing your data and your_table is the destination table.
See the documentation under CSV Import for more information.
I would like to run a query involving joining a table to a manually generated list but am stuck trying to generate the manual list. There is an example of what I am attempting to do below:
SELECT *
FROM ('29/12/2014', '30/12/2014', '31/12/2014') dates;
Ideally I would want my output to look like:
29/12/2014
30/12/2014
31/12/2014
What's your Teradata release?
In TD14 there's STRTOK_SPLIT_TO_TABLE:
SELECT *
FROM TABLE (STRTOK_SPLIT_TO_TABLE(1 -- any dummy value
,'29/12/2014,30/12/2014,31/12/2014' -- any delimited string
,',' -- delimiter
)
RETURNS (outkey INTEGER
,tokennum INTEGER
,token VARCHAR(20) CHARACTER SET UNICODE) -- modify to match the actual size
) AS d
You can easily put this in a Derived Table and then join to it.
inkey (here the dummy value 1) is a numeric or string column, usually a key, which can be used for joining back to the original row.
outkey is the same as inkey.
tokennum is the ordinal position of the token in the input string.
token is the extracted substring.
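For example, joining the split tokens back to a table might look like this (mytab and its column datecol are hypothetical):

SELECT t.*
FROM mytab t
JOIN (
   SELECT token
   FROM TABLE (STRTOK_SPLIT_TO_TABLE(1
              ,'29/12/2014,30/12/2014,31/12/2014'
              ,',')
        RETURNS (outkey INTEGER
                ,tokennum INTEGER
                ,token VARCHAR(20) CHARACTER SET UNICODE)
        ) AS d
) dates
ON t.datecol = dates.token;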
Try this:
select '29/12/2014'
union
select '30/12/2014'
union
...
It should work in Teradata as well as in MySQL; see the completed version below.
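Written out in full for the three dates in the question (UNION ALL avoids the duplicate-elimination step and is safe here since the values are distinct):

select '29/12/2014' as dt
union all
select '30/12/2014'
union all
select '31/12/2014';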
I've got a table with the following structure:
CREATE TABLE "mytable" ("column01" INTEGER NOT NULL , "column02" INTEGER NOT NULL )
And I want to switch the values between columns - I want column02 to become column01 and column01 to become column02.
i.e.:
column01 / column02
apple / 01
day / 05
light / 28
And I want it to become:
column01 / column02
01 / apple
05 / day
28 / light
Is there a way to achieve this, using only SQL query?
Thanks.
I just tested the below query and it works:
update mytable set column01 = column02, column02 = column01
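This works because in an UPDATE, SQLite evaluates the right-hand side expressions against the row's original values, so both assignments see the pre-update data. A quick check with a throwaway table (hypothetical names; the output shown is what that behavior implies):

sqlite> create table demo (column01, column02);
sqlite> insert into demo values ('apple', '01');
sqlite> update demo set column01 = column02, column02 = column01;
sqlite> select * from demo;
01|apple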
I can't try it here, but this might work. It's a swap that needs no temporary variables.
The algorithm is:
x -= y;
y += x; // y gets the original value of x
x = (y - x); // x gets the original value of y
so that would make your UPDATE statement like this:
UPDATE mytable
SET column01 = column01 - column02,
column02 = column02 + column01,
column01 = column02 - column01
It will only work if the columns are evaluated in left-to-right order and updated in place, as opposed to from a snapshot of the row. In SQLite that is not the case: the right-hand sides of an UPDATE are all evaluated against the row's original values, which is exactly why the simple two-assignment swap above works and this arithmetic trick is unnecessary. (It would also only work for numeric values.)
SQLite is extremely limited in the ALTER TABLE commands it allows. As stated on the official project site:
SQLite supports a limited subset of ALTER TABLE. The ALTER TABLE command in SQLite allows the user to rename a table or to add a new column to an existing table. It is not possible to rename a column, remove a column, or add or remove constraints from a table.
Because of this, and because the two columns you want to swap are seemingly of different types (INTEGER and VARCHAR), I think you will need to export the table contents as SQL, drop the table, create a table with the structure you want, and then import the file you dumped back into that table.
One possible way to do this:
sqlite> .output my_outfile.sql
This changes output from displaying on screen to being written to a file.
sqlite> .dump my_table
This method will dump the CREATE TABLE SQL and all the INSERT statements as a transaction. You'll need to edit my_outfile.sql with vi or another editor to remove the CREATE TABLE statement, and you may also need to remove the BEGIN TRANSACTION and COMMIT commands, as I've had trouble importing data with them.
sqlite> .output stdout
This brings your "vision" back: command output will show on your screen again.
sqlite> CREATE TABLE. . .
Re-create the table with the column order you want, being sure to change the types (INTEGER/VARCHAR) appropriately.
sqlite> .read my_outfile.sql
This will execute all the SQL commands in the file you dumped earlier, which should result in achieving the goal you were after as the INSERT statements dumped do not associate column names with specific values.
So, that should do it. A bit verbose, but it may be the only way with SQLite.