I've got a table with the following structure:
CREATE TABLE "mytable" ("column01" INTEGER NOT NULL , "column02" INTEGER NOT NULL )
And I want to switch the values between columns - I want column02 to become column01 and column01 to become column02.
i.e.:
column01 / column02
apple / 01
day / 05
light / 28
And I want it to become:
column01 / column02
01 / apple
05 / day
28 / light
Is there a way to achieve this, using only an SQL query?
Thanks.
I just tested the below query and it works:
update mytable set column01 = column02, column02 = column01
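This is easy to check from any SQLite driver; here is a quick sketch using Python's stdlib sqlite3 module (TEXT columns are used so the sample values fit, unlike the INTEGER declarations in the question):

```python
import sqlite3

# In-memory database to test the swap. SQLite evaluates every SET
# expression against the row's original values, so no temp column is needed.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE mytable ("column01" TEXT NOT NULL, "column02" TEXT NOT NULL)')
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [("apple", "01"), ("day", "05"), ("light", "28")])

con.execute("UPDATE mytable SET column01 = column02, column02 = column01")

rows = con.execute("SELECT * FROM mytable ORDER BY rowid").fetchall()
print(rows)  # [('01', 'apple'), ('05', 'day'), ('28', 'light')]
```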
I can't try it here, but this might work. It's a swap that needs no temporary variables.
The algorithm is:
x -= y;
y += x; // y gets the original value of x
x = (y - x); // x gets the original value of y
so that would make your UPDATE statement like this:
UPDATE mytable
SET column01 = column01 - column02,
column02 = column02 + column01,
column01 = column02 - column01
This only works if the assignments are evaluated left to right and applied in place, rather than against a snapshot of the original row, which I believe is the case for SQLite.
SQLite is extremely limited in the ALTER TABLE commands allowed. As stated on the official project site:
SQLite supports a limited subset of ALTER TABLE. The ALTER TABLE command in SQLite allows the user to rename a table or to add a new column to an existing table. It is not possible to rename a column, remove a column, or add or remove constraints from a table.
Because of this, and because the two columns you want to swap are seemingly of different types (INTEGER and VARCHAR), I think you will need to export the table contents as SQL/CSV, drop the table, create a table with the structure you want, and then import the dumped file back into that table.
One possible way to do this:
sqlite> .output my_outfile.sql
This changes output from displaying on screen to being written to a file.
sqlite> .dump my_table
This dumps the CREATE TABLE SQL and all the INSERT statements as a transaction. You'll need to edit my_outfile.sql with vi or another editor to manually remove the CREATE TABLE statement, and I think you'll also need to remove the BEGIN and END transaction commands, as I've had trouble importing data with them.
sqlite> .output stdout
This brings your "vision" back: command output will show on your screen again.
sqlite> CREATE TABLE. . .
Re-create the table with the column order you want, being sure to change the types (INTEGER/VARCHAR) appropriately.
sqlite> .read my_outfile.sql
This executes all the SQL commands in the file you dumped earlier, which should achieve the goal you were after, since the dumped INSERT statements do not associate column names with specific values.
So, that should do it. A bit verbose, but it may be the only way with sqlite.
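If you would rather not edit a dump file by hand, the same recreate-and-reimport idea can be sketched entirely over one connection. This uses Python's stdlib sqlite3 module; the table and column types are only illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE mytable ("column01" VARCHAR NOT NULL, "column02" INTEGER NOT NULL)')
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [("apple", 1), ("day", 5), ("light", 28)])

# Re-create the table with the column types swapped...
con.execute('CREATE TABLE mytable_new ("column01" INTEGER NOT NULL, "column02" VARCHAR NOT NULL)')
# ...copy the data across with the columns swapped...
con.execute("INSERT INTO mytable_new SELECT column02, column01 FROM mytable")
# ...and replace the old table with the new one.
con.execute("DROP TABLE mytable")
con.execute("ALTER TABLE mytable_new RENAME TO mytable")

rows = con.execute("SELECT * FROM mytable ORDER BY rowid").fetchall()
print(rows)  # [(1, 'apple'), (5, 'day'), (28, 'light')]
```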
I understand that SQLite does not have If-Else condition check, and people have been using case statements to get around it. However I want to do a if condition check before executing a certain portion of the script, like the following:
IF (condition = true)
INSERT INTO tableA(A, B)
VALUES (a, b)
....
END
From what I have been trying, case statement doesn't seem to work. Is there any way I can accomplish the above in SQLite?
Thanks for all your help!
You could perhaps use an INSERT SELECT
INSERT INTO table SELECT ...;
The second form of the INSERT statement contains a SELECT statement instead of a VALUES clause. A new entry is inserted into the table for each row of data returned by executing the SELECT statement. If a column-list is specified, the number of columns in the result of the SELECT must be the same as the number of items in the column-list. Otherwise, if no column-list is specified, the number of columns in the result of the SELECT must be the same as the number of columns in the table. Any SELECT statement, including compound SELECTs and SELECT statements with ORDER BY and/or LIMIT clauses, may be used in an INSERT statement of this form.
extract from SQL As Understood By SQLite - INSERT
e.g.
INSERT into xxx
SELECT null as id,
CASE
WHEN filesize < 1024 THEN 'just a little bit'
WHEN filesize >= 1024 THEN 'quite a bit'
END AS othercolumn
FROM filesizes
WHERE filesize < 1024 * 1024
The above inserts rows into table xxx, which consists of two columns: id (a rowid alias) and othercolumn. The SELECT produces matching columns (id, always set to null, and othercolumn), selecting from the filesizes table only those rows where the filesize column is less than 1024 * 1024 (1048576); that WHERE clause is what makes the insert conditional.
Furthermore, if the filesize is less than 1024, othercolumn is set to 'just a little bit'; if the filesize is 1024 or greater, othercolumn is set to 'quite a bit'. That makes the conditional insert more complex.
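As a rough illustration of the conditional INSERT ... SELECT, here is a self-contained sketch using Python's stdlib sqlite3 module, with some made-up filesize values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE filesizes (filesize INTEGER)")
con.executemany("INSERT INTO filesizes VALUES (?)", [(500,), (2048,), (2000000,)])
con.execute("CREATE TABLE xxx (id INTEGER PRIMARY KEY, othercolumn TEXT)")

# The WHERE clause decides whether a row is inserted at all;
# the CASE expression decides what value it gets.
con.execute("""
    INSERT INTO xxx
    SELECT null AS id,
           CASE
               WHEN filesize < 1024 THEN 'just a little bit'
               WHEN filesize >= 1024 THEN 'quite a bit'
           END AS othercolumn
    FROM filesizes
    WHERE filesize < 1024 * 1024
""")

# The 2000000-byte row is filtered out; the other two are classified.
rows = con.execute("SELECT othercolumn FROM xxx ORDER BY id").fetchall()
print(rows)  # [('just a little bit',), ('quite a bit',)]
```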
I am new to RSQLite.
I have an input document in text format in which values are separated by '|'.
I created a table with the required variables (dummy code as follows)
db <- dbConnect(SQLite(), dbname="test.sqlite")
dbSendQuery(conn=db,
"CREATE TABLE TABLE1(
MARKS INTEGER,
ROLLNUM INTEGER,
NAME CHAR(25),
DATED DATE)"
)
However, I am stuck at how to import values into the created table.
I cannot use INSERT INTO Values command as there are thousands of rows and more than 20+ columns in the original data file and it is impossible to manually type in each data point.
Can someone suggest an alternative efficient way to do so?
You are using a scripting language. The whole point of it is literally to avoid manually typing each data point. Sorry.
You have two routes:
1: You have correctly loaded a database connection and created an empty table in your SQLite database. Nice!
To load data into the table, load your text file into R using e.g. df <-
read.table('textfile.txt', sep='|') (modify arguments to fit your text file).
To have a 'dynamic' INSERT statement, you can use placeholders. RSQLite allows for both named or positioned placeholder. To insert a single row, you can do:
dbSendQuery(db, 'INSERT INTO table1 (MARKS, ROLLNUM, NAME) VALUES (?, ?, ?);', list(1, 16, 'Big fellow'))
You see? The first ? got value 1, the second ? got value 16, and the last ? got the string Big fellow. Also note that you do not enclose placeholders for text in quotation marks (' or ")!
Now, you have thousands of rows. Or just more than one. Either way, you can send in your data frame. dbSendQuery has some requirements: 1) each vector must have the same number of entries (not an issue when providing a data.frame); and 2) you may only submit the same number of vectors as you have placeholders.
I assume your data frame, df contains columns mark, roll, and name, corresponding to the columns. Then you may run:
dbSendQuery(db, 'INSERT INTO table1 (MARKS, ROLLNUM, NAME) VALUES (:mark, :roll, :name);', df)
This will execute an INSERT statement for each row in df!
TIP! Because an INSERT statement is executed for each row, inserting thousands of rows can take a long time, because after each insert, data is written to file and indices are updated. Instead, enclose it in a transaction:
dbBegin(db)
res <- dbSendQuery(db, 'INSERT ...;', df)
dbClearResult(res)
dbCommit(db)
and SQLite will save the data to a journal file, and only save the result when you execute the dbCommit(db). Try both methods and compare the speed!
2: Ah, yes. The second way. This can be done entirely in SQLite.
With the SQLite command utility (sqlite3 from your command line, not R), you can attach a text file as a table and simply do an INSERT INTO ... SELECT ... ; command. Alternatively, read the text file in sqlite3 into a temporary table and run an INSERT INTO ... SELECT ... ;.
Useful site to remember: http://www.sqlite.com/lang.html
A little late to the party, but DBI provides dbAppendTable() which will write the contents of a dataframe to an SQL table. Column names in the dataframe must match the field names in the database. For your example, the following code would insert the contents of my random dataframe into your newly created table.
library(DBI)
db <- dbConnect(RSQLite::SQLite(), dbname=":memory:")
dbExecute(db,
"CREATE TABLE TABLE1(
MARKS INTEGER,
ROLLNUM INTEGER,
NAME TEXT
)"
)
df <- data.frame(MARKS = sample(1:100, 10),
ROLLNUM = sample(1:100, 10),
NAME = stringi::stri_rand_strings(10, 10))
dbAppendTable(db, "TABLE1", df)
I don't think there is a nice way to do a large number of inserts directly from R. SQLite does have a bulk insert functionality, but the RSQLite package does not appear to expose it.
From the command line you may try the following:
.separator |
.import your_file.csv your_table
where your_file.csv is the CSV (or pipe delimited) file containing your data and your_table is the destination table.
See the documentation under CSV Import for more information.
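If the sqlite3 command-line tool is not available, roughly the same bulk load can be scripted. This sketch uses Python's stdlib csv and sqlite3 modules, with hypothetical sample data standing in for the pipe-delimited file:

```python
import csv
import io
import sqlite3

# Stand-in for your pipe-delimited text file (hypothetical sample rows).
data = io.StringIO("10|1|Alice\n20|2|Bob\n")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TABLE1 (MARKS INTEGER, ROLLNUM INTEGER, NAME TEXT)")

# executemany feeds every parsed row to one prepared INSERT; the
# "with con" block wraps it all in a single transaction for speed,
# much like dbBegin()/dbCommit() in R.
with con:
    con.executemany("INSERT INTO TABLE1 VALUES (?, ?, ?)",
                    csv.reader(data, delimiter="|"))

# INTEGER column affinity converts the numeric text to integers on insert.
rows = con.execute("SELECT * FROM TABLE1 ORDER BY rowid").fetchall()
print(rows)  # [(10, 1, 'Alice'), (20, 2, 'Bob')]
```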
I've got an SQLite database that I populate directly from txt files. However, my text files use commas to mark the decimal point. After inspecting the already inserted records, this leads to confusion, as SQLite doesn't interpret these numbers correctly.
Is it possible to change records with a comma to a point in place (or should I rather populate the database all over again)?
If you want to have repeatable and consistent processes, you should fix your import and execute it again.
If you want to change the characters in place, use the replace() function:
UPDATE MyTable
SET MyColumn = replace(MyColumn, ',', '.')
WHERE MyColumn LIKE '%,%';
If you want the result to be numbers, you also have to change the type with CAST:
UPDATE MyTable
SET MyColumn = CAST(replace(MyColumn, ',', '.') AS NUMERIC)
WHERE MyColumn LIKE '%,%';
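A small sketch of the replace()/CAST fix, using Python's stdlib sqlite3 module. Note that the column needs numeric affinity for the cast to stick; the names and values here are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# NUMERIC affinity lets the column hold both the bad text values
# and, after the fix, real numbers.
con.execute("CREATE TABLE MyTable (MyColumn NUMERIC)")
con.executemany("INSERT INTO MyTable VALUES (?)", [("3,14",), ("2,5",), ("42",)])

# Swap the decimal comma for a point, and cast so the stored value is numeric.
con.execute("""
    UPDATE MyTable
    SET MyColumn = CAST(replace(MyColumn, ',', '.') AS NUMERIC)
    WHERE MyColumn LIKE '%,%'
""")

rows = con.execute("SELECT MyColumn FROM MyTable ORDER BY rowid").fetchall()
print(rows)  # [(3.14,), (2.5,), (42,)]
```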
I want to get a subtree from a table by tree path.
the path column stores strings like:
foo/
foo/bar/
foo/bar/baz/
If I try to select all records that start with a certain path:
EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE "foo/%"
it tells me that the table is scanned, even though the path column is indexed :(
Is there any way I could make LIKE use the index and not scan the table?
I found a way to achieve what I want with closure table, but it's harder to maintain and writes are extremely slow...
To be able to use an index for LIKE in SQLite,
the table column must have TEXT affinity, i.e., have a type of TEXT or VARCHAR or something like that; and
the index must be declared as COLLATE NOCASE (either directly, or because the column has been declared as COLLATE NOCASE):
> CREATE TABLE f(path TEXT);
> CREATE INDEX fi ON f(path COLLATE NOCASE);
> EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE 'foo/%';
0|0|0|SEARCH TABLE f USING COVERING INDEX fi (path>? AND path<?)
The second restriction could be removed with the case_sensitive_like PRAGMA, but this would change the behaviour of LIKE.
Alternatively, one could use a case-sensitive comparison, by replacing LIKE 'foo/%' with GLOB 'foo/*'.
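A quick way to check both conditions is to inspect the query plan from a driver, for example with Python's stdlib sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE f (path TEXT)")
con.execute("CREATE INDEX fi ON f(path COLLATE NOCASE)")

# With a TEXT column and a NOCASE index, the LIKE prefix query
# should appear as a SEARCH using the index, not a full table SCAN.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM f WHERE path LIKE 'foo/%'"
).fetchall()
print(plan)
```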
LIKE has strict requirements to be optimizable with an index (ref).
If you can relax your requirements a little, you can use lexicographic ordering to get indexed lookups, e.g.
SELECT * FROM f WHERE PATH >= 'foo/' AND PATH < 'foo0'
where 0 is the lexicographically next character after /.
This is essentially the same optimization the optimizer would do for LIKEs if the requirements for optimization are met.
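A sketch of the range trick with Python's stdlib sqlite3 module, using the sample paths from the question plus two rows that must be excluded:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE f (path TEXT)")
con.execute("CREATE INDEX fi ON f(path)")
con.executemany("INSERT INTO f VALUES (?)",
                [("foo/",), ("foo/bar/",), ("foo/bar/baz/",),
                 ("foobar/",), ("other/",)])

# '0' is the character right after '/' in ASCII, so this half-open
# range covers exactly the strings that start with 'foo/'.
rows = con.execute(
    "SELECT path FROM f WHERE path >= 'foo/' AND path < 'foo0' ORDER BY path"
).fetchall()
print(rows)  # [('foo/',), ('foo/bar/',), ('foo/bar/baz/',)]
```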
I need to select v_col1 from table_x, and that column gives me a string that I need to put (update) into the same rowid but into a different column (h_col2) in the same table table_x. Sorry, it seems easy, but I am a beginner....
tabl_x
rowid V_col1, h_col2 etc .....
1 672637263 GVRT1898
2 384738477 GVRT1876
3 263237863 GVRT1832
Like in this example, I need to put GVRT1898 (update) instead of 672637263, and I need to go into every row in this table_x and fix it;
the next line would be rowid 2 getting GVRT1876 instead of 384738477 :-)
This table has 40000 lines like this and I need to loop over every rowid.
Thanks for your response Justin; this is a little more complex.
I have this string in h_col and need to take only the GVRT number out and put it into v_col, but it's
hard because the GVRT number is in various places in the column, see below....
"E_ID"=X:"GVRT1878","RCode"=X:"156000","Month"=d:1,"Activate"=d:5,"Disp_Id"=X:"4673498","Tar"=X:"171758021";
2"E_ID"=X:"561001760","RCode"=X:"156000","Month"=d:1,"Activate"=d:5,"Disp_Id"=X:"GVRT1898","Tar"=X:"171758021";
The h_col column has the number I want, but in various places: sometimes in this 600-byte column it's at byte 156, sometimes at byte 287; the only unique marker is "GVRT....". How can I take that string and put it into v_col?
Can you show me how to write such SQL / PL/SQL?
regards & thanks
It sounds like you just want
UPDATE tabl_x
SET h_col2 = v_col1
Of course, if you do something like this, that implies that one of the two columns should be dropped or the data model needs to get fixed. Having two copies of the same data in each row is a bad idea from a normalization standpoint if nothing else.
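For the simple copy in this answer, here is a sketch with Python's stdlib sqlite3 module and the sample values from the question; swap the column names in the SET clause if you want the copy to go the other way:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tabl_x (v_col1 TEXT, h_col2 TEXT)")
con.executemany("INSERT INTO tabl_x VALUES (?, ?)",
                [("672637263", "GVRT1898"),
                 ("384738477", "GVRT1876"),
                 ("263237863", "GVRT1832")])

# One UPDATE touches every row; no explicit loop over rowids is needed.
con.execute("UPDATE tabl_x SET h_col2 = v_col1")

rows = con.execute("SELECT h_col2 FROM tabl_x ORDER BY rowid").fetchall()
print(rows)  # [('672637263',), ('384738477',), ('263237863',)]
```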