sqlite3 from Windows command prompt - sqlite

CREATE TABLE texhisowntable (age INTEGER, name CHAR(32));
Into this empty table I write information: first the age, then a name of up to 32 characters (string, numbers, or chars, I don't know which one you would try, but I write words).
So let's do it:
INSERT INTO texhisowntable (age, name) VALUES (100, 'TheJavaRockS>|<RwithTheGoldenAxE');
So now I'm TheJavaRockS>|<RwithTheGoldenAxE.
SELECT * FROM texhisowntable;
and my command prompt would say: "ACDC, let it play, you are old but you look fine"
// it would print: 100|TheJavaRockS>|<RwithTheGoldenAxE
But I'm old and I forget things, so who knows how I can see how I created the table.
I only want the command prompt to print:
CREATE TABLE texhisowntable (age INTEGER, name CHAR(32));

I'll answer my own question. With the command ".schema" it shows every table in the file, for example in the file "thnkhsu.db", which also contains the TABLE "texhisowntable".
If you have many tables and are only looking for a particular one, for example "texhisowntable" in the file "thnkhsu.db", you only need to write
".schema texhisowntable" and the command shows how I created the TABLE.
;) It's very simple.
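For reference, a minimal sqlite3 session showing both forms (assuming the database file "thnkhsu.db" and the table definition from above):

sqlite3 thnkhsu.db
sqlite> .schema
CREATE TABLE texhisowntable (age INTEGER, name CHAR(32));
sqlite> .schema texhisowntable
CREATE TABLE texhisowntable (age INTEGER, name CHAR(32));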

Related

SQLite - Display in the terminal the content of a list of tables and time it

I managed to display a list of the table names of interest in my database with the following query:
SELECT name
FROM sqlite_master
WHERE type = 'table'
AND name LIKE '%#_1' ESCAPE '#';
(It is not the subject, but it returns a list of table names ending in "_1".)
Now what I would like to do is display the content of all these tables in one command (just as if I was using cat *), and I would like to time this command.
So what should the command be?
Thank you for your help.
This is not possible with a single SQL command.
You have to generate a series of SELECT statements, one for each table, and execute all of them.
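A minimal sketch of that approach in the sqlite3 shell: let a query like the one above generate the SELECT statements, write them to a script file, then run the script with the timer switched on (the file name dump_selects.sql is just an illustration):

.output dump_selects.sql
SELECT 'SELECT * FROM "' || name || '";'
FROM sqlite_master
WHERE type = 'table'
AND name LIKE '%#_1' ESCAPE '#';
.output stdout
.timer on
.read dump_selects.sql

With .timer on, sqlite3 prints the elapsed time for each statement it runs.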

Adding value to existing database table in RSQLite

I am new to RSQLite.
I have an input document in text format in which values are separated by '|'.
I created a table with the required variables (dummy code as follows)
library(RSQLite)
db <- dbConnect(SQLite(), dbname = "test.sqlite")
dbSendQuery(conn = db,
"CREATE TABLE TABLE1(
MARKS INTEGER,
ROLLNUM INTEGER,
NAME CHAR(25),
DATED DATE)"
)
However, I am stuck at how to import values into the created table.
I cannot use an INSERT INTO ... VALUES command, as there are thousands of rows and more than 20 columns in the original data file, and it is impossible to manually type in each data point.
Can someone suggest an alternative efficient way to do so?
You are using a scripting language. The whole point of that is literally to avoid manually typing each data point. Sorry.
You have two routes:
1: You have correctly created a database connection and an empty table in your SQLite database. Nice!
To load data into the table, load your text file into R using e.g. df <- read.table('textfile.txt', sep='|') (modify the arguments to fit your text file).
To have a 'dynamic' INSERT statement, you can use placeholders. RSQLite allows both named and positional placeholders. To insert a single row, you can do:
dbSendQuery(db, 'INSERT INTO table1 (MARKS, ROLLNUM, NAME) VALUES (?, ?, ?);', params = list(1, 16, 'Big fellow'))
You see? The first ? got value 1, the second ? got value 16, and the last ? got the string Big fellow. Also note that you do not enclose placeholders for text in quotation marks (' or ")!
Now, you have thousands of rows. Or just more than one. Either way, you can send in your data frame. dbSendQuery has some requirements: 1) each vector must have the same number of entries (not an issue when providing a data.frame), and 2) you may only submit the same number of vectors as you have placeholders.
I assume your data frame df contains columns mark, roll, and name, corresponding to the table's columns. Then you may run:
dbSendQuery(db, 'INSERT INTO table1 (MARKS, ROLLNUM, NAME) VALUES (:mark, :roll, :name);', params = df)
This will execute an INSERT statement for each row in df!
TIP! Because an INSERT statement is executed for each row, inserting thousands of rows can take a long time, because after each insert the data is written to file and indices are updated. Instead, enclose the inserts in a transaction:
dbBegin(db)
res <- dbSendQuery(db, 'INSERT ...;', df)
dbClearResult(res)
dbCommit(db)
and SQLite will save the data to a journal file and only save the result when you execute dbCommit(db). Try both methods and compare the speed!
2: Ah, yes, the second way. This can be done entirely in SQLite.
With the SQLite command-line utility (sqlite3 from your command line, not R), you can attach a text file as a table and simply run an INSERT INTO ... SELECT ...; command. Alternatively, read the text file in sqlite3 into a temporary table and run an INSERT INTO ... SELECT ...;.
Useful site to remember: http://www.sqlite.com/lang.html
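A minimal sketch of the temporary-table route in the sqlite3 shell, assuming the pipe-delimited file is called textfile.txt and has the same four columns as TABLE1 (both names are just illustrations):

sqlite3 test.sqlite
sqlite> .separator |
sqlite> CREATE TEMP TABLE staging (MARKS, ROLLNUM, NAME, DATED);
sqlite> .import textfile.txt staging
sqlite> INSERT INTO TABLE1 (MARKS, ROLLNUM, NAME, DATED) SELECT MARKS, ROLLNUM, NAME, DATED FROM staging;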
A little late to the party, but DBI provides dbAppendTable() which will write the contents of a dataframe to an SQL table. Column names in the dataframe must match the field names in the database. For your example, the following code would insert the contents of my random dataframe into your newly created table.
library(DBI)
db <- dbConnect(RSQLite::SQLite(), dbname = ":memory:")
dbExecute(db,
"CREATE TABLE TABLE1(
MARKS INTEGER,
ROLLNUM INTEGER,
NAME TEXT
)"
)
df <- data.frame(MARKS = sample(1:100, 10),
ROLLNUM = sample(1:100, 10),
NAME = stringi::stri_rand_strings(10, 10))
dbAppendTable(db, "TABLE1", df)
I don't think there is a nice way to do a large number of inserts directly from R. SQLite does have a bulk insert functionality, but the RSQLite package does not appear to expose it.
From the command line you may try the following:
.separator |
.import your_file.csv your_table
where your_file.csv is the CSV (or pipe delimited) file containing your data and your_table is the destination table.
See the documentation under CSV Import for more information.

SQL*Loader: how to use SUBSTR in the WHEN clause correctly?

I'm having a problem that I thought would be rather common, but trying to look it up in "Oracle Database 10g2 Utilities_b14215.pdf" didn't help. After that I surfed through numerous threads, but no luck so far.
I have a tab-delimited file (x'09') with e.g. name, userid, persnr. The values for the userids begin with either P, R or T, e.g. P2198, P2199, R7288, T1229.
I want to load only the records with userids beginning with P.
Isolating a single record with a controlfile like this works splendidly:
OPTIONS (SKIP=1)
LOAD DATA
INFILE UserlistLoader.dat
APPEND
INTO TABLE Z_USERLIST
WHEN USERID = 'P2198'
FIELDS TERMINATED BY x'09'
TRAILING NULLCOLS
(name, userid, persnr)
But every attempt at using SUBSTR in the WHEN clause fails.
This:
OPTIONS (SKIP=1)
LOAD DATA
INFILE UserlistLoader.dat
APPEND
INTO TABLE Z_USERLIST
WHEN SUBSTR(USERID, 1, 1) = 'P'
FIELDS TERMINATED BY x'09'
TRAILING NULLCOLS
(name, userid, persnr)
ends in an SQL*Loader-350 syntax error.
This
OPTIONS (SKIP=1)
LOAD DATA
INFILE UserlistLoader.dat
APPEND
INTO TABLE Z_USERLIST
WHEN "SUBSTR(:USERID, 1, 1)" = 'P'
FIELDS TERMINATED BY x'09'
TRAILING NULLCOLS
(name, userid, persnr)
ends in an SQL*Loader-403: Referenced column USERID not present in table Z_USERLIST.
But IT IS PRESENT - as the first example proves. I've found that the column should be preceded by : but that obviously isn't the issue.
What am I doing wrong?
From the SQL*Loader docs, the left-hand side of a WHEN condition can only be a full field name, e.g. USERID, or a position spec, e.g. (3:5).
The docs aren't very clear though on what is allowed - e.g. can LIKE be used as the operator?
USERID LIKE 'P%'
I strongly suspect it can't though.
I would load the entire file into a staging table that matches the file layout, then run a procedure that inserts the rows you want from there into the production table. That is a more common way to handle loads with criteria like this without having to edit source data.
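A sketch of that staging approach, reusing the layout from the question (the staging table name Z_USERLIST_STG is just an illustration):

OPTIONS (SKIP=1)
LOAD DATA
INFILE UserlistLoader.dat
APPEND
INTO TABLE Z_USERLIST_STG
FIELDS TERMINATED BY x'09'
TRAILING NULLCOLS
(name, userid, persnr)

followed by an insert that copies only the rows you want:

INSERT INTO Z_USERLIST (name, userid, persnr)
SELECT name, userid, persnr
FROM Z_USERLIST_STG
WHERE userid LIKE 'P%';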
If you can preprocess the source file, move the userid to the first field, or copy the first letter of the userid to its own field, and construct the WHEN like this so that sqlldr looks at the first position (this will cause sqlldr to return a non-zero exit code though, as not all rows meet the WHEN clause criteria):
WHEN (1) = 'P'
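For example, if the file is rearranged so that userid comes first, the control file from the question might look like this (a sketch under that assumption):

OPTIONS (SKIP=1)
LOAD DATA
INFILE UserlistLoader.dat
APPEND
INTO TABLE Z_USERLIST
WHEN (1) = 'P'
FIELDS TERMINATED BY x'09'
TRAILING NULLCOLS
(userid, name, persnr)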

BizTalk Varying Length Flat File using Single Schema for Transform

I have a pipe delimited .txt Flat File that I'm using to do bulk insert to SQL. Everything works well for straight one to one. However, the Flat File now contains 2 new fields that can repeat an unknown number of times.
Is there a way to create a single flat file schema where I can have an unbounded child within the main unbounded child? I think the place I'm getting tripped up is how to make the ChildRoot listed below just a "group heading", like Root is, where ChildRoot doesn't correspond to a location in the flat file. How do I insert something like that?
Schema:
-Roots
--Root (unbounded)
---ChildID
---ChildName
Roots gets a direct link to my SQL stored procedure to do a bulk insert on as many "Root" rows as come in.
Now I have:
Schema:
-Roots
--Root (unbounded)
---Child
---ChildName
---ChildRoot (unbounded)
----ChildRootID
----ChildRootName
EDIT:
I should also add that ChildRootID and ChildRootName can repeat an indefinite number of times until the row delimiter (carriage return) is found.

Extract first word in a SQLite3 database

I have a SQLITE3 database wherein I have stored various columns. One column (cmd) in particular contains the full command line and associated parameters. Is there a way to extract just the first word in this column (just before the first space)? I am not interested in seeing the various parameters used, but do want to see the command issued.
Here's an example:
select cmd from log2 limit 3;
user-sync //depot/PATH/interface.h
user-info
user-changes -s submitted //depot/PATH/build/...#2011/12/06:18:31:10,#2012/01/18:00:05:55
From the result above, I'd like to use an inline SQL function (if available in SQLITE3) to parse on the first instance of space, and perhaps use a left function call (I know this is not available in SQLITE3) to return just the "user-sync" string. Same for "user-info" and "user-changes".
Any ideas?
Thanks.
My solution:
sqlite> CREATE TABLE command (cmd TEXT);
sqlite> INSERT INTO command (cmd) VALUES ('ls'),('cd ~'),(' mpv movie.mkv ');
sqlite> SELECT substr(trim(cmd),1,instr(trim(cmd)||' ',' ')-1) FROM command;
ls
cd
mpv
Pros:
it's not that dirty a hack
it only uses core functions
"Finds the first occurrence" function is one of the SQLite3 Core Functions (http://www.sqlite.org/lang_corefunc.html).
Of course, it is much better to use instr(X,Y).
So you can write:
SELECT substr(cmd,1,instr(cmd,' ')-1) FROM log2
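Note that instr(cmd, ' ') returns 0 when the command has no arguments (no space at all), so the expression above yields an empty string for such rows; padding with a trailing space, as in the earlier answer, avoids that:

SELECT substr(cmd, 1, instr(cmd || ' ', ' ') - 1) FROM log2;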
As the position of your first space character is unknown, I don't think there is a core function in SQLite that will help.
I think you'll have to create one: http://www.sqlite.org/c3ref/create_function.html
Here's a hack
sqlite> create table test (a);
sqlite> insert into test values ("This is a test.");
sqlite> select * from test;
This is a test.
sqlite> select rtrim(substr(replace(a,' ','----------------------------------------------------------------------------------------'),1,80),'-') from test;
This
It works as long as your longest command is less than 80 characters (and you include 80 '-' characters in the substitution string -- I didn't count them!). If your commands can contain '-' just use a different character that is not allowed in the commands.
I don't believe that's something you'll be able to do within the SQL itself. SQLite's support for string-handling functions is not as extensive as in other RDBMSs (some of which would let you do a SUBSTR with a regular expression).
My suggestion is either to write your own SQL function, as suggested by @Jon, or just do it as a post-processing step in your app code.
