I have started evaluating Zumero with the Zumero Cloud Hosting option.
I have followed the 'Getting Started' guide to the letter.
Once I had created a virtual table, I tried to insert data via a SQLite GUI management tool, but the command failed. I then tried manually with INSERT INTO commands, but got an error for every field saying the field doesn't exist. It took me quite a while to figure out that I cannot insert data while specifying field names.
So, if I want to insert data, the only option is to do so without specifying field names (and providing values for all fields, of course).
I wonder if this is normal behavior for a Zumero virtual table? For any SQLite virtual table?
I have not yet taken it to the next stage, accessing the table from a development SDK, but I find it hard to believe that a library inserting data through SQLite would not specify field names in its INSERT INTO implementation.
Any comments are much appreciated.
Thanks.
Looks like you've uncovered a limitation of Zumero's parser. The short answer is: don't use MSSQL-style square brackets to delimit field names when talking to Zumero cloud DBs; use standard-SQL-style double quotes instead.
So:
create virtual table foo using zumero ("Id", "FirstName", "LastName", "Address");
instead of:
create virtual table foo using zumero ([Id], [FirstName], [LastName], [Address]);
Using the first example, you can then insert via:
insert into foo (FirstName, LastName) values ('Fred', 'Flintstone');
or
insert into foo ("FirstName", "LastName") values ('Barney', 'Rubble');
or any combination thereof.
I need to create triggers dynamically and won't need to drop them in the future.
So I need code to do this.
Something like:
CREATE TRIGGER random() BEFORE INSERT... (with random name)
Or
CREATE TRIGGER BEFORE INSERT... (without name)
Can I do this in the sqlite3 shell?
I know it's bad practice, but it's an experiment.
Thanks.
I'm afraid that's not possible. According to the documentation, trigger-name is an atomic syntactical unit (you can tell because it appears in lowercase, in a rounded-corner rectangle in the syntax diagram), in the sense that it cannot be constructed by evaluating expressions. You are only allowed to enter a literal there (the same goes for table-name and index-name, by the way; see here and here).
What you can do instead is dynamically construct the whole query string before passing it to SQLite. E.g. if you are interacting with SQLite through Python, you can write something like:
tableName = "someRandomString"
db.execute("CREATE TABLE " + tableName + " (A INT, B TEXT)")
Or if you are using the Windows command prompt:
set tableName=someRandomString
sqlite3.exe test.sqlite "CREATE TABLE %tableName% (A INT, B TEXT)"
I know you were asking about triggers, not tables, but it's basically the same thing from your question's perspective, and the table creation syntax is shorter.
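For completeness, here is a rough Python sketch of the same string-building approach applied to a trigger with a randomly generated name (the items and log tables here are invented purely for illustration):
import random
import sqlite3
import string

db = sqlite3.connect("test.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS items (name TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")

# Build the trigger name in Python; SQLite itself only ever sees a literal identifier.
trigger_name = "trg_" + "".join(random.choices(string.ascii_lowercase, k=8))
db.execute(
    "CREATE TRIGGER " + trigger_name + " BEFORE INSERT ON items "
    "BEGIN INSERT INTO log (msg) VALUES ('inserting ' || NEW.name); END"
)

db.execute("INSERT INTO items (name) VALUES ('test')")
print(db.execute("SELECT msg FROM log").fetchall())  # -> [('inserting test',)]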
The only thing I don't have an automated tool for when working with Oracle is a program that can create INSERT INTO scripts.
I don't desperately need it so I'm not going to spend money on it. I'm just wondering if there is anything out there that can be used to generate INSERT INTO scripts given an existing database without spending lots of money.
I've searched through Oracle with no luck in finding such a feature.
It exists in PL/SQL Developer, but it errors on BLOB fields.
Oracle's free SQL Developer will do this:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
You just find your table, right-click on it and choose Export Data -> Insert.
This will give you a file with your insert statements. You can also export the data in SQL*Loader format.
You can do that in PL/SQL Developer v10.
1. Click on the table you want to generate a script for.
2. Click Export data.
3. Check that the table you want to export data for is selected.
4. Click on the SQL inserts tab.
5. Add a where clause if you don't need the whole table.
6. Select the file where the SQL script will be saved.
7. Click Export.
Use a SQL function (I'm the author):
https://github.com/teopost/oracle-scripts/blob/master/fn_gen_inserts.sql
Usage:
select fn_gen_inserts('select * from tablename', 'p_new_owner_name', 'p_new_table_name')
from dual;
where:
p_sql – dynamic query which will be used to export metadata rows
p_new_owner_name – owner name which will be used for generated INSERT
p_new_table_name – table name which will be used for generated INSERT
p_sql in this sample is 'select * from tablename'
You can find original source code here:
http://dbaora.com/oracle-generate-rows-as-insert-statements-from-table-view-using-plsql/
Ashish Kumar's script generates individually usable insert statements instead of a SQL block, but supports fewer datatypes.
I have been searching for a solution for this and found it today. Here is how you can do it.
Open Oracle SQL Developer Query Builder
Run the query
Right-click on the result set and export.
Screenshot: http://i.stack.imgur.com/lJp9P.png
You might execute something like this in the database:
select "insert into targettable(field1, field2, ...) values(" || field1 || ", " || field2 || ... || ");"
from targettable;
Something more sophisticated is here.
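If you prefer to build the statements outside the database, the same idea can be scripted. Below is a rough Python sketch; it runs against the sqlite3 module only so that it is self-contained, but the string-building part is the same for an Oracle cursor (e.g. via cx_Oracle), and the targettable/field names are just placeholders. Note that it wraps character values in single quotes and doubles any embedded quotes, which the plain SELECT-concatenation above glosses over:
import sqlite3

def generate_inserts(cur, query, target_table):
    # Yield one INSERT statement per row returned by the query.
    # Dates, timestamps and BLOBs would need extra handling.
    cur.execute(query)
    columns = [d[0] for d in cur.description]
    for row in cur.fetchall():
        values = []
        for v in row:
            if v is None:
                values.append("NULL")
            elif isinstance(v, (int, float)):
                values.append(str(v))
            else:
                # quote strings and escape embedded single quotes
                values.append("'" + str(v).replace("'", "''") + "'")
        yield "INSERT INTO %s (%s) VALUES (%s);" % (
            target_table, ", ".join(columns), ", ".join(values))

# demo against an in-memory SQLite table; swap in your own connection/cursor
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE targettable (field1 INTEGER, field2 TEXT)")
conn.execute("INSERT INTO targettable VALUES (1, ?)", ("O'Brien",))
for stmt in generate_inserts(conn.cursor(), "SELECT * FROM targettable", "targettable"):
    print(stmt)  # -> INSERT INTO targettable (field1, field2) VALUES (1, 'O''Brien');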
If you have an empty table, the Export method won't work. As a workaround, I used the Table view in Oracle SQL Developer, clicked on Columns, sorted by Nullable so NO was on top, and then selected those non-nullable columns using shift+select.
That let me do one base insert, so that Export could then prepare a proper all-columns insert.
If you have to load a lot of data into tables on a regular basis, check out SQL*Loader or external tables. They should be much faster than individual inserts.
You can also use MyGeneration (a free tool) to write your own SQL generation scripts. There is an "insert into" script for SQL Server included with MyGeneration, which can easily be changed to run under Oracle.
I'm using Lazarus and trying to insert payment records into a SQLite database, but apparently InsertSQL isn't autogenerating the correct INSERT statement. I have also discovered that I can't simply assign parameters to .InsertSQL using .ParamByName the way I can with the .SQL property.
My database has tables of Customers and Payments. The Payments table is as follows:
Pay_Key: Primary key, Integer, Unique, Not NULL. Identifies a single payment row in the table.
Pay_Customer: Integer, Not NULL. Foreign-key linked to an integer-type Not NULL primary key in my Customers table.
Pay_Sum: Integer. (Yes, I'm storing the payment sum as integer, but this isn't really important here.)
I'm using SELECT * FROM Payments WHERE Pay_Customer=:CustomerKey in my SQLQuery.SQL, and assigning :CustomerKey programmatically each time via SQLQuery.ParamByName('CustomerKey').Text. This lets me navigate existing records nicely in a DBGrid, but when I try to insert a new payment, the operation fails on the Pay_Customer NOT NULL constraint. Apparently Lazarus doesn't know what value to use for the Pay_Customer field, since I passed it programmatically.
Is there a way to remedy this? I can write my own InsertSQL if need be, I just don't understand how I can pass this customer parameter to it. I would very much like to use InsertSQL/UpdateSQL/DeleteSQL, since they would make it easy to use stock DBGrid/DBNavigator components and logic for what I'm doing.
Thanks for any suggestions, and sorry for being so verbose about my question.
-Sam
Edit: I'm using Lazarus 1.6 (FPC 3.0.0). I have enabled foreign keys in SQLite in SQLite3Connection.Params (foreign_keys=on)
I need to add FTS to an existing database.
I started by testing external content FTS tables, where the FTS indexes reside in the default (main) DB. Everything worked satisfactorily, except that a few things (such as an index rebuild) could take a considerable amount of time.
Then I read about the possibility of putting the FTS index into an attached DB. This seemed to promise several advantages, so I decided to give it a try. However, all my attempts failed. Here are a few examples:
Situation
We have a table 'account' with a text column 'code', and
we want to create an FTS index for that column and place it in a separate database file.
Test 1) ERROR: near ".": syntax error
ATTACH 'ZipFts.sdf' AS ZipFts; CREATE VIRTUAL TABLE ZipFts.account USING fts4(content=account, code);
INSERT INTO ZipFts.account(ZipFts.account) VALUES('rebuild');
Test 2) ERROR: Stack overflow (infinite recursion inside sqlite engine)
ATTACH 'ZipFts.sdf' AS ZipFts; CREATE VIRTUAL TABLE ZipFts.account USING fts4(content=account, code);
INSERT INTO ZipFts.account(account) VALUES('rebuild');
Test 3) ERROR: no such table: ZipFts.account
ATTACH 'ZipFts.sdf' AS ZipFts; CREATE VIRTUAL TABLE ZipFts.ZipFts_account USING fts4(content="account", code);
INSERT INTO ZipFts_account(ZipFts_account) VALUES('rebuild');
Test 4) ERROR: no such table: ZipFts.main.account
ATTACH 'ZipFts.sdf' AS ZipFts; CREATE VIRTUAL TABLE ZipFts.ZipFts_account USING fts4(content="main.account", code);
INSERT INTO ZipFts_account(ZipFts_account) VALUES('rebuild');
Does anybody know how these things work? Thanks in advance.
After some searching in sqlite3.c I found what might be the answer.
Look at the bottom of the function fts3ReadExprList(): the name of the content table is prefixed with the database name there. This explains everything.
Moreover, this seems to be the only non-trivial use of zContentTbl (the name of the content table). When I slightly modified the fts3ReadExprList() function as shown in the code below, the problem disappeared.
// Code inserted by #JS-->
// Do not prefix zContentTbl with the database name. The table might reside in the main database, for example.
if( p->zContentTbl ){
  fts3Appendf(pRc, &zRet, " FROM '%q' AS x", p->zContentTbl);
}
else
// <--#JS
fts3Appendf(pRc, &zRet, " FROM '%q'.'%q%s' AS x",
...
Note that I did not test the code sufficiently. (So far I only know that the FTS index was created.)
Anyway, for the time being, I consider this an SQLite bug and I'll try to go on with my fix.
I think this is as designed.
If it were otherwise, the underlying table for an external content table could change as databases are attached or detached.
You might be able to achieve this using a contentless FTS table, though.
Dan Kennedy.
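To illustrate the contentless suggestion, here is a rough, untested Python sketch (table and column names are taken from the question, 'main.db' is a placeholder, and it assumes your SQLite build includes FTS4). A contentless table stores only the index, so the application must supply the docid itself and keep it in sync with account's rowid, and full-text queries can only return docids, which are then joined back to the content table:
import sqlite3

conn = sqlite3.connect("main.db")
conn.execute("CREATE TABLE IF NOT EXISTS account (code TEXT)")
conn.execute("ATTACH 'ZipFts.sdf' AS ZipFts")

# contentless FTS4 index living in the attached database
conn.execute('CREATE VIRTUAL TABLE IF NOT EXISTS ZipFts.account_fts USING fts4(content="", code)')

# populate the index: the docid must be set explicitly to account's rowid
for rowid, code in conn.execute("SELECT rowid, code FROM account"):
    conn.execute("INSERT INTO ZipFts.account_fts(docid, code) VALUES (?, ?)", (rowid, code))

# a full-text query returns docids only; join them back to the content table
rows = conn.execute(
    "SELECT a.rowid, a.code FROM account a "
    "JOIN ZipFts.account_fts f ON f.docid = a.rowid "
    "WHERE f.account_fts MATCH ?", ("abc*",)).fetchall()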
Is there a way to further restrict the lookup performed by a database lookup functoid to include another column?
I have a table containing four columns.
Id (identity not important for this)
MapId int
Ident1 varchar
Ident2 varchar
I'm trying to get Ident2 for a match on Ident1, but I want the lookup restricted to rows where MapId = 1.
The functoid only allows the four inputs. Any ideas?
UPDATE
It appears there is a technique if you are interested in searching across columns that are string data types. For those interested I found this out here...
Google Books: BizTalk 2006 Recipes
Seeing as I wish to restrict on a numeric column, this doesn't work for me. If anyone has any ideas I'd appreciate it. Otherwise I may need to think about my MapId column becoming a string.
I changed the MapId to MapCode of type char(3) and used the technique described in the book I linked to in the update to the original question.
The only issue I faced was that my column collations were not in line, so I was getting an error from the SQL when they were concatenated in the statement generated by the map.
exec sp_executesql N'SELECT * FROM IdentMap WHERE MapCode+Ident1= @P1',N'@P1 nvarchar(17)',N'<MapCode><Ident2>'
I sniffed this using SQL Profiler.