How to use variables in SQLite

I created the following code in SQL, but I need to use it in SQLite (PhoneGap, specifically).
INSERT INTO actions(Action) VALUES ('Go to the pub');
SET @aid = LAST_INSERT_ID();
INSERT INTO statements(statement, Language) VALUES ('Have a pint', 'English');
SET @sid = LAST_INSERT_ID();
INSERT INTO Relationships(SID, AID) VALUES (@sid, @aid);
The issue we are having, however, is how to declare the variables in SQLite.
LAST_INSERT_ID() becomes last_insert_rowid(), but what is the SQLite equivalent of SET @aid = ...?

SQLite does not have variables.
In an embedded database such as SQLite, there is no separate server machine or even process, so it would not make sense to add a programming language to the DB engine when the same control flow and processing logic could just as well be done in the application itself.
Just use three separate INSERT statements.
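For instance, a minimal pure-SQL sketch of the same sequence (assuming the tables from the question, and that the Action text is unique enough to look up again):
INSERT INTO actions(Action) VALUES ('Go to the pub');
INSERT INTO statements(statement, Language) VALUES ('Have a pint', 'English');
-- last_insert_rowid() still refers to the statements insert when the next
-- statement starts; the action's rowid is simply looked up again
INSERT INTO Relationships(SID, AID)
VALUES (last_insert_rowid(),
        (SELECT rowid FROM actions WHERE Action = 'Go to the pub'));
In application code you would instead read last_insert_rowid() (or insertId) after each INSERT and substitute the captured ids into the third statement.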
(In WebSQL, the result object has the insertId property.)

Related

SQLite Importer will overwrite my database when I load my application?

I have an Ionic app using SQLite. I don't have any problems with the implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app, will it import the sql file, fill the database, and probably overwrite my user data too, since it is all in the same database?
I assume that you can always init your tables using string queries inside your code, so the problem is not that you are importing a .sql file. Right?
As https://www.sqlitetutorial.net/sqlite-create-table/ shows, you can always create a table with the IF NOT EXISTS clause, writing a query like:
CREATE TABLE [IF NOT EXISTS] [schema_name].table_name (
column_1 data_type PRIMARY KEY);
you let SQLite decide whether to create the table, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you use a 'BEGIN TRANSACTION' - 'COMMIT' procedure.
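A minimal sketch of that (the table and columns are just placeholders):
BEGIN TRANSACTION;
-- runs only when the table is missing; existing rows are left untouched
CREATE TABLE IF NOT EXISTS user_data (
    id INTEGER PRIMARY KEY,
    name TEXT
);
COMMIT;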
I give my answer assuming that you have imported data and user data in distinct tables, so you can manipulate what you populate and what you don't. Is that right?
What I usually do is have a sql file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I update the configuration data with whatever I have at that time (which is why we use http.get to fetch a configuration file from a remote repo in the future), and create the user data table only if user_data is not there (hopefully only on the initial start).
Conclusion: In my opinion, it's always good practice to trust the database product and let it handle any operation that would be risky to implement yourself in your code, since it provides tools for exactly that. For example, the IF NOT EXISTS clause is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to the create-database procedure: SQLite connects to a database file, and if the file doesn't exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this db and, if the file is not there, will create it.

Is there any way to check the presence and the structure of tables in a SQLite3 database?

I'm developing a Rust application for user registration via SSH (like the one working for SDF).
I'm using the SQLite3 database as a backend to store the information about users.
I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables with expected structure are present in the database.
I tried to use PRAGMA schema_version for versioning purposes, but this approach is unreliable.
I found that there are posts with answers that are heavily related to my question:
How to list the tables in a SQLite database file that was opened with ATTACH?
How do I retrieve all the tables from database? (Android, SQLite)
How do I check in SQLite whether a table exists?
"I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables"
I found that querying sqlite_master works for checking tables, indexes, triggers and views, and that PRAGMA table_info(the_table_name) works for checking columns.
For example, the following would let you get the core information and then process it with relative ease (just tables, for demonstration):
SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name LIKE 'my%';
"with expected structure"
PRAGMA table_info(mytable);
The first returns one row per matching table, with the table's name and the SQL used to create it. The second returns one row per column of mytable, with the columns cid, name, type, notnull, dflt_value and pk.
Note that type will be blank/null for any column whose type was not specified in the CREATE TABLE statement.
If you are using SQLite 3.16.0 or greater, you can use the PRAGMA functions (e.g. pragma_table_info(table_name)) rather than the two-step approach needed prior to 3.16.0.
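For example, a sketch of the one-step form, using pragma_table_info as a table-valued function:
SELECT m.name AS table_name,
       p.name AS column_name,
       p.type,
       p."notnull",
       p.pk
FROM sqlite_master AS m,
     pragma_table_info(m.name) AS p
WHERE m.type = 'table'
ORDER BY m.name, p.cid;
This returns one row per column for every user table, so both the presence of a table and its structure can be checked with a single query.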

Export Multiple Tables' Data as Insert Statements into a Single File, Oracle DB [duplicate]

The only thing I don't have an automated tool for when working with Oracle is a program that can create INSERT INTO scripts.
I don't desperately need it so I'm not going to spend money on it. I'm just wondering if there is anything out there that can be used to generate INSERT INTO scripts given an existing database without spending lots of money.
I've searched through Oracle with no luck in finding such a feature.
It exists in PL/SQL Developer, but it errors out on BLOB fields.
Oracle's free SQL Developer will do this:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
You just find your table, right-click on it and choose Export Data->Insert
This will give you a file with your insert statements. You can also export the data in SQL*Loader format.
You can do that in PL/SQL Developer v10.
1. Click on the table that you want to generate a script for.
2. Click Export data.
3. Check that the table you want to export data for is selected.
4. Click on the SQL inserts tab.
5. Add a where clause if you don't need the whole table.
6. Select the file where your SQL script will be saved.
7. Click export.
Use a SQL function (I'm the author):
https://github.com/teopost/oracle-scripts/blob/master/fn_gen_inserts.sql
Usage:
select fn_gen_inserts('select * from tablename', 'p_new_owner_name', 'p_new_table_name')
from dual;
where:
p_sql – dynamic query which will be used to export metadata rows
p_new_owner_name – owner name which will be used for generated INSERT
p_new_table_name – table name which will be used for generated INSERT
p_sql in this sample is 'select * from tablename'
You can find original source code here:
http://dbaora.com/oracle-generate-rows-as-insert-statements-from-table-view-using-plsql/
Ashish Kumar's script generates individually usable insert statements instead of a SQL block, but supports fewer datatypes.
I have been searching for a solution for this and found it today. Here is how you can do it.
Open Oracle SQL Developer Query Builder
Run the query
Right click on result set and export
(Screenshot: http://i.stack.imgur.com/lJp9P.png)
You might execute something like this in the database:
select 'insert into targettable(field1, field2, ...) values (' || field1 || ', ' || field2 || ... || ');'
from targettable;
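Note that string columns also need to be wrapped in quotes, with any embedded quotes doubled. Roughly like this (a sketch assuming field1 is numeric and field2 is a character column):
select 'insert into targettable (field1, field2) values ('
       || field1 || ', '''
       || replace(field2, '''', '''''')
       || ''');' as stmt
from targettable;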
Something more sophisticated is here.
If you have an empty table, the Export method won't work. As a workaround, I used the Table View of Oracle SQL Developer, clicked on Columns, sorted by Nullable so NO was on top, and then selected the non-nullable columns using shift+select for the range.
This allowed me to write one base insert, so that Export could then prepare a proper all-columns insert.
If you have to load a lot of data into tables on a regular basis, check out SQL*Loader or external tables. They should be much faster than individual INSERTs.
You can also use MyGeneration (a free tool) to write your own SQL-generating scripts. There is an "insert into" script for SQL Server included in MyGeneration, which can easily be changed to run under Oracle.

Stored Procedure works fine from SQL Mgt Studio but throws Invalid Object name #AllActiveOrders from MVC app

I can run the 'guts' of my stored procedure as a giant query.. just fine from SQL Management Studio. Furthermore, I can even right click and 'execute' the stored procedure - .. y'know.. run it as a stored procedure - from SQL Management Studio.
When my ASP.NET MVC app goes to run this stored procedure, I get issues..
System.Data.SqlClient.SqlException: Invalid object name '#AllActiveOrders'.
Does the impersonation account that ASP.NET runs under need special permissions? That can't be it.. even when I run it locally from my Visual Studio (under my login account) I also get the temp table error message.
EDIT: Furthermore, it seems to work fine when called from one ASP.NET app (which is using a WCF service / ADO.NET to call the stored procedure) but does not work from a different ASP.NET app (which calls the stored proc directly using ADO.NET)
FURTHERMORE: The MVC app that doesn't crash, does pass in some parameters to the stored procedure, while the crashing app runs the Stored Proc with default parameters (doesn't pass any in). FWIW - when I run the stored procedure in SQL Mgt. Studio, it's with default parameters (and it doesn't crash).
If it's of any worth, I did have to fix a 'String or Binary data would be truncated' issue just prior to this situation. I went into this massive query and fixed the temptable definition (a different one) that I knew to be the problem (since I had just edited it a day or so ago). I was able to see the 'String/Binary truncation' issue in SQL Mgt. Studio / as well as resolve the issue in SQL Mgt Studio.. but, I'm really stumped as to why I cannot see the 'Invalid Object name' issue in SQL Mgt. Studio
Stored procedures and temp tables generally don't mix well with strongly typed implementations of database objects (ADO, DataSets, and I'm sure there are others).
If you change your #temp table to a @variable table, that should fix your issue.
(Apparently) this works in some cases:
IF 1=0 BEGIN
SET FMTONLY OFF
END
Although according to http://msdn.microsoft.com/en-us/library/ms173839.aspx, the functionality is considered deprecated.
An example of how to change from a temp table to a table variable:
create table #tempTable (id int, someVal varchar(50))
to:
declare @tempTable table (id int, someVal varchar(50))
There are a few differences between temp tables and table variables you should consider:
What's the difference between a temp table and table variable in SQL Server?
When should I use a table variable vs temporary table in sql server?
OK. Figured it out with the help of my colleague, who did some better Google-fu than I had done prior.
First, we CAN indeed make SQL Management Studio puke on my stored procedure by adding the FMTONLY option:
SET FMTONLY ON;
EXEC [dbo].[My_MassiveStackOfSubQueriesToProduceADigestDataSet]
GO
Now, on to my two competing ASP.NET applications... why did one of them work and the other not? Under the covers, both essentially used an ADO.NET System.Data.SqlClient.SqlDataAdapter to go get the data, and each performed a .Fill(DataSet1).
However, the one that was crashing was trying to get the schema in advance of the data, instead of just deriving the schema after the fact. So it was this line of code that was killing it:
da.FillSchema(DataSet1, SchemaType.Mapped)
If you're struggling with the same issue I had, you may have come across forums like this one from MSDN, which explain the details of what's going on quite adequately. It had just never occurred to me that when I called "FillSchema" I was essentially tripping over this same issue.
Now I know!!!
Following on from bkwdesign's answer, which found that the problem was due to ADO.NET DataAdapter.FillSchema using SET FMTONLY ON: I had a similar problem, and this is how I dealt with it.
I found the simplest solution was to short-circuit the stored proc, returning a dummy recordset FillSchema could use. So at the top of the stored proc I added something like:
IF 1 = 0
BEGIN
SELECT CAST(0 AS INT) AS ID,
       CAST(NULL AS VARCHAR(10)) AS SomeTextCol,
       ...;
RETURN 0;
END;
The columns of the select statement are identical in name, data type and order to the schema of the recordset that will be returned from the stored proc when it executes normally.
The RETURN ensures that FillSchema doesn't look at the rest of the stored proc, and so avoids problems with temp tables.

Oracle SQL Update passed as parameter (into stored procedure) string from .NET

I would like to know how to accomplish this task. I've looked at CASE, DECODE, and IF conditions, and I'm not able to make them work. My goal is to pass a block of predefined column/value pairs, constructed from ASP.NET data, to my Oracle stored procedure. I am trying to update only certain columns out of many, preserving the columns that don't need updates. So here's my setup:
Stored procedure:
UpdateSelectedColumns(myValuePairString, updatedBy)
-- Passed variable from ASP.NET, myValuePairString = 'col1 = 10,col2 = 'Dog''
-- update statement final
UPDATE MyTable
SET
col1 = 10,
col2 = 'Dog',
col3 = 'john';
COMMIT;
Thank you in advance...
Ricky
For once, I'm going to advise not using a stored proc. There is no point here in using a stored procedure.
As it is, your stored procedure would blindly accept its arguments and execute the update without adding any value. Furthermore, by using this procedure you preclude the use of binds and expose yourself to bugs (whenever you encounter a value containing a quote '), a performance hit, and SQL injection vulnerabilities.
The advantages of PL/SQL (simple transparent binding, transparent use and reuse of cursors, strict static SQL parsing and metadata dependency) are all pointless if you take an arbitrary string as an argument and put it in a dynamic cursor.
You'll be better off using your language's native cursors with bind variables.
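For example, with the values from the question, the statement text stays fixed and only the values travel to the database (:new_col1, :new_col2 and :updated_by are placeholder bind names your driver would fill in):
UPDATE MyTable
SET col1 = :new_col1,   -- bound to 10
    col2 = :new_col2,   -- bound to 'Dog'
    col3 = :updated_by; -- bound to 'john'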
If you really want to use PL/SQL, replace your single argument with a pair of tables: one for the column names, one for the values. You could then use DBMS_SQL to build the statement and bind the values appropriately. You'll need some convention to be able to pass date, number and character values, and you'll need to read metadata from the database to check the datatypes. That would be a lot of code for very little value.
