Replacing data in tables with constraints

How can I import data into a total of 7 tables which have constraints between them? I've tried to import the new data using the import mode "copy: delete all records in destination, repopulate from the source", but I get this error:
Could not truncate table. Import aborting. Error code:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
My guess is that if I work out the right sequence in which to reload the 7 tables I shouldn't get this error, but I'm not sure how to work out what that sequence should be.
Any help appreciated.

1) You can disable or drop all the keys/constraints, then restore them after the data import (see the sketch below).
2) You can try to import the data in dependency order, starting from the dictionary (lookup) tables and moving on to the tables that reference them.
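A minimal sketch of option 1 for Oracle (the table and constraint names are made up for illustration; you can list the real ones from USER_CONSTRAINTS):
-- List the foreign keys involved (constraint_type 'R' = referential):
SELECT table_name, constraint_name FROM user_constraints WHERE constraint_type = 'R';
-- Disable the child table's foreign key, reload, then re-enable it:
ALTER TABLE child_table DISABLE CONSTRAINT fk_child_parent;
TRUNCATE TABLE parent_table;   -- now succeeds, since no enabled foreign key references it
-- ... repopulate parent_table and child_table from the source ...
ALTER TABLE child_table ENABLE CONSTRAINT fk_child_parent;   -- validates the reloaded rows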


SQLite Importer will overwrite my database when I load my application?

I have an Ionic App using SQLite. I don't have any problems with implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app, will it import the SQL file, fill the database, and probably overwrite my user data too, since it is all in the same database?
I assume that you could always initialise your tables using string queries inside your code, so the problem is not that you are importing a .sql file as such. Right?
As https://www.sqlitetutorial.net/sqlite-create-table/ shows, you can always create a table with the [IF NOT EXISTS] switch. Writing a query like:
CREATE TABLE [IF NOT EXISTS] [schema_name].table_name (
column_1 data_type PRIMARY KEY);
you let SQLite decide whether the table needs to be created, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you wrap the statements in a 'BEGIN TRANSACTION' ... 'COMMIT' block.
I'm answering on the assumption that the imported configuration data and the user data live in distinct tables, so you can control what you repopulate and what you don't. Is that right?
What I usually do is have an SQL file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts I refresh the configuration data with whatever I have at that time (that is why we could use http.get to fetch a configuration file from a remote repo in the future), and create the user_data table only if it is not already there (hopefully only on the initial start).
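If you want the 'BEGIN TRANSACTION' ... 'COMMIT' safety mentioned above, the same refresh can be wrapped in a single transaction so a failed import never leaves you with half-dropped configuration tables. A sketch, with hypothetical column names:
BEGIN TRANSACTION;
DROP TABLE IF EXISTS configuration_a;
CREATE TABLE configuration_a (key TEXT PRIMARY KEY, value TEXT);
INSERT INTO configuration_a (key, value) VALUES ('api_url', 'https://example.com');
CREATE TABLE IF NOT EXISTS user_data (id INTEGER PRIMARY KEY, name TEXT);
COMMIT;
If anything in the middle fails, you can ROLLBACK and leave both the old configuration and the user_data table untouched.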
Conclusion: in my opinion it is always good practice to trust the database product and let it handle operations that would be risky to implement yourself in your code, since it already provides tools for that. For example, the [IF NOT EXISTS] keyword is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to creating the database itself: when SQLite connects to a database file that does not exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this database and, if the file is not there, create it.

way to mass-add columns in sqlite studio?

I'm new to creating databases, and right now all I want to do is import a csv file into an empty sqlite3 database using sqlite studio. I created an extremely basic table with only a single unnamed empty column, and then attempted to import my file into that table; however, I keep getting an error saying that my table has less columns than the file, and any extra columns will be ignored. I'd really like not to have to create 52 dummy columns; is there some kind of way to work around this?
Skip creating the table yourself. Import into a non-existent table and SQLiteStudio will create it for you, with all the required columns.
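If you prefer the command line, recent versions of the sqlite3 shell will do the same thing: when the target table does not exist yet, .import in CSV mode creates it using the file's header row for the column names (file and table names here are just examples):
.mode csv
.import data.csv my_table
.schema my_table
The generated columns typically all have TEXT affinity, so you may still want to adjust the types afterwards, but you don't have to hand-create 52 dummy columns.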

How to avoid cartesian-product in a cypher query and still create links between objects?

I imported a table with thousands of Equipments. Then imported another table with types of equipments, which contain around 20 types.
When I wrote the cypher query below to associate them, Neo4j warned me about a cartesian product. Is there a better way to create the associations? Should I have done it during the CSV import?
MATCH (te:Equipment_Type),(e:Equipment)
WHERE te.type_id = e.type_id
CREATE (e)-[:TYPE_OF]->(te)
Update
I tried what Brian suggested, doing it during the CSV import, and it worked like a charm.
Imported the Equipment Types first;
Then created an index on Equipment(type_id);
Modified the code to search during CSV import.
From Neo4j Console:
Added 100812 labels, created 100812 nodes, set 414307 properties,
created 100812 relationships, statement executed in 33902 ms.
The Code:
CREATE INDEX ON :Equipment(type_id)
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "http://localhost/Equipments.csv" AS row
MERGE (e:Equipment {eqp_id: row.eqp_id, name: row.name, type_id: row.type_id})
WITH e, row
MATCH (te:Equipment_Type)
WHERE te.type_id = row.type_id
CREATE (e)-[:TYPE_OF]->(te)
With the size of data that you're talking about it's not a big deal, especially if you have indexes on :Equipment_Type(type_id) and :Equipment(type_id). It's warning you because a cartesian product in a query can seem quick when you first write it on a small dataset, but then grows quickly as you get more data.
But yes, creating the relationships during the CSV import would be the best way to approach it, probably.

Import CSV to SQL using schema.ini as validator

I'm using schema.ini to validate the data types/columns in my CSV file before loading it into SQL. If there is a datatype mismatch in a row, it will still import the row but leaves that particular mismatched cell blank. Is there a way I can stop the user from importing the CSV file if there are any issues, and/or provide an error report (i.e. which rows have problems)?
The best approach would be to check the file itself for any mismatches, but for a large file this is not feasible.
You might need to load it first and then check the loaded data in the table for mismatches. This is much faster than checking the file (you can use a simple T-SQL script to check for NULLs in the table).
If mismatches are found, the user can be notified and the table can then be cleared.
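A minimal T-SQL sketch of that check (the table and column names are hypothetical; a cell that failed the schema.ini type conversion shows up here as NULL, as would a genuinely empty cell):
-- Rows where the typed columns could not be converted (or were empty in the file):
SELECT *
FROM dbo.StagingOrders
WHERE Amount IS NULL OR OrderDate IS NULL;
-- If any rows come back, report them to the user and clear the staging table:
-- TRUNCATE TABLE dbo.StagingOrders;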
Have a look at the FileHelpers library: http://www.filehelpers.com/
This is a very powerful library for doing all kinds of imports, including CSV, and it also has a pretty neat error-handling part:
Using the Different Error Modes
The FileHelpers library has support for 3 kinds of error handling.
In the standard mode you can catch the exceptions when something fails. This approach is not bad, but you lose some info about the current record and you can't use the records array because it is not assigned.
A more intelligent way is using the ErrorMode.SaveAndContinue of the ErrorManager: using the engine like this you have the good records in the records array, and in the ErrorManager you have the records with errors and can do whatever you want with them.
Another option is to ignore the errors and continue, as shown in this example:
engine.ErrorManager.ErrorMode = ErrorMode.IgnoreAndContinue;
records = engine.ReadFile(...);
In the records array you only have the good records.

How do I speed up the import of data from a CSV file into a SQLite table (in Windows)?

When I was searching for a tool to create and update SQLite databases for use in an Android application, I was recommended SQLite Database Browser. This has a Windows GUI and is reasonably powerful, offering in particular a menu option to import data into a new table from a CSV file.
This has proved perfectly capable for initial creation of the database and I have been using the CSV Import option to update the database whenever I have new data to be added.
When there were only a few records to import this worked well, however as the volume of data has grown the process has become painfully slow. A data file of 11,000 records (800 kilobytes) takes about 10 minutes to import on my averagely slow laptop. Using SQLite Database Browser the whole process of deleting the old table, running the import command, then correcting the data types of the new table created by the import command takes the best part of 15 minutes.
How can the import be speeded up?
You could use the built-in csv import (using the sqlite3 command line utility):
create table test (id integer, value text);
.separator ","
.import no_yes.csv test
Importing 10,000 records took less than 1 second on my Laptop.
By googling I found several people asking this question; however, I did not find the answer set out in one place in simple terms that I could understand. So, I hope the following will help.
The command line utility sqlite3.exe offers a very simple solution. The reason why the "import CSV" option in SQLite Database Browser is so slow is that it executes and commits to the database a separate SQL 'insert' statement for each line in the CSV file. However, sqlite3.exe includes an "import" command which will process the whole file in one go. What's more, this is done virtually instantaneously: my 11,000 records are imported in well under a second.
There is a slight drawback in that the import command does not deal with commas in the same way as other programs such as Excel. For example,
if cell A1 in Excel contains Joe Bloggs
and cell B1 contains 123 Main Street, Anytown
the row is exported into a CSV file as:
Joe Bloggs,"123 Main Street, Anytown"
However, if you tried to import this using sqlite3 into a 2-column table, sqlite3 would report an error because it would treat each of the commas as a field separator and so would try to import Joe Bloggs, "123 Main Street and Anytown" as 3 separate fields.
Because it is unusual for text fields (especially in Excel) to include tabs this problem can usually be avoided by using a file where the fields are delimited by tabs rather than by commas.
Since sqlite3.exe can execute any SQL statement and a number of additional commands (like 'import') it is very flexible. However, a routine job like my need to import a delimited data file into a database table can be automated by:
listing the SQL statements and sqlite3.exe commands in a small text file, and feeding this file into sqlite3.exe as a command line parameter
writing a short Windows (MS-DOS) batch file to run sqlite3.exe with the specified list of commands.
These are the steps I followed:
Download and unzip sqlite3.exe
Convert the raw data from comma separated values to tab separated values.
Create a script file listing commands to be executed by sqlite3.exe as follows:
drop table tblTableName;
create table tblTableName(_id INTEGER PRIMARY KEY, fldField1 TEXT, fldField2 NUMERIC, .... );
.mode tabs
.import SubfolderName/DataToBeImported.tsv tblTableName
(Note: SQL statements are followed by a semi-colon; sqlite3.exe commands are preceded by a full stop (period))
Create a .bat file as follows:
cd "c:\users\UserName\FolderWhereSqlite3DatabaseFileAndScriptFileAreStored"
sqlite3 DatabaseName < textimportscript.txt
Having set this up, all I need to do whenever I have new data to add is run the batch file and the data is imported in an instant.
If you are generating INSERT statements, enclose them in a single transaction as stated in the official SQLite FAQ:
BEGIN; -- or BEGIN TRANSACTION;
INSERT ...;
INSERT ...;
END; -- can be COMMIT TRANSACTION; also
Have you tried wrapping all of your updates into a transaction? I had a similar problem and doing that sped it up no end.
Assuming Android Device:
db.beginTransaction();
try {
    // YOUR CODE
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
Try that :)
sqlite> PRAGMA journal_mode=WAL;
sqlite> PRAGMA synchronous = 0;
sqlite> PRAGMA journal_mode=MEMORY;
memory
sqlite> BEGIN IMMEDIATE;
sqlite> .import --csv blah.csv <tablename>
sqlite> COMMIT;
This turns off syncing on write and keeps the journal in memory, so it's not "safe"; but as long as you are doing this "offline", as it were, and are OK with re-creating the DB if the power goes out, the disk gets full, etc., then this will definitely speed up the import.
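If you keep using the same database file afterwards, it is worth switching the pragmas back once the import has finished (a sketch; FULL and DELETE are the usual defaults, but check what your setup normally uses):
sqlite> PRAGMA synchronous = FULL;
sqlite> PRAGMA journal_mode = DELETE;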
