I am scripting out our database. Because we have multiple stacks of drives assigned to each filegroup, we split each filegroup into multiple files to distribute the I/O and storage across the drives.
This is currently on SQL Server 2017, using SSDT in VS 2019.
We have always given the files logical names that start with the name of the database, the way SSMS does by default, e.g. "MyDbName_FileGroup", with the physical file name following the same pattern, e.g. "MyDbName_FileGroup.ndf", but we never scripted this part before; we set it up manually.
I would like to get this scripted as part of the SSDT deployment package so it can also be used to set up new DBs easily.
Everything is great so far: I made scripts for each filegroup that will create the files, but SSDT of course will not let me use a SQLCMD variable as part of an object name.
So I am trying this:
ALTER DATABASE [$(DatabaseName)]
ADD FILE
(
NAME=[$(DatabaseName)_FileGroupName],
FILENAME= '$(DefaultDataPath)$(DefaultFilePrefix)_FileGroupName.mdf'
) TO FILEGROUP [MESSAGING];
GO
does not work, since I can't prepend the database name to the logical name the way I want.
Yes, this is purely cosmetic, to match a pattern, but how would you go about doing something like this in SSDT?
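One possible workaround (not something from the original setup, just a sketch) is to put the ALTER DATABASE into a post-deployment script and build it as dynamic SQL, so the SQLCMD variable is only ever substituted inside a string and never has to appear in an identifier that the SSDT build validates:
-- Sketch of a post-deployment script; "FileGroupName" and [MESSAGING] are taken from the
-- example above, everything else is ordinary SQLCMD substitution.
DECLARE @sql nvarchar(max) = N'
ALTER DATABASE ' + QUOTENAME(N'$(DatabaseName)') + N'
ADD FILE
(
    NAME = ' + QUOTENAME(N'$(DatabaseName)_FileGroupName') + N',
    FILENAME = N''$(DefaultDataPath)$(DefaultFilePrefix)_FileGroupName.ndf''
) TO FILEGROUP [MESSAGING];';
EXEC sys.sp_executesql @sql;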
I have an Ionic App using SQLite. I don't have any problems with implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app, it will import the SQL file and fill the database; will it probably overwrite my user data too, since it is all in the same database?
I assume that you can always initialize your tables using string queries inside your code; the problem is not that you are importing a .sql file, right?
According to https://www.sqlitetutorial.net/sqlite-create-table/ you should always create a table with the IF NOT EXISTS clause (the square brackets in the syntax diagram just mean the clause is optional). Writing a query like:
CREATE TABLE IF NOT EXISTS schema_name.table_name (
column_1 data_type PRIMARY KEY);
you let SQLite decide whether the table needs to be created, without the risk of overwriting an existing table. You can trust that SQLite is smart enough not to overwrite any information, especially if you wrap the statements in a 'BEGIN TRANSACTION' - 'COMMIT' block.
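For example (just a minimal sketch; the table and column names are placeholders):
BEGIN TRANSACTION;
-- Created only on the first run; an existing user_data table is left untouched.
CREATE TABLE IF NOT EXISTS user_data (
    id   INTEGER PRIMARY KEY,
    name TEXT
);
COMMIT;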
I give my answer assuming that you have imported data and user data in distinct tables, so you can manipulate what you populate and what you don't. Is that right?
What I usually do, is to have a sql file like this:
DROP TABLE IF EXISTS configuration_a;
DROP TABLE IF EXISTS configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I refresh the configuration data with whatever I have at that time (that's why we use http.get, so we can fetch the configuration file from a remote repo in the future), and I create the user data table only if it is not already there (hopefully only on the initial start).
Conclusion: In my opinion it is always good practice to trust the database product and let it handle any operation that would be risky to implement yourself in your code, since it provides the tools for that. For example, the IF NOT EXISTS clause is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to creating the database itself: SQLite connects to a database file, and if the file doesn't exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this db and, if the file is not there, will create it.
I just tested the commandline tool and I was able to migrate my database schema changes (DDL scripts) as expected. But I had to move all my scripts under the sql dir.
Is there a way to point flyway to the directory where my real scripts will reside (git or svn repository)? Looks like flyway.locations is only for relative paths.
The schema_version table name and column names are all created in lower case in my database (Oracle). The vast majority of people using Oracle are used to upper-case object names and column names (the default in Oracle). I found a property in the config file to set my own table name. Is there any way to get Flyway to use upper case for column names?
I checked the data inserted into schema_version after my test run. All looks good except that the first character of the "script" column seems to be removed.
My prefix is "db_". Here is what I see in schema_version,
SQL> select "script" from schema_version;
script
b_1_0__test10.sql
b_1_1__test10.sql
b_1_0_1__test10.sql
atabase/db_2012_11_20__query.sql
<< Flyway Init >>
Lots of questions here (It's easier if you keep them separate). I'll try my best to answer them:
Absolute script locations: not currently supported. See https://github.com/flyway/flyway/issues/108 . Symlinking can be used as a workaround.
Upper-case column names: no, there is no configuration property for the column names. The schema_version table is private to Flyway and not meant for outside consumption.
The stripped first character: this sounds like a bug. Please file an issue containing your configuration (OS + version, DB + version, Flyway version, config file contents) and the exact steps to reproduce.
I have created a database "MyDB.sqlite" using the command line sqlite3 MyDB.sqlite in a specific folder (my desktop) and then created a table "tbl11" using the CREATE TABLE syntax. I am able to insert records and can check the inserted records.
But when I exit the command line (Terminal on Mac) and re-enter, I can't see my database and tables in that folder. I guess this database and table are temporary by default. I even checked with the .databases command, but I can only see two databases, main being one of them!
Please help!
Be sure to finish off with COMMIT; :-)
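For what it's worth, a minimal session along these lines (the path and the columns of tbl11 are just assumptions) leaves a database file on disk; note that starting sqlite3 with no filename opens a temporary in-memory database instead:
sqlite3 /Users/you/Desktop/MyDB.sqlite
sqlite> CREATE TABLE IF NOT EXISTS tbl11 (id INTEGER PRIMARY KEY, name TEXT);
sqlite> BEGIN;
sqlite> INSERT INTO tbl11 (name) VALUES ('test');
sqlite> COMMIT;
sqlite> .databases
sqlite> .quit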
For practice, work through the short working example at http://souptonuts.sourceforge.net/readme_sqlite_tutorial.html
Good luck.
When I was searching for a tool to create and update SQLite databases for use in an Android application, I was recommended SQLite Database Browser. This has a Windows GUI and is reasonably powerful, offering in particular a menu option to import data into a new table from a CSV file.
This has proved perfectly capable for initial creation of the database and I have been using the CSV Import option to update the database whenever I have new data to be added.
When there were only a few records to import this worked well, however as the volume of data has grown the process has become painfully slow. A data file of 11,000 records (800 kilobytes) takes about 10 minutes to import on my averagely slow laptop. Using SQLite Database Browser the whole process of deleting the old table, running the import command, then correcting the data types of the new table created by the import command takes the best part of 15 minutes.
How can the import be speeded up?
You could use the built-in csv import (using the sqlite3 command line utility):
create table test (id integer, value text);
.separator ","
.import no_yes.csv test
Importing 10,000 records took less than 1 second on my Laptop.
By googling I have found several people asking this question; however, I have not found the answer set out in one place in simple terms that I could understand. So, I hope the following will help.
The command line utility sqlite3.exe offers a very simple solution. The reason why the "import CSV" option in SQLite Database Browser is so slow is that it executes and commits to the database a separate SQL 'insert' statement for each line in the CSV file. However, sqlite3.exe includes an "import" command which will process the whole file in one go. What's more, this is done virtually instantaneously: my 11,000 records are imported in well under a second.
There is a slight drawback in that the import command does not deal with commas in the same way as other programs such as Excel. For example,
if cell A1 in Excel contains Joe Bloggs
and cell B1 contains 123 Main Street, Anytown
the row is exported into a CSV file as:
Joe Bloggs,"123 Main Street, Anytown"
However, if you tried to import this using sqlite3 into a 2-column table, sqlite3 would report an error because it would treat each of the commas as a field separator and so would try to import Joe Bloggs, "123 Main Street and Anytown" as 3 separate fields.
Because it is unusual for text fields (especially in Excel) to include tabs, this problem can usually be avoided by using a file where the fields are delimited by tabs rather than by commas.
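As an aside, newer builds of the sqlite3 command line tool also have a csv mode whose .import understands quoted fields, so embedded commas are handled correctly; a quick sketch (the file and table names are just examples):
sqlite> .mode csv
sqlite> .import Contacts.csv tblContacts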
Since sqlite3.exe can execute any SQL statement and a number of additional commands (like 'import') it is very flexible. However, a routine job like my need to import a delimited data file into a database table can be automated by:
listing the SQL statements and sqlite3.exe commands in a small text file, and feeding this file into sqlite3.exe as a command line parameter
writing a short Windows (MS-DOS) batch file to run sqlite3.exe with the specified list of commands.
These are the steps I followed:
Download and unzip sqlite3.exe
Convert the raw data from comma separated values to tab separated values.
Create a script file listing commands to be executed by sqlite3.exe as follows:
drop table tblTableName;
create table tblTableName(_id INTEGER PRIMARY KEY, fldField1 TEXT, fldField2 NUMERIC, .... );
.mode tabs
.import SubfolderName/DataToBeImported.tsv tblTableName
(Note: SQL statements are followed by a semi-colon; sqlite3.exe commands are preceded by a full stop (period))
Create a .bat file as follows:
cd "c:\users\UserName\FolderWhereSqlite3DatabaseFileAndScriptFileAreStored"
sqlite3 DatabaseName < textimportscript.txt
Having set this up, all I need to do whenever I have new data to add is run the batch file and the data is imported in an instant.
If you are generating INSERT statements, enclose them in a single transaction as stated in the official SQLite FAQ:
BEGIN; -- or BEGIN TRANSACTION;
INSERT ...;
INSERT ...;
END; -- can be COMMIT TRANSACTION; also
Have you tried wrapping all of your updates into a transaction? I had a similar problem and doing that sped it up no end.
Assuming Android Device:
db.beginTransaction();
try {
    // YOUR CODE (all the inserts/updates)
    db.setTransactionSuccessful();
} finally {
    // Always end the transaction, even if an insert throws
    db.endTransaction();
}
Try that :)
sqlite> PRAGMA journal_mode=WAL;
sqlite> PRAGMA synchronous = 0;
sqlite> PRAGMA journal_mode=MEMORY;
memory
sqlite> BEGIN IMMEDIATE;
sqlite> .import --csv blah.csv <tablename>
sqlite> COMMIT;
This turns off sync() on write and keeps the journal in memory, so it's not "safe", but as long as you are doing this "offline", as it were, and are OK re-creating the DB if the power goes out, the disk fills up, etc., then this will definitely speed up the import.