Database name as part of file name for file group - sql-server-data-tools

I am scripting out our database. Because we have multiple stacks of drives assigned to each file group, we split each file group into multiple files to spread the I/O and distribute storage among the drives.
This is currently SQL Server 2017, with SSDT in VS 2019.
We always gave logical names to our database files that started with the name of the database, matching the SSMS default, e.g. "MyDbName_FileGroup", with the physical file name similar, e.g. "MyDbName_FileGroup.ndf". But we never scripted this part before; we set it up manually.
I would like to get this scripted as part of the SSDT deployment package so it can also be used to set up new DBs easily.
Everything is great so far: I made scripts for each file group that will create the files, but of course SSDT will not let me use a SQLCMD variable as part of an object name.
So I tried this:
ALTER DATABASE [$(DatabaseName)]
ADD FILE
(
NAME=[$(DatabaseName)_FileGroupName],
FILENAME= '$(DefaultDataPath)$(DefaultFilePrefix)_FileGroupName.mdf'
) TO FILEGROUP [MESSAGING];
GO
This does not work, since I can't prepend the database name to the logical name the way I want.
Yes, this is purely cosmetic, to match a pattern, but how would you go about doing something like this in SSDT?

Related

SQLite Importer will overwrite my database when I load my application?

I have an Ionic App using SQLite. I don't have any problems with implementation.
The issue is that I need to import an SQL file using SQLitePorter to populate the database with configuration info.
But also, on the same database I have user info, so my question is:
Every time I start the app, will it import the SQL file, fill the database, and probably overwrite my user data too, since it is all in the same database?
I assume that you can always init your table using string queries inside your code. The problem is not that you are importing a .sql file. Right?
According to https://www.sqlitetutorial.net/sqlite-create-table/ you can always create a table with the [IF NOT EXISTS] option. Writing a query like:
CREATE TABLE [IF NOT EXISTS] [schema_name].table_name (
column_1 data_type PRIMARY KEY);
you let SQLite decide whether to create the table, without the risk of overwriting an existing one. You can trust that SQLite is smart enough not to overwrite any information, especially if you use a 'BEGIN TRANSACTION' - 'COMMIT' procedure.
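To illustrate the point (a minimal Python/sqlite3 sketch; the user_data table and its columns are hypothetical), re-running a CREATE TABLE IF NOT EXISTS statement leaves an existing table and its rows untouched:

```python
import sqlite3

# In-memory database for demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS user_data (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO user_data (name) VALUES ('alice')")
conn.commit()

# Running the same CREATE again is a no-op: no error, nothing overwritten
conn.execute("CREATE TABLE IF NOT EXISTS user_data (id INTEGER PRIMARY KEY, name TEXT)")
rows = conn.execute("SELECT name FROM user_data").fetchall()
print(rows)  # the 'alice' row is still there
```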
I give my answer assuming that you have imported data and user data in distinct tables, so you can manipulate what you populate and what you don't. Is that right?
What I usually do, is to have a sql file like this:
DROP TABLE configuration_a;
DROP TABLE configuration_b;
CREATE TABLE configuration_a (...);
INSERT INTO configuration_a (...);
CREATE TABLE configuration_b (...);
INSERT INTO configuration_b (...);
CREATE TABLE IF NOT EXISTS user_data (...);
This means that every time the app starts, I am updating the configuration data with whatever I have at that time (that is why we use http.get, to fetch any configuration file from a remote repo in the future) and create the user data table only if user_data is not there (hopefully only on the initial start).
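The same refresh pattern can be sketched with Python's sqlite3 module (table and column names here are placeholders; note the sketch uses DROP TABLE IF EXISTS so the same script also works on a fresh database):

```python
import sqlite3

# Config tables are dropped and rebuilt every run; user_data is only created once
CONFIG_SCRIPT = """
DROP TABLE IF EXISTS configuration_a;
CREATE TABLE configuration_a (key TEXT, value TEXT);
INSERT INTO configuration_a VALUES ('theme', 'dark');
CREATE TABLE IF NOT EXISTS user_data (id INTEGER PRIMARY KEY, name TEXT);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(CONFIG_SCRIPT)  # first app start: creates everything
conn.execute("INSERT INTO user_data (name) VALUES ('alice')")
conn.executescript(CONFIG_SCRIPT)  # app restart: config refreshed, user data kept

rows = conn.execute("SELECT name FROM user_data").fetchall()
print(rows)  # user data survives the config refresh
```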
Conclusion: in my opinion, it's always good practice to trust the database product and let it handle any operation that would be risky to implement yourself in your code, since it provides tools for that. For example, the [IF NOT EXISTS] keyword is always safer than implementing a table checker yourself.
I hope that helps.
PS: In case you are referring to the create-database step: when SQLite connects to a database file that does not exist, it creates it. For someone comfortable with the sqlite command line, typing
sqlite3 /home/user/db/configuration.db
will connect you to this db and, if the file is not there, will create it.
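The same behaviour can be seen from Python's sqlite3 module (using a temporary path rather than a hard-coded one):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "configuration.db")
existed_before = os.path.exists(db_path)   # False: nothing at this path yet

conn = sqlite3.connect(db_path)            # creates the file if it is not there
conn.execute("CREATE TABLE IF NOT EXISTS settings (k TEXT, v TEXT)")
conn.commit()
conn.close()

exists_after = os.path.exists(db_path)     # True: the database file now exists
print(existed_before, exists_after)
```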

Write to intended DB location with odbc::dbWriteTable() function

There's a bug-like phenomenon in the odbc library that has been a known issue for years with the older, slower RODBC library; however, the workaround solutions for RODBC do not seem to work with odbc.
The problem:
Very often a person may wish to create a SQL table from a 2-dimensional R object. In this case I'm doing so with SQL Server (i.e. T-SQL). The account used to authenticate, e.g. "sysadmin-account", may be different from the owner and creator of the database that will house tables being created but the account has full read/write permissions for the targeted DB.
The odbc call to do so goes like this and runs "successfully":
library(odbc)
db01 <- odbc::dbConnect(odbc::odbc(), "UserDB_odbc_name")
odbc::dbWriteTable(db01, "UserDB.dbo.my_table", r_data)
This connects and creates a table, but instead of creating the table in the intended location of UserDB.dbo.my_table, it gets created in UserDB.sysadmin-account.dbo.my_table.
Technically, .dbo is a child of the UserDB database. What this is doing is creating a new child object of UserDB called sysadmin-account, with a child .dbo of its own, and then creating the table within there.
With RODBC and some other libraries/languages we found that a work-around solution was to change the reference to the target table location in the call to ".dbo.my_table" or in some cases "..dbo.my_table". Also I think running a query to use UserDB sometimes used to help with RODBC.
None of these solutions seems to have any effect with odbc.
Updates
Tried the DBI library as a potential substitute, to no avail.
Found a workaround: write the data to a global temp table, then use a SQL statement to copy it from the temp table to the intended location.

Is there any way to check the presence and the structure of tables in a SQLite3 database?

I'm developing a Rust application for user registration via SSH (like the one working for SDF).
I'm using the SQLite3 database as a backend to store the information about users.
I'm opening the database file (or creating it if it does not exist) but I don't know the approach for checking if the necessary tables with expected structure are present in the database.
I tried to use PRAGMA schema_version for versioning purposes, but this approach is unreliable.
I found that there are posts with answers that are heavily related to my question:
How to list the tables in a SQLite database file that was opened with ATTACH?
How do I retrieve all the tables from database? (Android, SQLite)
How do I check in SQLite whether a table exists?
I'm opening the database file (or creating it if it does not exist)
but I don't know the approach for checking if the necessary tables
I found that you can query sqlite_master to check for tables, indexes, triggers and views, and use PRAGMA table_info(the_table_name) to check for columns.
e.g. the following would allow you to get the core information and then process it with relative ease (just tables, for demonstration):-
SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name LIKE 'my%';
with expected structure
PRAGMA table_info(mytable);
The first results in (for example) a row per matching table, with the table's name and the SQL used to create it. Whilst the second results in (for mytable) a row per column, with the column's cid, name, type, notnull, dflt_value and pk.
Note that type is blank/null for all columns as the SQL to create the table doesn't specify column types.
If you are using SQLite 3.16.0 or greater then you could use the table-valued PRAGMA functions (e.g. pragma_table_info(table_name)) in ordinary SELECT statements, rather than the two-step approach needed prior to 3.16.0.
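Both checks can be driven from application code. Here is a minimal Python/sqlite3 sketch (mytable and its columns are stand-ins), including the pragma_table_info function form available from 3.16.0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, name TEXT)")

# Presence: query sqlite_master for tables matching a name pattern
tables = conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table' AND name LIKE 'my%'"
).fetchall()
print([t[0] for t in tables])

# Structure: PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk)
columns = conn.execute("PRAGMA table_info(mytable)").fetchall()
print([(c[1], c[2]) for c in columns])

# SQLite >= 3.16.0: the same data via the table-valued pragma function
cols2 = conn.execute("SELECT name FROM pragma_table_info('mytable')").fetchall()
print([r[0] for r in cols2])
```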

Pentaho DI Sqlite.db in job not showing transformation changes

1) A Pentaho/Spoon DI job unzips files and creates a sqlite.db file with all those files converted to tables (used Shell).
2) A couple of tables are to be added to the sqlite.db from a MySQL db using queries (used table input/output).
Step 2) is a transformation that runs fine individually.
The whole job with the transformations runs fine too, with no error, only the step 2) data from the queries is not in the sqlite.db.
In short, a job to create the sqlite.db file is able to add tables from files (maybe because it is done in the job itself and not in a transformation), but the tables from the MySQL queries in the transformation don't make it into the same sqlite.db.

Base directory and schema_version

I just tested the commandline tool and I was able to migrate my database schema changes (DDL scripts) as expected. But I had to move all my scripts under the sql dir.
Is there a way to point flyway to the directory where my real scripts will reside (git or svn repository)? Looks like flyway.locations is only for relative paths.
The schema_version table name and column names are all created in lower case in my database (Oracle). The vast majority of people using Oracle are used to upper case object names and column names (the default in Oracle). I found a property in the config file to set my own table name. Is there any way to get Flyway to use upper case for the column names?
I checked the data inserted into schema_version after my test run. All looks good except that the first character of the "script" column seems to be removed.
My prefix is "db_". Here is what I see in schema_version,
SQL> select "script" from schema_version;
script
b_1_0__test10.sql
b_1_1__test10.sql
b_1_0_1__test10.sql
atabase/db_2012_11_20__query.sql
<< Flyway Init >>
Lots of questions here (It's easier if you keep them separate). I'll try my best to answer them:
Not currently supported. See https://github.com/flyway/flyway/issues/108 . Symlinking can be used as a workaround.
No, there is no configuration property for the column names. The schema_version table is private to Flyway and not meant for outside consumption.
This sounds like a bug. Please file an issue containing your configuration (OS + version, DB + version, Flyway version, config file contents) and exact steps to reproduce.
