I have a problem with the file paths in my S3 location after I run flyway migrate. Somehow a suffix gets appended to the table folder names and I don't know why. Example:
I create a schema with this query: create schema MY_SCHEMA with (location='s3:my_bucket/my_folder').
Then I run the Flyway command to migrate tables in this schema.
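For illustration, a migration here could be as simple as a plain CREATE TABLE (the file name and columns below are just an example, not my actual migration):
-- V1__create_my_table.sql (example Flyway versioned migration)
CREATE TABLE MY_SCHEMA.my_table (
    id bigint,
    name varchar
);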
What I expect to get: migrated tables and metadata files stored under my location, like /my_bucket/my_folder/my_table/metadata/my_metadata_file.
What I actually get: the tables and metadata files stored under /my_bucket/my_folder/my_table-<some_weird_suffix>/metadata/my_metadata_file.
There is no problem with the table names in the database, only with the S3 location.
Does anyone know how to fix this and make it work the desired way?
I have a Node-RED flow. It uses a sqlite node. I am using node-red-node-sqlite. My OS is Windows 10.
My SQLite database is configured with just the name "db".
My question is, where is located the sqlite database file?
I have already searched in the following places, but didn't find it:
C:\Users\user\AppData\Roaming\npm\node_modules\node-red
C:\Users\user\.node-red
Thanks in advance.
Edit
I am also using pm2 with pm2-windows-service to start Node-RED.
If you don't specify a full path to the file in the Database field, it will create the file in the current working directory of the process, which will be wherever you ran node-red or npm start.
Use a full path, including the file name.
It should work, I guess.
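If you want to double-check where the file actually ended up, you can also ask SQLite itself through the sqlite node; this PRAGMA should report the full path of each attached database file:
-- one row per attached database; the "file" column holds the full path on disk
PRAGMA database_list;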
This isn't a valid answer, just a workaround for those who have the same problem.
I couldn't find my database file, but inside Node-RED everything worked just great. So this is what I did as a workaround:
In Node-RED, make some select nodes to get all data from tables
Store the tables values somewhere (in a .txt file or something like that)
Create your database outside Node-RED, somewhere like c:\sqlite\db.db. Check read/write permissions
Create the tables and insert the values saved from the old database (see the example after this list)
In Node-RED, inside "Database", put the complete path of the database. For example, c:\sqlite\db.db
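The create/insert step is just plain SQL run against the new file; the table and column names here are only an example:
CREATE TABLE my_table (
    id INTEGER PRIMARY KEY,
    value TEXT
);
INSERT INTO my_table (id, value) VALUES (1, 'first saved row');
INSERT INTO my_table (id, value) VALUES (2, 'second saved row');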
In my case this was easy because I only had two databases with fewer than 10 rows.
Hope this can help others.
Anyway, still waiting for a valid answer :)
I have a database project and want to keep a table as a backup on the production database, but it shouldn't be part of the code anymore.
Even if I rename the table before generating the deployment script, the rename is detected (via a search for named constraints, I guess) and the renamed table is dropped.
Any ideas on that?
It's a bit of a workaround, but if the goal is to prevent this table from being created on new deployments (where it doesn't already exist) while keeping it on deployments where it has already been added, then you could keep it in the code and add a post-deploy script that drops it if it doesn't contain any data.
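A rough sketch of such a post-deploy step, assuming a SQL Server database project and a made-up table name:
-- Post-deploy: drop the backup table only when it holds no data.
-- sys.partitions is consulted instead of querying the table directly so the
-- script still parses on databases where the table no longer exists.
IF OBJECT_ID('dbo.MyBackupTable', 'U') IS NOT NULL
BEGIN
    DECLARE @rows bigint;
    SELECT @rows = SUM(p.rows)
    FROM sys.partitions AS p
    WHERE p.object_id = OBJECT_ID('dbo.MyBackupTable')
      AND p.index_id IN (0, 1);   -- heap or clustered index only

    IF ISNULL(@rows, 0) = 0
        DROP TABLE dbo.MyBackupTable;
END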
Or you could write your own "plugin" for the database deployment; see Customize Database Build and Deployment by Using Build and Deployment Contributors.
I am working with doctrine:migrations:diff in order to prepare database evolutions.
This command creates files in app/DoctrineMigrations.
Those files contain SQL commands to upgrade or downgrade the database structure.
I want to store those SQL commands in the database itself. In fact, I have several database instances, and if the SQL commands are stored in files, it is a big problem.
I have read somewhere that the DoctrineMigrations bundle can create a table called "migration_versions", but I cannot find where I read this...
I cannot really understand what you're trying to do.
Migrations are used when your code needs an altered database structure, for example a new table or a new column. These new requirements for a table or column come from your newly written code, so it's only natural to keep the migrations as code in your repository as well.
How and when would the migrations even get to your database? How would you guarantee that a migration is executed before the code changes which use that new structure?
Generally, migrations are used in this way:
You develop your code, add new features, change existing ones. Your code needs changes to the database.
You generate doctrine migration class, which contains needed SQL statements for your current database to get to the needed state.
You alter the class, adding any more required SQL statements, for example UPDATE statements to migrate your data, not only the structure.
You execute migration locally.
You test your code with the database changes. If you need more changes, you either add a new migration, or execute the migration down, delete it and regenerate it. Never just change the migration class, as you'll lose track of what's supposed to be in the database and what's not.
You commit your migration together with code that uses it.
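To make the generate-then-alter steps above concrete, the statements carried by such a migration might look like this (table and column names are invented):
-- up: structural change, plus a data migration added by hand
ALTER TABLE customer ADD COLUMN full_name VARCHAR(255) DEFAULT NULL;
UPDATE customer SET full_name = CONCAT(first_name, ' ', last_name);

-- down: the reverse, so the migration can be rolled back
ALTER TABLE customer DROP COLUMN full_name;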
Then comes the deployment part:
- For each server, upload the code, clear and warm up the cache, and run any other installation scripts. Then run the migrations. And only then switch to the new code.
This way your database is always in sync with the current code on the server that uses that database.
The migration_versions database table is created automatically by Doctrine Migrations. It holds only the version numbers of the migration classes; it's used to keep track of which migrations have already been run and which have not.
This way, when you run doctrine:migrations:migrate, all not-yet-run migrations are executed. This allows you to migrate several commits at once, have several migrations in a single commit, etc.
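If it helps to picture it, that table is roughly equivalent to the following (the exact table name and columns depend on the doctrine/migrations version):
CREATE TABLE migration_versions (
    version VARCHAR(255) NOT NULL,
    PRIMARY KEY (version)
);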
Is there any way to copy data from one remote SQLite database to another? I have file replication set up across two servers; however, some changes are recorded in an SQLite database local to each server. To get my file replication to work correctly, I need to copy the contents of one table and enter them into the table on the opposite system. I understand that SQLite databases are not meant for remote access, but is there any way to do what I need? I suppose I could write the contents of the table to a file, copy that file, then add the contents to the other database. This doesn't seem like the best option though, so I'm looking for another solution.
If you have access to the other database file, you can ATTACH it:
ATTACH '/some/where/else/other.db' AS remote;
INSERT INTO MyTable SELECT * FROM remote.MyTable;
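If you do this more than once, the same idea can be wrapped in a transaction with an explicit detach afterwards (still assuming the other file is reachable from this machine):
ATTACH '/some/where/else/other.db' AS remote;
BEGIN;
INSERT INTO MyTable SELECT * FROM remote.MyTable;
COMMIT;
DETACH DATABASE remote;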
I am working on a firm application in which I need to create a local database on my device.
I create my local database through a CREATE statement (it works well).
Then I use that file and perform the insert operations through the Firefox SQLite plugin; I need to insert approximately 2000 rows at a time, so I cannot do it in code. I just run the inserts manually through the SQLite plugin in Firefox.
After that I just use that file in place of my local database.
When I run a SELECT query through my code, it shows this exception:
java.lang.Exception: Exception: In create or prepare statement in DBnet.rim.device.api.database.DatabaseException: SELECT distinct productline FROM T_Electrical ORDER BY productline: file is encrypted or is not a database
I found the solution to this problem. I was making a silly mistake by creating the file manually via right-click in my res folder; that is not correct. We need to create the database entirely from the SQLite plugin, and then it will work fine: create the database (the file too) from the SQLite plugin and perform the insert operations from the SQLite plugin.
This is a very rare problem, but I think it might be helpful for someone like me. :)
You should check to see if there is a version problem between the SQLite used by your Firefox installation and that on the BlackBerry. I think I had the same error when I tried to build a database file with SQLite version 2.
You also shouldn't need to create the database file on the device. To create large tables I use a Ubuntu machine and the sqlite3 command line. Create the file, create the tables, insert the data and build indexes. Then I just copy the file onto the device in the proper directory.
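For example, the SQL fed to the sqlite3 command line could be as simple as this (only productline comes from the query in the question; the other column and the values are made up):
-- create the table, load the data, then build the index
CREATE TABLE T_Electrical (
    id INTEGER PRIMARY KEY,
    productline TEXT
);
INSERT INTO T_Electrical (productline) VALUES ('Product line A');
INSERT INTO T_Electrical (productline) VALUES ('Product line B');
CREATE INDEX idx_electrical_productline ON T_Electrical (productline);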
For me it was a simple thing: a password was set on that DB. I just used it and the problem was solved.