Despite reading all the available docs on the Flyway website, I still don't understand what baseline is good for. Could somebody explain it in plain English and mention some use cases for this command?
Baselining a database at version 1_0 (corresponding to a script such as V1_0__baseline.sql) instructs Flyway to apply only migration scripts created after V1_0. It does this by inserting a migration entry into the SCHEMA_VERSION table used by Flyway. When you run a migration, available migration scripts are applied only if their version is higher than the baseline version.
Example
You wish to use Flyway on a production and a development database. You create a schema-only dump of production. This will be the first migration script applied when you create a new empty database using Flyway.
However, because your existing production and development machines are already on this version, you don't want to apply this initial script to them. To prevent this, you create the SCHEMA_VERSION table and insert a row for version 1_0 to tell Flyway that the database is already at 1_0. Rather than manually creating this table and inserting a row via SQL, you can run the Flyway baseline command.
Then, a few weeks later, there is another database that you haven't brought onto Flyway, but have still been applying update scripts to (maybe you didn't have time). When you bring this database onto Flyway, you may need to baseline it at V3_0 or some other version.
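The selection rule described above can be illustrated with a small sketch (plain Python, not Flyway's actual code; the version tuples and script names are simplified assumptions):

```python
# Toy illustration of Flyway's baseline rule: a migration script runs
# only if its version is higher than the recorded baseline version.

def applicable_migrations(baseline, migrations):
    """Return migrations whose version is above the baseline.

    Versions are (major, minor) tuples, e.g. (1, 0) for V1_0.
    """
    return [m for m in migrations if m[0] > baseline]

migrations = [((1, 0), "V1_0__baseline.sql"),
              ((2, 0), "V2_0__add_orders.sql"),
              ((3, 0), "V3_0__add_index.sql")]

# A database baselined at 1_0 skips the initial schema dump:
print(applicable_migrations((1, 0), migrations))
# A database baselined at 3_0 has nothing left to apply:
print(applicable_migrations((3, 0), migrations))
```

The same comparison applies whether you baseline a fresh database at 1_0 or a long-lived one at 3_0.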
We are using Alfresco Community Edition 5.0d. Unfortunately, best practices have not been followed since the beginning. As a result, all documents are stored in the repository root folder, which now contains 800,000 records. This is causing performance issues in the application.
After looking at several recommendations for keeping a smaller number of files per folder, we want to move all the existing documents into year-wise folders. What is the recommended way to move the documents?
I would suggest using a BatchProcessor in Java.
Your implementation of BatchProcessWorkProvider would fetch the documents under the repository root folder, and your implementation of BatchProcessWorker would move each document into a date folder (after creating the folder if it doesn't exist).
The BatchProcessor could be launched either manually from a Java webscript or automatically via a patch on startup.
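Language aside (the actual worker would be Java against the Alfresco API, reading cm:created from each NodeRef via the NodeService), the per-document bucketing logic is simple. Here is a sketch with hypothetical stand-in data:

```python
# Sketch of the per-document work: derive a year folder name from each
# document's creation date, producing buckets like "2014", "2015", ...
# The (name, created) pairs below are illustrative stand-ins for nodes.
from collections import defaultdict
from datetime import date

def bucket_by_year(docs):
    """Group (name, created) pairs under year-named folders."""
    buckets = defaultdict(list)
    for name, created in docs:
        buckets[str(created.year)].append(name)
    return dict(buckets)

docs = [("report.pdf", date(2014, 3, 1)),
        ("invoice.pdf", date(2015, 7, 9)),
        ("memo.doc", date(2014, 11, 23))]
print(bucket_by_year(docs))
```

In the real BatchProcessWorker, each iteration would create the year folder if missing and then move the node into it.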
If you choose this method, you might have to perform a full Solr reindex after the batch finishes, because I recall a bug in 5.0 that caused a node to be duplicated in the Solr indexes after being moved, with one version indexed at its original path and a copy indexed at its new path.
To check, move a node and then search for it by name (or any query guaranteed to return only that node) in Share. If you get two results for that node, you have the bug.
The full Solr reindex can take a lot of time depending on the number of files you have in the repo and their size.
Are there any benefits to using settings.json instead of just storing this information in mongodb?
I would have thought that if you store it in a collection, then you can secure it by not publishing the collection in the first place. But you could also then build an admin page where you could update these details in your app, and it would be available straight away with no code reload.
In The Meteor Chef's article on building a SaaS using Meteor (https://themeteorchef.com/recipes/building-a-saas-with-meteor-stripe-part-1/), Ryan Glover espouses the use of settings.json to store your Stripe keys.
Later on he uses it again, but this time to hold the details about the stripe plans. Wouldn't it be better to store this in a collection on mongodb?
Commonly, the settings.json file is meant for non-changing data such as API keys and config info.
Since this data doesn't need to be manipulated, it is better to use a settings.json file rather than a collection. If you're going to use a collection, then you have to go through the extra steps of pub/sub.
With settings.json you have the opportunity to use different configuration for different environments (DEV, PROD, etc.). Of course you can keep config information in MongoDB, but then you also need to store environment information and publish/subscribe based on it.
Why would you do that when you already have a mechanism for app configuration? In addition, you can set the METEOR_SETTINGS environment variable.
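For reference, a minimal settings.json might look like this (the key names are illustrative; in Meteor, only keys under public are exposed to the client, everything else stays server-side):

```json
{
  "public": {
    "stripePublishableKey": "pk_test_..."
  },
  "private": {
    "stripeSecretKey": "sk_test_..."
  }
}
```

You would then start the app with `meteor --settings settings.json`; the server reads the private values via Meteor.settings, while the client can only see Meteor.settings.public.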
I have a requirement to add a project to an existing one. The existing project is MVC 4.0 on .NET Framework 4.5 and uses DBML (LINQ to SQL classes). The project I want to add uses EDMX on .NET Framework 4.0; the EDMX is also of a lower version. The versioning problem is solved by defining it explicitly in the Web.config file. Both projects build successfully, but at run time an exception is thrown:
Schema specified is not valid. Errors:
App_Code.Model.csdl(3,4) : error 0019: The EntityContainer name must be unique. An EntityContainer with the name 'calendarEntities' is already defined.
This occurs for every controller where it is used.
Please help me, as I am not clear on how to get rid of this error.
While there are specific scenarios in which different versions can be supported, combining mismatched versions incorrectly can leave you with what is essentially an unsupported project. Mismatched version configurations are not extensively tested, and even if you called Microsoft support, the first thing they would tell you is to upgrade all of your projects to the same version of the runtime. So that is what I would recommend; if you then still have a specific issue, please post the code and enough of the error so that help can be provided.
Given a Page that has a Component A, and Component A has been published to the LIVE target.
Later, the same Component A has been modified and published again, this time to the Staging target.
We need to know which version of component A has been published to LIVE and which version of component A has been published to Staging.
Is it possible to obtain the version of the component that has been published in each target?
What you are looking for is not possible OOTB. You can, however, infer this information using PublishEngine.GetPublishInfo(IdentifiableObject item), which returns an ICollection&lt;PublishInfo&gt; containing the date when the item was published.
You can then combine this information with the item's version info and identify (by comparing the publish time with the version check-in times) the version used when publishing.
However, versions could be deleted, so this method is not guaranteed to give you back the right information.
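The correlation step can be sketched as follows (plain Python rather than the TOM.NET API; the (version, check-in time) tuples are an assumed shape): pick the highest version whose check-in time is not after the publish time.

```python
# Sketch: infer which version of an item a publish action used, by
# comparing the publish time against each version's check-in time.
from datetime import datetime

def version_at_publish(publish_time, versions):
    """Return the highest version checked in at or before publish_time.

    versions: list of (version_number, checkin_time) tuples.
    Returns None if publishing predates all known versions
    (e.g. because older versions were deleted).
    """
    candidates = [(v, t) for v, t in versions if t <= publish_time]
    if not candidates:
        return None
    return max(candidates)[0]

versions = [(1, datetime(2014, 1, 10)),
            (2, datetime(2014, 2, 5)),
            (3, datetime(2014, 3, 1))]
print(version_at_publish(datetime(2014, 2, 20), versions))  # version 2
```

The None case is exactly the caveat above: once versions are purged, the inference breaks down.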
I suggest you publish the version of your Page as CustomMeta (perhaps using something similar to MetaDataProcessor, part of the TDFramework, to create meta data on-the-fly). You can then interrogate the Content Delivery DB and retrieve this information.
Alternatively, for a CM-side solution, you can use the Event System and intercept the publishing action. Then it's up to you to store the version of the Page (e.g. Application Data might be a good candidate).
I'm an ASP.NET developer, but I haven't found a good workflow for deployment, especially for small quick fixes that might not even require compiling.
The way I work now is to have two VS instances open while copy-pasting a lot of code and files between the project and the folder on the IIS server. Is there an automated process that pushes changes to the server as I save in the VS project?
Generally speaking, what you are doing is a pretty big no-no, for a lot of reasons.
One of the big advantages ASP.NET has over something like PHP is that obvious problems (like a misspelled variable name) are caught during the build phase. This is a huge benefit.
Next, if you are simply modifying a file and copying its contents to the server, then it sounds like you are testing in production instead of leveraging your local debugger. Again, very bad practice.
Finally, VS includes a Publish command. Its purpose is to compile and publish your site to the server. It can deploy through the regular file system, FTP, Web Deploy packages, or even FPSE. That last one is NOT recommended and is probably kept for backwards compatibility only.
Point is, develop and test locally. When you're ready for it to go to the server, use the Publish command.
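As an illustration, a file-system publish profile (stored under Properties/PublishProfiles in the project; the path and values here are assumptions, not your actual server) is just a small MSBuild file:

```xml
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Deploy by copying build output over the file system. -->
    <WebPublishMethod>FileSystem</WebPublishMethod>
    <!-- Hypothetical target share on the IIS server. -->
    <publishUrl>\\webserver\wwwroot\MySite</publishUrl>
    <DeleteExistingFiles>False</DeleteExistingFiles>
  </PropertyGroup>
</Project>
```

Once saved, publishing becomes a one-click (and repeatable) action instead of manual copy-pasting.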