We have many projects running on many servers, all pointing at one database, and we're thinking of setting up Flyway in every project to control our database structure.
But we're worried about concurrent migration problems if several projects redeploy at the same time. (Of course, we always take care to use "IF EXISTS" guards in our SQL syntax.)
How does Flyway behave when concurrent migrations change the same table or other structures?
It works as expected. See the answer in the FAQ: https://flywaydb.org/documentation/learnmore/faq.html#parallel
Can multiple nodes migrate in parallel?
Yes! Flyway uses the locking technology of your database to coordinate multiple nodes. This ensures that even if multiple instances of your application attempt to migrate the database at the same time, it still works. Cluster configurations are fully supported.
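For illustration, here is a minimal sketch of what one of those guarded, versioned migrations might look like, assuming SQL Server-style T-SQL and Flyway's V<version>__<description>.sql naming convention (the file, table and column names are hypothetical):

```sql
-- V3__add_last_login_column.sql  (hypothetical migration file)
-- Flyway coordinates nodes via database locking (per the FAQ above), so even
-- if several nodes redeploy at once, only one of them applies this migration.
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.Users') AND name = 'LastLogin'
)
BEGIN
    ALTER TABLE dbo.Users ADD LastLogin DATETIME NULL;
END
```

The IF NOT EXISTS guard is belt-and-braces on top of Flyway's own locking; with the locking in place, each versioned migration should only ever be applied once anyway.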
A few questions:
1. Is a SQL Server installation needed to run Windows Workflow?
2. If yes, where does Workflow store (persist) data for a long-running process?
3. I see that some files are created in .\Windows\Microsoft.NET\Framework\v4.0\SQL\en\ (some SQL scripts to create persistence points).
4. Do we need to run these scripts to manually create the database?
5. Can we persist data on the file system instead, so that we don't need to install SQL Server?
Thanks
I see one supposed answer already, but "read the docs" answers really aren't good answers, especially in an area as poorly documented as WF, so in case anyone else stumbles across this thread:
(1) SQL Server doesn't have to be installed just to use workflows, but if you want persistence for long-running workflows, (2) SQL Server is your easiest way to get it.
(3) and (4) You can let AppFabric do most of the heavy lifting in setting up the persistence database for you.
(5) You could persist on a file system instead of SQL Server, but IMHO, from what I've seen in my short time with WF and persistence so far, you'd be crazy to try to implement your own persistence provider like that, especially when just starting out. You can use SQL Server Express to get started. Why reinvent the wheel?
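For questions (3) and (4) specifically, if you do want to create the persistence store by hand rather than via AppFabric, a minimal sketch (the database name here is made up, and the script names are to the best of my recollection):

```sql
-- Hypothetical manual setup of a WF instance store database.
CREATE DATABASE WFInstanceStore;   -- name is an assumption; pick your own
GO
-- Then run the Microsoft-supplied scripts against that database (via SSMS or
-- sqlcmd), schema first, then logic -- I believe they are named
-- SqlWorkflowInstanceStoreSchema.sql and SqlWorkflowInstanceStoreLogic.sql,
-- sitting in the Framework\v4.0.x\SQL\en\ folder the question refers to.
```

AppFabric effectively automates these same steps for you, which is why it is the easier route.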
In the publishing scenario I have, we have multiple deployers pushing content to both file system and database (broker). Pages and Binaries are put on the file system, everything else in the Broker. We have one of the deployers putting the content into the database. Is this the recommended best practice?
If the storage configurations in all deployers also put the content into the database, how does Tridion handle this? Could this cause duplicate entries, locking failures etc?
I'm afraid at the time of writing I don't have access to an environment to test how this would work.
SDL's best practice is to have a one-to-one relationship between a deployer and a publication; as long as two deployers do not publish the same content (from the same publication), they will not collide, provided that, on a file system, there is separation between the deployed sites, e.g. www/pub1 and www/pub2.
Your explanation of your scenario needs some additional information to make it complete, but it sounds most likely that there are multiple broker databases (albeit hosted on a single database server). This is the most common setup when dealing with multiple file systems on web servers combined with a single database server.
I personally do not like this setup, as I think it would be better to host the file-system content in a shared location and share a single DB. Or better still, deploy everything to the database and use something like DD4T/CWA.
I have seen (and even recommended based on customer limitations) similar configurations where you have multiple deployers configured as destinations of a given target.
Only one of the deployers can write to the database for the same transaction, otherwise you'll have concurrency issues. So one deployer writes to the database, while all others write to the file system.
All brokers/web applications are configured to read from the database.
This solves the issue of deploying to multiple servers and/or data centers where using a shared file system (the preferred approach) is not feasible, be it for cost or any other reason.
In short - not a best practice, but it is known to work.
Julian's and Nuno's approaches cover most of the common scenarios. Indeed a single database is a single point of failure, but in many installations, you are expected to run multiple schemas on the same database server, so you still have a single point of failure even if you have multiple "Broker DBs".
Another alternative to consider is totally independent delivery nodes. This might even mean running a database server on your presentation box. These days it's all virtual anyway so you could run separate small database servers. (Licensing costs would be an important constraint)
Each delivery server has its own database and file system. Depending on how many you want, you might not want to set up multiple destinations/deployers, so you deploy to one, and use file-system replication and database log shipping to mirror the content to the rest.
Of course, you could configure two deployment systems (or three) for redundancy, assuming you can manage all the clustering etc.
OK - to come clean - I've never built one like this, but I'm fairly sure elements of this kind of design will become more common as virtualisation increases, along with licensing models that support it. (Maybe we have to wait for Tridion to support an open-source database!)
We have an ASP.NET web application and need to maintain the database creation and initialization script.
Are there any industry best practices for maintaining database creation and initialization scripts? I can think of two main approaches:
1. Maintain a T-SQL creation script directly by hand.
2. Maintain a master database and generate the script from it, which is then checked into SourceSafe.
Also, the script should be trackable through source control, i.e. the table order should be controllable.
If possible, it should also be able to track initialization data, either in the same or a separate script.
Currently we generate the script from Management Studio, but the order of the tables seems to be random.
And the more automated the solution the better.
The problem is not maintaining the script, nor maintaining a 'master' copy of the database. The real problem is upgrading existing database(s). You make your modifications in the development environment, which are then propagated to the test environment and finally pushed into the production environment. While at the development and test stages it is possible to start from scratch, in production you always have to upgrade the existing deployment.
In my experience the best practice is to use upgrade scripts. This practice is useful even with a single deployed site, but it becomes invaluable with multiple locations that may be at different versions. Even with one single operational site, it is still useful to be able to test the upgrade repeatedly (starting from backups of the current version), keep the changes in source control, and have a well-formalized and peer-reviewed change procedure (the upgrade script). Upgrade scripts can also be tailored to the specific needs of the operational site, like handling a large table with special care, or dealing with encrypted data, or whatever one of the myriad details diff-based tools neglect or ignore. The main disadvantage is that the scripts have to be written, which requires real T-SQL knowledge (forget all the 'designers' in your favorite management tool).
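As an illustration, a minimal sketch of what one such upgrade script might look like, assuming a hypothetical SchemaVersion table that records which upgrades have already been applied (table and column names are examples only):

```sql
-- Hypothetical upgrade script: schema v5 -> v6.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 6)
BEGIN
    -- The actual change: add a nullable column to an existing table.
    ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;

    -- Record that the upgrade ran, so it is never applied twice.
    INSERT INTO dbo.SchemaVersion (Version, AppliedOn) VALUES (6, GETDATE());
END
GO
```

Keeping each upgrade guarded and recorded like this is what makes it safe to rerun the script against backups while testing.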
You might want to check out RedGate SQL Source Control.
Are you looking for Visual Studio Database Projects?
I use database projects to store all database objects (tables, views, functions, keys, triggers, indexes across schemas) and keep versioning in TFS. You can build the database to ensure that everything is valid. You can deploy to a fresh database, or do a schema comparison with an existing database.
I also keep all reference and setup data in post deployment scripts which are automatically run after deployment.
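For illustration, such a post-deployment script is typically written to be re-runnable; a minimal sketch, assuming SQL Server 2008 or later for MERGE (the table and values are hypothetical):

```sql
-- Hypothetical post-deployment script: seed reference data idempotently,
-- so it can run after every deployment without creating duplicates.
MERGE INTO dbo.OrderStatus AS target
USING (VALUES (1, 'Pending'), (2, 'Shipped'), (3, 'Cancelled'))
    AS source (StatusId, Name)
ON target.StatusId = source.StatusId
WHEN NOT MATCHED THEN
    INSERT (StatusId, Name) VALUES (source.StatusId, source.Name);
```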
I have got a BizSpark account from Microsoft, and they are providing a basic Azure account. I have been told that it can run PHP; however, I would like to use a more tested solution like WAMP. On top of that, I want to place a quite heavy WordPress / BuddyPress installation on it (that I hope will bring a lot of traffic :)
Has anyone done something similar to this? If so, what is your experience / pitfalls etc.?
Thanks
Stelios
Yes, you can do this. At the end of the day you are just using Windows Server, so anything that installs there will install in the cloud as well. I have done this myself for hosting WordPress in Windows Azure.
However, there are some pitfalls here. Mostly the pitfalls are around the M (MySQL). Setting up MySQL in Windows Azure is not really that hard, but you have several considerations on how to make sure it is always available. You can:
1. Set up a single instance of MySQL in a role and store the db on local disk (this is a bad idea).
2. Set up a single instance of MySQL in a role and store the db on a drive (blob-backed storage).
3. Set up 2 instances of MySQL that each point to a shared drive (hot failover). Only one instance will be able to mount the drive at a time, so you get reliability and failover, but only a single instance working for you at any moment.
4. Set up 1 writer of MySQL on a drive, and multiple readers on snapshots of that drive. Put in some logic via connection strings to make sure writes go only to the single writer and reads go to the others. Snapshot every X minutes to update the readers.
5. Set up multiple instances of MySQL and use the native replication features (each storing to local disk), and rely on that if you lose an instance (a sketch of the replication setup follows this list).
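A minimal sketch of that last option's replication setup, assuming MySQL 5.x with binary logging already enabled on the master and a replication account already created (the host name, user, password and log coordinates below are placeholders):

```sql
-- Hypothetical: run on each replica instance to point it at the master.
CHANGE MASTER TO
    MASTER_HOST = 'mysql-master-internal-endpoint',  -- placeholder endpoint
    MASTER_USER = 'repl_user',                        -- placeholder account
    MASTER_PASSWORD = 'secret',                       -- placeholder password
    MASTER_LOG_FILE = 'mysql-bin.000001',             -- placeholder coordinates
    MASTER_LOG_POS = 4;
START SLAVE;
-- Verify replication is running:
SHOW SLAVE STATUS\G
```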
There are probably more permutations, but the gist of the problem is how you scale out MySQL to be available and reliable. In Windows Azure, you don't get to rely on the fact that the local disk will always be around or that you will always have the same instance. In fact, you can guarantee that your instances will be down for some period of time each month and eventually, given enough time, you will lose the local disk.
Overall, with multiple instances however, you can guarantee they won't be down simultaneously (to the service SLA level at least). So, you need to make sure MySQL works with multiple instances (or live with single instance downtime) and that your data is backed by blob storage to guarantee it is persisted.
Or you can scrap all that crap and just use SQL Azure, which solves all those problems. So, it becomes WASP. SQL Azure can also be more economical for smaller DBs.
Ditto.
Installing MySQL on an Azure role is not a good idea for plenty of reasons, most notably (lack of) scalability and reliability. (That's just for deploying on Azure; MySQL itself is great.)
To set it up even remotely reliably, you're going to need a dedicated instance, which will run you at least $40 a month; going with SQL Azure is $10/GB, or free if you get an introductory offer or BizSpark.
If you're just looking to play around with a single-instance app, I'd suggest you rather use SQLite or some other in-memory db; it'll be a lot less painful.
Ok, so here's the thing.
I'm developing an existing web application (it started out as a classic ASP app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and the Visual Studio database project.
This web app has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code.
So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts by hand on each database. I know that manual deployment can cause problems, so I'm looking for a way of automating it.
We've recently created a Visual Studio Database Project to manage the schema and generate the diff-schema scripts with different targets.
I have no idea how to put the pieces together.
I would like to:
Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to deploy the whole application every time, because it has lots of files and I just want to deploy those that are new or changed.
Generate the diff-SQL update scripts for every database target and combine them into just one script (a sketch of one possibility follows this list). For this I would need a list of the database names somewhere.
Copy the site files and execute the generated SQL script in an easy, automated way.
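On the second point, one hedged possibility (assuming the scripts are run in SQLCMD mode, e.g. from a batch or MSBuild loop over the database list) is to drive a single generated diff script with a SQLCMD variable per database; the database name below is a placeholder:

```sql
-- Hypothetical: apply the same generated diff script to each "universe" database.
-- Requires SQLCMD mode; the target database is supplied once per run.
:setvar TargetDb "Universe1"
USE [$(TargetDb)];
GO
-- ... the generated diff-schema statements would follow here ...
```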
I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start, and I really want to get rid of this manual deploy.
If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your suggestions.
I know this is not a very specific question, but I've googled a lot about it and it seems I cannot figure out how to do it. I've never used any automation tool to deploy.
Any help will be really appreciated,
Thank you all,
Regards
Have you heard of the term Multi-Tenancy? It might be worth looking it up to see if it applies to your "multiverse", especially if one universe is never accessed by another...
See:
http://en.wikipedia.org/wiki/Multitenancy
http://msdn.microsoft.com/en-us/library/aa479086.aspx
UPDATE:
If the application and database are the same for each client (or tenant), I believe there are applications that may help in providing the same code/db as a SaaS application, i.e. another application/configuration layer on top that can handle the deployments etc.?
I think these are called Platform as a Service (PaaS) applications:
see: http://en.wikipedia.org/wiki/Platform_as_a_service
Multi-Tenancy in your case may be possible, depending on client security requirements, with a bit of work (or a lot of work):
Option 1:
You could use one instance of the application, i.e. deploy the site once and connect to a different database for each client. You would need to differentiate each client by URL to isolate content/data, by setting a connection string for each, etc. (This would reduce your site deployments to one deployment.)
Option 2:
You could use both a single instance of the application and a single database. You would need to add a "TenantID" to each table and adjust all your code to accept a TenantID to ensure data security/isolation. Again, you would need to detect/differentiate the tenant based on the URL to set the TenantID for the session, used for every database call. (This would reduce your site and database deployments to one of each.)
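To illustrate option 2, a minimal sketch of the schema change and a tenant-filtered query, compatible with SQL Server 2005 (the table, column and values are hypothetical):

```sql
-- Hypothetical: add a TenantID to an existing table; existing rows get the default.
ALTER TABLE dbo.Orders ADD TenantID INT NOT NULL DEFAULT 0;
GO
-- Every read/write would then carry the tenant resolved from the URL/session:
DECLARE @TenantID INT;
SET @TenantID = 42;   -- placeholder; in practice set per request from the detected tenant
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE TenantID = @TenantID;
```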