I wanted to do some experiments on our on-prem DevOps collection database. Alas, I'm having a bit of trouble actually duplicating said collection. I mean, it's easy enough to take a backup of the database, but now I'd like to restore the collection database under a new name in DevOps.
Except DevOps doesn't even see the restored collection database. Now, granted, chances are it's because the backup was performed while the collection was attached (since this is for testing, I'm not particularly worried about breaking anything on said duplicated collection; the same cannot be said about the rest of the system, obviously). There's a very good chance that some collection IDs might be duplicated, given this is a copy.
Is it possible to duplicate a collection in DevOps on-prem? Or would any kind of duplication / testing require a whole new DevOps server instance?
PS. While not important to this question, it might shed some light on what I am trying to do. I wanted to see if the SQL changes suggested by someone here https://developercommunity.visualstudio.com/t/bring-inherited-process-to-existing-projects-for-a/614232 are actually worth it. I don't want to stop our main/production DevOps collection for obvious reasons, and instead wanted to duplicate it and run the SQL scripts on such a cloned collection...
I have a Firestore collection that I need to rename.
To do that I'll have to do two things: one, rename the collection; two, update my app (web only right now) to use the new collection name.
My problem is that if I just go ahead and do that, any user that has not refreshed the app won't be able to find the renamed collection.
So, my question is: Is there any best practice to handle this scenario?
I can think of a couple of options:
Somehow forcing a reload of the web apps immediately after renaming the collection.
Set a feature flag so that the web apps enter into maintenance mode while I update everything and then reload the web apps once the change is finished. Unfortunately the currently deployed web app doesn't have a maintenance mode to enable so this doesn't seem to be a valid solution.
However, I'd like to hear about other options. There might be some best practice that I'm missing. Moreover, I'm aware this problem might be more general than Firestore, for example when changing a REST API endpoint, so I guess there must be some tried and tested solutions out there.
I tried searching for best practices regarding this and couldn't find any.
Also, if I was consuming a REST API this would be easier to solve, because I could change the DB and keep the API unchanged. But given that Firestore gets consumed directly from the web app, I don't have this benefit.
Locking out outdated clients is a common practice, but leads to a lesser user experience. It also requires that you have a mechanism for the clients to detect that they're outdated, which you don't seem to have.
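For what it's worth, here's a rough sketch of what such a detection mechanism could look like: a config document holding the minimum supported client version, which clients subscribe to. The real thing would live in your web app with the Firestore JavaScript SDK; this sketch uses the C# SDK purely to match the rest of this page, and `CurrentAppVersion` / `PromptReload` are hypothetical placeholders.

```csharp
using System;
using Google.Cloud.Firestore;

public static class VersionWatcher
{
    const int CurrentAppVersion = 3; // hypothetical: baked in at build time

    public static FirestoreChangeListener Watch(FirestoreDb db)
    {
        // Subscribe to a well-known config document; whenever the minimum
        // supported version is raised past ours, the client knows it is
        // outdated and can ask the user to reload.
        DocumentReference config = db.Collection("config").Document("client");
        return config.Listen(snapshot =>
        {
            if (snapshot.Exists &&
                snapshot.GetValue<int>("minVersion") > CurrentAppVersion)
            {
                PromptReload(); // hypothetical: show a "please refresh" banner
            }
        });
    }

    static void PromptReload() =>
        Console.WriteLine("A newer version is required; please refresh.");
}
```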
The most common practice I know of is to perform dual writes to both the old and the new collection while clients are updating.
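Roughly, dual writes could look like the sketch below (again in C# for consistency; in reality this sits wherever your writes happen, e.g. the JS client or a Cloud Function, and the collection names are made up). Writing both documents in one batch keeps old and new clients consistent during the transition.

```csharp
using System.Threading.Tasks;
using Google.Cloud.Firestore;

public static class DualWriter
{
    // Writes the same document to both the old and the new collection in
    // a single atomic batch, so clients reading either collection name
    // see the same data while the rename is rolled out.
    public static async Task WriteBothAsync(
        FirestoreDb db, string docId, object data)
    {
        WriteBatch batch = db.StartBatch();
        batch.Set(db.Collection("oldCollection").Document(docId), data);
        batch.Set(db.Collection("newCollection").Document(docId), data);
        await batch.CommitAsync();
    }
}
```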
Is it possible to build an ASP.NET website using EF where each customer logging in has separately stored data? We have customers demanding that their data won’t be stored in the same tables as other customers’ data.
I’ve read that EF can’t work with several databases but is it possible to switch database at runtime depending on input parameters? I have a feeling it won’t be possible since the migration features are tightly connected to the database being used, but I'm not sure.
One solution could be to have a separate website deployment and database for each customer. They’ll get separate domains to access but that’s not a problem. But this solution feels a bit clumsy if you’re having many customers, especially with deployment and future upgrades.
Am I missing some smart ways of solving this or is this a very tricky issue?
Is the structure (of the db) the same?
if so you could switch connections - not w/o issues though, but should work. For details on how that should be done check the long discussion we've had here (and linked previous questions etc.)...
Code first custom connection string and migrations without using IDbContextFactory
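To illustrate the idea from that discussion, here is a minimal sketch of switching the database at runtime, assuming EF6 code-first; `GetTenantConnectionString` is a hypothetical lookup you'd implement against your own tenant registry.

```csharp
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class TenantContext : DbContext
{
    // EF6's DbContext accepts a full connection string directly,
    // so the target database can be chosen per request/login.
    public TenantContext(string connectionString) : base(connectionString) { }

    public DbSet<Customer> Customers { get; set; }
}

// At login time (GetTenantConnectionString is hypothetical):
// using (var ctx = new TenantContext(GetTenantConnectionString(tenantId)))
// {
//     var customers = ctx.Customers.ToList();
// }
```

The migrations caveat from the question still applies: code-first migrations assume a single target database, which is exactly what the linked discussion gets into.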
We have an ASP.NET web application and need to maintain the database creation and initialization script.
Are there any industry best practices that people know of for maintaining database creation and initialization scripts? I can think of two main approaches.
Maintain a tsql creation script directly by hand.
Maintain a master database and generate the script from it, which is then checked into SourceSafe.
Also the script should be able to be tracked through source control, i.e. table order should be controllable.
If possible, it should also include the ability to track initialization data, either in the same script or a separate one.
Currently we generate the script from management studio but the order of the tables seems to be random.
And the more automated the solution the better.
The problem is not maintaining the script, nor maintaining a 'master' copy of the database. The real problem is upgrading existing database(s). You do your modifications in the developer environment, which are then propagated to the test environment, and finally pushed into the production environment. While it is possible to start from scratch in the developer and test environments, in production you always have to upgrade the existing deployment.
In my experience the best practice is to use upgrade scripts. This practice is useful even with a single deployed site, but it becomes invaluable with multiple locations that may be at different versions. Even with one single operational site, it is still useful to be able to test the upgrade repeatedly (starting from backups of the current version), keep the changes in source control, and have a well formalized and peer reviewed change procedure (the upgrade script). And upgrade scripts can be tailored to the specific needs of the operational site, like handling a large table with special care, or dealing with encrypted data, or whatever one of the myriad details diff-based tools neglect or ignore. The main disadvantage is that the scripts have to be written, which requires real T-SQL knowledge (forget all the 'designers' in your favorite management tool).
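For concreteness, here is a bare-bones sketch of the kind of runner that applies such scripts in order. It assumes a numbered-file convention (001_add_column.sql, 002_..., ...) and a single-row SchemaVersion table; both conventions are invented for illustration, and a real runner would also need to split scripts on GO batch separators and wrap each one in a transaction.

```csharp
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class UpgradeRunner
{
    // Applies numbered upgrade scripts that are newer than the version
    // recorded in the database, in file-name order.
    public static void Run(string connectionString, string scriptDir)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            int current = (int)new SqlCommand(
                "SELECT Version FROM SchemaVersion", conn).ExecuteScalar();

            foreach (var file in Directory.GetFiles(scriptDir, "*.sql")
                                          .OrderBy(f => f))
            {
                int number = int.Parse(Path.GetFileName(file).Split('_')[0]);
                if (number <= current) continue;

                // NOTE: simplistic; does not handle GO separators.
                new SqlCommand(File.ReadAllText(file), conn).ExecuteNonQuery();
                new SqlCommand(
                    "UPDATE SchemaVersion SET Version = " + number, conn)
                    .ExecuteNonQuery();
            }
        }
    }
}
```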
You might want to check out RedGate SQL Source Control.
Are you looking for Visual Studio Database Projects?
I use database projects to store all database objects (tables, views, functions, keys, triggers, indexes across schemas) and keep versioning in TFS. You can build the database to ensure that everything is valid. You can deploy to a fresh database, or do a schema comparison with an existing database.
I also keep all reference and setup data in post deployment scripts which are automatically run after deployment.
We have an ASP.NET app with SQL Server; it is a photo & video sharing site.
Details of photos and videos are stored in tables & the files are in the file system.
Database has 75 tables and 225 stored procedures. The app will be ready for production deployment within next 6 months.
Due to long-term growth concerns, we decided to switch to a NoSQL (MongoDB) database.
We have a few questions regarding the best way to approach this:
Is it better to deploy the app with a SQL Server backend and migrate to NoSQL later?
OR re-architect now and rewrite/recreate the database, tables, procedures and data layer
How difficult will it be to re-architect/recode for MongoDB? Any tools or BKMs (best-known methods)?
EDIT:
Our app is a YouTube+Flickr type site where users will share photos and videos with lots of comments, tags and ratings (photo/video & comments).
Is NoSQL a better database to move to? Reason for moving: cost + read query speed
Please help me with your valuable advice.
Thank you very much.
Change is always exponentially more expensive the later it is introduced to a project. This is a core principle of software engineering. You should do this now.
That said, I question your long-term vision. Relational databases, used properly, have a lot of performance in them.
This question raises more questions than answers.
Have you benchmarked your current implementation in terms of requests/responses?
Why MongoDB out of all possible NoSQL databases? (Don't get me wrong, I love Mongo, but love and hype should not weigh in technology choices)
Are you certain you will get the large userbase you're expecting? Why are you so certain?
Using stored procs seems to tip off that you aren't using an ORM. Why not?
Generally, I'm against these types of re-architectures. Firstly, you need to get your whole team acclimated to how Mongo affects development. Secondly, your ops team needs to get acclimated to how to deploy and maintain a Mongo installation. More likely than not, this will prevent you from launching on the timeline you want.
I'd say that you should probably launch as is, fix the ORM part if you aren't using one, benchmark your app, benchmark a prototype of your app backed by Mongo, and if the performance advantages are so big that they warrant the pain of re-architecture, do it.
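A trivial way to start that benchmarking, sketched under the assumption that you've put both back ends behind the same data-access interface (`sqlStore` / `mongoStore` below are hypothetical):

```csharp
using System;
using System.Diagnostics;

public static class MicroBench
{
    // Times an operation n times and reports the mean latency.
    // Purely illustrative; a real benchmark needs warm-up runs,
    // production-sized data and concurrent load.
    public static void Measure(string label, int n, Action op)
    {
        op(); // warm-up
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) op();
        sw.Stop();
        Console.WriteLine(
            $"{label}: {sw.Elapsed.TotalMilliseconds / n:F3} ms/op");
    }
}

// Usage against two hypothetical data layers:
// MicroBench.Measure("SQL read", 1000, () => sqlStore.GetById(id));
// MicroBench.Measure("Mongo read", 1000, () => mongoStore.GetById(id));
```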
To your latter question, there aren't any tools right now, as far as I can tell, that'll automate or semi-automate the database import/export from SQL Server to Mongo. There are barely tools to do that for MySQL.
I've done such a migration a few months ago, during the early development stage of a website in ASP.NET. It was a hard decision, but I could concentrate on that migration. The reason I did the migration was an ORM that I couldn't trust anymore and some very slow queries that I had no idea how to optimize.
During the coding phase, what I figured out was: I was spending a lot of time on the data model in SQL Server (using Entity) and all the plumbing code.
Now, no more stored procedures (C# and LINQ code instead), and no more two layers to maintain (the code is the model).
My small experience says: the earlier the better. But don't get me wrong, before migrating you really have to think in documents rather than in RDBMS terms. This means you may have to partially change the business data model to correctly utilize MongoDB features; otherwise you can get bad performance, and MongoDB is useless with a bad model.
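To make "thinking in documents" concrete for a photo/video site like the asker's: comments, tags and ratings that were separate joined tables become arrays embedded in the photo document, so one read fetches the whole page. A rough sketch with the official MongoDB C# driver (the class shapes are invented for illustration):

```csharp
using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

public class Comment
{
    public string Author { get; set; }
    public string Text { get; set; }
    public int Rating { get; set; }
}

public class Photo
{
    public ObjectId Id { get; set; }
    public string Title { get; set; }
    public string FilePath { get; set; }   // files stay on the file system
    public List<string> Tags { get; set; } // embedded instead of a join table
    public List<Comment> Comments { get; set; }
}

// Usage:
// var client = new MongoClient("mongodb://localhost:27017");
// var photos = client.GetDatabase("site").GetCollection<Photo>("photos");
// await photos.InsertOneAsync(photo);
```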
Another point is the admin stuff. You'll have to learn MongoDB administration quickly to be up to speed. And even though the tools are good, they differ completely from the SQL Server tools.
In conclusion: if you're convinced MongoDB is your future data store and search database (and it was in my case), read the documentation and take the time to do a proof of concept. Then you can think in documents and load test your new model.
Your core question appears to be whether to make the switch to MongoDB now, or deploy on SQL and go to MongoDB in a future release.
You do not appear to be using an ORM (e.g. NHibernate, Entity Framework). Setting other concerns aside, if you're convinced that you want to go to NoSQL, then I would do it now rather than later. Unless you integrate a Provider model for your data access, changing the underlying data access strategy after it is already established would be difficult.
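A minimal sketch of that provider-model seam: code against an interface now, and the storage engine becomes swappable later (all names here are hypothetical).

```csharp
public class Photo
{
    public string Id { get; set; }
    public string Title { get; set; }
}

// The app codes only against this seam, so the SQL Server implementation
// can later be replaced by a MongoDB one without touching the callers.
public interface IPhotoStore
{
    Photo GetById(string id);
    void Save(Photo photo);
}

public class SqlPhotoStore : IPhotoStore
{
    public Photo GetById(string id) { /* ADO.NET + stored procs */ return null; }
    public void Save(Photo photo) { /* ... */ }
}

public class MongoPhotoStore : IPhotoStore
{
    public Photo GetById(string id) { /* MongoDB C# driver */ return null; }
    public void Save(Photo photo) { /* ... */ }
}
```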
I agree. Switching now is better, if only to avoid the data migration headache switching post-deployment will require.
What is the best way for developing a database based application? We can have two approaches.
One common database for all the developers.
Separate database for all the developers.
What are the pros and cons of each? And which one is the better way?
Edit: More than one developer is supposed to update the database, and we already have SQL Express 2005 on each developer machine.
Edit: Most of us are suggesting a common database. However, if one of the devs has modified the code and the database schema, has not committed the code changes, but the schema changes have gone to the common database, won't that possibly break the other developers' code?
Both -
I like a single database that changes are tested on before going live, or going to a 'formal' test environment. This is your developers' sanity check; it stays up to date with the live system and it makes sure they always consider each other's changes. The rule should be that changes don't go on here if they might break something else.
A database per developer is great (even essential) when more than one developer is making updates. It allows them all the development flexibility they want without breaking things for other developers.
The key is to have a process for moving database changes from development through to your live system, and stick to your process.
Shared database
Simpler
Fewer cases of "It works on my machine".
Forces integration
Issues are found quickly (fail fast)
Individual databases
Never affects other developers, but that is also a bad thing for continuous integration
We use a shared development database and it works out nicely. Our schema rarely changes in a way that makes it backwards incompatible, but occasionally a design change will occur before we go live, and we simply ask the other developers to update.
We do have separate development application (web) servers, but they share the same database. Our developers do have the option to use their own database, as they know how to set this up, and will do that on occasion, but only temporarily. The norm, for us, is to share the database.
Thought I'd throw this out there, but why not let every developer host their own instance of SQL Server Developer on their desktops and then have a shared server for each of the other environments (development, QA, and prod)? I think even the basic MSDN that comes with Visual Studio Pro (if you opt for it) includes a license for SQL Server Developer.
The developer can work on their desktop without impacting the others and then you can have them move the code to the next shared environment as you see fit (at will, with daily/weekly builds, etc.).
EDIT:
I should add that the desktop instance allows developers to do things that DBAs often restrict on shared environments. This includes database creation, backup/restore, profiler, etc. These things are not essential, but they allow the developer to be so much more productive while reducing the demands they make on your DBAs.
The shared environment is completely necessary for testing - I would not recommend going from desktop to production. But you can add so much by allowing the developers to have 100% control over a given database environment (including isolation from others) with a relatively minor cost.
Depends on your development, testing and maintenance cycles. Also on the size and location of the development team (and of course organization). If you support several versions of the database you might need even more environments.
In real world I found the following approach rather satisfying:
a single central database/application for testing purposes, into which the changes from the various developers are periodically merged
local copies for development (so you are free to drop and reload the whole database)
upgrade scripts are maintained for any changes to schema, auxiliary and sample data sets
Here are some further points:
If two developers (two teams) are working on changes that can affect each other then they should complete their tasks independently and then integrate/merge and test. For this it is much better to have separate development environments (unless they have to work together in which case I consider them to be a part of the same team; still they can work on their own copies of the database and share it if necessary)
If they work on changes that do not influence each other, they can work on the main server, or on their own local copies of the database.
So, developing on the local copy has all the benefits with no risk in a general case (when you support multiple versions of the system and maintain upgrade scripts anyway).
Still, it is great if you can share test cases, so the ability to dump/restore the database easily and quickly is a big plus.
EDIT:
All of the above assumes that having a copy of the whole system on the local machine for testing purposes is feasible (size, performance, licenses, etc.).
I would opt for solution #1 : One common database for all the developers.
Pros
Less expensive for the infrastructure;
Only one dump is required when it's time to refresh the development database;
Everyone develops with the same data, so it closely represents the production environment;
Cons
If one developer performs a bad operation, this could impact a large number of developers.
As for solution #2: One independent database for each of the developers;
Pros
This could be useful for new features developments, when development requires isolation;
Cons
More expensive for the company (infrastructure, licences...);
Multiplication of problems caused by overly isolated development environments (works in the developer's environment, but is not integrated);
Multiplication of dumps of the same production copy by the DBAs.
Considering the above, I would recommend, depending on your company size:
One database for development;
One database for testing the integration;
One database for acceptance tests;
One for new feature development that will perhaps require integration tests.
If your company doesn't require integration tests, then go with acceptance tests; this step is crucial before going to production.
One per developer plus a continuous integration and build server to run unit and integration tests. That gives you the best of both worlds.
Having all developers modify a single dev database quickly becomes less productive once the amount of database change reaches a certain level, because it forces a developer to deploy changes to the shared database before he is ready to check in, which means other parts of the code line may break unnecessarily.
Simple answer:
Have one development database, and if the developers want their own, they can just run their own instance on their own machines. Just be sure to test/publish on the shared one.
We do both:
We use code generation where I'm at and our database is generated as well. So we have an instance on each developer's box where the database is generated. Then we use the scripts that are generated to apply the changes to a central test database. If that goes well we apply the changes to the production database during a release.
What's nice with this approach is that when our "source of truth" is checked in to source control, all the database changes are automatically distributed to the other developers when they rebase and regenerate. It works well for us.
The best way is a single database on the Test/QA server and one database (probably on the developer's local computer) for each developer (so 10 developers work with 10 + 1 databases).
It's the same approach as for general development: each developer has their own copy of the source code on the local machine.
Also, the multiple-database approach simplifies keeping the database schema in version control. We keep database creation scripts in SVN.
We are using the approach, described here:
http://www.sqlaccessories.com/Howto/Version_Control.aspx
You might also want to look at Refactoring Databases. Aside from discussing database changes, it includes discussions of going from development to production in a way that reduces risk.
Why on earth would you want a separate database for all developers?
Have one common database for all; that way the table structure is consistent, and so are the SQL statements.
The biggest problems with developers having their own databases are:
First, it is unlikely to be the size of the real production database (if you take all the databases we need to work with here, they would take up several hundred gigabytes of space; I don't have that available on my machine). This causes bad code to be written that will never work on a large database for performance reasons. SQL code should never be written against a data set significantly smaller than the one on prod.
Second, developers who use their own database create problems when they spend a long time developing something and then find out only after they merge with a real database that it affects something else. You find this stuff much faster when you share the environment, so in the end there is less wasted development time.
Third, developers working on related things need to know about the changes you are making, as it will affect their change.
When you know you are going to affect others, I think you tend to be more careful about what you do, which is a plus in my book.
Now, the shared database server should have what we call a scratch database: a place where people can create and test table changes. If they are doing something that might need to drop and recreate a table (which should be a rare case!), they can test the process first by copying the table to the scratch database, running their process there, and then changing to the real database once they are sure it works. We also often copy a backup table to the scratch database before testing a particular change, so we can easily recreate the old data if it goes bad.
I see no advantages at all to using individual databases.