Improve productivity when working on a shared salt-stack repository with multiple team members?

My team uses salt-stack to manage our fleet of servers. Currently we have a single master and two GitHub repositories for our SLS logic and Pillar data. The issue is that when a team member needs to do work, they have to "check out" the salt master so they can work uninterrupted on their branch. My question is: what are some better ways to solve this problem so that multiple team members can work on multiple branches without having to check out the master?
One thing I've thought of is setting up multi-master so that we may have multiple "development" salt masters and then a true salt master that we execute automation from.
Any thoughts or feedback on the setup is welcomed!

One thing to do is separate out the dev, test, and prod environments completely, like you are thinking. That will at least insulate the prod environment from any testing accidents. Next, implement localized development systems, such as kitchen-salt or local VMs running mini versions of your dev environment. That makes development faster because developers are not relying on the centralized dev system, which should be reserved for more developed code and used for integration testing rather than day-to-day development. If possible, get a CI/CD system that can spin up tests against PRs to the git repos, so you are testing the changes before you test the merge. Basically: local dev -> PR -> dev -> test -> prod, with testing at each step.
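As a rough illustration of the "local dev" step, here is a minimal sketch (assuming salt-call is installed locally and the SLS and pillar repos are checked out at hypothetical ./states and ./pillar paths, with "mystate" as a placeholder state name) that dry-runs a state tree masterless, so a developer or a CI job can validate a branch without ever touching the shared master:

```python
#!/usr/bin/env python3
"""Sketch of a local/CI check for a SaltStack repo: dry-run states from a
branch checkout without a salt master. Paths and the state name are
placeholders for your own repo layout."""
import subprocess
import sys

REPO_STATES = "./states"   # hypothetical checkout of the SLS repo
REPO_PILLAR = "./pillar"   # hypothetical checkout of the pillar repo

def dry_run(target_state="mystate"):
    """Run a masterless dry-run (test=True) of one state tree."""
    cmd = [
        "salt-call", "--local",            # masterless: no salt master needed
        f"--file-root={REPO_STATES}",      # point Salt at the branch checkout
        f"--pillar-root={REPO_PILLAR}",
        "--retcode-passthrough",           # non-zero exit code if states fail
        "state.apply", target_state, "test=True",
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(dry_run(sys.argv[1] if len(sys.argv) > 1 else "mystate"))
```

The same script can be wired into the PR pipeline so every branch gets at least a render/dry-run check before it is merged and promoted to the shared dev environment.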

Related

What is the best practice for CI/CD when you use ML Studio APIs?

In our backend development process, we have two environments: testing and production. We develop our code, and then we push the code into the testing repository. Then on the release date, we push everything into production.
Now that we are going to use ML studio, I'm struggling with setting up testing and production environments for my ML studio experiments.
I created two identical experiments with independent APIs; one experiment is for testing and the other is used in production. When it comes to moving the trained experiment from testing to production, I re-apply all the changes I made in the testing environment to the production environment, which is a very time-consuming process.
Do you know of a better solution so we can deploy and test our changes and then deploy the latest changes to production? How do people use ML Studio in their CI/CD process?
The attached image shows the design I have now. I'd appreciate it if you could help me improve this process. Maybe ML Studio has some features to manage this scenario that I don't know about.
In ML Studio, when you publish an experiment as an API, the current API gets replaced by the new one. One practice you can follow is this:
Maintain a test experiment.
Maintain an identical copy of that and push it for the production.
When you are done with the changes in the test experiment, keep it as it is (so you can change it later), make a copy of it (using Save As), and publish the copy as the production service.
There are some drawbacks to this too. You have to update the API endpoints in the production code when you release, and you may have to document the versions going to production manually. The only advantage is that the time spent keeping two experiments updated goes away.
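Since updating the endpoint at release time is the painful part, one option is to keep the scoring URL and API key in per-environment configuration so the calling code never changes. Below is a minimal sketch of that idea against a classic ML Studio request/response endpoint; the URLs, keys, input name, and column names are all placeholders and will differ for your experiment:

```python
"""Sketch: keep the ML Studio (classic) endpoint and key per environment so the
same scoring code can point at the test or the production web service."""
import json
import urllib.request

# Hypothetical per-environment settings; in practice read these from a config
# file or environment variables so a release only swaps the config.
ENDPOINTS = {
    "test": {"url": "https://<region>.services.azureml.net/.../execute?api-version=2.0",
             "api_key": "<test-api-key>"},
    "prod": {"url": "https://<region>.services.azureml.net/.../execute?api-version=2.0",
             "api_key": "<prod-api-key>"},
}

def score(rows, environment="test"):
    """POST one batch of rows to the chosen environment's scoring endpoint."""
    env = ENDPOINTS[environment]
    body = json.dumps({
        "Inputs": {"input1": {"ColumnNames": ["feature1", "feature2"],  # placeholders
                              "Values": rows}},
        "GlobalParameters": {},
    }).encode("utf-8")
    req = urllib.request.Request(
        env["url"], data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + env["api_key"]})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example: score([["1.0", "2.0"]], environment="prod") after a release.
```

This doesn't remove the manual "Save As and publish" step, but it reduces the release to a configuration change on the consuming side.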

Proper DTAP setup for Content Delivery

I've had this setup, but it didn't seem quite right.
How would you improve Content Delivery (CD) development across multiple .NET (customer) development teams?
CMS Server -> Presentation Server Environments
CMS Production -> Live and Preview websites
CMS Combined Test + Acceptance (internally called "Staging") -> Live ("Staging")
CMS Development (DEV) -> Live (Dev website) and sometimes Developer local machines (laptops)
Expectations and restrictions:
Multiple teams and multiple websites
Single DEV CMS license (typical for customers, I believe?)
Enough CD licenses for each developer
Preferably, developers could program and run changes locally -- was this a reasonable expectation?
Worked
We developed ASP.NET pages using the Content Delivery API against the same broker database for local machines and CD DEV. Local machines had CD dlls, their own license files, and ran/debug fine with queries and component presentation calls.
Bad
We occasionally published to both the Dev presentation server and developer machines, which doesn't seem right in hindsight, but I think it was to get schema files onto our local machines. And yes, we didn't trust the Dev broker database.
Problematic:
Local machines sometimes needed Tridion-published pages but we couldn't reliably publish to local machines:
Setting multiple publication destinations for a single "Local Machine" publication target wouldn't work--we'd often take these "servers" home.
VPN blocked access to laptops offsite (used "incoming" folder at the time).
Managing publication targets for each developer and setting up CD for each new laptop was good practice (as in an exercise, not necessarily a good idea), but a little tedious.
Would these hindsight approaches apply?
Synchronize physical files from Dev to local machines on our own?
Don't run presentation sites locally (localhost) but rather build, upload dll, and test from Dev?
Were we simply missing a fourth CMS environment? As much as we liked our Sales Guy, we weren't interested in purchasing another CM license.
How could you better setup .NET CD for several developers in an organization?
Edit: @DominicCronin pointed out this is only a subset of a proper DTAP setup. I updated my terms and created a separate question to clarify DTAP with Tridion.
The answer to this one depends heavily on the publishing model you choose.
When using a dynamic model with a framework like DD4T, you can suffice with just a single dev environment. There is one CMS and one CD server in that environment, and everything is published to a broker database. The CD environment can be used as an auto-build system; the developers work purely locally on a localhost website (which gets its data from the dev broker database), and their changes are checked into a VCS (from which the auto build can be done).
This solution can make do with only a single CMS because hardly any code is developed on the CMS side (templates are standardized and all the work is done on the CD side).
It gets more complex if you are using a static or broker publishing model. Then I think the solution is to split Dev up into Unit-Dev and Dev, as indicated by Nuno and Chris.
This solution requires coding on both the CMS and CD sides, so every developer benefits hugely from having their own local CMS and CD environment.
Talk to your Tridion account manager and agree on a license package that suits the development model you want to have. Of course, they want to maximise their income, but the various things that get counted are really meant to ensure that big customers pay accordingly, and that smaller customers get something they can afford at a price that reflects the benefits they get. In fact, setting up a well-thought-out development pipeline with a focus on quality is the very thing that will ensure good customer satisfaction and a long-running engagement.
OK - so the account managers still have internal rules to follow, but they also have a fair amount of autonomy in coming to a sensible deal with a customer. I'm not saying this will always work, but it's way better than blindly assuming that they are going to insist on counting every server the same way.
On the technical side - sure, try to have local developer setups and a common master dev server, à la Chris's 5th. These days, your common dev environment should probably be seen as a build/integration server: the first place where the team guarantees all the tests will run.
Requirements for CM and CD development aren't very different, although you may be able to publish to multiple developer targets from one CM if there's not much CM development going on. (This is somewhat true of MVC-ish approaches, but it's no silver bullet.)

Selective builds using TFS with ASP.NET

I am currently trying to setup our complete development process (from dev to production).
We will be using Microsoft Team Foundation Server, and I was wondering if there is a way to choose which versions of which programs go into a build.
Let's say we are 20 programmers working on the same project and we only want to deploy changes done by one or two programmers. Is there a way to do that?
I was thinking about using continuous integration to our dev/QA server and then deploying what is ready and fully tested to our production servers.
Thanks for your help!
we only want to deploy changes done by one or two programmers. Is there a way to do that?
This leads to a much larger discussion of your branching and merging strategy. Basic answer: I suggest you have those developers work in their own dev branch and publish from there when ready. Keep other devs out of that branch either by convention or by setting up security measures.
Their branch can be merged with the other developers' work at some point. That could be straightforward, or a fun time for whoever manages and resolves the merge conflicts.
Re: the comment
Are there any other ways of managing the development cycle of an ASP.NET application? It is very important that I can deploy what I want, when I want.
Yes, you can absolutely pick and choose which features you want in a release/branch/build. Suggest looking into creating branches for the 'streams' of development, and merging into a 'main' branch from those 'dev' branches. You can have many concurrent branches merging into one branch, culminating in the mix of features you want.

Development and Test Environment Best Practices?

This question is for ASP.NET and SQL Server developers. What are your best practices with respect to setting up your development and test environment? I'm interested in the following issues:
How many tiers do you recommend, and what goes on each tier? Just dev, test, and production, or perhaps dev, test, staging, and production?
Which types of applications and/or servers should run on actual physical hardware and which can get away with a VM?
What are your strategies for loosely coupling users from web sites, web developers from their web/app/DB servers, and DB developers from their DB servers?
How do developers stay "DRY?" (no deodorant jokes, please ;)
What are the pros and cons to putting web, app, and DB servers on their own machines? Does putting servers on separate machines in order to minimize contention for a machine's resources trump any NIC and network latencies that might be introduced by putting them on different machines?
How do you configure your web apps to minimize contention for resources (e.g. virtual directories, separate application pools, etc.)
How and how often do you refresh your databases on each tier? Do you just refresh the data or both the data and objects?
Thanks.
I can't comment on all of these, but here's what I've found to work best in my experience.
1) Depends on your resources, but ideally I like to have four:
Dev is hyper-flexible and owned by your dev team. It can be updated whenever they feel is best, or as features are completed.
QA is updated on a scheduled or per-delivery basis depending on your process. If you do waterfall, it's updated when you're in the testing phase; if you do iterative agile, it's updated each iteration. It should mimic prod as closely as possible, but you may be able to get away with some compromise (see #2).
Staging should be identical in every way to prod. It should even use real production data if possible (potentially restored from a recent backup of the true production environment). It should be used for acceptance testing prior to any release.
& Prod
2) Dev can usually be on a VM. QA can too, most of the time. Staging and prod should match. I've seen folks run prod on VMs before; it depends on your resources and the demand for your app.
3) Our devs use a backup of prod on local SQL Servers for development. This keeps everyone off of a central dev SQL server. Dev web and dev SQL are separate boxes (just out of necessity; they manage a bunch of projects). Same with QA, Staging, and Prod. (See the refresh sketch after this list.)
4) A lot of testing and communication. If you have one small/medium team this isn't that hard. If you have lots of teams, look at something like Scrum, formal code reviews, something to keep communication going between teams. Don't treat DRY issues like suggested fixes; treat them like bugs that need to be fixed. You'll spend way more time maintaining the code than writing it up front, so treat maintenance as a first-class citizen and make sure management is on board with that.
5 & 6) Not really qualified to comment
7) Dev is refreshed whenever the teams need it; QA and up are on a schedule depending on deployments. QA is every iteration/sprint; Staging and Prod are every release.
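For points 3 and 7, the refresh of a local dev database from the latest production backup can be scripted. Here is a rough sketch assuming pyodbc, a local SQL Server instance, and placeholder database name, backup path, and logical file names (yours will differ); in practice you would also scrub any sensitive production data after the restore:

```python
"""Sketch: refresh a local dev database from a recent production backup.
All names and paths below are placeholders; adjust the driver and connection
string for your environment."""
import pyodbc

BACKUP_FILE = r"C:\backups\prod_latest.bak"   # hypothetical backup location
RESTORE_SQL = f"""
RESTORE DATABASE [AppDb_Dev]
FROM DISK = N'{BACKUP_FILE}'
WITH REPLACE,
     MOVE 'AppDb'     TO N'C:\\SQLData\\AppDb_Dev.mdf',
     MOVE 'AppDb_log' TO N'C:\\SQLData\\AppDb_Dev_log.ldf';
"""

def refresh_local_dev_db():
    # RESTORE cannot run inside a transaction, so connect with autocommit.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=master;Trusted_Connection=yes;", autocommit=True)
    try:
        conn.execute(RESTORE_SQL)
        # A data-scrubbing step for sensitive production values would go here.
    finally:
        conn.close()

if __name__ == "__main__":
    refresh_local_dev_db()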

How to Improve Web Development Using Virtualization in ASP.NET?

Improving Web Development Using Virtualization
https://web.archive.org/web/20090207084158/http://aspnet.4guysfromrolla.com:80/articles/102908-1.aspx
Virtualization is, in essence, creating multiple miniature (virtual) PCs inside your primary PC. One of the great benefits of this is that it allows you to isolate and test an application or set of applications in an environment that is free of anything that could interfere. It used to be that in order to get a new machine with a new development environment on it, you had to have another piece of hardware, or you had to rebuild your system for the new environment. With virtualization, you simply install the new environment you need into one of the virtual machines and run it as necessary. When you're done, you can shut it down.
Virtualization is the ultimate in isolation -- it can allow you to do things on one piece of hardware that are simply not possible without it. For instance, you may need to install software in a test environment on a member server because it won't run on a domain controller. You simply fire up two virtual machines at the same time -- one being the domain controller and the other being the member server. Both virtual machines can run on the same physical hardware at the same time without either being aware that they are sharing a machine. The result is a quick way to implement testing environments.
Virtualization technology allows virtual systems to be frozen in place. In other words, the exact state the machine is in can be frozen for an indefinite period of time. If you work on one project until it's released and stable and need to come back in a year and start working on it again, you can freeze the system when you stop working on the project and then restart it a year -- or more -- later. When the system is restarted, it will be as if no time had passed; the system is restored exactly as it was left.
This particular feature is great for developers who support multiple systems including consultants who have different clients with different projects that they will have to support over time. You don't have to worry about recreating an environment to test a bug fix; you simply thaw out your virtual machine and go.
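As a concrete stand-in for that freeze/thaw idea, here is a small sketch that drives VirtualBox's VBoxManage CLI from Python (a modern substitute for the era-specific tooling the article had in mind; the VM and snapshot names are placeholders):

```python
"""Sketch: park a project VM with a named snapshot and restore it later.
Assumes VirtualBox is installed and VBoxManage is on the PATH."""
import subprocess

VM_NAME = "client-project-dev"      # hypothetical VM name
SNAPSHOT = "release-1.0-stable"     # hypothetical snapshot name

def freeze():
    """Take a named snapshot so the whole environment can be parked."""
    subprocess.check_call(["VBoxManage", "snapshot", VM_NAME, "take", SNAPSHOT])

def thaw():
    """Restore the snapshot later and pick up where you left off.
    The VM must be powered off before restoring a snapshot."""
    subprocess.check_call(["VBoxManage", "snapshot", VM_NAME, "restore", SNAPSHOT])
    subprocess.check_call(["VBoxManage", "startvm", VM_NAME, "--type", "headless"])
```

Discarding a snapshot instead of keeping it gives roughly the same effect as the undo disks described next.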
Virtualization programs have a feature described as undo disks. Undo disks allow you to operate on the system, and if you decide that you don't want to save your work, you simply don't accept the changes in the undo disks. Poof. Like magic, everything that you did is undone and it's like it never happened.
