Not sure if I'm asking this correctly, but how can you separate states between different teams? As in, how can an infrastructure team use Salt to build out machines (add users, networking, etc.) and keep their states separate from a release engineering team's states (installing applications, third-party dependencies, etc.)?
Ideally, I would like to keep the states separate enough that release engineering could run a highstate that applies all of their states across the environment, while infrastructure could run a separate highstate to apply their states to the environment.
Is this possible? Any chance anyone can point me in the direction of some documentation that goes over how to build this out?
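For what it's worth, the usual mechanism for this kind of split is Salt's environments feature: the master's `file_roots` can define separate environments (say, `infra` and `releng`), each with its own `top.sls`, and each team scopes its highstate to its own `saltenv`. Below is a minimal sketch using Salt's Python client; the environment names, the `'*'` targeting, and the assumption that both environments are already defined in the master config are illustrative, not something taken from this question.

```python
# Hedged sketch: assumes /etc/salt/master defines two file_roots environments,
# e.g. "infra" and "releng", each with its own top.sls mapping minions to states.
# Equivalent shell commands:  salt '*' state.highstate saltenv=infra
#                             salt '*' state.highstate saltenv=releng
import salt.client

local = salt.client.LocalClient()  # must run on the salt master with sufficient privileges

# Infrastructure team: applies only the states mapped in the "infra" top file.
infra_result = local.cmd("*", "state.highstate", kwarg={"saltenv": "infra"})

# Release engineering: applies its own top file, independent of infra's.
releng_result = local.cmd("*", "state.highstate", kwarg={"saltenv": "releng"})

print(infra_result)
print(releng_result)
```

The idea is that each team keeps its states and top file under its own environment (and, if you like, its own repo), and scopes its highstate runs to that environment.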
Related
My team uses SaltStack to manage our fleet of servers. Currently we have a single master and two GitHub repositories for our SLS logic and Pillar data. The issue is that when a team member needs to do work, they have to "check out" the salt master so that they can work uninterrupted on their branch. My question is, what are some better ways to solve this problem so that we can have multiple team members working on multiple branches without having to check out the master?
One thing I've thought of is setting up multi-master so that we may have multiple "development" salt masters and then a true salt master that we execute automation from.
Any thoughts or feedback on the setup is welcomed!
One thing to do is separate out the dev, test, and prod environments completely, like you are thinking. This will at least insulate the prod environment from any testing accidents. Next, implement localized development systems, such as kitchen-salt or local VMs running mini versions of your dev environment. This will make development faster, since developers are not relying on the centralized dev system, which should be reserved for more mature code and used for integration testing rather than day-to-day development. If possible, get a CI/CD system that can spin up tests against PRs to the Git repos, so you are testing the changes before you test the merge. Basically: local dev -> PR -> dev -> test -> prod, with testing at each step.
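One concrete form the "tests against PRs" step can take is a small pytest + testinfra suite that the CI job runs against a VM or container after applying the PR's states to it. This is just a hedged sketch of that idea; the package, service, and port names are placeholders, not anything from this thread.

```python
# Hedged sketch: pytest-testinfra checks CI could run against a test minion
# after the PR's Salt states have been applied to it. The "host" fixture is
# provided by the pytest-testinfra plugin; names below are placeholders.
# Example invocation:  py.test --hosts='ssh://test-minion' test_webserver_state.py


def test_nginx_installed(host):
    assert host.package("nginx").is_installed


def test_nginx_running_and_enabled(host):
    svc = host.service("nginx")
    assert svc.is_running
    assert svc.is_enabled


def test_listening_on_http(host):
    assert host.socket("tcp://0.0.0.0:80").is_listening
```

A failing check on the PR then blocks the merge before anything reaches the shared dev or test environments.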
I am exploring possible solutions for orchestrating my flows across multiple services via some infrastructure. Searching shows me a few options such as Conductor, Camunda, Airflow, etc.
I am wondering what would fit my use case better:
One of my services is in Java, the other is in Python
I need to pass info to the Java service, then take the output and pass it to the Python service
Final output is then published to another queue
It feels like Conductor is a good choice, but would love to hear your inputs!
All options can fulfill the stated requirement. Think about further/future requirements. Is it only a data pipe? Is it about orchestrating a larger end-to-end business process? Do you need support for long-running processes? Is end-to-end transparency in a graphical form a benefit? Is graphical process modelling in the BPMN 2 standard going to be a benefit? Are there going to be audit or reporting requirements? Or is it going to be a simple, isolated, technical solution?
This article gives a great overview of tools in the market and what their primary use cases are: https://blog.bernd-ruecker.com/understanding-the-process-automation-landscape-9406fe019d93
All listed tools might technically be able to execute your workflow (I have no experience working with Conductor & Camunda). A few characteristics on which a decision is usually made are:
open vs closed source
how do you define workflows? (e.g. Python code in Airflow. Others use e.g. JSON/XML/something custom)
does it come with a UI?
can it scale out in case my workloads start growing?
is it agnostic to any technology or limited to running certain technologies? (e.g. Oozie is built for scheduling jobs on Hadoop)
other requirements could be e.g. security, logging, monitoring, etc.
There are many orchestration tool comparisons on the internet, e.g. 1 or 2.
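To make one of these options concrete, here is a minimal sketch of the pipeline from the question (call the Java service, feed its output to the Python service, then publish the result to a queue) written as an Airflow DAG. The service URLs, task names, and HTTP/queue details are hypothetical placeholders, not anything from this thread; Conductor or Camunda would express the same flow in their own workflow definitions.

```python
# Hedged sketch of the described flow as an Airflow DAG (Airflow 2.x style).
# Endpoints and the queue step are placeholders; swap in your real clients.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def call_java_service():
    # Assumption: the Java service is reachable over HTTP.
    resp = requests.post("http://java-service.internal/process", json={"input": "example"})
    resp.raise_for_status()
    return resp.json()  # the return value is stored as an XCom for the next task


def call_python_service(ti):
    java_output = ti.xcom_pull(task_ids="call_java_service")
    resp = requests.post("http://python-service.internal/process", json=java_output)
    resp.raise_for_status()
    return resp.json()


def publish_to_queue(ti):
    result = ti.xcom_pull(task_ids="call_python_service")
    # Placeholder: publish `result` to your queue (Kafka, RabbitMQ, SQS, ...).
    print("publishing to queue:", result)


with DAG(
    dag_id="java_then_python_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    java_task = PythonOperator(task_id="call_java_service", python_callable=call_java_service)
    python_task = PythonOperator(task_id="call_python_service", python_callable=call_python_service)
    publish_task = PythonOperator(task_id="publish_to_queue", python_callable=publish_to_queue)

    java_task >> python_task >> publish_task
```

If several of the criteria listed above (UI, BPMN modelling, audit trails, long-running processes) matter, the equivalent Conductor JSON or Camunda BPMN definition would look quite different, and that difference is usually what tips the decision.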
Introduction to Container Orchestration
Container orchestration is the practice of automating the administration of container-based microservice applications across different clusters. The approach is gaining popularity within corporations, and a variety of container orchestration tools have become indispensable in the deployment of microservice-based applications.
Modern software development is no longer monolithic. Instead, it produces component-based applications that run across many containers. These adaptable, scalable containers work together to accomplish a specified purpose or microservice.
Depending on the complexity of the application and other requirements like load balancing, they may span many clusters.
Containers encapsulate application code as well as its dependencies. To function efficiently, they receive the resources they require from physical or virtual hosts. When complicated systems are built as containers, clustering them for deployment requires adequate management and prioritization.
How to Choose a Container Orchestration Tool?
We've looked at a number of orchestration tools that you may examine when selecting which is ideal for your business. To do so, make sure you understand your company's requirements and operations. Then you'll be able to more readily weigh the benefits and drawbacks of each option.
Kubernetes
Kubernetes has a lot of features and is ideally suited for container and cluster management at the corporate level. Managed Kubernetes offerings are available from a number of platforms, including Google, AWS, Azure, Pivotal, and Docker, so you have a lot of options as your containerized workload grows.
The biggest disadvantage is that it does not work with Docker Swarm or Docker Compose CLI manifests. It can also be difficult to understand and set up. Despite these flaws, it is one of the most widely used systems for cluster deployment and management.
Docker Swarm
For individuals who are already familiar with Docker Compose, Docker Swarm is a better option. It's easy to use and doesn't require any additional software. Unlike Kubernetes and Amazon ECS, however, Docker Swarm lacks sophisticated features such as built-in logging and monitoring. As a result, it is better suited to small-scale businesses that are just getting started with containers.
Amazon ECS
If you're already familiar with Amazon Web Services, Amazon ECS is a great way to install and configure clusters. It's a quick and easy way to get started, it scales to match demand, and it connects with a number of other AWS services. It's also excellent for small teams with limited resources for container maintenance.
One of its disadvantages is that it is incompatible with nonstandard deployments. It also relies on ECS-specific configuration files, which complicates debugging.
We have been working on an incremental project for 4 or 5 years using the technologies mentioned at the end.
The project has been growing, and now I feel our methodology is not effective enough. Until now, every programmer who has worked on the project has had to learn the entire layer structure and the technologies surrounding it, and every new feature is assigned to a single person.
So we are slipping on delivery times, it's really hard to train someone and make them productive, and people on the team feel overwhelmed. I don't think it's a matter of money or resources. A debate is on, and I really feel we should work in pairs and in layers, becoming specialized in certain areas and working as a team. However, some argue that we can't work in layers, because a person might not be able to finish his part, since he won't be able to test it until the other member is done with his layer. Right now we are only 3 programmers.
So if you think these suggestions make sense, what I need are some concise, practical references on how we can turn this into a more positive dynamic as a team and how to work in layers with these technologies. I need practical solutions and arguments so we can turn the ship in the right direction. Can anyone point us in the right direction? It will be deeply appreciated. Thank you in advance!
Technologies:
Backend: Java + Spring + Hibernate + MySQL
Frontend HTML: JSTL + HTML
Frontend Flex: Flex SDK 3.5 + BlazeDS, Cairngorm, third-party libraries and sources
Development OS: Mac or Windows
Development Tools: Trac for management, SVN repository
Production Environment: Linux (Debian or CentOS), Tomcat 5.5
Tools: IntelliJ and Flash Builder
This is a pretty open-ended question, with no real "right" answer, I think. One thing that can help enable working independently on different layers is to first design a contract/interface between the layers. Then you can work on both layers independently: on one side working to fulfill the contract/interface, and on the other side working to build on the data/functionality provided by the contract. You can start out with a mock implementation of the contract/interface on one side, and a mock consumer of the data/functionality on the other side. This can work within your Java/Spring/Hibernate/MySQL backend as well as across the backend and frontend. You're still going to have times where you need to actually integrate your layers and test that integration, which will create dependencies between the completion of work in different layers.
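To illustrate the contract-first idea with a small, hedged sketch (written in Python only for brevity; in your Java/Spring stack the same shape is an interface plus a mock bean or stub, and all the names below are made up for illustration):

```python
# Hedged sketch of the contract-first idea; names (OrderService, Order, etc.)
# are hypothetical. In Java this would be an interface plus a mock implementation.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Order:
    order_id: int
    total: float


class OrderService(ABC):
    """The agreed contract between the backend layer and the frontend layer."""

    @abstractmethod
    def get_order(self, order_id: int) -> Order: ...


class MockOrderService(OrderService):
    """Stand-in the frontend developer can code against before the real
    backend implementation exists."""

    def get_order(self, order_id: int) -> Order:
        return Order(order_id=order_id, total=42.0)


def render_order_summary(service: OrderService, order_id: int) -> str:
    """A consumer of the contract; it behaves the same against mock or real."""
    order = service.get_order(order_id)
    return f"Order {order.order_id}: {order.total:.2f}"


if __name__ == "__main__":
    print(render_order_summary(MockOrderService(), 7))
```

One developer builds and tests the consumer against `MockOrderService` while another works on the real implementation of the same interface; swapping the mock for the real thing later is the integration step mentioned above.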
What is the best branching and merging strategy for a small development team making a lot of concurrent development on the same project?
We need to be able to easily apply a hotfix to a production release while other developers are working.
I was looking at the TFS Branching Guidance guide from CodePlex and can't seem to figure out which option is best for us.
Thanks
Without knowing your organization or how your team develops, it is hard (maybe impossible) to make a recommendation.
In our organization, the majority of our development is organized around releases, so we did a "branch on release" approach. That works great for us. We also do bug fixes, so we've implemented a "branch on feature" approach off of the production line for bug-fixes.
If you have different people all working on different features that might make it to production at different times, a "branch on feature" approach might work.
If you are all working on the same development line, a single "development" branch might work for you.
It took us months to finalize our branching strategy (for 14+ team projects, around 80 developers, and multiple applications). I don't expect that it will take quite as long for a smaller organization, but definitely spend some quality time thinking about this, and consider bringing in some outside expertise to give you guidance.
In addition to Robaticus's answer, you need to figure out why you want to do the parallel development.
In my opinion, it is all about what you want to isolate.
If you do parallel development because multiple features are being developed, it depends on whether the features need to be released in the same package (version / release / call it what you like) and whether you want the different features kept apart during development. The latter is important when the features interfere with each other and you only want to spend time on integration when you merge them.
If you need isolation for any of these reasons, you have multiple branches within one release, which you merge before you do the integration test.
If you do parallel development because you want to support multiple releases, then you need to know how many releases you need to support (how many releases rolled out in production do you need to support, how many releases do you have in pre-production, etc.). In this case, the advice is to have a branch per release that you need to support.
It sounds like you need to have this kind of branching strategy for your organization.
It also depends on the type of files that you are branching. If you have SSIS or SSRS files (or any other XML-based files), binary files, or any files that are not easily merged as text (in contrast to C#), then it is not easy to merge the files from two different branches. You then need some manual involvement to actually merge these files!
So as Robaticus already said, we need more information on your specific scenario to give more detailed guidance.
I'm using VS2010 and TFS to build a complex, medium-sized website.
Which protocol is most efficient and secure? I think I can enable any protocol I need since I own the IIS server and control all aspects of it.
My choices are:
Web Deploy
FTP
FileSystem
FPSE
There is also a hint at something called "one click"... not sure what that is, or if it relates to any of the above.
OK... I'm sorry, but I'm not sure where to even start, and I'm not sure the question is answerable as-is. I'd probably put this as a comment if there weren't a limit on the number of characters.
So much depends on the type of data in this app, your financial resources, etc. This is one of those subjects that seems like a simple question, but the more you learn, the more you realize you don't know. What you're talking about is release management, which is just one piece of the puzzle in an overall Application Lifecycle Management (ALM) strategy.
(Hint: start at the link posted, and be prepared to spend months learning.)
Some of the factors you may need to be aware of are regulatory factors that you may not even have thought of. Certain data is protected, and different standards require you to have formalized risk and release management built into your processes. For example, credit card data, medical records, etc., all have different regulations (some actual laws, some imposed by the Payment Card Industry) that you need to be aware of.
If your site contains ANY sensitive data, you need to first find out whether any of these rules apply to you, and if so, which ones. Do any of them require audit trails for how code goes from development to deployment? (PCI does, for example; we know because we take credit card payments, and in order to do that, you need to be PCI certified or face heavy fines.)
If your site contains NO sensitive information at all, then your question could be answered as-is, and the question becomes a matter of what you're comfortable with.
If your application DOES contain sensitive info that makes it subject to rules that mandate a documented, secure ALM process, then the question becomes more complex, because doing deployments manually in such a situation is a PAIN IN THE BUTT. It doesn't take long before you start looking at tools to help automate some of the processes. (Build servers, tools such as Aldon for deployment, etc. There is a whole host of commercial and open source software to choose from.)
(We're using Atlassian for most of our ALM, but Team Foundation Server is also excellent, and there are a TON of other options.)