Selective Continuous Integration with Git - CSS

My Django project's team is looking to have the designer's CSS in a central place, preferably on the production server (so that there's one "truth" to the current design, a model he claims that he's worked with in the past). Assuming that this is even a good practice, it would mean setting up Git to deploy the CSS in a Continuous Integration (CI) manner to production.
However, I would want to restrict Git somehow for the designer so that he doesn't accidentally update any files other than CSS or HTML. Python and Django files would be updated by developers, who would be deploying in a more traditional manner: working in their own branches and only having a human build manager merging everything into master when tested and ready.
Part of the reason that we want the designer to be able to deploy the CSS to a server is to avoid setting up the Django site locally on his laptop (he's not so technical outside of CSS, HTML, and Git).
Is this setup even a good idea? If not, what's the proper alternative?
Assuming that we set up a CI config off of the master branch, and allow the CSS to be pushed to master, can I even restrict the designer's ability to modify and check in non-CSS/HTML files? If so, how?

Is this setup even a good idea? If not, what's the proper alternative?
I have some reservations. It sounds like your designer is going to be the only person pushing changes to production without any gates: no code review, no tests, etc. Continuous integration is great, but a sane process includes safeties that prevent bad deploys. Since the rest of the team is following a different process, you'll end up managing two different pipelines. That's a waste of effort, and inevitably one of them (probably the designer's) falls apart due to lack of attention.
The alternative is to put everyone on the same process. Teach the designer how to run the application locally, or build a harness that makes it easier. Unless your site is entirely static, how can they even see what their changes look like without that? Maybe it's more work to train them up, but it's an excellent opportunity for personal growth.
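For what it's worth, the harness doesn't have to be fancy. A small bootstrap script is often enough to take a non-technical designer from a fresh clone to a running local site; here's a rough sketch, assuming a standard Django layout with manage.py and requirements.txt, and a made-up fixture called sample_content.json (rename things to match your project):

#!/usr/bin/env python3
# bootstrap_dev.py - hypothetical "run it locally" harness for the designer.
import subprocess
import sys

def run(*args):
    print("+", " ".join(args))           # show each step as it happens
    subprocess.run(args, check=True)     # stop on the first failure

if __name__ == "__main__":
    run(sys.executable, "-m", "pip", "install", "-r", "requirements.txt")
    run(sys.executable, "manage.py", "migrate")                          # build the local database
    run(sys.executable, "manage.py", "loaddata", "sample_content.json")  # demo content to style against
    run(sys.executable, "manage.py", "runserver")                        # serves on http://127.0.0.1:8000/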
Assuming that we set up a CI config off of the master branch, and allow the CSS to be pushed to master, can I even restrict the designer's ability to modify and check in non-CSS/HTML files? If so, how?
If you go this route, you can use Git hooks to restrict what the designer is allowed to commit. You can either put a pre-commit hook on their client or, if you control the server, a pre-receive hook that runs for only the designer's user. Either one can look at the committed files and block the commit/push if any are not CSS or HTML. There's a pre-commit framework called Overcommit that might be helpful to you. If you're using a code review tool, most have places you can hook in a bot to leave a comment or block the merge when they've modified a file they shouldn't have.
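To make that concrete, the hook only has to look at the changed file names. Here is a minimal client-side pre-commit sketch in Python (the allowed extensions are an assumption); the same check in a pre-receive hook on the server is the enforceable version, since a client-side hook can simply be deleted:

#!/usr/bin/env python3
# .git/hooks/pre-commit on the designer's clone (make the file executable).
# Rejects the commit if anything other than CSS/HTML is staged.
import subprocess
import sys

ALLOWED = (".css", ".html", ".htm")   # assumption: adjust to whatever the designer owns

# Names of files staged for this commit (added, copied, modified, renamed).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACMR"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

blocked = [path for path in staged if not path.lower().endswith(ALLOWED)]
if blocked:
    print("Commit rejected; only CSS/HTML changes are allowed here:")
    for path in blocked:
        print("  " + path)
    sys.exit(1)   # any non-zero exit aborts the commit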
Another option here is to trust your coworker. Presumably they were hired because they're effective and useful, so you can save a lot of effort building up restrictions if instead everyone's clear on what they're supposed to be doing and generally doesn't screw it up.

Related

CMS - How to work with multiple environments? Do I really need them?

I've never worked with any CMS and simply want to play around with one. Since I originally come from a .NET background, I was thinking of choosing Orchard Core CMS.
Let's imagine a very simple scenario: together with my colleague, I'd like to create a blog. As I'm used to working on web-based business systems and applications, it's normal for me to work with a code repository, multiple environments (dev/test/stage/prod), CI/CD, and database changes via migrations or scripts.
Now the question is: do I need all of this when working on our blog with a CMS?
To be more specific, I can ask a few questions:
Should I create the blog with the CMS locally (on my PC), create a few articles, and then deploy it to the web, or should I create the blog directly on the internet and add articles in the prod environment?
How do I synchronize databases between environments (dev/prod)?
I can add that, as I do not expect many visitors to the website, I was thinking of using Orchard Core CMS together with SQLite. I also expect to customize code, add new modules, extend existing ones, etc., not just add content (articles). Please take that into consideration when answering.
So basically my question is: what should the workflow be for someone who wants to create, administer, and maintain a CMS site (say, a blog), whether as a single person or as a team?
Should I work and create content locally, then publish it and somehow synchronize both the application and the database? (The database is my main question mark, also in the context of how to do that properly with SQLite.)
Or should all the changes, code and content alike, simply be managed directly on the server, let's call it the production environment?
Excuse me if the question is silly or hard to understand, but I'm looking for any advice, as I really couldn't find good examples or information about this; maybe I'm searching in totally the wrong direction.
Thanks in advance.
Great question, not at all silly ;)
When dealing with a CMS, you need to think about the data/content in very different terms from the code/modules, despite the fact that the boundary between them is not always completely obvious.
For Orchard, the recommendation is not to install modules in production, but to have a dev - staging - production type of environment: install new modules on a dev environment, test them in staging, and then deploy to production when it's safe to do so. Depending on the scale of the project, the staging may be skipped for a more agile dev to prod setting but the idea remains the same, and is not very different from any modular application.
Then you have the activation and configuration of the settings of the modules you deploy. Because in a CMS like Orchard, those settings are considered data and stored in the database, they should be handled like content. This includes metadata such as the very shape of the content of your site: content types are data.
Data is typically not deployed like code is, with staging and prod environments (although it can be, to a degree; more on that in a moment). One reason for this is that a CMS will often feature user-provided data, such as reviews, ratings, comments, or usage stats. Synchronizing all of that two ways is very impractical. Another, even more important reason is that the whole point of using a CMS is to let non-technical owners of the site manage content themselves in a fast and direct manner.
The difference between code and data is also visible in the way you secure their changes: for code, the usual source control is still the rule, whereas for content, you'll set up database backups.
Also important to mention is the structure of the database. You typically don't have to worry about this until you write your own modules: Orchard comes with a rich data migration feature that makes sure the database structure gets updated with the code that uses it. So don't worry about that, the database will just update itself as you deploy code to production.
Finally, I must mention that some CMS sites do need to be able to stage content and test it before exposing it to end users. There are variations of that: in some cases, being able to draft and preview content items is enough. Orchard supports that out of the box: any content type can be marked draftable. When that is not enough, there is an optional feature called Deployments that enables rich content deployment workflows that can be repeated, scheduled, and validated. An important point concerning that module is that the deployment only applies to the subset of the site's content you decide it should apply to (and excludes, obviously, things like user-provided content).
So in summary, treat code and modules as something you deploy in a one-way fashion from the dev box all the way to production, with ordinary source control and deployment methods, and treat data depending on the scenario, from simple direct in production database instances with a good backup policy, to drafts stored in production, and then all the way to complex content deployment rules.

Deploying changes on a living Drupal site

I really like Drupal, somehow. But what disturbs me most is that I can't figure out a clear way to handle deployment. Drupal stores a lot of stuff inside the database (Views, CCK, Workflow, Trigger, etc.) that needs to be updated.
I've seen some modules that could be used for this task (e.g. Features) and I'm not sure whether they are sufficient. They are also only for Drupal 6, and I currently have to work on a Drupal 5 site where upgrading is not yet an option.
Any ideas?
This is a weakness. Drupal doesn't have the developer tools built in that make development and deployment easy the way Rails does (for example). One problem is that Drupal isn't natively aware of its environment. Secondly, there are too many different methods and modules that require special care. It can get very confusing. But things are getting better with drush and drush make.
I'm assuming here that you have a development environment on your local machine and a live or staging server you upload to.
The first thing you have to do is work out how to get your database fixture and your code back and forth between your server and your development environment very quickly. You need to make this procedure as painless as possible so you can keep the different versions of your site in sync without much effort. This will mean you will hopefully be able to manage less change every time you deploy. Hopefully...
Moving the database around isn't too hard. You could use phpMyAdmin or mysqldump, but the Backup and Migrate module is my favorite tool.
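To make "as painless as possible" concrete, the whole round trip can be a single small script. A rough sketch, with invented host, user, and database names (the Backup and Migrate module covers much of this for you):

#!/usr/bin/env python3
# pull_live_db.py - rough sketch of the "one command" round trip.
# Host, user, and database names below are invented; -p prompts for the password.
import subprocess

LIVE = {"host": "live.example.com", "user": "deploy", "db": "drupal_live"}
LOCAL = {"host": "127.0.0.1", "user": "root", "db": "drupal_dev"}

# Dump the live database to a local file.
with open("live.sql", "w") as dump:
    subprocess.run(
        ["mysqldump", "-h", LIVE["host"], "-u", LIVE["user"], "-p", LIVE["db"]],
        stdout=dump, check=True,
    )

# Load the dump into the local database, replacing what is there.
with open("live.sql") as dump:
    subprocess.run(
        ["mysql", "-h", LOCAL["host"], "-u", LOCAL["user"], "-p", LOCAL["db"]],
        stdin=dump, check=True,
    )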
Uploading code from your local repository or site can be done in a few ways. If you use a version control system like Git, you can commit on your local machine and check out again on the staging server. There are also dedicated deployment tools like Capistrano you should take a look at (if you know this stuff already, it may benefit others to read). If you're using FTP, you should probably try something different.
If you're working with a site that is still in production, you can afford to make small incremental changes to your local site, then repeat them on the live site and download the new version of the database once your changes are in place. This does mean you double-handle the database, but it can be a safe way of doing things. It keeps both your databases closer to each other and minimises risk.
You can also export Views and move them to your server, either in your code or by importing them into your live site. There is a hack to get around deploying CCK changes here: http://www.tinpixel.com/node/53 - it works OK but cannot truly manage changes like rollbacks. (Respect to the guy who wrote that.)
You can also use hook_updateN to capture changes and then run update.php to apply them. I worked on a d5 site with dozens of developers and this was the only way to keep things moving forward. This may be a good option if your site is live or if you need all database schema changes captured in a version control system (so you can roll back).
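If you haven't used hook_updateN before, the underlying idea is just numbered, one-way migrations: each function moves the database one step forward, the highest applied number is recorded, and update.php runs whatever is still pending. A language-agnostic sketch of that idea, illustrative only and not Drupal code (the schema changes are made up):

# Numbered one-way updates; the highest applied number is stored so the
# update runner knows what is still pending.
def update_5001(db):
    db.execute("ALTER TABLE story ADD COLUMN teaser TEXT")

def update_5002(db):
    db.execute("CREATE INDEX story_created ON story (created)")

UPDATES = {5001: update_5001, 5002: update_5002}

def run_pending(db, applied):
    """Run every update newer than 'applied', in order; return the new version."""
    for number in sorted(n for n in UPDATES if n > applied):
        UPDATES[number](db)   # update.php does the equivalent of this loop
        applied = number
    return applied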
Also: take a look at drush and drush make. These tools can be of great benefit. I can't remember how much support there is for D5, though.
One final method of dealing with this is not to use CCK or Views at all (and rely on hook updates instead). But this is really only suitable for enterprise sites where you have big developer resources. This may seem like a strange suggestion, but it can negate this whole problem completely.
Sorry I could not give you a clear answer; one does not exist yet. You'll end up finding your own rhythm once you get into it. Just keep backups of your database so you can roll back to them easily enough.

ASP.NET website deployment best practices resource suggestions

I have looked through the related questions, and none of them have provided me the information I am looking for.
Currently the team I work on does deployments of individual .aspx (and .aspx.vb) files for bug fixes/enhancements. I am trying to effect change, as I really believe that deploying the "whole compiled site" is less error prone. As this is a significant change from the way things have been done, my suggestions have been met with significant resistance.
As my google-fu has not been up to par lately, I was hoping the SO community could either tell me that I am off my rocker, and that there is nothing wrong with moving individual files, or point me to some really good resources which would allow me to make a stronger case.
Edit:
This has all been great info, and it reinforces the arguments that I have already been making. Can anyone argue the other side?
Deploying individual files for bug fixes and enhancements is not a wise strategy. It sounds like you need a comprehensive build and deployment process. That doesn't mean it has to be complicated, as there are some good tools available nowadays.
Build and deployment can get detailed, so as a minimum start try taking a look at the Microsoft Web Deployment Tool (http://www.iis.net/extensions/WebDeploymentTool). Install the tool on your build server and install it on your deployment server. Stage your ASP.NET content locally using the Visual Studio Publish command, then use the above tool to synchronize the entire package on the deployment server. I like this approach because it can be completely automated. When doing builds and deployments, aim for complete automation to reduce potential errors.
This is the bare minimum, but you will at least be certain that when specific files are changed, they are ALL synchronized on the deployment server.
Personally, to me, being able to roll back immediately is most important. Again, web site projects are very hard when it comes to tracking changes.
You can find a good detailed comparison here; I am reproducing the article below.
1) Deployment. If you need in-place deployment, this model is perfect. However, it's not recommended, since you are exposing your logic in clear text: anybody who has access to the physical server can mess with your code, and you are never going to notice. You can try to make a precompiled web site, but you will end up with a lot of DLLs and almost untouchable aspx files. Microsoft recognized this limitation and released the Web Deployment Project tool.
2) You need to keep track of what you changed locally and what you uploaded to the production server. There is no version control. Visual Studio has a Web Copy tool, but this tool fails to help. I had to build my own tool, which kept track of changes based on Visual SourceSafe.
3) When you hit F5 for debug execution, it takes nearly 2 minutes to compile and execute the whole project. Of course you can attach the debugger to an existing thread, but this is not an obvious solution.
4) If you ever try to generate controls on the fly, you will hit the first unsolvable limitation: how to reference other pages and controls. Page and control compilation happens on a per-directory basis. In the best case you get an assembly for each directory; in the worst, each page or control gets its own assembly. If you need to reference another page from a control or another page, you need to explicitly import it with the #Reference directive.
So for,
customControl = this.LoadControl("~/Controls/CustomUserControl.ascx") as CustomUserControl;
You need a #Reference directive in the page or control doing the loading, roughly:
<%@ Reference Control="~/Controls/CustomUserControl.ascx" %>
But what if you want to add something really dynamically and can't put in all the appropriate #Reference directives? Or what if you are creating a server control and it doesn't have an ascx file, so you have no place for a #Reference? Since each control has its own assembly, it's almost impossible to do reflection.
Web Application Projects re-appeared in Visual Studio 2005 SP1. They solve all the issues mentioned above.
1) Deployment. You get just one DLL per project. You can create redistributable packages and repeatable builds. You can have versioning and build scripts.
2) If you changed code-behind, you can upload just one DLL. If you changed an aspx file, you can upload just that aspx file.
3) Execution takes 2-3 seconds maximum.
4) The whole project is in one assembly, which makes it easy to reference any page or control. Conclusion: for any kind of serious work you should use Web Application Projects. Special thanks to Rick Strahl for his amazing article, Compilation and Deployment in ASP.NET 2.0.
I agree with Rich.
Further information:
Deploying your SOURCE code, i.e. the .vb files, to the server is a BAD idea. Compile it. Obfuscate if you can; just don't deploy straight source. Imagine an attacker who gains access to the system: they could easily change your code and you might not ever notice. Yes, you can use a tool like Reflector to decompile, but it's really hard to decompile a full site, make the changes you want, and put them back into production.
Deploying a single file might very well cause some type of problem in a related module. I'm guessing you guys don't really do QA. Tell them it's time to grow up.
Compiling your site will reduce JIT (just in time) compilation. Think performance.
I'm also going to guess that pretty much everyone has production server access. This is bad from the company's perspective as you have no controls in place. What happens when an employee decides to cause some havoc before leaving?
What you are describing is in line with cowboy coding. Sure, it's fun to ride to the rescue, but this style frequently blows everything up.
It's bad for rolling back. If you deploy as a web site vs. a web app, yes, you can do quick patches of one or two files, but what if you ever need to roll back to a previous version? Good luck tracking down all the files that were updated to make the new version. I much prefer the concept of a "version" for organizational reasons, and the compiled web app is much more in line with this than a "website" project.
We had this dilemma and ended up going with the compiled version, mainly for the security reasons. If your site is external facing, you could be compromising your security by allowing the .vb files to be out there in plain text. I realize one could still get your code if they really wanted to, but it would be an additional hurdle they would need to get through. If you use Visual Studio as your development environment, you can publish the site pre-compiled and check the named assemblies option when publishing; this essentially creates a DLL for each aspx page, so you can still do one-off page changes if necessary. This was a great feature for us, as we were constantly updating the whole site and there were times when things would get updated that shouldn't. After using that feature, we no longer had updates getting pushed that shouldn't have been. As far as rolling back, I hope you're using some type of source control / versioning system. Team Foundation Server is great for versioning/source control, but it is quite pricey.
What is the best deployment strategy depends a lot on what kind of environment you are working in, and what kind of developers you are working with.
Visual artists who started with graphic layout and worked their way towards programming are much more in tune with individual page generation and release. Also, the .aspx.vb files are simply server-side scripting, not really programming.
Programmers usually start at the command line and branch out to environments such as the web, and they understandably feel that good programming practices should be applied to the web too, including standard test and release cycles (and compiled code).
If the site is in constant flux, individual pages would make more sense, but if you are required to deliver an installation package to your production group, MSI files are the way to go, since they can be easily backed out if necessary.
If you evaluate what your groups needs are, which includes the varied experience of everyone in your group, you should be able to convince either yourself or the group. This is not a matter of which is better, but which provides the best business model.

Is it commonplace/appropriate for third party components to make undocumented use of the filesystem?

I have been using two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e. putting temp files on disk). This has created a problem for me in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but... I see this as a maintenance issue. What happens when we upgrade our servers to some OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without us changing a line of our code. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path "configurable", which would certainly at least make it more explicit about what is going on under the covers.
This isn't really a question that I need answered, but more of a kick off for conversation about whether what these component vendors are doing is appropriate, how this should be documented/communicated to users, etc.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run within ASP.NET apps. Do they make claims that this is something they support? If so, then they should provide documentation on how they use the file system and what permissions they need.
If not, then you're probably using an inappropriate tool set. I've been there and done that. I worked on a project where a "well known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests, many of them simultaneous, and it caused all sorts of hard-to-repro errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses, IMHO, as long as the component uses the proper %TEMP% folder or, even better, the built-in Path.GetTempPath/Path.GetTempFileName functions.
In an ideal world, each third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most ignored features of .NET...
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely something they never thought to document than something done to go out of their way to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether the component is designed to run under ASP.NET or as part of a desktop app.

How do you deploy and manage a C# web application to a customer with some minor differences from your base project?

I have a semi-large web application that we run locally and I need to deploy it at another location. The second location will require some slight modifications to the project (especially cosmetic). How do you manage these differences and what do you use to distribute the site and updates to a customer like this?
Edit:
Right now our web app runs in-house and we build with Cruise Control .NET and MSBuild with WDP. What would be a good option for deployment to the customer? We will not be updating their site for them so a solution that is simple for them to deploy and update is desirable.
Branch your code.
Hopefully your code is source controlled (if not, start now!). Branch from the base to a "Customer X" branch and just make the slight cosmetic modifications in that branch. Then just build and deploy off of that branch for that customer.
Additionally, if the changes are minor enough, you could try to make the changes configurable. That way you could deploy the same site everywhere, and just change the configuration to match what the customer wants. The more complex the differences, the harder it will be to make them configurable though.
After reviewing comments: it's good to note that configuration is practical, but ONLY if the number of changes is minor; otherwise you will pollute your code with configuration logic. (Thanks, commenters.)
So: Lots of changes --> Branch (more maintainable), few minor changes --> Make configurable (more practical).
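To show what the configurable route looks like in practice, the shape is the same regardless of stack: one base set of settings plus a tiny per-customer override file, and the identical build is deployed everywhere. A sketch of the idea, in Python purely for illustration, with all names invented:

# One code base, one small override file per customer; everything else is identical.
import json

BASE_SETTINGS = {
    "site_title": "Acme Portal",
    "theme": "default",
    "show_reports_tab": True,
}

def load_settings(customer_file):
    with open(customer_file) as f:
        overrides = json.load(f)           # e.g. {"theme": "customer-x", "show_reports_tab": false}
    return {**BASE_SETTINGS, **overrides}  # overrides win, everything else stays stock

settings = load_settings("customer_x.json")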
We have to do it all the time. We try to generalize and make the differences between versions configurable. The most common reasons for customizations are:
additional database fields: we implemented a dynamic way to store these items in the database
UI layout: we have special folders where we put images and CSS files, which are loaded on demand
different mandatory input fields: we store the definition in the configuration and activate them programmatically
special reports: we put the template files in a custom folder so they are chosen instead of the standard template
Some changes require programming new modules. We code them in a custom library which will be dynamically loaded inside the main application.
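The dynamic-loading part follows a familiar pattern: look for a customer-specific module, load it if it exists, and fall back to stock behaviour otherwise. Sketched here in Python purely for illustration (the question's context is .NET, where this would be loading a customer-specific assembly; the module layout and hook name below are invented):

# Load a customer-specific extension module if one is shipped with this deployment.
import importlib

def load_customer_module(customer):
    try:
        return importlib.import_module(f"customizations.{customer}")  # e.g. customizations/customer_x.py
    except ModuleNotFoundError:
        return None   # no customization shipped: stock behaviour

module = load_customer_module("customer_x")
if module is not None:
    module.register()   # whatever hook your application defines for extensions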
We usually handle those differences by being data-driven. The customer's difference is just a different setting; any other user in the future could just as well reuse the same "custom" options later on.
Creating "one-offs" doesn't scale.
Custom patches are a pain for this very reason. We typically just branch in our source control system, and manually apply the changes after updating with a script. Because of the additional overhead, we discourage custom patches as much as possible.
