Trouble designing grunt workflow, with rev and usemin, for webdev - gruntjs

I'm going to start using grunt-rev & grunt-usemin with grunt-watch for my web development needs (a RESTful Web App specifically).
I have a local development machine which will run grunt-watch to attach revision identifiers to my JS files. I git commit and git push my tree to a git repo, and then ask the production server to git pull the changes from the git repo to serve them to web visitors.
The problem is that I don't want my git repo to store different filenames (due to grunt-rev) on each commit. That would be bad, because I wouldn't be able to do git diff between commits without my screen getting flooded with the contents of files that appear and disappear, and it could (sometimes) take up far more storage than storing only the small diffs of the files.
The only solution I see is to add the build directory containing the versioned filenames to my .gitignore, so as not to store those files (with their constantly changing filenames) in git. But wouldn't that mean I would have to run grunt-watch on my production server as well, in order to produce the build directory with the versioned filenames there too? That gets complicated: a new process has to run on the remote server, with its own small chance of error while processing the files. Not the solution I was hoping for.
Do you have another solution? What would you suggest I do?

What I do to solve this is to remove the previous "build" files before committing and deploying a new build. There is no need to keep older generated files, because you can always rebuild them from the source files (which are in git).
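A minimal sketch of that clean-then-rebuild flow, assuming the revisioned output lands in build/ and that a grunt build task runs rev and usemin (both names are assumptions; adjust to your Gruntfile):
# Remove stale revisioned files so old filenames never reach the repo.
rm -rf build/
grunt build            # assumed task that runs rev + usemin
git add -A build/
git commit -m "Rebuild revisioned assets"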

Related

Use Git in SSH to pull specific directories

Total newbie question, but what is the best practice when it comes to using SSH with Git? I'm working on a WordPress project. In the root I have gulp and other dev files/folders like SASS and Scripts that I don't need on the server, and in the same project I have my WordPress folder, which contains a theme and a few custom plugins. As you can imagine, when the theme or any of the plugins are ready to be deployed, I don't want to pull everything in my repository onto the server. So far, as a newbie, I've always just pulled and pushed the entire repository and used FTP to upload what I need to the server. So how is this done with SSH and Git, and is there a better way to set this up?
EDIT: To make my question a little clearer, let me give you an example of what I think my issue is. In my main project folder, I have a SASS folder next to my WordPress folder. All I really need to deploy to the server is the WordPress folder. My build process, which happens on my dev machine, combines all of the SASS files into a single CSS file that is then placed into the WordPress folder. I need the SASS folder to be tracked by Git so that any other developer can pull it and continue developing, so I can't have git ignore it. However, none of those SASS files need to be on the server for WordPress to work.
I understand the idea of creating a bare repository on the server and using a post-receive hook to point the git folder sitting outside your web root at the web root. But that's basically just how Git and SSH work, and it doesn't answer my concern.
Not with Git
Git is not designed to pull only specific files or directories. A repository is a directed acyclic graph with binary blobs as objects, and multiple objects are often packed together into a single larger packfile.
Due to Git design, your specific request is not possible.
Alternatives
post-receive hook
If your website only contains simple static files, then it's fine to push to a git repository over SSH. In practice, your repository is unlikely to grow large as long as you aren't committing non-text files.
Take for example the following setup.
/var/lib/www - the Apache web dir, which is a cloned copy of www.git
/var/lib/www.git - a bare git repository.
/var/lib/www.git/hooks/post-receive - a server-side git hook. It can be a shell script that updates the www clone whenever this repository is pushed to.
Sample post-receive hook script:
#!/bin/bash
# Update the deployed clone whenever the bare repo receives a push.
cd /var/lib/www
# Hooks run with GIT_DIR pointing at the bare repo; unset it so the
# commands below operate on the clone's own .git directory.
unset GIT_DIR
git fetch origin master
git reset --hard origin/master
Package the build in a tar.gz
At the end of your build you can zip up your files in a tar.gz. This file should be hosted somewhere (perhaps GitHub releases if you're using GitHub). Some enterprises use on premise artifact hosting like Nexus or Artifactory.
The idea being: you have a tested artifact that has a specific sha256sum. The artifact you test is the exact same artifact which eventually goes to production.
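As a rough sketch, assuming the build output lands in build/ and using made-up artifact names:
# Package the build output and record a checksum that travels with it.
tar -czf myapp-1.2.3.tar.gz -C build .
sha256sum myapp-1.2.3.tar.gz > myapp-1.2.3.tar.gz.sha256
# At deploy time, verify the artifact is byte-for-byte what was tested:
sha256sum -c myapp-1.2.3.tar.gz.sha256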
Diving into more detail such as continuous integration, continuous delivery, and the software development life cycle might be out of scope for your question.
No best practice.
Git is for source control, not for deployment. There is no best practice for using git this way because git is not a deployment tool. You also don't need git history on your server. In fact, you don't need git at all unless you insist on using it for deployment. You are welcome to use it this way but it's not ideal because of exactly the kind of problem you're asking about.
What is the best practice?
There are a number of tools you could use to handle your deployments. Most of them let you set up a series of steps that deploy the code you want into the environment you want. You could go with simple tools such as Phing or Deployer in the PHP world, or something more sophisticated like Puppet or Chef if you have more complex needs. You could just write your own bash scripts if what you need is really very simple. Given the info you've provided, I recommend Phing or Deployer.
https://deployer.org/
https://www.phing.info/
You'll just configure whichever tool you want to ssh into your target box and copy over only the files you want into the directory you want on the server, in whatever way you would like to do that. Usually, you have the script copy files into a temp dir, tarball them up, ssh them over and untar them. After that, you'll usually do some additional work on the server to move files around, change symlinks, whatever else you might need to do.
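By hand, and with made-up paths and hostnames, those steps look roughly like this (a deployment tool just automates them):
# Stage only the files that should ship, then package, transfer, and unpack.
mkdir -p /tmp/deploy && cp -R wordpress/. /tmp/deploy/
tar -czf /tmp/release.tar.gz -C /tmp/deploy .
scp /tmp/release.tar.gz user@example.com:/tmp/
# Unpack into a fresh release dir, then switch a symlink to make it live.
ssh user@example.com 'mkdir -p /var/www/releases/next && tar -xzf /tmp/release.tar.gz -C /var/www/releases/next && ln -sfn /var/www/releases/next /var/www/current'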
What about compiled SASS, ES6 js files, or modern static stuff?
All you need to do is add steps to handle the static files and where you want them to go. Include the generated static files in your tarball when you push things up, and put them in the right directories on the server once you untar it.
When you configured your SASS compiler (and whatever other pre-compiled static code you may have), you configured it to write destination files: the actual CSS and JS files it generates. Those are all you need to bring along, and if the destination directory is set to be inside your WordPress theme, you may not even have to pay much special attention to their handling. You may need to move them somewhere else once they are on the server, but that depends on the specific setup of your server, which I think is outside the scope of this question.
Additional Notes:
You didn't ask about this, but I thought it worth mentioning: you shouldn't be sending the entire WordPress repository every time you update. Just as you don't need the uncompiled SASS code, you also don't need to be repackaging core WordPress. You don't even need to be committing core WordPress; it's a dependency, and you shouldn't be changing it.
All that should be getting committed by you is your theme and plugin code, and the uncompiled static files. Compiled static files and external dependencies like the WordPress core don't belong in your git history. For deployment purposes, WordPress should already be installed. The stuff in your tarballs should just be plugins and themes, and additional static files if they aren't already in there for some reason.
TLDR;
Don't use git for this. Use a tool like Phing or Deployer. Build your static files into your theme, and create phing/deployer scripts that tarball up only the code you want, SSH it over to your server, and untar it into the directories you want. If you have some special location on the server for your static files, just make sure to add steps in your script for that.
So, based on your question and comment, there are three computers involved. There is a web server (when you say "server", I take it as a Web server in this scenario, or the server computer that runs a Web server program). There is another server where your git repo is hosted. And, there is your dev workstation. Is this correct?
It seems like you have a cloned git repo on your Web server. Your current practice/workflow appears to be: (1) (based on your expression "SSH'ed into my server") you log into the web server via SSH from your workstation (SSH is just a protocol, like Telnet, which can be used for different purposes); (2) you pull from your repo on a hosted service (e.g., GitHub); and (3) you deploy it to your "www" directory on the same server. Is this correct?
(I can think of an alternative scenario based on your use of the word "FTP", etc., but let's focus on the above scenario, for now.)
Now, your question is, whenever you "pull" (on your Web server), you feel like you are pulling everything from your repo on your hosted service. And, is there a better way? Am I understanding your question correctly?
If so, as another commenter suggested, git (and any version control system, in general) is very good at fetching only "deltas". If you are worried about "fetching everything" every time you pull (step (2) above), then your worry is unfounded.
Now, the question is, why do you have a git repo on your Web server, if that is indeed the case? This is a pretty legit setup and I've done this before (e.g., on EC2). But, as a best practice, people generally don't do that on "production" servers. It's because you have to "build" your web app, and you really don't want to do that on production servers.
The next question is, what do you exactly do in Step (3)? The build process (whatever process you use) typically generates an "output" which can be directly deployed to the web server. (The convention is the output is generally a single folder, "public", "www", "dist", or whatever, or a single file (e.g., tar.gz, zip, jar, war), etc.) Regardless of whether you build the deployable output on your dev workstation (or, a build machine) or on your Web server, you don't generally do "deltas" in this context. Even if you've only changed a single file (say, a CSS file), you generally build the whole output again (instead of, say, just replacing the changed CSS file only). When you use FTP to upload files, etc., you can selectively upload certain files and/or directories, etc., but as a general practice, we don't do that. We always build the complete output from scratch and deploy it to the Web server. (This is mainly to reduce the potential deployment errors and increase the reliability.)
So, to answer your question, (A) If you are pulling git repo on your Web server, you should really change that practice, and move the build process to your dev computer or a dedicated build machine. (BTW, services like github, gitlab, TFS, ... provide the build service for you.) (B) If you are currently selectively FTP'ing your web app files to your Web server, then you should really consider adopting some kind of formal build, and deployment, process moving forward.
After your SASS build process is done, use scp or rsync to move the files to the prod server:
scp -r /[local wordpress dir]/wp-content/themes/your-theme/ username@your.prod.server.com:/path/to/dir/wordpress/wp-content/themes/
scp -r /[local wordpress dir]/wp-content/plugins/* username@your.prod.server.com:/path/to/dir/wordpress/wp-content/plugins/
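rsync is handy here because it only transfers files that actually changed; a sketch along the same lines, with the same placeholder paths:
# -a preserves permissions and timestamps, -v is verbose, -z compresses in transit.
rsync -avz /[local wordpress dir]/wp-content/themes/your-theme/ \
  username@your.prod.server.com:/path/to/dir/wordpress/wp-content/themes/your-theme/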
I am working on a project using Git over SSH with Bitbucket. The following is the process I use; it may work for you as well (if not, please correct me):
Step 1 -> Set up git and create a repo in Bitbucket.
Step 2 -> Set up the project locally and link it to my repo.
Step 3 -> Connect to my server using SSH.
Step 4 -> Work locally, then commit and push all changes to my git repo.
Step 5 -> Run git pull over SSH so all the changes are deployed to my server.
I use the above process and it works well for me. I also use a .gitignore file for anything that should not be pushed to my repo.
Thanks

What's the best way to clone a GIT project into a Web application with existing and non-GIT files?

Want to cut to the chase? Jump down to QUESTIONS below. Cheers.
BACKGROUND
I am changing my web development workflow to include GIT for version control. GIT is new to me, although I have a technical background, so it's not a complete mystery. :-)
I have a GIT repository on Bitbucket (BB), which I use as the central project container.
I use my local web server (on Mac OS X) for development.
That's my dev server.
Once tested locally, I push changes from my dev server to Bitbucket.
I have two remote servers. One is a staging server, the other is the production/live server.
I've learnt (sufficiently, for now) how to manage the files between dev and Bitbucket. I have everything set up to run over password-less SSH (using rsa keys).
THE ISSUE
I am not sure what the proper or recommended way is to manage the workflow between BitBucket (or whatever system one uses) and the staging server, and production server.
Here's an example:
I am working on a site with the following structure (in part); the folders marked with ++ contain the files I must track/manage in GIT:
www_root/images/...
www_root/etc/...
www_root/extras/themes/...
www_root/extras/plugins/...
++ www_root/extras/plugins/my-plugin-1 ++
++ www_root/extras/plugins/my-plugin-2 ++
++ www_root/extras/themes/my-theme-1 ++
www_root/extras/themes/someothertheme
The approach I took on my dev server (localhost) was to have a repository for the entire www_root/, and a gitignore that excludes everything except those folders with the ++ next to them (in the above example), AND their parents. So in this case, the ../plugins/.. and ../themes/.. folders are both included in full.
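For reference, a whitelist-style .gitignore for the structure above might look something like this (a sketch; note that git needs each parent directory un-ignored before its children can be):
/*
!/extras/
/extras/*
!/extras/plugins/
/extras/plugins/*
!/extras/plugins/my-plugin-1/
!/extras/plugins/my-plugin-2/
!/extras/themes/
/extras/themes/*
!/extras/themes/my-theme-1/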
When it comes time to set up the staging server, this is where I came unstuck. What I did was install the application on there (WordPress, in my present case). But then I can't add the GIT project, because I run into the issue that git init aborts due to the folder not being empty.
As mentioned, on my local server I'd set up the site root as the root of my project, and with gitignore all files except the various folders I am working in, are excluded.
WHAT I TRIED
I attempted setting up a project.git folder above the web_root, and then adding a work-tree for the /extras/ folder in the web root. I ran into issues. Could not git init (non-empty folders issue) and could not git pull, git fetch, etc.
I also attempted creating a GIT project within the www_root, and with the worktree being the /extras/ folder, but again, struck the non-empty folder issue.
QUESTIONS
1) Is there a way I can incorporate a GIT project, and pull down the files from Bitbucket/Github, into an existing folder structure (such as would exist in any situation where one is developing components of a larger application, within its broader folder structure)?
2) OR, is it necessary to have a completely empty /wp-content/themes/ folder (in my example) on the staging and production servers, manage that folder's entire content through GIT (git init and git clone into it), and then have a separate project for /wp-content/plugins/ where all content of that folder is handled through GIT, and do the same for each such folder?
3) OR, is the best approach to have separate projects for each sub-folder? Hypothetically, www_root/extras/themes/my-theme-1 and, let's say, www_root/extras/themes/my-theme-2, and perhaps www_root/extras/plugins/my-plugin-1
4) Or have I completely overlooked another way of going about this?
If this is an option, can you create a new folder, create a GIT repo in it and then move files into the repo?
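A sketch of that idea, with a hypothetical repo URL and paths: clone into a fresh, empty directory, copy the existing non-GIT files across, and swap the directories.
# git clone refuses a non-empty target, so clone somewhere empty first.
git clone git@bitbucket.org:you/project.git /tmp/fresh-clone
# Bring over the existing non-GIT files without touching tracked ones.
rsync -a --ignore-existing /var/www/www_root/ /tmp/fresh-clone/
# Keep the old tree as a backup and move the merged clone into place.
mv /var/www/www_root /var/www/www_root.bak
mv /tmp/fresh-clone /var/www/www_root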
Then the usual - in case of fire:
git commit
git push
leave the building

How to manage multiple Alfresco repositories?

Problem description:
I have multiple Alfresco installations (development, testing, production) of one project.
I need to copy files under Data Dictionary folder (Scripts, Templates, Web Scripts) from one to another in one direction (development -> testing -> production).
Current solution:
I copy the files manually via WebDAV, which is annoying and unreliable (I can forget to copy some of them).
Desired solution:
I'd like to have a tool which will copy the changed files at my command, when they are ready for the next step. I had an idea that it could internally use a Git repository with branches for each installation, being able to fetch the files from development and push them to testing and production. This way (with Git) it could also support reverting changes.
It looks like a quite common problem, but I wasn't able to google something about it, so I'm asking here. Does such a tool exist or is there a better way of managing multiple repositories?
If you have a brand new installation of your development/testing/production Alfresco instances, you could simply migrate the alf_data dir content, which by default contains the db, indexes, content store, and backup files. If you need to, you could migrate the "shared" folder too, or at least some files from it, as it may contain Alfresco customizations (custom scripts or similar). Here is the link that helps with the migration steps:
http://wiki.alfresco.com/wiki/System_Migration
Otherwise, if you need only to move a folder from Data Dictionary, or a set of documents, you could use ACP in order to achieve that. Here is the wiki for doing this: http://wiki.alfresco.com/wiki/Export_and_Import
You could do this via FTP. When you want to deploy new changes, you can go with a manual client like FileZilla to download the changes from dev, then upload them to test.
But you can also automate FTP, so that it runs a scheduled check for new files on, say, dev and pushes them to test.
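As one hypothetical shape for that automation, a cron-driven script using lftp's mirror command (hosts, credentials, and paths are all made up):
# Pull the Data Dictionary down from dev, then mirror it up to test.
lftp -u "$FTP_USER,$FTP_PASS" dev.example.com \
  -e "mirror /alfresco/data-dictionary /tmp/dd-snapshot; quit"
lftp -u "$FTP_USER,$FTP_PASS" test.example.com \
  -e "mirror -R /tmp/dd-snapshot /alfresco/data-dictionary; quit"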
If you use Git for source control, you could also do this via git-ftp. Keep a copy of the Data Dictionary in your source folder, then add some sort of pre-commit check that detects whether you changed any of those files. If you did, on commit it will push the change to dev and test.
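A hypothetical git-ftp setup for the dev target (git-ftp tracks what it last uploaded and pushes only committed changes; the URL and user are placeholders):
# git-ftp reads its settings from git config.
git config git-ftp.url "ftp://dev.example.com/alfresco/data-dictionary"
git config git-ftp.user "admin"
git ftp init   # one-time initial upload
git ftp push   # later runs upload only files changed since the last push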
I think the Alfresco Replication Service is suitable for you.
http://wiki.alfresco.com/wiki/Alfresco_Community_3.4.a#Replication

How do I sync production sub folders with a git repository?

I'm using asp.net and WebDeploy to publish the latest bits of my website. The production site has a couple folders that I would like to keep in sync, though.
Since it's an asp.net site, I would rather not have my entire repository on the server when I can get by with just the views and dll. Additionally, I would rather not add the extra clutter of additional class projects & files to my production server file system.
How do I keep the folders in production in sync with the master branch in git? An automated solution would be optimal.
You could create a git hook that exports your files using git archive to create a zip (or tar) in a tmp folder. Then uncompress the files in that tmp folder and, finally, transfer them to your server.
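A minimal sketch of that export-and-transfer step (the folder name and paths are assumptions):
# Export just the wanted folder from HEAD, free of any .git metadata.
mkdir -p /tmp/export
git archive HEAD synced-folder | tar -x -C /tmp/export
# Then move the result to the server, e.g.:
rsync -a /tmp/export/synced-folder/ user@server:/var/www/site/synced-folder/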
Also, check this answer Do a "git export" (like "svn export")?
Setting up a scheduled task to commit those files to their respective git repositories will work. It's not ideal, but it will make sure those repositories are relatively up to date with the assets on the live site.
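Such a task could be as small as a single cron entry (the path and commit message are hypothetical):
# Hourly; if nothing changed, git commit exits non-zero and the push is skipped.
0 * * * * cd /var/www/site/assets && git add -A && git commit -qm "Scheduled sync" && git push -q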

Use subversion for asp.net deployment - Causes appdomain recycle

I am experimenting with using Subversion to deploy updates to my ASP.NET application. One issue I am facing is that whenever the working copy (containing the build) is updated, the ".svn" folder inside bin gets updated, and this causes the ASP.NET appdomain to recycle. I don't want this to happen unless something in bin has actually changed.
Is there a way to tell ASP.Net to ignore the ".svn" folder inside of bin? Basically not watch that folder for changes?
If this does not work out, I'll be using a staging folder outside the web folders to download the builds onto the servers and then use scripts to patch/update the actual web folders.
[Edit:] The svn export option will not keep my deployment under version control; I want to be able to do "svn update" on the web folders to deploy and roll back releases.
If you use svn export instead of svn checkout to get the files from your repository you will not get the .svn folder on your server.
[Edit] Another option would be to delete "bin" from your repository (and possibly commit it to another one, if you need revisions), and then just copy the bin directory to your webroot manually when it changes. Remember to add "bin" to your svn:ignore list.
You probably want to add the "Bin" directory to your svn:ignore list; it should not be committed anyway as it contains compiled code, not source code.
In any case as your final deployment "svn export" is probably a better choice, as others have noted.
Have you thought about using a Continuous Integration server?
Continuous Integration basically refers to a development practice of integrating work frequently: committing to the repository often, with each commit verified by an automated build.
The more often you commit, the finer-grained your rollbacks can be, and the less can break between commits.
The tools listed below all work with subversion and can be combined with MSBuild on the server to produce an automated build & deployment system.
MSBuild includes options to ignore certain files (e.g., code-behind) when copying to the live directory. In addition, some files may need a development version and a live version; in that case you can write a "transform" script for MSBuild which makes sure the file is correct for the live server when copying (e.g., web.config).
Hudson - http://java.net/projects/hudson/
Draco - http://draconet.sourceforge.net/
CruiseControl - http://cruisecontrol.sourceforge.net/
Well, unfortunately, if you do this then you will, as you're experiencing, get an AppDomain restart. So unless you do as Espo has said and use svn export, you'll keep seeing this issue.
Would it be easier to write a 2 line batch file that svn updates a local copy and then copies the files across?
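Something like the following, shown here as a shell sketch with hypothetical paths (the same two steps translate directly into a Windows batch file using svn.exe and xcopy):
# Update a staging working copy, then copy everything except .svn to the webroot.
svn update /srv/staging/site
rsync -a --exclude='.svn' /srv/staging/site/ /var/www/site/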
An app pool recycle should not be that big a deal. Is your issue perhaps that your users are losing their session when this happens? If so, switch to StateServer or SQLServer sessions instead of using InProc.
Subversion 1.7 and up no longer creates a .svn directory in every subdirectory, which makes it possible to do what you want without the .svn folders getting in the way.
I'm a little late to the game but I'll throw my 2 cents:
You can use svn export, passing a -r REV param. This enables you to roll your app back to a specified revision.
So you can use:
svn export REPOSITORY DESTINATION --force to update to HEAD (the current state of your application)
or
svn export REPOSITORY -r REV DESTINATION --force to update to another revision (maybe you should be using tags)
Don't forget the --force param so it can replace the existing files in DESTINATION.
