Alternative to client-side task runners in ASP.NET 5

I have been using Visual Studio Team Services (formerly Visual Studio Online) for version control. As a backup, I keep my project files on OneDrive and have mapped the root workspaces there. Soon after the launch of RC 1 and its support on Azure, we started migrating all our projects to ASP.NET 5, and things have been really great. I really like the GruntJS task runner for client-side development, but the problem with it is that the node modules create a deeply nested folder structure, which causes the OneDrive sync to fail.
The client-side development happens in TypeScript, and Grunt is used only for bundling and minification of the compiled JavaScript files, since the Web Optimization framework is not the recommended approach in ASP.NET 5 and I could not find any port of it for .NET Core.
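For context, our Grunt setup is nothing exotic; a minimal gruntfile.js along these lines (the paths and task names are illustrative rather than our exact configuration) covers the bundling and minification step:

module.exports = function (grunt) {
    grunt.initConfig({
        // concatenate the JavaScript emitted by the TypeScript compiler
        concat: {
            dist: { src: ['wwwroot/js/**/*.js'], dest: 'wwwroot/dist/app.js' }
        },
        // minify the bundle for release
        uglify: {
            dist: { files: { 'wwwroot/dist/app.min.js': ['wwwroot/dist/app.js'] } }
        }
    });
    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.registerTask('optimize', ['concat', 'uglify']);
};

It is the node_modules folder pulled down for those plugins, not the gruntfile itself, that produces the deeply nested paths OneDrive chokes on.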
While searching for solutions I stumbled upon a link which said that OneDrive for Business does not have this limitation. We have an Office 365 subscription for the organization and tried syncing the projects there, but that failed too: OneDrive for Business has the same limitation.
There is a UserVoice suggestion for this, where a Microsoft representative says they are thinking about it, but until that gets implemented I wanted to ask for alternatives to GruntJS for ASP.NET 5.
Most of the applications we build are single-page enterprise apps where the user opens the app at the start of the workday and closes the tab at the end of it, rarely refreshing in between, so we can live without optimizations for now. But just out of curiosity, for consumer-oriented applications, are there any alternatives to GruntJS for ASP.NET 5?
EDIT
Points to note:
TFS check-ins happen only when code needs to be committed, but OneDrive sync happens continuously in the background. In the event of a hardware crash or a lost device (or anything else in the world stopping us from accessing the code), TFS won't be able to provide the code under development which was not committed. OneDrive saves us from those scenarios.
Check-ins happen mostly when a significant portion of the assigned task in the backlog is completed. In 50% of the cases it's 2-6 working hours, but in the other half it can take around 2-3 business days; the worst-case scenario is 5 days if holidays happen in between.
OneDrive lets us stop syncing selected cloud folders down to the local machine, but there is no way to stop syncing local folders up to the cloud.
The possibility of hardware failure is assumed to be once a year, resulting in a loss of 5 working days, and losing that progress is not an acceptable risk. (Just installing Visual Studio Enterprise with all its features takes around 5-8 hours on my development machine with a 4th-gen Intel i5 at 2.9 GHz and 16 GB of RAM; we have plans to upgrade our development machines with SSDs and 64 GB of RAM, but that's out of scope for this question.)
Weighing the extra time spent on optimization tasks against the risk of lost files: optimization happens only when we push updates to the production server, which is 1-3 times a month, and each round takes no more than 10-15 minutes, so we can live with optimizing the files manually before production, or with not optimizing at all.
Files like .gitignore and .csproj control which files sync to TFS, not OneDrive. My problem is not with TFS at all; we are perfectly happy with the way check-ins are done and managed. Once code is committed, all the worries are resolved automatically; my worry is only about the uncommitted code.
Summary
I agree the problem has a highly specific nature, but it might be of help to future readers either looking for similar solutions or just to gain some more knowledge.

Two things:
Team Foundation Server / Visual Studio Online IS your source control solution; why would you need or want to back it up elsewhere?
I would probably exclude your node modules from your sync anyway. Your project configuration files will contain a list of the node modules you need, and if you wanted to move the app to a new machine or folder you'd just run npm install again to pull down the packages.
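As a rough sketch (the plugin names and versions below are illustrative, not taken from the question), the devDependencies section of package.json is what records those modules:

{
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-concat": "^0.5.1",
    "grunt-contrib-uglify": "^0.11.0"
  }
}

Running npm install in the project folder rebuilds the whole node_modules tree from that list, so the deeply nested folders never need to be synced or backed up.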
EDIT:
TFS check-ins happen only when code needs to be committed, but OneDrive sync happens continuously in the background. In the event of a hardware crash or a lost device (or anything else in the world stopping us from accessing the code), TFS won't be able to provide the code under development which was not committed. OneDrive saves us from those scenarios.
This indicates that your check-ins are probably too large or not frequent enough.
How often do you check in? How often does your hardware fail to the extent that you lose all your files? Is that an acceptable risk?
OneDrive lets us stop syncing selected cloud folders down to the local machine, but there is no way to stop syncing local folders up to the cloud.
And this is why you should be using TFS/VSO as the mechanism to control what files are stored in your source control repository. Systems like .gitignore or .csproj files exist for exactly this reason.
EDIT2:
I'm trying not to be too harsh, and I agree this is a problem which could probably be solved in some specific way by OneDrive, but I'm trying to perform an SO equivalent of the 5 Whys.
You've said:
TFS won't be able to provide the code under development which was not committed. OneDrive saves us from those scenarios
and
Check-ins happen mostly when a significant portion of the assigned task in the backlog is completed. In 50% of the cases it's 2-6 working hours, but in the other half it can take around 2-3 business days; the worst-case scenario is 5 days if holidays happen in between.
OK, so let's talk about that. You want to prevent loss of code if hardware fails; sure, we all do. Looking at your figures, 2-6 working hours seems reasonable, so you're looking at losing 0.25 to 1 day (ballpark). I think the bigger issue is if people aren't checking in for 5 days. You've mentioned 5 days is the worst case, and only when holidays happen, so that's not 5 working days; it's 2-3 business days worst case.
You've then said:
Possibility of hardware failure is assumed to be once a year resulting in a loss of 5 working days
So, above you've said 2-6 hours for check-ins (with exceptions), but you've based your decision to find a solution for this problem on an assumption of 5 working days. Above you've said 5 days (not working days) only if holidays occur. So really your figures for the risk are exaggerated.
Let's be more realistic and say you have a hardware failure once a year and lose 4 hours' worth of work.
From OneDrive, how long would it take you to restore this work? Let's guess at a few minutes to connect and download it, and perhaps half an hour to an hour to sort out conflicts (#optimism). So let's say a net saving of 3 hours... once a year... do you think this is reason enough to back up all your code to an offsite cloud provider?
You talk about the time taken to reinstall VS, but that's not relevant to backing files up to OneDrive; you'd need to do that regardless.
To be honest, I think your fear of losing uncommitted code is blown out of proportion, and I think your reaction is too. TFS and VSO offer features like shelvesets, which exist for exactly this kind of situation.
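As a sketch, assuming the tf.exe command-line client from Team Explorer is available (the shelveset name and comment are made up), shelving the day's uncommitted work is a one-liner:

tf shelve EndOfDay /comment:"backup of uncommitted work" /noprompt

The shelveset is stored on the TFS/VSO server, so it survives a dead hard drive without involving OneDrive at all, and it can be pulled down onto any machine later with tf unshelve.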
As an enterprise developer too, I do understand the concern, but normally you'd cover this by working on a machine with a RAID array or similar.
I think you should reassess your check-in policy and your assessment of the potential for lost code.

Related

.NET applications load extremely slowly in IIS 8 on Windows Server 2012

After perusing Stack Exchange and numerous MSDN articles for months, I'm coming to the community looking for help. I'm the IIS admin for my organization, and I've been seeing an issue where, when a change is made to a .NET application, the page takes a very long time to load when the first request is submitted. By a long time, I mean anywhere from five minutes to half an hour. I've tried a number of things to fix the issue, but I believe there's a configuration setting buried somewhere that is causing it. Once the page loads once, its load time is normal going forward. It seems to happen regardless of the application, although some take longer than others. It's also not consistent: sometimes it takes seven minutes to load after a change, and the same application may take seventeen minutes the next time a change is made. The changes are usually very minor, such as a new image box or a new link. Nothing major.
If an app takes twenty seconds to a minute to load the first time, we're not concerned. But the ten-, fifteen-, sometimes thirty-minute load times are not acceptable.
The issue happens regardless of whether it's a static-content application or one reaching out to a data source. Any connections to a data source are configured at the app level. We only use that on a handful of apps, and I verified that the connection information in the web.config is valid for those apps that do reach out to a data source. We're using Windows Authentication on every app.
We run a three-tiered environment, all running Windows Server 2012 R2 Standard with 16 GB of RAM and a multicore CPU setup. We're running IIS 8.5 on .NET 4.0.30319. The app pool uses the integrated pipeline with support for 32-bit apps (which the apps are). These are VMware hosts. The servers are rebooted once a week.
The issue does not occur on our test or dev servers. Only our production server. It doesn't matter what time of day the change is made.
In December of 2016, I ported all of our .NET applications from an older Windows 2003 system running IIS 6.0 to the new Windows 2012 system. While we worked with the developer to update any hardcoded hostnames in the apps, we ultimately had to add a CNAME record to redirect the old hostname to the new one. The issue seems to have started around this time.
One issue I noticed was that the developer was compiling all apps in debug mode. We changed this setting to false, but it only marginally helped in some cases.
I tried the following as well:
Segregated a given app and set the app pool to always run. Also tried changing the application pool identity to run as Network Service, or as a service account user.
Added the Application Initialization role and configured IIS to always run the app pool, with preload enabled at the app level. I backed this out when I saw it made no change.
Married up all the roles between our test/dev servers (where the issue doesn't happen) and production.
Disabled idle timeout at the app pool level.
Compared settings between test and production regarding compilation settings, timeouts, etc.
None of these changes made any difference. I can't find anything differentiating the test/dev boxes, where there is no issue, from the problematic production server. Please let me know what additional information you may need; I appreciate the help in advance!
Thanks,
Mike
We found the problem. Our AV software was the culprit, specifically Trend Micro Deep Security.
When a request was made and the program was going to compile in the temp area (C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files), I noticed it seemed to be taking a very long time for the files to build out. Pre-compiling showed the same behavior. I asked our LAN admin to add an exception for this particular folder. Once that was done, the first load only takes 5-10 seconds, which is acceptable. Thanks again for your input.
You want to precompile your pages so they don't get compiled on first run. IIRC it's just a simple web.config addition.
https://msdn.microsoft.com/en-us/library/bb398860.aspx
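If it really is plain compile-on-first-request overhead, a hedged example of precompiling from the command line with aspnet_compiler.exe (the virtual and physical paths below are placeholders):

aspnet_compiler -v /MyApp -p C:\inetpub\wwwroot\MyApp C:\PrecompiledOutput\MyApp

Deploying the precompiled output means the pages are already compiled before the first request ever arrives.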
Though to be honest, compile times of 5 minutes per page suggest there's a bigger problem going on. I've never seen a new .aspx compile take longer than a few seconds.
Neil N's answer makes sense, but I'd definitely check and see if that setting is the same across all of your servers.
If that isn't the culprit, my next stop would definitely be looking at the 'Application' section of Event Viewer. It could be throwing some meaningful warnings to help you get closer to the root of the issue.

SQL Server stalls for ten minutes

I manage an ASP.NET MVC 3 website with multiple online transactions. In the website, customers can place orders, pay bills while vendors can bill customers. All this can happen simultaneously so I have semaphores to ensure thread safety.
What I have noticed is that about once a week, the website stalls for ten minutes. My first thought was deadlocks in the semaphores, but after putting a semaphore log in place and analysing the results, there seem to be no deadlocks. Also, the website comes back by itself after ten minutes.
While investigating, I noticed that the entire website becomes unresponsive, not just the parts using the semaphores. They all use the database, though. That is why my primary suspect is the database.
What is stranger is that every time, the website freezes for ten minutes almost to the second. Could SQL Server have scheduled maintenance or anything else that could explain this delay? If not, do you have any idea what could cause it?
The answer to your question seems to be "yes". Something is happening in the environment that is locking things up.
Have you run sp_who2 to see what is running when it stalls?
If that is inconvenient, then set up a job to dump sp_who2 output into a table every five minutes. When it stalls, you can see what is running and work from there.
I have faced what may be a similar problem, where the master database seems to be getting locked up. As a consequence, renaming databases does not work. Fortunately, this is not in a live transaction environment, so waiting five minutes and trying again does the trick.
ASP.NET hangs can happen for a variety of reasons. Typically, when you have problems with SQL you get connection or command timeouts, not hangs.
You're much better off:
Grabbing the Debugging Tools for Windows.
Using ADPlus to grab a memory dump (adplus -hang -pn processname.exe), or DebugDiag with a dump rule set up.
Using WinDbg (or VS 2010 for the 4.0 framework), after setting up a symbol cache, and examining what's happening with !threads or !dumpheap -stat to inspect the threads and the heap objects.
Please note debugging production issues is very hard and WinDbg is not a friendly tool but guessing and looking at logs is even less so.

Advantages of a build server?

I am attempting to convince my colleagues to start using a build server and automated building for our Silverlight application. I have justified it on the grounds that we will catch integration errors more quickly, and will also always have a working dev copy of the system with the latest changes. But some still don't get it.
What are the most significant advantages of using a Build Server for your project?
There are more advantages than just finding compile errors earlier (which is significant):
Produce a full clean build for each check-in (or daily or however it's configured)
Produce consistent builds that are less likely to have just worked due to left-over artifacts from a previous build
Provide a history of which change actually broke a build
Provide a good mechanism for automating other related processes (like deploy to test computers)
Continuous integration reveals any problems in the big picture, as different teams/developers work in different parts of the code/application/system
Unit and integration tests run with each build go even deeper and expose problems that might not be seen on the developer's workstation
Free coffee/candy/beer. When someone breaks the build, he/she makes it up to the other team members...
I think if you can convince your team members that there WILL be errors and integration problems that are not exposed during the development time, that should be enough.
And of course, you can tell them that the team will look ancient in the modern world if you don't run continuous builds :)
See Continuous Integration: Benefits of Continuous Integration :
On the whole I think the greatest and most wide ranging benefit of Continuous Integration is reduced risk. My mind still floats back to that early software project I mentioned in my first paragraph. There they were at the end (they hoped) of a long project, yet with no real idea of how long it would be before they were done.
...
As a result projects with Continuous Integration tend to have dramatically less bugs, both in production and in process. However I should stress that the degree of this benefit is directly tied to how good your test suite is. You should find that it's not too difficult to build a test suite that makes a noticeable difference. Usually, however, it takes a while before a team really gets to the low level of bugs that they have the potential to reach. Getting there means constantly working on and improving your tests.
If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development.
From my personal experience, setting up a build server and implementing CI process, really changes the way the project is conducted. The act of producing a build becomes an uneventful everyday thing, because you literally do it every day. This allows you to catch things earlier and be more agile.
Also note that setting up a build server is only part of the CI process, which also includes setting up tests and ultimately automating the deployment (very useful).
Another side-effect benefit that often doesn't get mentioned is that CI tools like CruiseControl.NET become the central issuer of all version numbers for all branches, including internal RCs. You can then require your team to always ship a build that came out of the CI tool, even if it's a custom version of the product.
Early warning of broken or incompatible code means that all conflicts are identified asap, thereby avoiding last minute chaos on the release date.
When your boss says "I need a copy of the latest code ASAP" you can get it to them in < 5 minutes.
You can make the build available to internal testers easily, and when they report a bug they can easily tell you "it was the April 01 nightly build" so that you can work with the same version of the source code.
You'll be sure that you have an automated way to build the code that doesn't rely on libraries / environment variables / scripts / etc. that are set up in developers' environments but hard to replicate by others who want to work with the code.
We have found the automatic VCS tagging of the exact code that produced a version very helpful for going back to a specific version to replicate an issue.
Integration is a blind spot
Integration often doesn't get any respect - "we just throw the binaries into an installer thingie". If this doesn't work, it's the installer's fault.
Stable Build Environment
Prevents excuses such as "This error sometimes occurs when built on Joe's machine". Prevents accidentally using old dependent libraries when building on Mike's machine.
True dogfooding
Your in-house testers and users get a true customer experience. Your developers have a clear reference for reproducing errors.
My manager told us we needed to set one up for two major reasons. Neither was really to do with the final product; they were about making sure that what is checked in and worked on is correct.
First, to clean up DLL hell. When someone builds on their local machine, they can be pointing at any reference folder. Lots of projects were getting built with the wrong versions of DLLs because someone hadn't updated their local folder. On the build server it is always built from the same source; all you have to do is get latest to get the latest references.
The second major thing for us was a way to support projects without deep knowledge of them. Any developer can grab the source and do a minor fix if required. They don't have to mess with hours of setup or hunting for references. We have an overseas team that works primarily on one project, but if there is a rush fix we need to make during US hours, we can get latest and build without worrying about broken source or what didn't get checked in. Gated check-ins save everyone else on your team time.

Forbidding developers from committing code while the weekly build is made

Our development team (about 40 developers) has a formal build every two weeks. We have a process where, on the "build day", every developer is forbidden to commit code into SVN. I don't think this is a good idea, because:
The build will take days (even weeks, in bad times) to make and pass BVT.
People can't commit code as they wish, so they can't work properly.
People will then commit all their code in one huge batch, which makes the commit comment hard to write.
I want to know whether your team has the same policy, and if not, how you handle this situation.
Thanks
Pick a revision.
Check out the code from that revision.
Build.
???
Profit.
Normally, a build is made from labeled code.
If the label is defined (and does not move), every developer can commit as much as he/she wants: the build will go on from a fixed and well-defined set of code.
If fixes need to be made to the set of code being built, a branch can be defined from that label, minor fixes can be made there to achieve a correct build, and then merged back into the current development branch.
A "development effort" (like a build with its tweaks) should never block another development effort (the daily commits).
Step 1: svn copy /trunk/your/project/goes/here /temp/build
Step 2: Massage your sources in /temp/build
Step 3: Perform the build in /temp/build. If you encounter errors, fix them in /temp/build and build again
Step 4: If successful, svn move /temp/build /builds/product/buildnumber
This way, developers can check in whenever they want and are not disturbed by the daily/weekly/monthly/yearly build
Sounds frustrating. Is there a reason you guys are not doing Continuous Integration?
If that sounds too extreme for you, then definitely invest some time in learning how branching works in SVN. I think you could convince the team either to develop on branches and merge into trunk, or to commit the "formal build" to a particular tag/branch.
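A rough sketch of that branch-and-merge flow with stock SVN commands (the branch name and the ^/ repository-relative URLs are illustrative):

svn copy ^/trunk ^/branches/my-feature -m "Create feature branch"
(develop and commit on the branch as often as you like)
svn merge ^/branches/my-feature (run from an up-to-date trunk working copy once the feature is done)

The formal build can then be cut from trunk, or from a tag created the same way with svn copy, without ever blocking commits on the branches.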
We create a branch for every ticket or new feature, even if the ticket is small (e.g. takes only 2 hours to fix).
At the end of the coding part of each iteration, we decide which tickets to include in the next release. We then merge those tickets into trunk and release the software.
There are other steps within that process where testing is performed by another developer on each ticket branch before the ticket is merged to trunk.
Developers can always code by creating their own branch from trunk at any time. Note we are a small team with only 12 developers.
Both Kevin and VonC have rightly pointed out that the build should be made from a specific revision of the code and should never block developers from committing new code. If this is somehow a problem, then you should consider using a version management system that has both centralized AND local repositories. For example, in Mercurial there is a central repository just like in SVN, but developers also have a local repository. This means that when a developer makes a commit, he only commits to his local repository and the changes are not seen by other developers. Once he is ready to share the code with other developers, he just pushes the changes from his local repository to the centralized repository.
The advantage of this kind of approach is that developers can commit smaller pieces of code, even if they would break the build, because the changes are only applied to the local repository. Once the changes are stable enough, they can be pushed to the centralized repository. This way a developer gets the advantage of source control even when the centralized repository is down.
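A minimal sketch of that workflow with standard Mercurial commands (the commit messages are placeholders):

hg commit -m "work in progress" (recorded only in the local repository)
hg commit -m "feature finished"
hg push (publishes both changesets to the central repository when they are ready)

Until the push, nothing a developer commits can break anyone else's build.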
Oh, and you'll be looking at branches in a whole new way.
If you became interested in mercurial, check out this site: http://hginit.com
I have worked on projects with a similar policy. The reason we needed such a policy is that we were not using branches. If developers are allowed to create a branch, then they can make whatever commits they need to on that branch and not interrupt anyone else -- the policy becomes "don't merge to main" during the weekly-build period.
Another approach is to have the weekly-build split off onto a branch, so that regardless of what gets checked in (and possibly merged), the weekly build will not be affected.
Using labels, as VonC suggested, is also a good approach. However, you need to consider what happens when a labeled file needs a patch for the nightly build -- what if a developer has checked in a change to that file since it was labeled, and that developer's changes should not be included in the weekly build? In that case, you will need a branch anyway. But branching off a label can be a good approach too.
I have also worked on projects that make branches like crazy and it becomes a mess trying to figure out what's happening with any particular file. Changes may be committed to multiple branches in the same timeframe. Eventually the merge conflicts need to be resolved. This can be quite a headache. Regardless, my preference is to be able to use branches.
Wow, that's an awful way to develop.
The last time I worked in a really large team we had about 100 devs in 3 time zones (USA, UK, India), so we could effectively have 24-hour development.
Each dev would check out the build tree and work on what they had to work on.
At the same time, continuous builds would be happening. The build server would take its copy of the submitted code and build it. Any failures would go back to the most recent submitter(s) of code for that build.
Result:
Lots of builds, most of which compiled OK. These builds then kicked off automatic smoke-testing scenarios to find any unexpected bugs not caught during testing prior to committing.
Build failures found early, fixed early.
Bugs found early, fixed early.
Developers only wait the minimum time to submit (they have to wait until any other dev that is submitting has finished; this requirement exists so that the build servers have a point at which they can grab the source tree for a new build).
Most devs had two machines so they could work on a second bug while running their tests on the other machine (the tests were very graphical and would cause all sorts of focus issues, so you really needed a different machine to do other work).
Highly productive, continuous development with no dead time as in your scenario.
To be fair, I don't think I could work in place that you describe. It would be soul destroying to work in such an unproductive way.
I strongly believe that your organization would benefit from continuous integration, where you build very often, perhaps for every check-in to your code base.
I don't know if I'll get shot for saying this, but you should really move to a decentralized solution like Git. SVN is horrible about this, and the fact that you can't commit basically stops people from working properly. With 40 people this is worth it, because everyone can continue working on their own stuff and only push what they want. The build server can do what it wants and build without affecting everyone.
Yet another example of why Linus was right when he said that in almost all cases a decentralized solution like Git works best for real-life teams.

How can we improve our deployment and build systems?

We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When they finish a feature, the developers individually upload their changes to Staging. If the site is stable (determined by really loose testing), we upload the changes to Dev, then User Acceptance and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If it would be good for you, you could become the Continuous Integration champion of your company. You could do some research on a good process for CI with TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
I.e. if you are just a pleb developer and your managers are the senior developers, then find another job. If you are a senior developer and your managers are the CIO types, i.e. actually running the business, then it is your job to change it.
Tell them that if you were using a key feature of the very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out and when. That would mean that in the event of a subtle bug being introduced and slipping past user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
One of the most important parts of using tags is that you can roll back to a specific point in time. Think of it as an image backup. If something bad gets deployed, you can safely assume you can roll back to a previous working version.
Also, developers can quickly grab a tag (dev, prod or whatever) and deploy it to their development PC - a feature I use all the time to debug production problems.
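A hedged example with the TFS command-line client (the label name and server path are placeholders):

tf label Release-2.1 $/MyProject /recursive /comment:"Build 2.1 as deployed to Live"
tf get $/MyProject /version:LRelease-2.1 /recursive (pulls back exactly that snapshot later)

That get-by-label step is what makes the rollback and the debug-on-a-dev-PC scenarios above work.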
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
You also need to tell management that you believe the level of testing done is not sufficient. This is not a unique problem for an organisation and they'll probably say they already know. No harm in mentioning it though rather than waiting for a major problem to arrive.
As far as individuals doing builds versus an automated build process goes, whether you really need that depends on how many developers there are and how often you do builds.
What is the problem? As you said, you can't tell if management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The problem has to be of the nature of "our current process has failed 3 out of 10 times, and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of reduced costs, increased profits, reduced time, or reduced use of resources. "Because it's widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Have you encountered times when someone made a change that went to production but never got into source control, and was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It seems like you are moving individual changes between environments rather than builds. The key distinction is that if two changes work great in UAT because of how they interact, but only one change is promoted to production, it could break there. Promoting consistent code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
I work on the continuous integration and deployment solution AnthillPro. The way we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would get from using tags, without actually having to go around tagging things. The system just records the timestamp, and every push of the code through the testing environments is tied to a known snapshot of the code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to Dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and set it up to auto-deploy the good dev build to Stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in Visual Studio. EVERYTHING goes through the CI build and deployments. When moving to Prod, our production group pulls the appropriate copy off the build server. I've even trained our QA group in web testing, and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.
