Sass at Multiple Physical Locations

I am about to start a new Sass project where the work will need to be carried out from multiple physical locations, on different machines, including laptops.
The first location is a standard setup with Compass etc. all running OK.
The second also has Compass set up, but cannot be networked to the first.
The third would be laptops, etc.
So the question:
What is the best way to access the same Sass files from all three locations (at different times) without carrying a stick or drive around?
Google Drive?
FTP download at each location?
I'm also concerned that someone may not get the latest version before modifying it.

When any sort of code is going to be worked on in multiple locations, it's always best to use some sort of version control system. Ideally, all code should be under version control anyway, but I'll overlook that.
A version control system (VCS) will allow you to make changes in one place, store them in a central place, and then get those changes onto other machines. It will also let you check what changes were made and when, making it easier to find what caused something to break.
There's a multitude of different options out there, and it comes down to whether you want to host your own server, use a generally available one, pay a small amount each month to keep repositories private, etc.
The obvious candidate (being the current favourite, seemingly) would be git, where you can host your own server or use something like GitHub. But there are also other options, including (but not limited to):
CVS
SVN
Mercurial
Which option you go for really comes down to personal preference.
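For example, the day-to-day loop with git looks something like this (a minimal sketch; the repository URL is a placeholder):

    # one-time setup on each machine
    git clone https://example.com/repos/sass-project.git
    cd sass-project

    # start of a session: fetch everyone's latest changes
    git pull

    # ... edit your .scss files ...

    # end of a session: record and share your work
    git add styles/main.scss
    git commit -m "Adjust header typography"
    git push

Pulling at the start of every session also addresses your concern about stale copies: git will refuse a push until you have merged the latest changes from the server.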

If you are the only one working on the code and don't care much about the safety that version-control systems provide, you can very easily put your project in a Dropbox folder and set the desktop application to sync only selected directories, ignoring others.
I use this method even though I use Git, because sometimes I just don't want to commit changes yet and want to continue working on them in a different location. I fire up the other computer and I am right where I left everything (including text editor settings, plugins, etc.).

Related

Run R script and hide the actual code from user

I have created an R script that:
Reads some data from a database,
Makes some transformations, and
Exports the modified table to a CSV.
This code needs to run in a client's machine, but we need to "hide" the actual code from the user.
Are there any useful suggestions on how we can achieve that?
Up front
... it will be nearly impossible to deploy an R <something> to another computer in a way that prevents curious users from accessing the source code.
From a mailing list conversation in 2011, in response to "I would not like anyone to be able to read the code.",
R is an open source project, so providing ways for you to do this is not
one of our goals.
Duncan Murdoch https://stat.ethz.ch/pipermail/r-help/2011-July/282755.html
(Prof Murdoch was on the R Core Team and R Foundation for many years.)
Background
Several (many?) programming languages provide the ability to compile a script or program into an executable, the .exe you reference. For example, Python has tools like py2exe and PyInstaller. The tools range from merely compacting the script into a zip-ball, perhaps obfuscating it, to actually creating an .exe with the script tightly embedded. (This part could use some more citations/research.)
This is usually good enough for many people, in the sense that it keeps the honest out. I say it that way because all you need to do is google phrases like decompile py2exe and you'll find tools, howtos, tutorials, etc., whose intent might be honestly trying to help somebody recover lost code. Regardless of the intentions, they will only slow curious users.
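For illustration, a typical invocation of one of those Python tools looks like this (Python shown only as an example of how other languages handle the problem):

    # bundle a script and its dependencies into one executable
    pyinstaller --onefile myscript.py
    # the result lands in dist/myscript (dist\myscript.exe on Windows)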
Unfortunately, there are no tools that do this easily for R.
There are tools with the intent of making it easy for non-R-users to use R-based tools. For instance, RInno and DesktopDeployR are two tools with the intent of creating Windows (no mac/linux) installers that support R or R/shiny tools. But the intent of tools like this is to facilitate the IT tasks involved with getting a user/client to install and maintain R on their computer, not with protecting the code that it runs.
Constrain R.exe?
There have been questions (elsewhere?) asking whether one can modify the R interpreter itself so that it does not do everything it is intended to do. For instance, one could redefine base::print in such a way that functions' contents cannot be dumped, make debug not show the code it's about to execute, and perhaps take several other protective steps.
There are a few problems with this approach:
There is always another way to get at a function's contents. Even if you stop print.default and the debugger from doing this, there are other ways to get to the functions (body(.), for one; see the sketch after this list). How many of these rabbit holes do you feel you will accurately traverse and close off, every one of them ... with no adverse effect on normal R code?
Even if you feel you can get to them all, are you encrypting the source .R files that contain your proprietary content? Okay, encrypting is good, except you need to decrypt the contents somehow. Many tools that have encrypted contents do so to thwart reverse-engineering, so they also embed (obfuscatedly, of course) the decryption key in the application itself. Just give it time, somebody will find and extract it.
You might think that you can download the key on start-up (not stored within the app), so that the code is decrypted in real-time. Sorry, network sniffers will get the key. Even if you retrieve it over https://, tools such as https://mitmproxy.org/ will render this step much less effective.
Let's say you have recompiled R to mask print and such, have a way to distribute source code encrypted, and are able to decrypt it in a way that does not easily reveal the key (for full decryption of the source code files). While it would take a dedicated user to wade through everything above to get at the source code, none of those steps may even be necessary: a recipient of your modified interpreter may legally compel you to release your changes to the R interpreter itself (the changes you put in place to prevent printing function contents). This doesn't reveal your source code, but it will reveal many of your methods, which might be sufficient. (Or consider just the risk of the legal costs.)
R is GPL, and that means that anything that links to it is also "tainted" with the GPL. This means that anything compiled with Rcpp, for instance, will also be constrained/liberated (your choice) by the GPL. This includes thoughts of using RInside: it is also GPL (>= 2).
To do this without touching the GPL, you'd need to write your own interpreter (likely from scratch) without code from the R project.
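To make the first point in the list above concrete, here is how trivially a function's source can be recovered from a shell, with no debugger and no print involved (stats::lm stands in for your proprietary function):

    # dump the body of any R function straight from the command line
    Rscript -e 'body(stats::lm)'

    # or find and print the full definition, formals included
    Rscript -e 'getAnywhere("lm")'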
Alternatives
Ultimately, if you want to release R-based utilities/apps/functionality to clients, the only sure-fire way to allow them to use your code without seeing it is to ... control the computers on which R will run (and source code will reside). I'll add more links supporting this claim as I find them, but a small start:
https://stat.ethz.ch/pipermail/r-help/2011-July/282717.html
https://www.researchgate.net/post/How_to_make_invisible_the_R_code
Options include anything that keeps the R code and R interpreter completely under your control. Simple examples:
Shiny apps, self-hosted (or on shinyapps.io if you trust their security); servers include Shiny Server (both free and commercial versions), RStudio Connect (commercial only), and ShinyProxy. (This list is not exhaustive.)
plumber is an API server, not a Shiny server. The intent is single HTTP(S) endpoint calls, possibly authenticated, supporting whatever HTTP supports (POST, GET, etc.). It can be served in various ways; see its hosting page for options.
Rserve. I know less about this, but from what I've experienced with it, I've not had as much luck integrating with enterprise systems (where, e.g., authentication and fine-control over authorization is important). This does allow near-raw access to R, so it might not be what you want (especially when the intent is to give to clients who may not be strong R users themselves).
OpenCPU should be discussed, but not as a viable candidate for "protect your code". It is very similar to plumber in that it provides HTTP endpoints, but it supports endpoints for every exported function in every package installed in its R library. This includes the base package, so it is not at all difficult to get the source code of any function that you could get at on the R console. I believe this is a design feature, even if it is perfectly at odds with your intent to protect your code.
Anything that can call R or Rscript. This might be PHP or mod_python or similar. Any web-page serving language that can exec("/usr/bin/Rscript",...) can take its output and turn it around to the calling agent. (It might also be possible, for example, for a PHP front-end to call an opencpu endpoint that only permits connections from the PHP-serving host.)
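As a minimal sketch of that last pattern (the paths and script name are hypothetical), a CGI-style shell wrapper might look like:

    #!/bin/sh
    # run the R script server-side and return only its output;
    # the client never sees the R source, which stays on your machine
    echo "Content-Type: text/csv"
    echo ""
    /usr/bin/Rscript /opt/private/transform.R

The client receives the CSV; the code never leaves the server you control, which is the whole point of this section.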

How do I upload changed files via FTP? (wordpress)

I'm used to git and command-line stuff, but I'm freelancing on a WordPress site. I have FTP access, but the site I'm working on has something like 16,000 files just in wp-content. Is there a way to automatically upload only changed files? I'm using FileZilla and there's an option to do that, but going through 16,000 files takes hours anyway. I know I could use git and do things manually, but that's a pain.
I'm open to suggestions outside of FTP if there's any easier way in general for WordPress dev.
Since you're bound to FTP¹, your options are quite limited. There are free (in limited capacity) services that deploy to SFTP² via git. Some examples: DeployBot, Buddy.works, DeployHQ, etc. There is also Beanstalk, which I've used in the past and which worked rather well, but the free account is limited to 100MB (which would obviously not work for your situation, and it sounds like the client is too cheap to buy a paid account). It is a bit odd to me to store a media library in git, but that is another topic and I understand your dilemma.
¹I would highly recommend using the insecurities of FTP as an argument to try to convince the client to switch to... literally anything else.
²Not certain if these services support FTP (as opposed to SFTP). You would probably need to ask, but they may not given the insecurity of FTP.
EDIT - There may also be some open source options like this (albeit old) solution: https://github.com/mehedi101/ftploy (purely as an example; there are others, but they appear to vary in complexity and I have not tried them)
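If you can run a command-line tool locally, lftp's mirror mode is also worth a look: it compares timestamps and sizes and only transfers files that changed (host, credentials, and paths below are placeholders):

    # upload only new/changed files from ./wp-content to the server
    lftp -u USER,PASS ftp.example.com \
         -e "mirror -R --only-newer ./wp-content /public_html/wp-content; quit"

It still has to list the remote tree, but that is one automated pass instead of hours of clicking through FileZilla.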

Drupal development workflow for teams

In my last Drupal project we were 5 people doing coding and installing new modules, while at the same time our client was putting up content. Since we chose to have only one server for simplicity, there were times when many people needed to write to the same files, like style.css or page.tpl.php, or when someone's broken code would prevent others from working.
Are there any best practices for a team that works with Drupal? How can we leverage code repositories or sandboxes?
A single server may appear to give you "simplicity", but what it actually gives you, as you've experienced, is utter chaos -- and you were lucky if it didn't result in unpleasant, hard-to-reproduce, harder-to-fix crashes. Don't settle for anything less than a "production" server (where your client can be working -- on content only -- if they like minor risks;-) and a "staging" one (where anything from the development team goes to get tested and tried for a while before promotion to production, which is done at a quiet and ideally prearranged time).
Second, use a version control system of some kind. Which one matters less than using one at all: svn is popular and simple, the latest fashion (for excellent reasons) are distributed ones such as hg and git, Microsoft and other have commercial offerings in the field, etc.
The point is, whenever somebody's updating a file, they're doing so on their own client of the VCS. When a coherent set of changes is right, it's pushed to the VCS, and the VCS diagnoses and points out any "conflicts" (places where two developers may have made contradictory changes) so the developer who's currently pushing is responsible for editing the files and fixing the conflicts before their pushes are allowed to go through. Only then are "current versions" allowed to even go on the staging system for more thorough (and ideally automated!-) testing (or, better yet, a "continuous build" system).
Basically, there should be two layers of defense against such conflicts as you observed, and you seem to have deployed neither. They're both essential, though, if forced under duress to pick just one, I guess I'd reluctantly pick the distinction between production and staging servers -- development will still be chaotic (intolerably so compared to the simple solidity of any VCS!) but at least it won't directly hurt the actual serving system;-).
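In git terms, the workflow just described looks roughly like this (a sketch; the file names are illustrative):

    git pull                            # merge everyone else's changes
    # CONFLICT (content): Merge conflict in themes/custom/style.css
    vi themes/custom/style.css          # resolve the conflicting hunks
    git add themes/custom/style.css
    git commit                          # record the resolved merge
    git push                            # only now does anything reach the shared repo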
Here's a great writeup about development workflow in Drupal. It sums everything so far responded here and adds "Features", "Strongarm" and a few more tricks to the equation. http://www.lullabot.com/articles/site-development-workflow-keep-it-code

How can we improve our deployment and build systems?

We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When they finish a feature, the developers individually upload their changes to Staging. If the site is stable (determined by really loose testing), we upload the changes to Dev, then User Acceptance, and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If you're up for it, you could become the Continuous Integration champion of your company. You could do some research on a good CI process with TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input, and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
That is, if you are just a pleb developer and your managers are the senior developers, then find another job. If you are a senior developer and your managers are the CIO types, i.e. actually running the business, then it is your job to change it.
Tell them that if you were using a key feature of very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out when. That would mean in the event of a subtle bug getting introduced that gets passed user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
One of the most important benefits of using tags is that you can roll back to a specific point in time. Think of it as an image backup. If something bad gets deployed, you can safely "roll back" to a previous working version.
Also, developers can quickly grab a tag (dev, prod or whatever) and deploy it to their development PC... a feature I use all the time to debug production problems.
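With TFS specifically, labeling and later retrieving a labeled version can be done from the command line (a sketch, assuming the tf client is on your PATH and a project path of $/MyProject):

    # label everything under the project as release 1.4.2
    tf label "release-1.4.2" $/MyProject /recursive

    # later: pull exactly what was deployed, e.g. to debug a production issue
    tf get $/MyProject /version:Lrelease-1.4.2 /recursive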
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
You also need to tell management that you believe the level of testing done is not sufficient. This is not a unique problem for an organisation and they'll probably say they already know. No harm in mentioning it though rather than waiting for a major problem to arrive.
As for individuals doing builds versus an automated build process: whether you really need automation depends on how many developers there are and how often you do builds.
What is the problem? As you said, you can't tell if management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The pitch has to be of the nature of "our current process has failed 3 out of 10 times, and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of reduced costs, increased profits, reduced time, and reduced use of resources. "Because it's widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Have you encountered times where someone made a change that went to production but never got into source control, and was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It seems like you guys are moving changes between environments rather than "builds". The key distinction is that if two changes work great in UAT because of how they interact, if only one change is promoted to production it could break there. Promoting consistent code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
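Even the zip-file approach can be done in one line, so there is little excuse not to promote a single consistent artifact (the paths and naming are just an example):

    # build one time-stamped artifact, then push that same file through UAT and Live
    zip -r "webapp-$(date +%Y%m%d-%H%M).zip" ./webapp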
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would get from using tags, without actually having to go around tagging things. The system just records the time stamp, and every push of the code through the testing environments is tied to a known snapshot of the code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and have it set to auto deploy the good dev build to stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in visual studio. EVERYTHING goes through the CI build and deployments. When moving to prod, our production group pulls the appropriate copy off of the build server. I even trained our QA group on how to do web testing and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.

Slicehost installation profile

I'm no UNIX guru, but I've had to set up a handful of slices for various web projects. I've used the articles there to set up users, a basic firewall, nginx or Apache, and other bits and pieces of a basic web server.
I foresee more slice administration in my future. Is there a more efficient way to set up users, permissions, and software on a clean slice than configuration by hand?
It sounds like you can create a new slice from the backup of an existing one. This might not work for you if the slices would be different sizes, different distros, etc. Their forums mention this: Clone a slice?
Depending on the number of machines, you might find it makes sense to use something like CFEngine or Puppet to configure the new installs.
That brings your work down to configuring each new machine as, say, a CFEngine client, which can then be used to install packages, edit files, etc.
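As a small sketch of the Puppet flavour of this (the package and user names are just examples):

    # describe the machine's desired state once...
    cat > webserver.pp <<'EOF'
    package { 'nginx':
      ensure => installed,
    }
    service { 'nginx':
      ensure => running,
      enable => true,
    }
    user { 'deploy':
      ensure     => present,
      managehome => true,
    }
    EOF

    # ...then apply it on every new slice
    sudo puppet apply webserver.pp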
There are a few articles I wrote on the subject, with a Debian bias, here:
http://www.debian-administration.org/tag/cfengine
