Testing plugins live with Varying Vagrant Vagrants - WordPress

I'm currently trying to use VVV to develop and test my plugins. My host OS is Win10.
My plugins are in D:\Workshop\projects\vendor\module. I've used this folder structure for a long time, and it is really convenient, especially for use with Composer and friends.
Now I've installed VVV, created a site with VV. I want to test a plugin, the source code of which is in D:\Workshop\projects\XedinUnknown\my-project. So, I create a symlink in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\htdocs\wp-content\plugins that points to that project's folder. Alas, it doesn't work. If I SSH into VVV and ls /srv/www/my-test-site/htdocs/wp-content/plugins, I can see my-project there, but it points to ../../../../../../../XedinUnknown/my-project, which, of course, doesn't exist. If instead of symlink I create a junction, it's just an empty file.
I suspect that this has to do with how the Linux environment handles Windows symlinks, but I'm not entirely sure. Is it possible to make this work somehow? I really don't want to copy the whole project folder into VVV.
This is also addressed here.

So, it seems I've found something of a solution. I added a synced folder that maps to my projects home. I then create a symlink to that folder from the WP plugins directory, inside the VM.
Step 1 - Add Shared Folder
This should be done in a Customfile as explained here. This file should go into the same directory as the Vagrantfile, i.e. it becomes the Vagrantfile's sibling. In my case, if you're following along from my question, it is in D:\Workshop\projects\XedinUnknown\vvv-local. Anything put here becomes global for the whole of VVV, which also gives you the ability to use different combinations of your projects on different websites. Add these contents to your Customfile, creating the file if it does not exist.
config.vm.synced_folder "D:/Workshop/projects", "/srv/projects", :owner => "www-data", :mount_options => [ "dmode=775", "fmode=774" ]
Of course, you should replace D:/Workshop/projects with the path to where you store your projects. Note the forward slashes (/); this form works on both Windows and *nix hosts. For a Windows-only configuration, I suspect you'd have to double any backslashes (\\), because the backslash is an escape character in Ruby strings.
Step 2 - Add Link to Project
This should be done in your site's vvv-init.sh file. In my case, this file was in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\, because I want to create this symlink specifically for the my-test-site site. Please note that your VVV path will probably be different, and it doesn't have to be inside the projects directory. It's wherever you cloned VVV into. Add the below lines to your site's vvv-init.sh file.
if [ ! -L "htdocs/wp-content/plugins/my-project" ]; then
    # -L tests for the symlink itself; -f would follow the link and
    # always fail for a link pointing at a directory
    echo 'Creating symlink to plugin project...'
    cd ./htdocs/wp-content/plugins
    ln -s /srv/projects/XedinUnknown/my-project my-project
    cd -
fi
In the above snippet, change the path to your desired project path, keeping in mind that /srv/projects/ now maps live to the projects root on your host OS. You can also replace the second occurrence (the last word) of my-project in ln -s /srv/projects/XedinUnknown/my-project my-project with whatever name you want; as long as you don't change it later, your plugin should not suddenly get deactivated.
Also, from what I understand, vvv-init.sh runs during provisioning, not every time the machine is brought up. So, if you want the code in there to run, run vagrant up --provision from the VVV directory. If you don't want to re-provision, you can run it manually: SSH into VVV with vagrant ssh, then cd /srv/www/my-test-site (replacing my-test-site with the name of your site), and run . vvv-init.sh.
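For reference, the manual route boils down to these commands (using my-test-site as the example site name):

vagrant ssh                  # SSH into the VVV machine
cd /srv/www/my-test-site     # replace my-test-site with the name of your site
. vvv-init.sh                # source the init script to (re)create the symlink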
Afterword
I am quite new to Bash scripting, and I don't know whether my solution is the best one, so please feel free to suggest better versions of the Bash script. I also don't know Ruby and am new to Vagrant, so please feel free to suggest improvements to the Customfile, which is in essence the same as the Vagrantfile.
One possible issue I can anticipate with this solution (and this is inherent to the filesystem architecture) is that if WordPress makes changes to your plugin, e.g. if you run a WP update, it will effectively delete all files in your project, including the repository. So, on the testing site I would recommend using something like this. I am in no way associated with this plugin.

Related

Different local and remote organisation of an R project and GitHub

I want to version control my R scripts so I've created an R project and a GitHub repo. My scripts are scattered through several directories within the same directory where the R project is.
I would like my GitHub repository to harbor only the scripts, regardless of the folders they are stored in locally. However, when I run the commands below:
git add folder/file.R
git commit -m "my_message"
git push -u origin master
A directory named folder is created containing file.R, but I'd like to just see file.R without the folder. Do you know how I can do this? Also, would it be good practice? My local folders are organized so that each directory contains its own scripts and results; that's the reason the scripts are separated.
Thank you very much
Is there a way to add the file.R without specifying the path?
Not using git add, no. The design constraint for git add is that it should store the file's name exactly as it appears, including the forward slashes, so if the file's name is folder/file.R, that's the file's name.
You have some options here though:
You can make a parallel directory where you put the files with the names you want them to have. Run git init in that directory, copy the folder/file.R file to file.R in that directory. Then cd ../gitdir or whatever is appropriate to get there, and git add file.R.
This method is probably the best because it's the simplest.
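As a sketch of that first method (the directory names here are just examples):

# set up a parallel repository that flattens the layout
mkdir ../gitdir && cd ../gitdir
git init
cp ../myproject/folder/file.R file.R     # copy the file without its folder prefix
git add file.R
git commit -m "Add file.R at the repository root"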
You can write your own programs using git hash-object -w and git update-index, which are two of Git's plumbing commands. A plumbing command, in Git, is basically a command that exists so that you can build user-facing commands: they're not meant to be run by humans but rather by other programs. So you write a program (in whatever language you like) that uses these plumbing commands to achieve whatever you want.
In particular, you can create or find a Git blob object holding the contents of file.R as read from anywhere you like, then use git update-index to create an index entry holding whatever path you like and referring to the blob object you created (or found) with git hash-object with the -w flag.
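For example, a minimal sketch of that plumbing route, run from the directory containing folder/file.R:

# write the file's contents into the object database, then index it as file.R
blob=$(git hash-object -w folder/file.R)
git update-index --add --cacheinfo 100644 "$blob" file.R
git commit -m "Add file.R without its folder"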
Since Git is a suite of tools, not a solution, you can come up with your own method. The tools in Git are made with particular approaches in mind, but they are flexible enough to be repurposed.

How do I update a drupal module without deleting it first?

How do I upload a new version of a module to my site? If I choose "Install new module" through the administration page, I get a message that the module is already installed. There are two workarounds that I have found, but neither seems ideal or the way you are supposed to do it.
I can delete the old module first, and then upload and install the new version. However, if the module has data associated with it, that data will be lost.
I can replace the module files on the server. This doesn't seem such a clean way to do it; I would rather follow a more standard process, if there is one.
So what is the best way to do it?
Thanks!
Make a back-up... just in case.
Put your site in maintenance mode.
Overwrite your module files with the updated version (delete the old module files first).
Run /update.php from the browser, or run drush updb from the console if you have Drush installed (to apply database changes, if any, and similar stuff).
Put your site in "normal" mode again and check that everything is OK.
https://www.drupal.org/node/250790
MilanG's answer is incorrect in step 3. You don't overwrite; you need to remove the existing module's files, then put the new module files in their place. This is because new module releases may remove files, and if those files aren't removed you can sometimes have problems.
So...
Make a backup (drush archive-dump)
Put the site in maintenance mode (drush vset maintenance_mode 1)
Delete the old module's directory entirely (rm -fr path/to/modulename)
Download the latest version of the module (drush dl modulename)
Run update.php (drush -y updb)
Turn off maintenance mode (drush vset maintenance_mode 0)
With Drush it's pretty easy to write a wrapper script and run it per module, so that you could do "drush update-module modulename" and have those other steps happen automatically.
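A minimal sketch of such a wrapper (the module path is an assumption for a typical Drupal 7 layout; adjust it, and add a backup step, to taste):

#!/bin/sh
# update-module.sh -- usage: update-module.sh modulename
set -e
mod="$1"
drush vset maintenance_mode 1        # maintenance mode on
rm -rf "sites/all/modules/$mod"      # delete the old module directory entirely
drush dl "$mod"                      # download the latest release
drush -y updb                        # apply any database updates
drush vset maintenance_mode 0        # maintenance mode off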

Can a pre-commit Git hook zip a directory and add it to the repository?

I'm doing development on a WordPress plugin. My development directory contains a lot of development-specific stuff (e.g. Grunt files, Sass files, the Git repository itself, etc.).
Obviously, I don't want to distribute this folder containing all of those development files; people don't want a few MB of Grunt files when they download my WordPress plugin.
Up until now, though, my "release" process has been cumbersome:
Commit the Git changes
Zip the entire folder
Open the zip file and delete the .git folder, grunt files, and all the other development-specific files
Release the new zip
I don't know the best way to accomplish this, but I'm very vaguely familiar with Git hooks, and I had this thought: could I set up a Git hook that would zip ONLY the needed production files into a ZIP file and store it with the repo? That way, every time I commit it would automatically create a new release ZIP.
Is that possible? If so, could someone point me in the right direction?
Oh also, I'm on Windows (・_・;). So I'm hoping that there's a way to do it on Windows.
I can't speak for Windows, but:
It's technically possible to do that sort of thing in a pre-commit hook.
Don't.
A pre-commit hook that modifies "what you will commit" is annoying (if nothing else, it violates the "rule of least astonishment", where your version control system simply stores the versions you tell it to store). Apart from that, storing large pre-compressed binaries interferes with git's attempt to save space in pack files, and will cause rapid repository bloat, poor performance, running out of memory, and so on. A ZIP-archive is a pre-compressed binary and hence will behave badly.
In general, a more reasonable "hook-y" way to handle releases is to set up a "release server" to which you push new releases, and have the push trigger the archive-generation. (There are ways to do this without a separate server / repository, and you can do it in a more pull-style fashion, but the push-style is easy to illustrate.)
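For instance, the client side of that push-style setup might look like this (the remote name and URL are hypothetical):

# one-time: point a remote at the release server's repository
git remote add release ssh://releasehost/srv/git/myplugin.git
# per release: pushing a branch named release* triggers the hook below
git push release release-1.13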
[Edit: I had originally considered git archive but did not realize you could get it to exclude files conveniently, so wrote up the below instead. So, jthill's answer is better and should be your first resort. I'll leave this in place as an alternative for cases where, for some reason, git archive won't do.]
For instance, here's a server-side post-receive hook code fragment that checks whether a branch whose name matches release* has been pushed-to, and if so, invokes a shell function with the name of the branch (once for each such branch):
#! /bin/sh
NULL_SHA1=0000000000000000000000000000000000000000

scan()
{
    local oldsha newsha fullref shortref
    local optype reftype

    # each line on stdin is: <old-sha> <new-sha> <full-ref-name>
    while read oldsha newsha fullref; do
        case $oldsha,$newsha in
        $NULL_SHA1,*) optype=create;;
        *,$NULL_SHA1) optype=delete;;
        *) optype=update;;
        esac
        case $fullref in
        refs/heads/*)
            reftype=branch
            shortref=${fullref#refs/heads/}
            ;;
        *)
            reftype=other
            shortref=$fullref
            ;;
        esac
        case $optype,$reftype,$shortref in
        create,branch,release*|update,branch,release*)
            do_release $shortref;;
        esac
    done
}

scan
(much of the above is boilerplate, which I have stripped down to essentials). You would have to write the do_release function, which might resemble (totally untested):
do_release()
{
    local tmpdir=/tmp/build.$$ # or use mktemp -d
    # $tmpdir/index is git's index; $tmpdir/t is the work tree
    trap "rm -rf $tmpdir; exit 1" 1 2 3 15
    rm -rf $tmpdir
    mkdir -p $tmpdir/t
    GIT_INDEX_FILE=$tmpdir/index GIT_WORK_TREE=$tmpdir/t git checkout -f $1
    # now clean out grunt files and make zip archive
    (cd $tmpdir/t; rm -rf grunt; zip -r ../t.zip .)
    # put completed zip archive in export location, name it
    # based on the branch name
    mv $tmpdir/t.zip /place/where/zip/files/live/$1.zip
    # clean up temp dir now, and no longer need to clean up
    # on signal related abort
    rm -rf $tmpdir
    trap - 1 2 3 15
}
There's actually a command for this, git archive.
git archive master -o wizzo-v1.13.0.zip
See the EXAMPLES section, you can select paths, add prefixes to them, define custom postprocessing by output extension, and some more minor tweaks.
Also see the ATTRIBUTES section: you can give files -- arbitrary patterns, really -- an export-ignore attribute to exclude them from archives.
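For instance, a .gitattributes along these lines (the file names are just the development files mentioned in the question) keeps those paths out of every archive:

# .gitattributes
Gruntfile.js    export-ignore
sass/**         export-ignore
.gitattributes  export-ignore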
It's got a bunch more handy-dandies, you can get archives from remote repos, expand arbitrary git log --pretty=format: placeholders, the git manpages are definitely worth whatever time you can invest in them.
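E.g., fetching an archive straight from a remote repository (the URL is a placeholder, and the server must permit git upload-archive):

git archive --remote=ssh://host/path/repo.git master -o wizzo.zip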

What is the Unix way for a console script to use config files?

Let's imagine we have some script 'm12' (I've just invented this name) that runs
on Linux computers. If it is situated in your $PATH, you can easily run it
from the console like this:
m12
It will work with the default parameters. But you can customize the behaviour of
this script by running something like:
m12 --enable_feature --select=3
That's great and it will work. But I want to create a config file ~/.m12rc so I
will not need to specify --enable_feature --select=3 every time I run it.
It can be easily done.
The difficult part is starting here.
So, I have the ~/.m12rc config file, but I want to start m12 without the
parameters that are stored in that config file. What is the Unix way to do
this? Should I run the script like this:
m12 --ignore_config
or is there a better solution?
Next, let's imagine I have a config file ~/.m12rc and I want some parameters from
that file, but want to change them a bit. How should I run the script, and how
should the script work?
And the last question: is it a good idea for the script to first look for .m12rc
in the current directory, then in ~/, and then in /etc?
I'm asking all these questions because I want to implement config files in my
small script, and I want to make the correct design decisions.
The book 'The Art of Unix Programming' by E S Raymond discusses such issues.
You can override the config file with --config-file=/dev/null.
You would normally use the order:
System-wide configuration (/etc/m12/m12rc, or just /etc/m12).
User's personal configuration (~/.m12rc)
Local directory configuration (./.m12rc)
Command-line options
with each later-listed item overriding earlier listed items. You should be able to specify the configuration file to read on the command line; arguably, that should be given precedence over other options. Think about --no-system-config or --no-user-config or --no-local-config. Many scripts do not warrant a system config file. Most scripts I've developed would not use both local config and user config. But that's the way my mind works.
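As a sketch, that order falls out naturally if the script sources each layer in sequence (using the m12 naming from the question):

# source each config layer that exists; later layers override earlier ones
[ -r /etc/m12/m12rc ] && . /etc/m12/m12rc
[ -r "$HOME/.m12rc" ] && . "$HOME/.m12rc"
[ -r ./.m12rc ]       && . ./.m12rc
# ...then parse command-line options, which override all of the files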
The way I package standard options is to have a script in $HOME/bin (say m12a) that does it for me:
#!/bin/sh
exec m12 --enable_feature --select=3 "$@"   # "$@" forwards any extra arguments
If I want those options, I run m12a. If I want some other options, I run raw m12 with the requisite options. I have multiple hundreds of files in my personal bin directory (about 500 on my main machine, a Mac; some of those are executables, but many are scripts).
Let me share my experience. I normally source the config file at the beginning of the script. In the config file I also handle all the parameter switches:
DEFAULT_USER=blabla

# -u <user>: override the default user
while getopts ":u:" opt; do
    case $opt in
    u)
        export APP_USER=$OPTARG
        ;;
    esac
done

export APP_USER=${APP_USER-$DEFAULT_USER}
Then within the script I just use the variables; this lets me have a number of scripts sharing the same input parameters.
In your case, I imagine you would move the getopts section into the script and source the config file after it (if there was no switch telling it to skip sourcing).
You should not put your script's config file in /etc; that would require root privileges, and you can simply live with a config file in your home directory.
If you want to distribute your script to other users anyway, it should go to /usr/share...
Another solution is to use Thor (a Ruby gem); it's a way simpler means of handling input parameters, avoiding the work needed to get the same result in Bash, e.g. getopts supports only single-letter switches.

Directory for files generated by Make during installation from configure, make and make install

Just assume we are installing some libraries from source, distributed the way GNU promotes, using "./configure --prefix" to specify where to install.
(1) Does make generate the binaries under the current directory? Does make install then copy them from the current directory (which is where make is run) to $prefix? If the answers are yes, I have two questions, each for a different case.
(2) When the current directory and $prefix are not the same, can I remove all the files generated by make under the current directory to save some space?
(3) When the current directory and $prefix are the same, will make install do nothing, or copy the files onto themselves? Can I just skip the make install step?
The answer to your first question is probably yes.
As for the rest, you may find a make clean target which will tidy up the files created by the initial make.
I think the makefile will be able to handle the situation where the current directory and $prefix are not the same, and do the right thing.
The current directory would not usually be the destination of files created by makefiles.
(of course it depends on how the makefile is written, so I can't give definite answers, but I've generally been impressed with the makefiles I've used)
You are absolutely right: make just creates files in the current directory, and make install copies them to the destination directories, based on $prefix and other variables maintained by the configure script.
You can wipe out the whole directory you ran the build in. It will not be used, because, well, that's what "install" means: you build in one directory and the files are placed in the proper places on your system.
Usually the install destination and the directory you build in differ. The hierarchy of the files being installed usually does not relate to the directory hierarchy of the build system. Just install to the other dir: it's cheap to create yet another directory.
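A minimal sketch of that separation (most autotools packages support building in a directory separate from the source; the prefix is just an example):

# build in a scratch directory; install under the prefix
mkdir build && cd build
../configure --prefix="$HOME/.local"
make                      # binaries are generated here, in ./build
make install              # copies the results under ~/.local
cd .. && rm -rf build     # safe to remove: the installed files live under the prefix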
