steps for launching 2nd instance of drupalvm - drupal

Am I overlooking something in DrupalVM documentation? To run multiple instances, what steps do people follow?
I’ve seen mentions that after you’ve launched DrupalVM once, it’s quick to launch another instance.
Various approaches have had the same result, including some of the helpers listed at http://docs.drupalvm.com/en/latest/other/management-tools as well as the vagrant-cachier Vagrant plugin. With each, starting a new instance takes the same (very long) length of time.

First, do you really need to launch two identical machines at once? You can host multiple websites (vhosts) on a single VM. That way you save some computer resources (memory). Edit the hosts file on your (host) machine to match the web server settings where you defined your website.
But if you insist, it should be possible to copy the whole project directory, change the IP of one of the two machines (config.vm.network "private_network", ip: "192.168.something.something" in the Vagrantfile), and run them simultaneously.
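A minimal sketch of the change in the copied instance's Vagrantfile (the hostname and IP here are placeholders, not DrupalVM defaults):

```ruby
Vagrant.configure("2") do |config|
  # Give the second instance its own hostname and a different private IP
  # so it does not collide with the first VM.
  config.vm.hostname = "drupalvm2.test"                     # hypothetical name
  config.vm.network "private_network", ip: "192.168.89.11"  # any unused IP
end
```

Remember to add the new hostname/IP pair to your host machine's hosts file as well.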

Related

How to manage multiple symfony projects in a development computer

I've seen some posts, including How to manage multiple backend stacks for development?, but nothing related to using LXC for a stable, safe and separate development environment that matches the production environment, regardless of the desktop and/or Linux distribution.
There was a feature, prior to the symfony CLI release, that allowed specifying a socket via ip:port, and this allowed using different names in /etc/hosts on the 127.0.0.0/8 loopback network. I could always use "bin/console server:start -p:myproject:8000", and I knew that by browsing to http://myproject:8000 (specified in /etc/hosts) I could access my project and keep the sessions, etc.
The symfony CLI, as far as I've tried, doesn't allow this. Reading the docs, there's a built-in proxy in the symfony CLI, but although I've set a couple of projects to use it in the container, clicking on the list doesn't open the project (with the .wip suffix), and it issues an error about proxy redirections. If I browse to the container's IP and port, it works perfectly, but the port is something that can change with every reboot of the container.
If there's nothing that can be set on the proxy side to solve this scenario, I'd ask for the socket feature that existed previously to be brought back, so I can manage this situation as I used to.
Thanks in advance.
I think I've finally found a good solution. I've created an issue to improve the behavior that seemed not to work, so I'll try to explain for whoever might be interested.
I've set up the proxy server built into the symfony CLI, but instead of letting it run with the defaults, I had to specify --host=proxyhost (resolvable from the host) and set proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to every project (issuing symfony proxy:domain:attach myproject from inside the project dir), I can go to http://myproject.wip just like http://proxyhost:portX, no matter which port portX is.
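The steps above boil down to two commands ("proxyhost" and "myproject" are the placeholders from this answer, not real names):

```shell
# Start the symfony CLI's built-in proxy, listening on a name that is
# resolvable from the host ("proxyhost" is a placeholder).
symfony proxy:start --host=proxyhost

# From inside the project directory, attach a domain to this project;
# it then becomes reachable as http://myproject.wip via the proxy,
# regardless of which port the project's server gets after a reboot.
symfony proxy:domain:attach myproject
```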

Vagrant shared/synced folders permissions

From my research I understand that VirtualBox synced folders have their permissions set up during the mounting process. I am unable to change them later, so permissions for the whole synced folder MUST be the same for every single file/folder in it. When I try to change them, with or without superuser permissions, the changes are reverted straight away.
How can this work with, for example, the Symfony PHP framework, where different files/folders need different permissions? (i.e. app/console needs execute rights, but I don't want to have 7XX everywhere).
I have found in a different but similar question (Vagrant and symfony2) that I could set the permissions to 777 for everything in the Vagrantfile; however, this is not desirable as I use Git on the source code, which is then deployed to the live environment. Running everything under 777 in production is, nicely put, not correct.
How do you cope with this? What are your permissions setups?
A possible solution could be using the rsync synced folder strategy, along with the vagrant rsync and vagrant rsync-auto commands.
This way you lose bidirectional sync, but you can manage file permissions and ownership.
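A minimal Vagrantfile sketch of this approach (the paths and the www-data owner are assumptions, not values from the question):

```ruby
Vagrant.configure("2") do |config|
  # One-way rsync from host to guest; ownership in the guest is
  # controlled here instead of being fixed by the mount, and files
  # keep whatever mode bits they have on the host.
  config.vm.synced_folder ".", "/var/www/app",
    type: "rsync",
    owner: "www-data", group: "www-data",                 # assumed web user
    rsync__exclude: [".git/", "app/cache/", "app/logs/"]
end
```

After `vagrant up`, `vagrant rsync-auto` keeps pushing host changes into the guest.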
I am in a similar situation. I started using Vagrant mount options, and found that as I upgraded parts of my tech stack (kernel, VirtualBox, Vagrant, Guest Additions) I got different behavior when trying to set permissions in synced folders.
At one point I was perfectly fine updating a few of the permissions in my shell provisioner. At first, the changes were reflected in both the guest and the host. At another point in time, it worked the way I expected, with the changes reflected only in the guest and not on the host file system. After updating the kernel and VirtualBox on my host, I noticed that permission changes in the guest are reflected on the host only.
I was trying to use DKMS to compile VirtualBox against an older version of my kernel. No luck yet.
Now that I have a little more experience, I can actually answer this question.
There are three solutions to this problem:
Use Git on your host system, because Vagrant's basic shared folder setup somehow forces 777 (at least on Windows hosts).
Use Vagrant's NFS shared folder option (not available on Windows out of the box).
Configure a more complex rsync setup, as mentioned in Emyl's answer (slower sync speeds).

Vagrant 2 way folder sync

I've setup a Vagrant box that runs my webserver to host my Symfony2 application.
Everything works fine except the folder synchronization.
I tried 2 things:
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER, type: "rsync"
Option 1: The first option works; I actually don't know how the files are shared, but it works.
Files are copied both ways, but the application is SUPER slow.
Symfony generates cache files, which might be the issue, but I don't really know how to troubleshoot this and see what is happening.
Option 2: Sync is only done one way (from my local machine to the Vagrant box), which covers most cases and is fast.
The issue is that when I use the Symfony command line on the Vagrant box to generate some files, they are not copied over to my local machine.
My questions are:
What is the best way to proceed with two-way syncing? With option 1, how can I exclude some files from syncing (as that might be the issue)?
With option 2, how can I make sure changes on the remote are copied to my local machine?
If the default synced folder strategy (VirtualBox shared folders, I imagine) is too slow for your use case, you can choose a different one and, if you need to, keep the two-way sync:
If your host OS is Linux or Mac OS X, you can go with NFS.
If your host OS is Windows, you can choose SMB instead.
Rsync is very fast but, as you've pointed out, is one-way only.
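A sketch of what those choices look like in a Vagrantfile (the guest path is an assumption; NFS also needs a private network with a static IP):

```ruby
Vagrant.configure("2") do |config|
  # NFS (Linux / Mac OS X hosts): keeps two-way sync and is much faster
  # than the default VirtualBox shared folders. NFS requires a host-only
  # network with a static IP.
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/var/www/app", type: "nfs"

  # SMB (Windows hosts) would instead be:
  # config.vm.synced_folder ".", "/var/www/app", type: "smb"
end
```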
As Vagrant doesn't seem to offer a built-in way to do this, here is what I did:
Configure a Vagrant rsync folder for the folders that will contain application-generated files (in Symfony2, your Bundle/Entity folder). Note that I didn't sync the root folder, because some folders don't need to be rsynced (cache/logs...) and also because it took way too much time for the rsync process to parse all the folders/subfolders when I knew that only the Entity folder would be generated.
As the rsync has to be done from the Vagrant box to the host, I use the vagrant-rsync-back plugin and run it manually every time I use a command that generates code.
https://github.com/smerrill/vagrant-rsync-back#getting-started
Create a watcher on my local machine that tracks any change in the code and rsyncs it to the Vagrant box.
https://gist.github.com/laurentlemaire/e423b4994c7452cddbd2
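The targeted rsync mapping described above could be sketched like this (the bundle path is hypothetical, not taken from the question):

```ruby
Vagrant.configure("2") do |config|
  # Only the folder that receives generated code is rsynced, so the
  # rsync process doesn't have to walk cache/, logs/ and the rest of
  # the tree on every run.
  config.vm.synced_folder "src/Acme/DemoBundle/Entity",           # hypothetical bundle
    "/var/www/app/src/Acme/DemoBundle/Entity",
    type: "rsync"
end
```

After running a generator inside the box, the vagrant-rsync-back plugin linked above pulls the generated files back to the host.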
Vagrant mounts your project root as the /vagrant folder inside the box as a two-way share.
You can run your commands there to get the required files synced. Any I/O will be damn slow (as you already mentioned), but you will get your files. For other stuff, use your one-way synced folder.

How to use a virtual machine with automated tests?

I am attempting to setup automated tests for our applications using a virtual machine environment.
What I would like to have is something like the following scenario:
Build server is automatically triggered to start an automated test for the application
A "build" script is then run which consist of:
Copy application files and a test script to a location accessible by the VM
Start the VM
In the VM, a special application looks in the shared folder and starts the test script
The test script does its job; results are output to the shared folder
Test script ends
The special application then deletes the test script
The special application somehow has the VM manager close the VM and revert to the previous snapshot
When the VM has exited, process the result and send to build server.
I am using TeamCity if that matters.
For virtual machines, we use VirtualBox but we are open to any other if needed.
Is there any applications/suite that would manage this scenario?
If there are none, then I would code it myself; it should be easy, but the only part I am not sure about is the handling of the virtual machine.
What I need is to have the VM close itself after the test and revert to a previous snapshot, since I want it to be in a known state for the next test.
Any pointers?
I have a similar setup running, and I chose to use Vagrant as it's the same thing our developers were using to normalize the development environment.
The initial state of the virtual machine was scripted using Puppet, but we didn't run the deployment scripts from scratch on each test, only once a day.
You could use Puppet/Chef for everything, but for all other operations on the VM we used Fabric scripts, as they were used for the real deployment too and somehow fitted how we worked better. In sum, the script looked something like the following:
vagrant up # fire up the vm, and run the puppet provisioning tool
fab vm run_test # run tests on vm
fab local process_result # process results on local shared folder
vagrant destroy # destroy the vm
The advantage is that your developers can also use vagrant to mimic your production environment without having to take care of that themselves (i.e. changes to your database settings get synced to all your developers vm's wherever they are) and the same scripts can be used in production too.
VirtualBox does have a COM API. I have no experience with it, but it may be possible to use that. One option would be to have TeamCity fire off a script to do this. I'd suggest starting with NAnt (supported natively by TeamCity) and possibly executing PowerShell if necessary.
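The close-and-revert step from the question can be scripted directly with VBoxManage, which exposes the same operations as the COM API (the VM and snapshot names here are placeholders):

```shell
# Power the test VM off, roll it back to a known-good snapshot, and
# start it again headless, ready for the next test run.
VBoxManage controlvm "TestVM" poweroff
VBoxManage snapshot "TestVM" restore "clean-baseline"
VBoxManage startvm "TestVM" --type headless
```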
Though I don't have any experience with either, I happen to have heard of a couple applications in this space recently:
http://www.infoq.com/news/2011/05/virtual_machine_test_harness
http://www.automatedqa.com/techpapers/testcomplete/automated-testing-in-virtual-labs/

Working with multiple branches in ASP .NET

I've seen several other posts similar to this (namely https://stackoverflow.com/questions/5237/solutions-for-working-with-multiple-branches-in-asp-net) but there are several issues that I have that seem to be different than other similar posts.
I have an ASP .NET application that uses a virtual directory off of localhost. There are several spots in the code where I need to reference the name of the virtual directory so the virtual directory needs to be in place and named correctly in order for it to work. I'm also using my httpd.conf file to format my URLs to avoid cluttered querystrings.
That being said, I just published my application and now need to create a branched environment for bug fixes, for whenever there is a bug in the live code and I don't want to upload the dev code.
The trouble is that I need to be able to easily run my branched code in parallel with my dev code, without doing a bunch of work with IIS and config files every time I want to load the branched code. The catch is that the parallel environment needs to have the virtual directory in place and work with the same httpd.conf (for URL formatting).
I don't think Cassini would work because I need SSL and of course...the httpd.conf and the virtual directories would need to still be in place.
The perfect solution in my mind would be to run a parallel website to localhost with the same httpd.conf and the same virtual directory...but I'm running XP Pro and they don't "do" multiple websites.
Have your build process create the virtual directory each time the build is run.
I've used NantContrib's mkiisdir task for this.
With this approach you can't run multiple branches simultaneously, but you can quickly switch between branches by building the branch you want to run.
I would do as above, but you could hook it into your solution's post-build event; this wouldn't be parallel, more a quick switch. I think there's a registry hack out there to get multiple sites in IIS, or, if memory serves, if you create an additional site through a script it works; it's just the GUI that's locked down. The better solution would be to upgrade to Windows Server and have different branches build to different ports.