How to deploy WordPress with Capistrano on a shared host without sudo

In the pursuit of a more professional WordPress dev and deployment environment, I am trying to use Capistrano to deploy from my local dev environment to staging and production servers, but I can't quite get it to work.
I am on Site5 shared hosting and am deploying to one server with two different domains - staging.example.com and example.com.
I have used https://github.com/markjaquith/WP-Stack as a basis and have added set :user, "myserveruser" to my config.rb file, as the connection didn't work without it.
Running cap deploy:check tells me "You appear to have all necessary dependencies installed", and I can run cap deploy:setup, which successfully connects to staging and production, creating the releases and shared directories, etc.
The problem comes when I run cap deploy. Everything seems to work fine until I am prompted for a sudo password. This is not a normal login prompt, though, and does not allow me to type a password. As I am on a shared host, I don't have sudo access anyway.
It's similar to this problem - Capistrano is hanging when prompting for SUDO password to an Ubuntu box - but the solutions there didn't fix all my issues.
I have set default_run_options[:pty] = true, which I seem to need, but I still get the sudo prompt.
I am using passwordless SSH, so Capistrano is able to connect and do everything it needs to do without prompting for a password, and I am also using an SSH config file to handle agent forwarding so the server can also connect to my git repository. I have confirmed this is working.
I have found other people with similar problems - Capistrano using sudo even with "set :use_sudo, false" - etc., but none of the solutions have worked.
I am using default_run_options[:pty] = true and have also tried set :use_sudo, false and default_run_options[:shell] = false, but I still have the same issue.
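For reference, the relevant pieces of my config.rb currently look roughly like this (just a sketch of the settings described above):

set :user, "myserveruser"            # shared-host shell account Capistrano logs in as
set :use_sudo, false                 # tried this to stop Capistrano prefixing commands with sudo
default_run_options[:pty] = true     # request a pseudo-terminal for remote commands
default_run_options[:shell] = false  # also tried this, per the linked questions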
You can view my Terminal output here - http://pastebin.com/5xZmCnyA
I am seriously going crazy! Any help would be greatly appreciated!!!
Cheers

You can't run setup without sudo. That's the only part that REQUIRES sudo, because it makes a directory - public_html/staging.exposecreative.org, in your case...
That step shouldn't be required; you can make that directory yourself (assuming you have permission).
The problem you will face, however, is that the shared host will expect your index.php to be directly in public_html. For that to work you'd need write permission to whatever directory public_html is in, which your shared host almost certainly won't allow.
The line default_run_options[:pty] = true has to do with whether Capistrano pretends to be an interactive shell or not. Many commands (sudo included) use this to determine if they should bother asking the user for input, or if they are part of an automated process (in which case there's no way to ask for input).
My advice (as Capistrano maintainer): don't try this on a shared host, it almost certainly won't work. (Sorry, their limitations, not ours.)
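If you want to experiment despite that advice, the directories deploy:setup would create can be made without sudo, for example by pointing :deploy_to at a path your own account can write to and overriding the task yourself - a rough, untested sketch against the Capistrano 2 DSL, with an example path:

set :use_sudo, false
set :deploy_to, "/home/myserveruser/deploy"   # somewhere the shared account can write

# Replace the stock deploy:setup with a sudo-free version
namespace :deploy do
  task :setup, :except => { :no_release => true } do
    run "mkdir -p #{deploy_to} #{releases_path} #{shared_path} #{shared_path}/system"
  end
end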

I wrote an extensive post on how to deploy WordPress with Capistrano on a shared host (Bluehost). I use the Roots/Bedrock stack and it only took me about 20 minutes to get up and running.
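In case it helps, the shared-host-specific part of that Capistrano setup boils down to something like the following - a rough sketch of a Capistrano 3-style config/deploy.rb, where the application name, repo, path and host are placeholders rather than values from the post:

set :application, "mysite"
set :repo_url, "git@github.com:me/mysite.git"        # your Bedrock repository
set :deploy_to, "/home1/sharedhostuser/apps/mysite"  # a path the shared account can write to
set :ssh_options, forward_agent: true                # lets the host pull the repo with your local key

# config/deploy/production.rb
server "example.com", user: "sharedhostuser", roles: %w{web app db}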

I made a WordPress development stack using Shipit JS instead of Capistrano because I'm not comfortable with Ruby. Maybe that can help. Feel free to use it and/or post some issues if needed. Regards.
WP-Jungle / Bonzai

Related

GCE: cannot login, The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item

I cannot ssh into my Google Compute Engine (GCE) Wordpress instance anymore.
It was working one month ago when I tried last.
I use the Google built-in SSH client in a Chrome browser window.
Yesterday I tried and got the following message:
The VM guest environment is outdated and only supports the deprecated
'sshKeys' metadata item. Please follow the steps here to update.
The "Steps here" link navigates to https://cloud.google.com/compute/docs/images/configuring-imported-images#install_guest_environment which does not seem to help me much.
I am not aware of any changes that I may have made.
How can I fix this?
It looks like your instance's disk is full, and so the SSH keys can't be created in the temp directory. You can do the following:
Stop your instance and wait for it to shut down
Click on the disk your instance is using, and choose "edit" at the top
Enter a larger disk size, and save
Go back to your instance and start it up again
You should now be able to connect via SSH. While you're in there, check to see what filled up your hard disk so you can prevent this from happening again (maybe a rogue program is printing out too many logs, etc).
If you're seeing this on Debian 8 or 9, the most likely reason for this is that the google-compute-engine.* packages that allow SSH access to the instance have been removed by apt-get autoremove.
If you have an open SSH connection to the machine or can use a tool like gcloud, running sudo apt-get update && sudo apt-get install gce-compute-image-packages should fix this.
If you no longer have any SSH access, there is a procedure available on the GCP docs site that can be used to restore it.
I've created a bug report here for this.
Might be a bit late, but you can:
1) Stop the VM
2) Edit it and enable the serial console
3) Use the serial connection to log in and update the VM
In recent days I hit a similar problem, and it turned out that the permissions on my home directory were the culprit: being lazy, I had run chmod 777 ~.
After doing that I could not SSH in from my terminal, or even from the browser; I only got 'The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item. Please follow the steps here to update.' It seems your home directory must be 755; it is not enough to just set .ssh to 700 and authorized_keys to 600.
I met a similar issue after I created a FreeBSD VM: gcloud ssh did not work, but luckily I could still use the browser-window SSH to reach my VM. I then manually added the google_compute_engine public key to .ssh/authorized_keys, and now gcloud ssh connects. I am not sure whether this is the better or more secure way, though.

WordPress project setup - Trellis, Valet or Docker?

I want to start a new WordPress project with another developer. The decisions we made are:
We want to use Bedrock as the WP structure
We want to use Sage as the WP theme
We put the project in a Git repository
I now ask myself if we should use Trellis, Valet or Docker.
My personal opinion is that Trellis / Docker is a bit too much for a project with two developers working on it. Additionally, my experience with Vagrant is not very positive, as it was very slow when I used it. My favorite would be Valet, because it's so slim. The repository I would use is Beanstalk; from there I would trigger my deployments to my test and live systems.
Additionally, I am not 100% sure whether the server to which I want to deploy my project also needs Docker installed - does anybody know? And what happens when my server runs Apache rather than Nginx?
Now that Docker has native Mac and Windows apps you wouldn't need Vagrant for local dev, and running a series of Docker containers is much faster than a full-fledged VM with Vagrant+VirtualBox. Right now I have MariaDB + PHP-FPM + Nginx + WordPress + PHPMyAdmin, and the whole thing is really fast relative to my previous experience with Vagrant. Faster as in: faster initial install, faster to start/stop, faster to make changes and have them reflected after a restart. I just replaced MySQL with MariaDB in a matter of minutes (mostly fumbling with having the proper syntax in my docker-compose file).
The beauty of Docker appears precisely when you want to switch components (say Apache vs. Nginx). In WordPress' case, they provide images on Docker Hub that either include Apache or PHP-FPM (in the latter case you just add an Nginx container to your stack).
That said, I just got started with Docker and there are some kinks to figure out, but it's worth the effort.
I haven't deployed with Docker yet but I plan to test that next once I've got local dev fully working as intended. It's optional though, you could always deploy with Git webhooks or whatever you were using up to now.

Access denied on cloudControl app push with Git Bash for Windows

I am trying to push the Piwigo CMS (http://piwigo.org/) to cloudControl. I tried the same method as for Drupal, which is described at https://www.cloudcontrol.com/dev-center/Guides/PHP/Drupal%207, but I am getting this error.
This is indeed, as pst pointed out, a pubkey issue.
You might want to check your SSH config; it helped to define the following in my ~/.ssh/config:
Host cloudcontrolled.com
user USERNAME
IdentityFile ~/PATH_TO_YOUR/rsa_key
I don't know if this works as well in Cygwin or similar; it would help if you could specify your environment a little bit more, as there are many elements (read: binaries, configs, etc.) in play here.

Drupal very slow in Vagrant environment

I've begun migrating a lot of our development environments to Vagrant. So far, this has been great for almost everything, but our first Drupal migration is unusable. It's unbelievably slow. Our WordPress, CakePHP and Node.js sites all perform adequately or better, but not Drupal. This thing is just awful.
The box is a Veewee-created Ubuntu 12.04 64bit machine. It's the same base box we use for all of our web-based projects so nothing unique there. In my sites directory, I have a canonical directory (sites/my-site/) with all of the site resources and a symlink to that canonical directory with the domain name (sites/dev.mysite.com -> /vagrant/www/sites/my-site) that is evidently required for some module that the team is using.
This is a mixed Windows/OSX dev team and it's slow across both platforms. The only semi-unconventional snippet from my Vagrantfile is this:
Vagrant::Config.run do |config|
  config.vm.forward_port 80, 8080
  config.vm.share_folder("v-root", "/vagrant", ".", :extra => 'dmode=777,fmode=777')
  # Allows symlinks to the host directory.
  config.vm.customize ["setextradata", :id, "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
  config.vm.provision :shell, :path => "provision.vm.sh"
end
My shell provisioner only does a couple of things:
Installs drush
Creates the aforementioned symlink to the canonical site directory
Writes out an Nginx server block
If necessary, creates a settings.php file.
Is there anything I can do to improve performance? Like, a lot?
UPDATE
I've narrowed this down to a point where it looks like the issue is the remote database. To compare apples to apples with no project baggage, I downloaded a fresh copy of Drupal 7.21 and performed a standard install from the Vagrant web server against 3 different databases:
A new database created on the same Vagrant VM as the webserver (localhost)
A new database created on the shared dev server used in the original question (dev)
A new database created on an EC2 instance (tmp)
Once that was done, I logged in to the fresh Drupal install and loaded the homepage (localhost:8080) 5 times. I then connected to each database and loaded the same page, the same way. What I found was that the page loaded 4-6x slower when Drupal was connected to the remote database.
Remember, this is a fresh (standard) install. There is no project baggage.
I was hit by a similar problem, too. It seems that VirtualBox shared folders can be very slow for a project tree with 1000+ files.
Switching to NFS might be the solution.
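In a v2-style Vagrantfile the switch looks roughly like this - a sketch only, since NFS with VirtualBox also requires a private network, and the IP here is just an example:

config.vm.network "private_network", ip: "192.168.50.4"
config.vm.synced_folder ".", "/vagrant", type: "nfs"

With the older Vagrant::Config.run style shown in the question, the equivalent would be config.vm.network :hostonly, "192.168.50.4" together with the :nfs => true option on share_folder.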
The issue is almost certainly either skip_name_resolve needing to be set in my.cnf, or VirtualBox's poor handling of shared directories with large numbers of files. Both are easy to track down with strace -c, but you may find it easier just to correct them one at a time and see which one fixes your performance issues.
If you're still seeing slowness after both of these changes, let me know and we can debug it further.
I got here via Google for a similar issue, so I'm replying in the hope that others find this useful.
If you're using the precise32 Vagrant box as your starting point, it's worth noting that the box by default has only 360 MB of RAM.
Up the RAM (at least in Vagrant v2 with VirtualBox) like so:
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", "1024"]
end
This made Drupal much more responsive for me.
It's just a PHP/MySQL app so there's not much special about Drupal besides how it has been customized. You may have done some of this, but here are some suggestions to isolate the issue.
Check the Drupal dblog for errors.
Check your nginx & php logs for errors.
Consider how many active modules you are running (over 100? That would be a very heavy install)
Install a fresh Drupal instance & compare. This may isolate the problem to your instance and not Drupal in general.
If you find that it is your instance of Drupal:
Install the devel module and enable memory reporting so you know how much memory is being used per page load, as well as to have a baseline for improvement.
Make sure you have APC or another PHP opcache installed, and make sure the hit rate is good. If you weren't running it before, note the memory usage difference reported by devel.
Run something like xhprof, or disable suspicious modules until you find the major offenders.
Enable the MySQL slow query log (and the log of queries not using indexes) to find potential issues, then add indexes or take other action as appropriate.
If your other apps are running fine, I suspect there is a problem with a particular module, or you have a fat Drupal install in general that needs some optimizing or more memory.
I tried pretty much everything to get my slow Vagrant setup to speed up and finally stumbled on this in the project's issue tracker.
config.vm.provider "virtualbox" do |v|
v.memory = 1024
v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
v.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
I had previously tried NFS to no avail; this happened to be the silver bullet.
Since Vagrant 1.5 you can use rsync as a mechanism to sync a folder to the guest machine. Because rsync copies the files directly onto the remote filesystem, performance is noticeably better than NFS and VM shared folders.
You can read more about it here: http://www.vagrantup.com/blog/feature-preview-vagrant-1-5-rsync.html.
I was just trying to solve this issue myself. I tried the suggestions here and at Rails Windows Vagrant very slow response time. No real luck; I shaved 200 ms off an 1800 ms response time on a warm request with no real data rendered. This was with Ruby on Rails, not Drupal, but the problem is the same.
Switching the shared folder to Rsync gave me a response time of ~280ms on that same request.
Vagrantfile:
config.vm.synced_folder '.', '/vagrant', type: 'rsync',
rsync__exclude: '.git/'
Usage:
$ vagrant up
$ vagrant rsync-auto
The latter command will watch your working directory and sync changes automatically.
See https://www.vagrantup.com/docs/synced-folders/rsync.html and https://www.vagrantup.com/docs/cli/rsync-auto.html
Latency is a big issue with database connections in any server environment. Even just running encryption on the DB connections is going to be a substantial performance issue, though it's presumably needed under these conditions.
What's your ping time to the database? If you've got at least one round trip for each query you run, then that's going to add up. Plus a bit of time for encryption. Worse again if you don't use persistent database connections.
I'd think about where you do your caching. E.g. cache in Memcached on the VM instead of in the DB.
I ran into the same problem. This advice will be especially helpful for those who use a Windows host machine. You will not be able to get decent performance without NFS support (on Windows that is a big issue), so:
Do not use the synced folder at all:
config.vm.synced_folder "../data", "/vagrant", disabled: true
Set up a Samba server in the guest VM plus a network drive on the Windows host.
There are a lot of articles how to do it, e.g.: https://www.liberiangeek.net/2014/07/ubuntu-tips-create-samba-file-server-ubuntu-14-04/
If the NFS shares with Vagrant are still too slow for you, you can do the opposite:
Instead of installing an NFS server on your host machine, you can install it on the VM guest: http://guillaumeduveau.com/en/drupal-lightning-fast-synced-folders-in-vagrant-virtualbox/
I started to get slow performance on a Drupal site once I installed Node.js and gulp. I had to do this because the Drupal Bootstrap 4 Barrio Sass subtheme requires Node.js/gulp. Then I ran into issues with Vagrant on Windows and npm install commands: all npm install commands failed because they create symlinks, which Windows does not recognize. I had to create a symlink from the site's node_modules folder over to my Vagrant home directory; npm install commands work after doing this. But then I started noticing very slow responses on this site. My other sites run fast.

How can I set up Drush to use a proxy server to access the internet on Windows 7?

I am using WAMP for Drupal development. I have installed Drush and it works fine on my home network without any proxy. When I am at work, the network uses a proxy to access the internet, and hence any drush command that needs internet access, e.g. drush dl {module_name}, doesn't work.
After googling I could only find texts explaining how to configure this on *nix-based OSes. I'm stuck with Windows 7. Any ideas?
Okay, I got it running. I had to make the following change to get drush dl working on Windows 7. Apparently 'which wget' wasn't returning anything, as Windows doesn't have a 'which' command. I hacked the Drush core as follows.
Go to the file drush.inc in the folder C:\ProgramData\Drush\includes.
Change the line $use_wget = drush_shell_exec('which wget'); to $use_wget = drush_shell_exec('where wget');.
Root cause: Windows doesn't have a 'which' command; the 'where' command serves the same purpose.
I think there was no issue with the proxy at all; Drush was picking up the proxy from Drupal's settings.php file correctly.
