I am unable to vagrant up - WordPress

I am using a project from GitHub.
It comes with a Vagrantfile. When I run the vagrant up command in my terminal, I get an error.
On a successful download, the terminal is supposed to show a READ ABOVE message.
I want to be able to type the site's address into my browser and get a local development server.

It's a pretty old file: the repo was built with PuPHPet, but that project has looked dead for about two years and its website is down.
In your case, Vagrant is trying to download the box from the internet, but the owner hosted this box under the puphpet domain, which is no longer available.
I am not sure what the best way to help is now:
find another, more recent example and start from there
if you want to fix this, you will need to edit https://github.com/LearnWebCode/vagrant-lamp/blob/master/puphpet/config.yaml#L6 and use a different box that is still available on the Vagrant site; Ubuntu 16.04 is pretty old now, but you can search for one on Vagrant Cloud (see the sketch below)
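For example, a minimal sketch of what that could look like from the command line; bento/ubuntu-16.04 is just one publicly available box name and is an assumption, not something the repo requires:

# Point the box entry on that line of puphpet/config.yaml at a box that still
# exists on Vagrant Cloud (e.g. bento/ubuntu-16.04), then rebuild the machine:
vagrant destroy -f                  # throw away any half-built machine
vagrant box add bento/ubuntu-16.04  # fetch the replacement box
vagrant up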

Related

How to see what web server is running on my mac?

I'm trying to learn how to create a custom WordPress theme. I've been following a tutorial, and I was trying to install DesktopServer onto my MacBook Pro (to create a local environment.)
But I'm not able to install it because it's stating that
"It appears that you have another web server already running. DesktopServer cannot be installed. Check that you do not have Web Sharing turned on from your System Preference -> Sharing control panel or turn off and remove your other web server."
I've checked my Sharing settings, and nothing is enabled (including internet sharing.) So that must mean I have a web server already running. But I don't know what that would be.
Is there a way for me to find out what web server my mac is running?
And after that, is there a way for me to disable it so I could possibly use DesktopServer instead?
I'm really good with writing HTML, CSS, JavaScript, etc., but I'm pretty new to servers, hosting, and all that. I honestly don't understand everything yet.
I had the same problem, and the solution that worked for me was here:
https://zachgoll.github.io/blog/2018/serverpress-error/
By default, Mac OS X has an Apache server running in the background, which conflicts with ServerPress.
To turn it off, run sudo apachectl stop.
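If you want to confirm what is actually listening on port 80 before turning anything off, a quick sketch using standard macOS tools (nothing here is specific to DesktopServer):

sudo lsof -iTCP:80 -sTCP:LISTEN    # the built-in Apache shows up as httpd
sudo apachectl stop                # stop Apache for this session
# optionally keep it from coming back at boot:
sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist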

GCE: cannot log in, The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item

I cannot ssh into my Google Compute Engine (GCE) Wordpress instance anymore.
It was working one month ago when I tried last.
I use the Google built-in SSH client in a Chrome browser window.
Yesterday I tried and got the following message:
The VM guest environment is outdated and only supports the deprecated
'sshKeys' metadata item. Please follow the steps here to update.
The "Steps here" link navigates to https://cloud.google.com/compute/docs/images/configuring-imported-images#install_guest_environment which does not seem to help me much.
I am not aware of any changes that I may have made.
How can I fix this?
It looks like your instance's disk is full, and so the SSH keys can't be created in the temp directory. You can do the following:
Stop your instance and wait for it to shut down
Click on the disk your instance is using, and choose "edit" at the top
Enter a larger disk size, and save
Go back to your instance and start it up again
You should now be able to connect via SSH. While you're in there, check to see what filled up your hard disk so you can prevent this from happening again (maybe a rogue program is printing out too many logs, etc).
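The same steps can be done from the command line with gcloud; a minimal sketch, where INSTANCE_NAME, DISK_NAME, ZONE and the new size are placeholders for your own values:

gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud compute disks resize DISK_NAME --size=50GB --zone=ZONE
gcloud compute instances start INSTANCE_NAME --zone=ZONE
# once back in over SSH, look for what filled the disk, e.g.:
df -h
sudo du -sh /var/log/* | sort -h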
If you're seeing this on Debian 8 or 9, the most likely reason for this is that the google-compute-engine.* packages that allow SSH access to the instance have been removed by apt-get autoremove.
If you have an open SSH connection to the machine or can use a tool like gcloud, running sudo apt-get update && sudo apt-get install gce-compute-image-packages should fix this.
If you no longer have any SSH access, there is a procedure available on the GCP docs site that can be used to restore it.
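In other words, something along these lines, where INSTANCE_NAME and ZONE are placeholders and gce-compute-image-packages is the Debian 8/9 package mentioned above:

gcloud compute ssh INSTANCE_NAME --zone=ZONE
# then, on the instance:
sudo apt-get update && sudo apt-get install gce-compute-image-packages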
I've created a bug report here for this.
Might be a bit late, but you can
1) Stop the VM
2) Edit and enable serial console
3) Use the serial connection to log in and update the VM (see the sketch below)
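If you prefer gcloud to the Cloud Console UI, enabling and connecting to the serial console looks roughly like this (INSTANCE_NAME and ZONE are placeholders):

gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE \
    --metadata serial-port-enable=TRUE
gcloud compute connect-to-serial-port INSTANCE_NAME --zone=ZONE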
In recent days I ran into a similar problem, and it turned out the permissions on my home directory were fooling me: being lazy, I had run chmod 777 ~.
After doing that, I could not SSH in from my terminal, or even from the browser; I only got 'The VM guest environment is outdated and only supports the deprecated sshKeys metadata item. Please follow the steps here to update.' It sounds like you must set your home directory to 755; it is not enough to have .ssh at 700 and authorized_keys at 600.
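So if you still have some way in (serial console, browser SSH, etc.), restoring the usual permissions should look roughly like this:

chmod 755 ~                        # home directory must not be group/world-writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys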
I met a similar issue after I created a FreeBSD VM: gcloud ssh did not work, but I was lucky that I could still use the browser-window SSH to reach my VM. I then manually added the google_compute_engine public key to .ssh/authorized_keys, and now gcloud ssh can connect. But I am not sure whether this is a good or secure way to do it.
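For reference, the manual fix was essentially the following; google_compute_engine.pub is the key file gcloud generates on the client machine, and the key string below is a placeholder you would paste over from it:

# on the VM, via the browser SSH session:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAA... user@host" >> ~/.ssh/authorized_keys   # contents of google_compute_engine.pub
chmod 600 ~/.ssh/authorized_keys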

What am I doing wrong when trying to install Bitnami modules into a LAMP stack

When I first uploaded the basic LAMP stack from Bitnami, it was just one .run file. At first I was making the mistake of not running it like this:
./bitnami-lamp-stack.run
Note, the full file name was longer, obviously. Then, to install WordPress, there is a native installer. I uploaded that, just as instructed, made it executable, and ran
./bitnami-wordpress-module.run
Note, again, the actual file name was different. The second command should find the Bitnami installation and add WordPress. Strangely, it just returns immediately without doing anything. I tried it with and without sudo, as I had temporarily given read and execute permission. It just throws me back at the command prompt having done nothing.
I even tried running it from the same directory as where the lamp stack is installed. I am baffled by this and stumped. One idea did come to mind... Maybe I need to add the bitnami lamp stack location to the path. It doesn't seem to require that but who knows.
This is on Ubuntu 14.04.
Thanks in advance for any help,
Bruce
My understanding is that you already have the Bitnami LAMP stack properly installed and you are having trouble installing the module on top of it. Could you run the module installer with the following option?
./bitnami-wordpress-module.run --mode text
Could you also try downloading the module again from the Bitnami page and check the MD5 of both installers? You can check it with the following command:
md5sum /path/to/installer
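Putting it together, the sequence would look something like this; the file name is a placeholder for the actual installer name:

md5sum ./bitnami-wordpress-module.run       # compare against the checksum on the download page
chmod +x ./bitnami-wordpress-module.run
./bitnami-wordpress-module.run --mode text  # text mode keeps the installer in the terminal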

Using Docker for Drupal Dev (Local)

So, to put it simply, I have a drupal site that's live.
I want to work on it locally and use docker containers to manage that.
I want to use this Image:
https://index.docker.io/u/bnchdrff/nginx-php5-drupal/
And use this as my data container:
https://index.docker.io/u/bnchdrff/mariadb/
I have the database downloaded from the live site saved as an .sql file.
I need to be able to use this pre-existing database.
Best case scenario is to be able to run the images in terminal and open a browser, navigate to something like 'localhost' and have the Drupal site pop up there for me to work on.
I am running Ubuntu 13.10 and have the latest version of Docker. Needless to say, I have been working on this for a while, but I don't want to complicate things with my failed attempts. Any and all suggestions are welcome.
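A rough sketch of the kind of commands this would involve, assuming these images expose MariaDB on its standard port and the web server on port 80, and using Docker's old --link mechanism; the container names, database name and credentials below are placeholders you would need to check against the images' documentation:

docker run -d --name db bnchdrff/mariadb
docker run -d --name web --link db:db -p 8080:80 bnchdrff/nginx-php5-drupal
# load the dump from the live site into the database container:
docker exec -i db mysql -u root -pROOT_PASSWORD drupal < live-site-dump.sql
# then browse to http://localhost:8080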

How can I set up Drush to use a proxy server to access the internet on Windows 7?

I am using WAMP for Drupal development. I have installed Drush, and it works fine on my home network without any proxy. When I am at work, the network settings use a proxy to access the internet, and hence any Drush command that needs internet access, e.g. drush dl {module_name}, doesn't work.
After googling, I could only find articles explaining how to configure this on *nix-based OSes. I'm stuck with Windows 7. Any ideas?
Okay, I got it running. I had to make the following change to get drush dl to work on Windows 7. Apparently "which wget" wasn't returning anything, as Windows doesn't have a 'which' command. I hacked the Drush core to make the following changes.
Go to file drush.inc in folder C:\ProgramData\Drush\includes
Replace the line $use_wget = drush_shell_exec('which wget'); with $use_wget = drush_shell_exec('where wget');
Root cause: Windows doesn't have a 'which' command; the 'where' command serves the same purpose.
I think there was no issue with the proxy at all, and Drush was correctly picking up the proxy from Drupal's settings.php file.
