Hosting an MVC app within Vagrant - networking

Maybe it's just because it's a Friday and it's after closing time, but I've been stuck on this for an hour and can't quite get it working. I'm using Vagrant with an application we're building - the git repo contains the Vagrantfile and a Laravel application. We have /deploy, /tests, and /src directories; the actual Laravel framework lives in /src. On my local machine, I have set up a VirtualHost that lets me access the application by browsing to localhost:8081:
Listen 8081
<VirtualHost *:8081>
DocumentRoot "/Application/mamp/apache2/htdocs/myapp/src/public"
ServerName localhost
<Directory "/Application/mamp/apache2/htdocs/myapp/src/public">
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
Works like a charm. So I copied the relevant bits to my Vagrant setup:
Listen 8081
<VirtualHost *:8081>
DocumentRoot "/var/www/src/public"
ServerName localhost
<Directory "/var/www/src/public">
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
My Vagrantfile looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "hashicorp/precise64"
config.vm.provision :shell, :path => "vagrant/main.sh"
config.vm.network "private_network", ip: "192.168.100.101", virtualbox__intnet: true
end
And my vagrant/main.sh file looks like this:
#!/usr/bin/env bash
apt-get update
echo mysql-server-5.5 mysql-server/root_password password notthepassword | debconf-set-selections
echo mysql-server-5.5 mysql-server/root_password_again password notthepassword | debconf-set-selections
apt-get install -y mysql-common mysql-server mysql-client
apt-get install -y apache2
apt-get install -y php5 libapache2-mod-php5
apt-get install -y php5-mysql php5-curl php-pear php5-imagick php5-mcrypt php5-memcache
apt-get install -y vim
a2enmod rewrite
sed -i -e 's/AllowOverride None/AllowOverride All/g' /etc/apache2/sites-available/default
cp /vagrant/vagrant/bgs /etc/apache2/sites-available
a2ensite bgs
/etc/init.d/apache2 restart
rm -rf /var/www
ln -fs /vagrant /var/www
Once it's all up and running I can ping 192.168.100.101. But it's not serving any HTML - if I browse to that address in Chrome, I get a "no data received" error. If I go to 192.168.100.101:8081 Chrome says it can't find the address. How can I configure everything to play nice together and let me clone my repo, run vagrant up, and browse to 192.168.100.101:8081 and see my app?
(Also: I even added a port forwarding line in there to go from guest:8081 to host:8081 - roughly the line sketched below. That generated an HTTP 500 error ("The server encountered an internal error or misconfiguration and was unable to complete your request."). Not sure if that's progress or not.)
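For reference, a Vagrant forwarded-port entry of that sort would look roughly like this (a sketch of the kind of line meant above, not the exact one from my Vagrantfile):
config.vm.network "forwarded_port", guest: 8081, host: 8081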

Turns out there were a number of things happening all at once:
I was using Ubuntu 12.04 LTS, which had a version of PHP a little too old to run the edge release of Laravel. Installing an upgraded version of PHP fixed that.
The virtualbox__intnet directive was...wrong. Somehow. I changed that whole Vagrantfile line to: config.vm.network "private_network", :ip => "192.168.100.101", :auto_network => true
The different ports, and the mucking about in the various symlinked directories vs. Apache config directories, were needlessly complicated.
Here's my final setup, in case anyone else has this exact, specific problem:
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "puppetlabs/ubuntu-13.10-64-puppet"
config.vm.provision :shell, :path => "vagrant/main.sh"
config.vm.network "private_network", :ip => "192.168.100.101", :auto_network => true
end
main.sh:
#!/usr/bin/env bash
apt-get update
echo mysql-server-5.5 mysql-server/root_password password f6b6rWixbu99CtQ | debconf-set-selections
echo mysql-server-5.5 mysql-server/root_password_again password f6b6rWixbu99CtQ | debconf-set-selections
apt-get install -y mysql-common mysql-server mysql-client apache2 php5 libapache2-mod-php5 php5-mysql php5-curl php-pear php5-imagick php5-mcrypt php5-memcache php5-json
a2enmod rewrite
sed -i -e 's/AllowOverride None/AllowOverride All/g' /etc/apache2/sites-available/default
cp /vagrant/vagrant/app.conf /etc/apache2/sites-available
a2ensite app.conf
#fix for ubuntu 13.10: http://stackoverflow.com/questions/19446679/mcrypt-not-present-after-ubuntu-upgrade-to-13-10
ln -s /etc/php5/conf.d/mcrypt.ini /etc/php5/mods-available/mcrypt.ini
php5enmod mcrypt
#/fix
#json licensing snafu: http://stackoverflow.com/questions/18239405/php-fatal-error-call-to-undefined-function-json-decode
php5enmod json
#/snafu
#may need to be done on the host OS, not the guest: http://stackoverflow.com/questions/17954625/services-json-failed-to-open-stream-permission-denied-in-laravel-4
chmod -R 0777 /vagrant/src/app/storage
rm -rf /var/www
ln -fs /vagrant/src/public /var/www
/etc/init.d/apache2 restart
Apache site configuration copied in:
<VirtualHost *:80>
DocumentRoot "/var/www"
ServerName localhost
<Directory "/var/www">
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
(BTW, though this config looks very similar to the default apache configuration, I found it was easier and more extensible to create a config for whatever project I happen to be working on, and if I need to expand on the options for a future project, I can.)
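As a quick sanity check after provisioning (my own addition, not part of the original setup; it assumes the box and IP above), something like this from the host confirms the site is enabled and answering:
vagrant ssh -c "sudo apache2ctl -S"   # should list the app.conf virtual host on port 80
curl -I http://192.168.100.101/       # should get an HTTP response back from Laravel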

Related

Wordpress install with Docker doesn't work

I'm a complete beginner with Docker and I'm trying to install WordPress (without a database) from a base Ubuntu 20.04 image with Docker. I'm using the Apache server for this.
Here is my wordpress2_ms.dockerfile:
FROM ubuntu:20.04 as baseimage
SHELL ["/bin/bash", "-c"]
ARG DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y wget tar curl sudo systemctl
RUN apt install -y php libapache2-mod-php
RUN wget -c http://wordpress.org/latest.tar.gz
RUN tar xzvf latest.tar.gz -C /var/www/html
RUN sudo chown -R www-data.www-data /var/www/html/wordpress
FROM baseimage as wordpressapp
COPY wordpress.conf /etc/apache2/sites-available/
WORKDIR /etc/apache2/sites-available
RUN sudo a2ensite wordpress.conf
RUN sudo a2dissite 000-default.conf
RUN sudo systemctl reload apache2
EXPOSE 80
For this, we have to place a context folder beside this wordpress2_ms.dockerfile, and inside this context folder we need the following wordpress.conf file:
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html/wordpress
ServerName localhost
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
The build command: sudo docker build -t wordpress:1.0 -f ./wordpress2_ms.dockerfile --target wordpressapp ./context/
The run command: sudo docker run -td --name wordpress_cont -p 8081:80 wordpress:1.0
After the run command, the log says that WordPress has started on port 80 of the container, but nothing is happening on port 8081 of my host machine.
I would appreciate any help. Thanks in advance!
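(A hedged diagnostic sketch, not from the question itself: a RUN ... systemctl reload apache2 step only executes while the image is being built, so the finished container may have no web server process running at all unless its CMD starts one in the foreground. Checking what the container is actually doing helps narrow that down; the container name below is the one from the run command above.)
sudo docker port wordpress_cont                          # is 8081->80 actually published?
sudo docker exec wordpress_cont service apache2 status   # is Apache running inside the container?
sudo docker logs wordpress_cont                          # what did the container actually start?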

Apache2 not pointing to the WordPress directory

First I followed this tutorial:
https://www.tecmint.com/install-wordpress-alongside-lamp-on-debian-10/
Which worked fine until I got to the step where I needed to set the permissions. After that, when I try to cd wordpress I get Permission Denied, so I had to sudo su to continue following the directions.
Then in step 6, where you set the DocumentRoot, I followed that to the letter.
Now on step 7, where you actually test if you can access the WordPress initial installation screens, Apache2 is still displaying the default Apache2 static page.
I found this tutorial as well:
https://dade2.net/how-to-install-lamp-wordpress-ubuntu-and-debian/
While that second one is more recent, the only difference is that it uses slightly different permissions and MariaDB. So I tried their permissions and can now cd into wordpress without sudo su.
But the apache2 static page is still there.
Has something changed with Apache2 or WordPress that's preventing it from pointing to WP?
Looks like you are trying to install Apache2 with WordPress. I would suggest you follow these steps and let me know if it helps.
$ sudo apt-get update
Install apache
$ sudo apt install apache2
Verify your Apache installation by typing "http://your-ip-address" in your favourite browser.
hostname -I | awk '{print $1}' # can help you to get your IP address.
If you have a firewall installed, run this command to allow port 80. If you don't have a firewall, skip this step.
$ sudo ufw allow 'Apache'
Install wordpress
1. sudo apt update
2. sudo apt install wordpress php libapache2-mod-php mysql-server php-mysql
3. cd /etc/apache2/sites-available/
4. sudo vi wordpress.conf
#Add these lines in wordpress.conf
Alias /blog /usr/share/wordpress
<Directory /usr/share/wordpress>
Options FollowSymLinks
AllowOverride Limit Options FileInfo
DirectoryIndex index.php
Order allow,deny
Allow from all
</Directory>
<Directory /usr/share/wordpress/wp-content>
Options FollowSymLinks
Order allow,deny
Allow from all
</Directory>
5. sudo a2ensite wordpress
6. sudo a2enmod rewrite
7. sudo service apache2 reload
Now configure MySQL
$ sudo mysql -u root
Once you get the MySQL prompt, run the CREATE, GRANT, FLUSH and quit commands as follows -
mysql> CREATE DATABASE wordpress;
Set the username and password:
mysql> create user 'wordpress'@'localhost' IDENTIFIED BY 'test1234';
Run these commands:
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER ON wordpress.* TO 'wordpress'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> quit
Now, configure the /etc/wordpress/config-localhost.php file to link to the MySQL DB "wordpress" created above. Create config-localhost.php if it doesn't exist.
Add these lines:
<?php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'test1234');
define('DB_HOST', 'localhost');
define('DB_COLLATE', 'utf8_general_ci');
define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');
?>
Start the MySQL service:
$ sudo service mysql start
Log in to your WordPress by opening "localhost/blog/wp-login.php".
I found this link, which may be helpful for you. It also shows a screenshot for each step.

In Vagrant, cannot access the default Keystone site with nginx

The Vagrant server I configure with the following scripts still serves the default nginx page instead of the default Keystone page.
Here are the scripts I use:
The Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.network "private_network", ip: "192.168.1.10"
config.vm.provider "virtualbox" do |vb|
config.vm.provision "file", source: "mongodb-org-3.2.repo", destination: "~/mongodb-org-3.2.repo"
config.vm.provision "shell", path: "provision.sh"
end
end
The provision file:
sudo yum -y update
sudo hostnamectl set-hostname melanie
echo "given hostname :"
hostnamectl status --static
echo -e "\e[1;34m
***************************************************
add host names
***************************************************"
sudo cp /etc/hosts /etc/hosts.origin
echo "192.168.1.10 melanie.misite.com melanie" | sudo tee -a /etc/hosts > /dev/null
echo -e "\e[1;34mIP, FQDN and Server name setted in /etc/hosts:"
cat /etc/hosts
echo -e "\e[1;34m
***************************************************
set timezone
***************************************************"
sudo timedatectl set-timezone America/Guayaquil
echo -e "\e[1;34msetted time zone:"
timedatectl | grep "Time zone"
echo -e "\e[1;34m
***************************************************
add automatic security update
***************************************************"
sudo yum -y install yum-cron
sudo sed -i.bak 's/.*update_cmd =.*/update_cmd = security/' /etc/yum/yum-cron.conf
sudo sed -i.bak 's/.*apply_updates =.*/apply_updates = yes/' /etc/yum/yum-cron.conf
sudo sed -n /update_cmd/p /etc/yum/yum-cron.conf
sudo sed -n /apply_updates/p /etc/yum/yum-cron.conf
sudo systemctl status yum-cron
sudo systemctl start yum-cron
echo -e "\e[1;34m
***************************************************
create limited user account
***************************************************"
sudo useradd me
sudo echo me:admin | chpasswd
echo -e "\e[1;34m
***************************************************
SSH Dameon Options
***************************************************"
sudo sed -i.bak 's/.*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
echo yum-cron.conf modified parameters:
sudo sed -n /PermitRootLogin/p /etc/ssh/sshd_config
systemctl restart sshd
echo -e "\e[1;34m
***************************************************
installing fail2ban
***************************************************"
sleep 15 #put sleep hoping it will help to fail2ban to be installed => do not work
sudo yum -y install fail2ban
sudo yum -y install sendmail
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
systemctl start sendmail
systemctl enable sendmail
cp /etc/fail2ban/fail2ban.conf /etc/fail2ban/fail2ban.local
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sed 's/.*backend =*/backend = systemd./' /etc/fail2ban/jail.local
echo -e "\e[1;34m
***************************************************
installing nginx
***************************************************"
sudo yum -y install epel-release
sudo yum -y install nginx
sudo systemctl start nginx
echo -e "\e[1;34m
***************************************************
configure nginx
***************************************************"
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
sudo mkdir /etc/nginx/sites-available
sudo mkdir /etc/nginx/sites-enabled
sudo mkdir /var/www/misite.com/logs
sudo cp /home/vagrant/misite.conf /home/vagrant/misite.com
sudo mv /home/vagrant/misite.com /etc/nginx/sites-available > /dev/null
sudo ln -s /etc/nginx/sites-available/misite.com /etc/nginx/sites-enabled
sudo rm -rf /etc/nginx/sites-available/default
sudo chown -R nginx:nginx /var/www
sudo service nginx restart > /dev/null
echo -e "\e[1;34m
***************************************************
installing nodejs
***************************************************"
sudo yum -y install npm
sudo yum -y install nodejs
node --version
echo -e "\e[1;34m
***************************************************
installing mongoDB
***************************************************"
sudo mv /home/vagrant/mongodb-org-3.2.repo /etc/yum.repos.d/mongodb-org-3.2.repo
sudo yum -y install mongodb-org
systemctl start mongod
systemctl status mongod
echo -e "\e[1;34m
***************************************************
installing keystone
***************************************************"
sudo npm install -g yo
sudo mkdir /var/www
sudo mkdir /var/www/misite.com
cd /var/www/misite.com
sudo npm install -g generator-keystone
sudo chown -R vagrant:vagrant /var/www/
The nginx server conf file (/etc/nginx/sites-available/misite.com):
Here the Keystone site should be redirected to port 80 of the Vagrant server (I think the mistake is in this file but I can not see where):
# IP which nodejs is running on
upstream app_misite.com {
server 0.0.0.0:3000;
}
# nginx server instance
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
access_log /var/www/misite.com/logs/access.log;
error_log /var/www/misite.com/logs/error.log;
location / {
root /var/www/misite.com;
index index.html index.htm;
try_files $uri $uri/ @node;
}
location @node {
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://app_misite.com;
}
}
I also removed the default keyword from /etc/nginx/nginx.conf.
Then:
vagrant ssh
[vagrant@melanie ~]$ cd /var/www/misite.com
[vagrant@melanie misite.com]$ yo keystone
[vagrant@melanie misite.com]$ node keystone
And I have KeystoneJS running:
------------------------------------------------
KeystoneJS Started:
My Site is ready on http://0.0.0.0:3000
------------------------------------------------
But I still see the default nginx page at http://192.168.1.10/.
Any help will be appreciated.
Disclaimer: I'm not familiar with Nginx; I'm trying to see whether it may be Keystone settings that are affecting the port it is running on versus Nginx.
Keystone defaults to port 3000 (more specifically, process.env.PORT || 3000) unless you specify another one. If you set the PORT environment variable to whatever value you want (80 in this case), that should make it work on http://192.168.1.10:80/.
process.env.PORT = 3000
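(As an illustration only, not from the original answer: one way to set that variable is when launching the app from the shell. Binding straight to port 80 normally needs root privileges, which is one reason people keep the nginx proxy in front instead.)
cd /var/www/misite.com
sudo PORT=80 node keystone   # run Keystone on port 80 instead of the default 3000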
Looking at your nginx server conf file also shows this:
upstream app_misite.com {
server 0.0.0.0:3000;
}
Try changing :3000 to :80.
I think you have a couple of issues:
nginx installed on CentOS has a default nginx.conf file with a server directive, so you cannot override that directive in your misite config file.
You need to remove the default server declaration in the /etc/nginx/nginx.conf file, or you can just use your provisioning script to copy in a new default conf file without a server declaration.
I am also not even sure the default file has an include directive for the sites-enabled directory (check whether you have include /etc/nginx/sites-enabled/*; in your conf file).
When you create the Keystone app, it does not contain the /var/www/misite.com/logs/ directory and log files; I do not see you create them in your script, so nginx will fail on this (by the way, you can create a whole directory structure with a single mkdir -pv command).
The Keystone app you created is owned by vagrant. Make sure vagrant is added to the nginx group, otherwise you might get a Forbidden exception when accessing your site.
It can also help on CentOS, if you don't want to fight with SELinux, to just disable it on a dev instance: edit /etc/sysconfig/selinux and set SELINUX=disabled. A rough sketch of these fixes follows below.
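A minimal sketch of those fixes, assuming the paths and user names from the question (exact group membership and SELinux handling may differ for your setup):
# create the log directory tree nginx expects, in one command
sudo mkdir -pv /var/www/misite.com/logs
# let nginx read the vagrant-owned app files (group membership as suggested above, or chown)
sudo usermod -aG nginx vagrant
sudo chown -R vagrant:nginx /var/www/misite.com
# make sure site configs are actually pulled in by nginx.conf
grep -n "include /etc/nginx/sites-enabled" /etc/nginx/nginx.conf || echo "add the include line"
# dev instance only: relax SELinux until the setup works
sudo setenforce 0
# then verify the config and restart
sudo nginx -t && sudo systemctl restart nginx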

Circle CI and Behat

Hello, I have a problem with running Behat tests on CircleCI. When I try to run them locally everything works fine.
Here is a screenshot: https://www.dropbox.com/s/f27tfxdm1zxst8h/Screenshot%202016-11-17%2014.33.54.png?dl=0
and here is my circle.yml:
machine:
  timezone:
    Europe/Prague
  php:
    version: 7.0.7
dependencies:
  pre:
    - cp $HOME/$CIRCLE_PROJECT_REPONAME/app/config/parameters.yml.circle.dist $HOME/$CIRCLE_PROJECT_REPONAME/app/config/parameters.yml
database:
  override:
    - php app/console doctrine:migrations:migrate --no-interaction
test:
  override:
    - php bin/behat
Thanks for help
UPDATE
circle.yml
machine:
  timezone:
    Europe/Prague
  php:
    version: 7.0.7
dependencies:
  pre:
    - cp $HOME/$CIRCLE_PROJECT_REPONAME/app/config/parameters.yml.circle.dist $HOME/$CIRCLE_PROJECT_REPONAME/app/config/parameters.yml
  post:
    - sudo cp $HOME/$CIRCLE_PROJECT_REPONAME/app/config/mywebsite.conf /etc/apache2/sites-available
    - sudo a2ensite mywebsite.conf
    - sudo rm /etc/apache2/mods-enabled/php5.load
    - sudo service apache2 restart
database:
  override:
    - php app/console doctrine:migrations:migrate --no-interaction -e=test
test:
  override:
    - php bin/behat
mywebsite.conf
Listen 8080
<VirtualHost *:8080>
LoadModule php7_module /opt/circleci/php/7.0.7/usr/lib/apache2/modules/libphp7.so
DocumentRoot /home/ubuntu/my_project/web
ServerName mywebsite.com
<FilesMatch \.php$>
SetHandler application/x-httpd-php
</FilesMatch>
</VirtualHost>
And of course now I'm trying to connect to the URL mywebsite.com, but the response is still: cURL error 6: Could not resolve host: mywebsite.com (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) (GuzzleHttp\Exception\ConnectException)
I had the same problem as you and finally got it running. The mere Apache config sample in the Circle docs will never work. For me the following did work in the end:
<VirtualHost *:80>
DocumentRoot /home/circleci/drupal-circleci-behat/web
ServerName drupal-circleci-behat.localhost
<Directory /home/circleci/drupal-circleci-behat/web >
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
Also, I added the domain to the hosts file via config.yml:
version: 2
jobs:
  build:
    docker:
      - image: circleci/php:7.1-apache-node-browsers
    working_directory: ~/drupal-circleci-behat
    steps:
      - checkout
      - run:
          name: Apache
          command: |
            sudo cp .circleci/drupal-circleci-behat.conf /etc/apache2/sites-available/drupal-circleci-behat.conf
            sudo a2ensite drupal-circleci-behat
            sudo service apache2 start
            echo 127.0.0.1 drupal-circleci-behat.localhost | sudo tee -a /etc/hosts
            cat /etc/hosts
            curl drupal-circleci-behat.localhost
      - run:
          name: Other stuff ...
          command: |
            ...
Working sample repo here: https://github.com/leymannx/drupal-circleci-behat

Vagrant - CentOS networking

I have set up a Vagrant machine with this configuration --
Vagrant.configure("2") do |config|
config.vm.box = "intprog/centos7-ez6"
config.ssh.insert_key = false
config.vm.network "public_network", ip: "192.168.33.243"
config.vm.provision "file", source: "/server/bin/nginx/conf/domains-enabled/cemcloudMigration.conf", destination: "~/cemcloud.conf"
config.vm.provision "shell", path: "webroot/bootstrap/script.sh"
end
This is what my script looks like --
sudo su
#update the centos version
#yum update -y
yum -y erase httpd httpd-tools apr apr-util
#getting nginx from the right address
yum install -y http://nginx.org/packages/centos/7/x86_64/RPMS/nginx-1.10.0-1.el7.ngx.x86_64.rpm
yum install -y nginx
#installing composer
curl -sS https://getcomposer.org/installer | php
chmod +x composer.phar
mv composer.phar /usr/bin/composer
cd /srv/www/cemcloud2
composer install
#removal of old mariadb5.5 and installation of the new one
yum -y remove mariadb-server mariadb mariadb-libs
yum clean all
yum -y install MariaDB-server MariaDB-client
#clear unnecessary software
yum -y remove varnish
## restart the service
service mysql restart
service php-fpm restart
service nginx restart
/var/log/nginx/access.log is producing this --
10.0.2.2 - - [17/Oct/2016:11:42:10 +0000] "GET / HTTP/1.1" 301 185 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:51.0) Gecko/20100101
Firefox/51.0" "-"
Really strange behavior from nginx, because it sometimes produces a log entry and sometimes it doesn't. When I open it in Firefox Developer Edition it produces a log entry, and when I am on Google Chrome it doesn't.
Every time I put the URL into the browser it says
the connection has timed out.
Anyhow, I want to get connected to this machine. What am I doing wrong?
Please check your network on the guest machine with:
nmap -sT -O localhost
Check if the ports you are using in your nginx configuration are open.
If not, open them in your firewall and check again.
It was a firewall issue inside this machine, "intprog/centos7-ez6". The firewall wasn't letting the https port through.
I followed these steps:
firewall-cmd --add-service=https
firewall-cmd --add-service=https --permanent
firewall-cmd --reload
and it all worked.
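(If plain HTTP on port 80 also times out, the equivalent firewalld commands for the http service would be the following; this is an addition of mine, not part of the original fix.)
firewall-cmd --add-service=http
firewall-cmd --add-service=http --permanent
firewall-cmd --reload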
