I'm trying to use gitlab-ci and capistrano to deploy my symfony application, but I can't deploy over SSH by injecting keys into the Docker container: the script keeps prompting for a password when connecting. I'm using a local instance of GitLab.
In GitLab's SSH_PRIVATE_KEY secret variable, I added the git user's private key, and in SSH_SERVER_HOSTKEYS, the output of ssh-keyscan -H 192.168.0.226.
In the authorized_keys file of the deploy user's .ssh folder, I put the git user's public key.
Here are the configuration files:
gitlab-ci.yml:
image: php:7.1

cache:
  paths:
    - vendor/

before_script:
  # Install dependencies
  - bash ci/docker_install.sh > /dev/null
  - bash ci/ssh_inject.sh

stages:
  - deploy

deploy:
  stage: deploy
  script:
    - apt-get install ruby-full -yqq
    - gem install capistrano -v 3.8.0
    - gem install capistrano-symfony
    - cap production deploy
  environment:
    name: production
    url: http://website.com
  only:
    - master
ssh_inject.sh:
#!/bin/bash
set -xe
# Install ssh-agent if not already installed, it is required by Docker.
# (change apt-get to yum if you use a CentOS-based image)
which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
# Run ssh-agent (inside the build environment)
eval $(ssh-agent -s)
# Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
ssh-add <(echo "$SSH_PRIVATE_KEY")
mkdir -p ~/.ssh
[[ -f /.dockerenv ]] && echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts
deploy.rb:
# config valid only for current version of Capistrano
lock '3.8.0'
set :application, 'symfony'
set :repo_url, 'git@gitlab.local:symfony.git'
# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, '/home/symfony'
set :symfony_env, "prod"
set :composer_install_flags, '--no-dev --prefer-dist --no-interaction --optimize-autoloader'
set :symfony_directory_structure, 3
set :sensio_distribution_version, 5
# symfony-standard edition directories
set :app_path, "app"
set :web_path, "web"
set :var_path, "var"
set :bin_path, "bin"
set :app_config_path, "app/config"
set :log_path, "var/logs"
set :cache_path, "var/cache"
set :symfony_console_path, "bin/console"
set :symfony_console_flags, "--no-debug"
# asset management
set :assets_install_path, "web"
set :assets_install_flags, '--symlink'
# Share files/directories between releases
set :linked_files, %w(app/config/parameters.yml)
set :linked_dirs, %w(web/uploads)
# Set correct permissions between releases, this is turned off by default
set :permission_method, false
set :file_permissions_paths, ["var/logs", "var/cache"]
set :file_permissions_users, ["apache"]
before "deploy:updated", "deploy:set_permissions:acl"
after "deploy:updated", "symfony:assetic:dump"
and production.rb:
server '192.168.0.226', user: 'deploy', roles: %w{app db web}
What could be wrong? I tried to set forward_agent to true but it's not working either.
If I build the Docker container manually and install all dependencies, the SSH connection can be established without asking for a password...
Here is the error:
EDIT:
Is there something to add in the runner configuration? Here it is:
concurrent = 1
check_interval = 0
[[runners]]
name = "Docker runner"
url = "http://gitlab.local/ci"
token = "mytoken"
executor = "docker"
[runners.docker]
tls_verify = false
image = "php:7.1"
privileged = false
disable_cache = false
volumes = ["/cache"]
[runners.cache]
bash ci/ssh_inject.sh does not work because the script is run in a separate child shell, so the environment set up by ssh-agent is lost when it exits. Use:
source ./ci/ssh_inject.sh
instead.
You need to create an SSH key on the GitLab runner as the gitlab-runner user.
Then add the public key of this key pair to the authorized_keys file on your server.
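A sketch of those two steps, assuming a shell-executor runner where jobs run as the gitlab-runner user (paths are the usual defaults):
# On the runner host: generate a key pair for the gitlab-runner user (no passphrase)
sudo -u gitlab-runner ssh-keygen -t rsa -b 4096 -N '' -f /home/gitlab-runner/.ssh/id_rsa
# Copy the public key to the deploy user on the target server
sudo -u gitlab-runner ssh-copy-id deploy@192.168.0.226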
Solved by moving ssh_inject.sh content into gitlab-ci.yml.
If anyone has an idea about why it needs to be in gitlab-ci.yml, I'd like to understand.
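For reference, a minimal sketch of the inlined version. It works because GitLab CI runs every line of before_script and script in the same shell session, so the variables exported by eval $(ssh-agent -s) are still set when cap production deploy runs, whereas bash ci/ssh_inject.sh runs them in a child shell that exits immediately:
before_script:
  - bash ci/docker_install.sh > /dev/null
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts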
I'm using Docker Compose locally with:
app container: Nginx & PHP-FPM with a Symfony 4 app
PostgreSQL container
Redis container
It works great locally, but when deployed to the development Docker Swarm cluster, I can't log in to the Symfony app.
The Swarm stack is the same as local, except for PostgreSQL which is installed on its own server (not a Docker container).
Using the profiler, I nearly always get the following error:
Token not found
Token "2df1bb" was not found in the database.
When I display the content of the var/log/dev.log file, I get these lines about my login attempts:
[2019-07-22 10:11:14] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"http://dev.ip/public/login","method":"GET"} []
[2019-07-22 10:11:14] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} []
[2019-07-22 10:11:14] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.DEBUG: Guard authenticator does not support the request. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
The only thing I find potentially useful here is the Guard authenticator does not support the request. message, but I have no idea what to search for from there.
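For what it's worth, that debug line by itself is usually harmless on a GET request: a form-login guard authenticator normally only supports the POST that submits the form, so simply rendering the login page logs it. A rough sketch of the supports() method that make:auth generates (assuming a standard App\Security\LoginFormAuthenticator, matching the route name in the log above):
// use Symfony\Component\HttpFoundation\Request;
public function supports(Request $request)
{
    // Only handle the POST that submits the form; the GET that renders it is ignored,
    // which is what produces the "does not support the request" line.
    return 'app_login' === $request->attributes->get('_route')
        && $request->isMethod('POST');
}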
UPDATE:
Here is my docker-compose.dev.yml (removed redis container and changed app environment variables):
version: "3.7"

networks:
  web:
    driver: overlay

services:
  # Symfony + Nginx
  app:
    image: "registry.gitlab.com/my-image"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    networks:
      - web
    ports:
      - 80:80
    environment:
      APP_ENV: dev
      DATABASE_URL: pgsql://user:pass@0.0.0.0/my-db
      MAILER_URL: gmail://user@gmail.com:pass@localhost
Here is the Dockerfile.dev used to build the app image on development servers:
# Base image
FROM php:7.3-fpm-alpine
# Source code into:
WORKDIR /var/www/html
# Import Symfony + Composer
COPY --chown=www-data:www-data ./symfony .
COPY --from=composer /usr/bin/composer /usr/bin/composer
# Alpine Linux packages + PHP extensions
RUN apk update && apk add \
supervisor \
nginx \
bash \
postgresql-dev \
wget \
libzip-dev zip \
yarn \
npm \
&& apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
&& pecl install redis \
&& docker-php-ext-enable redis \
&& docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
&& docker-php-ext-install pdo_pgsql \
&& docker-php-ext-configure zip --with-libzip \
&& docker-php-ext-install zip \
&& composer install \
--prefer-dist \
--no-interaction \
--no-progress \
&& yarn install \
&& npm rebuild node-sass \
&& yarn encore dev \
&& mkdir -p /run/nginx
# Nginx conf + Supervisor entrypoint
COPY ./dev.conf /etc/nginx/conf.d/default.conf
COPY ./.htpasswd /etc/nginx/.htpasswd
COPY ./supervisord.conf /etc/supervisord.conf
EXPOSE 80 443
ENTRYPOINT /usr/bin/supervisord -c /etc/supervisord.conf
UPDATE 2:
I pulled my Docker images and ran the application using only the docker-compose.dev.yml (without the docker-compose.local.yml that I'd also use locally). I was able to log in; everything is okay.
So... It works with Docker Compose locally, but not in Docker Swarm on a remote server.
UPDATE 3:
I made the dev server leave the Swarm cluster and started the services using Docker Compose. It works.
The issue is about going from Compose to Swarm. I created an issue: docker/swarm #2956
Maybe it's not your specific case, but it could help users who hit problems in Docker Swarm that are not present in Docker Compose.
I've been fighting this issue for over a week. I found that the default network for Docker Compose uses the bridge driver and Docker Swarm uses Overlay.
Later, I read in the Caveats section of the Postgres Docker image repo that there's a problem with IPVS connection timeouts in overlay networks, and it refers to this blog for solutions.
I tried the first option and changed the endpoint_mode setting to dnsrr in my docker-compose.yml file:
db:
  image: postgres:12
  # Other settings ...
  deploy:
    endpoint_mode: dnsrr
Keep in mind that there are some caveats (mentioned in the blog) to consider. However, you could try the other options.
Also in this issue maybe you find something useful as they faced the same problem.
I will run JupyterHub in a subdomain. Here are the Dockerfile, jupyterhub_config.py, and .gitlab-ci.yml.
My first question is how to configure jupyterhub_config.py: how can I load the jupyterhub_config.py into the container during the build?
How do I start JupyterHub in the .gitlab-ci.yml for tests, and how do I copy the application to the subdomain? I wrote a README.md. I need a little help with JupyterHub. If all works fine, I will write a complete HOWTO for installing JupyterHub on a local machine and in a subdomain at a provider.
FROM continuumio/miniconda3
# Updating packages
RUN apt-get update -y \
&& apt-get install -y --no-install-recommends \
git \
nano \
unzip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Install conda and Jupyter
RUN conda update -y conda
RUN conda install -c conda-forge jupyter_nbextensions_configurator \
jupyterhub \
jupyterlab \
matplotlib \
pandas \
scipy
# Setup application
EXPOSE 8000
CMD ["jupyterhub", "--ip='*'", "--port=8000", "--no-browser", "--allow-root"]
The .gitlab-ci.yml
image: docker:latest

variables:
  CONTAINER_IMAGE: registry.gitlab.com/joklein
  DOCKER_IMAGE: jupyterhub
  TAG: 0.1.0

services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

before_script:
  - echo "$GITLAB_PASSWORD" | docker login registry.gitlab.com --username $GITLAB_USER --password-stdin

build:
  stage: build
  script:
    - docker build -t $CONTAINER_IMAGE/$DOCKER_IMAGE .
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE

test:
  stage: test
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    # - docker run $CONTAINER_IMAGE/$DOCKER_IMAGE -dt -p 8000:8000 --name $DOCKER_IMAGE

release:
  stage: release
  script:
    - docker pull $CONTAINER_IMAGE/$DOCKER_IMAGE
    - docker tag $CONTAINER_IMAGE/$DOCKER_IMAGE:latest $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
    - docker push $CONTAINER_IMAGE/$DOCKER_IMAGE:$TAG
  only:
    - master

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk update && apk add git openssh-client rsync
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - mkdir "${HOME}/.ssh"
    - echo "${SSH_HOST_KEY}" > "${HOME}/.ssh/known_hosts"
    - echo "${SSH_PRIVATE_KEY}" > "${HOME}/.ssh/id_rsa"
    - chmod 700 "${HOME}/.ssh/id_rsa"
    - rsync -hrvz --delete --exclude=_ public/ user@example.com:www/jupyter/
  only:
    - master
The jupyterhub_config.py
c = get_config()
# Letsencrypt (https://letsencrypt.org/) to obtain a free, trusted SSL
# certificate.
c.JupyterHub.ssl_key = '/etc/letsencrypt/live/example.com/privkey.pem'
c.JupyterHub.ssl_cert = '/etc/letsencrypt/live/example.com/fullchain.pem'
c.JupyterHub.port = 443
#
# Change from JupyterHub to JupyterLab
c.Spawner.default_url = '/lab'
c.Spawner.debug = True
#
# # Specify users and admin
c.Authenticator.whitelist = {"systemuser"}
c.Authenticator.admin_users = {"systemuser"}
Docker base image of JupyterHub and JupyterLab
JupyterHub is a multi-user server for Jupyter notebooks. JupyterLab is the
next-generation web-based user interface for the Jupyter Project. This
project is a Docker base image for JupyterHub and JupyterLab
that works as a stand-alone application and in a (sub)domain.
Images derived from this image can either run as a stand-alone server, or
function as a volume image for your server. You can also use them in a CI/CD
system such as GitLab CI to build your content prior to bundling it into a
standalone server container.
Building your JupyterHub image
Based on this structure, you can easily build an image for your needs. There are two options for using the image you generated:
as a stand-alone image
as a volume image for your webserver
The simplest way to build your own image is to use a Dockerfile. This is only an example. If you need more software packages you can install them with this
Dockerfile and conda.
Build the container
docker build -t jupyterhub .
Your JupyterHub with JupyterLab is automatically generated during this build.
Run the container
docker run -p 8000:8000 -d --name jupyterhub jupyterhub jupyterhub
-p is used to map your local port 8000 to the container port 8000
-d is used to run the container in the background. JupyterHub will just write
logs, so there is no need to output them in your terminal unless you want to troubleshoot a server error.
--name jupyterhub names your container jupyterhub
jupyterhub is the image
jupyterhub is the last command, used to start the jupyterhub server
and your JupyterHub with JupyterLab is now available at http://localhost:8000.
Start / Stop JupyterHub
docker start / stop jupyterhub
Configure JupyterHub
Let's encrypt certificates for JupyterHub
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to
demonstrate control over the domain. With Let’s Encrypt, you do this using
software that uses the ACME protocol, which typically runs on your web host.
Go to zerossl.com and generate a certificate for your domain. As the
result you get four files: domain-key.txt, domain-crt.txt, domain-csr.txt, account-key.txt. These files use base64, which is readable
ASCII, not binary format. The certificates are already in PEM format. Just
change the extension to *.pem.
For JupyterHub only the files domain-key.txt and domain-crt.txt are needed.
cp domain-crt.txt fullchain.pem
cp domain-key.txt privkey.pem
Add a System user in the container
By default JupyterHub searches for users on the server. In order to be able to
log in to our new JupyterHub server we need to connect to the JupyterHub docker
container and create a new system user with a password.
docker exec -it jupyterhub bash
useradd --create-home systemuser
passwd systemuser
exit
The command docker exec -it jupyterhub bash will spawn a root shell in your
docker container. You can use the root shell to create system users in the
container. These accounts will be used for authentication in JupyterHub's
default configuration.
The first command useradd creates a new user named systemuser. The second will
ask you for a password.
The whole process might be simpler with GitLab 12.0 (June 2019) and its
Git integration for JupyterHub
Deploying JupyterHub via GitLab’s Kubernetes integration provides an easy way to get started with Jupyter notebooks, which can be used to create and share documents that contain live code, visualizations, and even runbooks.
Starting with GitLab 12.0, JupyterLab’s Git extension is automatically provisioned and configured when installing JupyterHub onto your Kubernetes cluster.
This integration enables full version control of your notebooks as well as issuance of Git commands within Jupyter. Git commands can be issued via the Git tab on the left panel or via Jupyter’s command line prompt.
See documentation and gitlab-ce issue 47138.
jupyterhub --generate-config
This is what the documentation says.
It creates a jupyterhub_config.py file in /srv/jupyterhub
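If you want that config baked into the image rather than generated at runtime, a minimal sketch (assuming the jupyterhub_config.py sits next to the Dockerfile) is to copy it in and point the hub at it with -f:
# Copy the config into the image and start the hub with it
COPY jupyterhub_config.py /srv/jupyterhub/jupyterhub_config.py
CMD ["jupyterhub", "-f", "/srv/jupyterhub/jupyterhub_config.py"]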
I have a multi-container Symfony application that uses docker-compose to handle the relationships between the containers. To simplify a little, I have 4 main services:
code:
  image: mycode

web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm

php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo

mongo:
  image: mongo
The "mycode" image contains the code of my application and is built from the following Dockerfile:
FROM composer/composer
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libmcrypt-dev \
libxml2-dev \
libicu-dev \
libcurl4-openssl-dev \
libssl-dev \
pkg-config
RUN docker-php-ext-install iconv mcrypt mbstring bcmath json ctype iconv posix intl
RUN pecl install mongo \
&& echo extension=mongo.so >> /usr/local/etc/php/conf.d/mongo.ini
COPY . /code
WORKDIR /code
RUN rm -rf /code/app/cache/* \
&& rm -rf /code/app/logs/* \
&& chown -R root /code/app/cache \
&& chown -R root /code/app/logs \
&& chmod -R 777 /code/app/cache \
&& chmod -R 777 /code/app/logs \
&& composer install \
&& rm -f /code/web/app_dev.php \
&& rm -f /code/web/config.php
VOLUME ["/code", "/code/app/logs", "/code/app/cache"]
At first, deploying this application was easy. I just had to do a simple docker-compose up -d and it created all the containers and ran them without any issue. But then I had to deploy a new version.
This configuration uses volumes to store data:
the source code is mounted on the /code volume, and shared between 3
containers (code, web, php-fpm). It has to be replaced by a new version when deploying.
the MongoDb data is on another
volume, mounted only by the mongo container. I have to keep this data between deployments.
When I deploy an update to my code, I publish the new version of the mycode image and re-create the container. But since the /code volume is still used by the web and php-fpm containers, the old volume can't be replaced by the new one. I have to stop all the running services to delete the old volume, and if I use the docker-compose rm -v command, it will delete the mongodb data too!
Can't I replace only one volume with a new version, without any downtime?
So I'm kind of stuck here. I'm thinking of having a permanent volume to store the code and updating it through SSH with Capistrano, old style. This would allow me to run doctrine migration scripts after deployment too. But I have other issues with it, as Capistrano uses symlinks to handle versions, so I can't just mount the /current folder to /code.
Do you have a solution to handle the deployment of a Docker application without losing data and without downtime?
Should I use manual scripts instead of docker-compose?
the source code is mounted on the /code volume
This is the problem, it is not what you want.
Code never goes into a volume, it should change when the image changes. Volumes are for things that you want to preserve between changes to the image (data, logs, state, etc).
Code is the immutable thing that you want to replace when you change a container. So remove the /code volume from the Dockerfile entirely, and instead do an ADD . /code in the mynginx and myphpfpm Dockerfiles.
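For example, the myphpfpm Dockerfile could bake the code in instead of declaring a volume; a rough sketch (the base image tag is an assumption):
# Hypothetical myphpfpm Dockerfile: the code is part of the image, not a volume
FROM php:5.6-fpm
ADD . /code
WORKDIR /code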
With that change, you can deploy with just up -d. It will recreate any containers that have changed, and your volumes will be copied over. You don't need an rm anymore.
If you have your Dockerfile for myphpfpm and mynginx in a different directory, you can build using docker build -f path/to/dockerfile .
Using a host volume (as suggested in another answer) is another option, however that's not usually what you want outside of development. With a host volume you would still remove the /code VOLUME from the dockerfile.
Do not copy the code via the Dockerfile, just attach volumes to the 'code' container.
A few edits:
code:
  image: mycode
  volumes:
    - .:/code
    - /code

web:
  image: mynginx
  volumes_from:
    - code
  ports:
    - "80:80"
  links:
    - php-fpm

php-fpm:
  image: myphpfpm
  volumes_from:
    - code
  links:
    - mongo

mongo:
  image: mongo
The same thing applies to mongo: mount it to an external volume so the data persists when the container shuts down. Actually there is also another method; they mention it on their Docker Hub page https://hub.docker.com/_/mongo/
Where to Store Data
Important note: There are several ways to store data used by
applications that run in Docker containers. We encourage users of the
mongo images to familiarize themselves with the options available,
including:
Let Docker manage the storage of your database data by writing the
database files to disk on the host system using its own internal
volume management. This is the default and is easy and fairly
transparent to the user. The downside is that the files may be hard to
locate for tools and applications that run directly on the host
system, i.e. outside containers.
Create a data directory on the host system (outside the container) and
mount this to a directory visible from inside the container. This
places the database files in a known location on the host system, and
makes it easy for tools and applications on the host system to access
the files. The downside is that the user needs to make sure that the
directory exists, and that e.g. directory permissions and other
security mechanisms on the host system are set up correctly.
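A minimal sketch of that second option in compose syntax, mapping a host directory into /data/db (the path where the official mongo image keeps its database files); the host path is just an example:
mongo:
  image: mongo
  volumes:
    - ./mongo-data:/data/db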
It's unclear to me how to get my build files from the Gitlab CI (hosted on https://ci.gitlab.com) over to my personal server using rsync.
I have set up 1 test and 1 deploy job.
Under the deploy tab I have entered the bash commands to:
Install rsync
Update packages
Finally, run the rsync command to transfer files over SSH to my personal server.
When I enter the SSH credentials (with verbose flag on) for my private personal server, it would appear that the SSH key is the issue. In Gitlab, I have already established the deploy key (for hooks - tested this and it works).
Where do I locate the public SSH key for the Gitlab deploy instance so that I can install that key on my server?
Below is the exact script entered in Gitlab CI deploy job script pane:
# Run as root
(
set -e
set -u
set -x
apt-get update -y
apt-get -y install rsync
)
git clone https://github.com/bla/deployments.git $HOME/deploy/deployments
SVR_WEB1_WEBSERVER="000.11.22.333"
USER1="franklin"
GROUP1="team1"
FROM_DIR="/gitlab-ci-runner/tmp/builds/myrepo-1/"
DEST1="subdomains/gitlab/myrepo"
EXCLUSIONS_LIST="${HOME}/deploy/deployments/exclusions/exclusions.txt"
ssh -v "$USER1@$SVR_WEB1_WEBSERVER"
/usr/bin/rsync -avzh --progress --delete -e ssh --group=$GROUP1 -p --exclude-from "$EXCLUSIONS_LIST" "$FROM_DIR" "$USER1@$SVR_WEB1_WEBSERVER:$DEST1"
Providing your private ssh key is dangerous unless you use your own gitlab-ci runners for deployment. That's why it is better to use rsync modules.
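As an illustration, an rsync module means running rsync in daemon mode on the destination server and authenticating with a module password instead of your SSH private key. A rough sketch, where the module name, path, user and password variable are all placeholders:
# /etc/rsyncd.conf on the destination server (rsync must run with --daemon)
[myrepo]
    path = /var/www/subdomains/gitlab/myrepo
    read only = false
    auth users = deployer
    secrets file = /etc/rsyncd.secrets
# /etc/rsyncd.secrets contains lines of the form user:password
The CI job then pushes over the rsync daemon protocol, taking the password from a CI variable rather than a key:
RSYNC_PASSWORD="$DEPLOY_PASSWORD" rsync -avzh --progress --delete --exclude-from "$EXCLUSIONS_LIST" "$FROM_DIR" rsync://deployer@$SVR_WEB1_WEBSERVER/myrepo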
I installed it by running sudo apt-get install phpmyadmin and then running
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html and sudo service nginx restart,
but it's not working.
Note: I didn't select any of the apache2 or lighttpd options when installing.
Option 1:
This will install the latest version of PhpMyAdmin from a shell script I've written. You are welcome to check it out on Github.
Run the following command from your code/projects directory:
curl -sS https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh | bash
Option 2:
This will install PhpMyAdmin (not the latest version) from Ubuntu's repositories. Assuming that your projects live in /home/vagrant/Code:
Run sudo apt-get install phpmyadmin. Do not select apache2 or lighttpd when prompted; just hit tab and enter.
sudo ln -s /usr/share/phpmyadmin/ /home/vagrant/code/phpmyadmin
cd ~/Code && serve phpmyadmin.test /home/vagrant/code/phpmyadmin
Note: If you encounter issues creating the symbolic link on step 2, try the first option or see Lyndon Watkins' answer below.
Final steps:
Open the /etc/hosts file on your main machine and add:
127.0.0.1 phpmyadmin.test
Go to http://phpmyadmin.test:8000
Step 1:
Go to the phpMyAdmin website, download the latest version and unzip it into your code directory
Step 2:
Open up homestead.yaml file and add these lines
folders:
    - map: /Users/{yourName}/Code/phpMyAdmin
      to: /home/vagrant/Code/phpMyAdmin

sites:
    - map: phpmyadmin.test
      to: /home/vagrant/Code/phpMyAdmin
Step 3:
Open your hosts file and add this line:
127.0.0.1 phpmyadmin.test
Step 4:
You may need to run vagrant provision to load the new configuration if vagrant is already running.
That's it!
Go to http://phpmyadmin.test:8000. It should work from there. The great thing about this method is that if you ever need to destroy your box, you won't ever have to set up phpMyAdmin again so long as you keep your homestead.yaml file and phpMyAdmin in your code directory.
===========
Important update from DaneSoul:
I tried these instructions on Homestead 5.3 and ran into a "No input file specified" problem when trying to open http://phpmyadmin.test.
And finally I found the solution:
You need to unpack phpmyadmin to
/home/vagrant/Code/phpMyAdmin/public
And write in homestead.yaml
- map: phpmyadmin.test
  to: /home/vagrant/Code/phpMyAdmin/public
So it's almost all the same, but the /public directory in the paths makes it work!
Also, in my configuration I use http://phpmyadmin.test, not http://phpmyadmin.test:8000.
Update Note: Follow this article to change your domain extension.
The answer from Nikos Gr worked for me; however I needed to amend steps 2 and 3 as my host system has issues creating the symlink.
I changed:
sudo ln -s /usr/share/phpmyadmin/ /home/vagrant/Code/phpmyadmin
cd ~/Code && serve phpmyadmin.app /home/vagrant/Code/phpmyadmin
To:
cd ~/Code && serve phpmyadmin.app /usr/share/phpmyadmin/
(Couldn't comment on the original solution as my rep isn't high enough!)
A simplified version of Jyeon's Answer. You don't need to share the ~/Code folder in the Homestead.yaml file:
folders:
    - map: /Users/{yourName}/Code/phpMyAdmin
      to: /home/vagrant/Code/phpMyAdmin
Just download the latest version of PhpMyAdmin from PhpMyAdmin and put the unzipped files in the ~/Code/phpMyAdmin folder, then just follow the two steps here:
Step 1:
Open up homestead.yaml file and add these lines
sites:
    - map: phpmyadmin.app
      to: /home/vagrant/Code/phpMyAdmin
Step 3:
Open up your hosts file and add this line:
192.168.10.10 phpmyadmin.app
Now run the vagrant reload --provision command and you're good to go.
Open up the phpmyadmin.app address in your browser and you'll see the phpmyadmin interface.
Install phpMyAdmin
SSH into Homestead vagrant box with vagrant ssh and type the following command:
sudo apt-get install phpmyadmin
When prompted to select the Web server, select apache2 and press Enter, just to get past it.
When prompted to config database for phpmyadmin with dbconfig-common, select Yes and press Enter.
When prompted for Password of the database's administrative user, enter secret and press Enter.
When prompted for MySQL application password for phpmyadmin, enter secret and press Enter.
When prompted for Password confirmation, enter secret again and press Enter.
Then create and configure the site for Nginx:
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html/phpmyadmin
cd /etc/nginx/sites-available
sudo cp homestead.app phpmyadmin.app
sudo sed -i 's/homestead.app/phpmyadmin.app/g' /etc/nginx/sites-available/phpmyadmin.app
sudo sed -i 's/home\/vagrant\/Code\/Laravel\/public/usr\/share\/nginx\/html\/phpmyadmin/g' /etc/nginx/sites-available/phpmyadmin.app
sudo ln -s /etc/nginx/sites-available/phpmyadmin.app /etc/nginx/sites-enabled/phpmyadmin.app
sudo service nginx restart
sudo service php5-fpm restart
Adding phpMyAdmin.app to your hosts file
127.0.0.1 phpmyadmin.app
Navigate to http://phpmyadmin.app:8000 and you should now see phpMyAdmin login page.
More info available here if you need it
A variation on Nikos Gr's answer that seemed a bit simpler (in that it doesn't require a new symbolic link for each project on your Homestead box) and worked for me.
Inside the Homestead box, run sudo apt-get install phpmyadmin. Don't select any of the options during install.
On your host machine, add the following lines to your Homestead.yaml file:
- map: phpmyadmin.dev
  to: /usr/share/phpmyadmin
On your host machine, add the following line to your hosts file:
192.168.10.10 phpmyadmin.dev
...and Homestead's phpMyAdmin will be available at phpmyadmin.dev
You can install phpmyadmin automatically when you vagrant up or provision your homestead by adding the following snippet to your Homestead\scripts\homestead.rb file after # Update Composer On Every Provision
# Install phpMyAdmin on every provision
config.vm.provision "shell" do |s|
  s.inline = "curl -sS https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh | sh"
end
Your homestead.rb file should now look something like this:
class Homestead
  def Homestead.configure(config, settings)
    # Configure The Box
    config.vm.box = "laravel/homestead"
    config.vm.hostname = "homestead"

    # Configure A Private Network IP
    config.vm.network :private_network, ip: settings["ip"] ||= "192.168.10.10"

    # some other entries are truncated to keep this short

    # Update Composer On Every Provision
    config.vm.provision "shell" do |s|
      s.inline = "/usr/local/bin/composer self-update"
    end

    # Install phpMyAdmin on every provision
    config.vm.provision "shell" do |s|
      s.inline = "curl -sS https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh | sh"
    end

    # Configure Blackfire.io
    if settings.has_key?("blackfire")
      config.vm.provision "shell" do |s|
        s.path = "./scripts/blackfire.sh"
        s.args = [settings["blackfire"][0]["id"], settings["blackfire"][0]["token"]]
      end
    end
  end
end
Save the file and run vagrant destroy then vagrant up, or just vagrant reload.
NB: This uses Nikos Gr script located here https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh
Finally it worked for me; a few things I had to fix:
Homestead.yaml file:
- map: phpmyadmin.test
  to: /home/vagrant/code/phpmyadmin/
I had to delete /public from the end. I installed phpmyadmin (after running vagrant ssh from the Homestead directory) into the 'code' folder where the other projects are. If 'code' is lowercase, it has to be lowercase everywhere (or the other way around): in the folder name, in the yaml file, and even when performing these commands after installation:
sudo ln -s /usr/share/phpmyadmin/ /home/vagrant/code/phpmyadmin
cd ~/code && serve phpmyadmin.test /home/vagrant/code/phpmyadmin
This is the simplest solution; no mapping or anything else needed (a command sketch follows the steps below).
Download latest phpmyadmin version from here https://www.phpmyadmin.net/downloads
Make a folder named phpmyadmin inside your main root/public folder and unzip phpmyadmin here.
Run yourwebsite.com/phpmyadmin
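A quick sketch of those steps from the command line; the version number and project path are just examples:
cd /home/vagrant/Code/myproject/public
wget https://files.phpmyadmin.net/phpMyAdmin/4.9.7/phpMyAdmin-4.9.7-all-languages.zip
unzip phpMyAdmin-4.9.7-all-languages.zip
mv phpMyAdmin-4.9.7-all-languages phpmyadmin
# then browse to yourwebsite.com/phpmyadmin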
I am writing down the steps I followed to make my local Vagrant environment work nicely.
Step 1 - Start the vagrant and login
vagrant up
vagrant ssh
Step 2 - Go to your correct directory. (Depends on your file tree)
cd <VagrantDirectory>
Step 3 - Install phpmyadmin.
curl -sS https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh | bash
Step 4 - Configure the Homestead.yaml
- map: phpmyadmin.test
  to: /home/vagrant/<VagrantDirectory>/phpmyadmin
Step 5 - Reload the vagrant.
vagrant reload
Step 6 - Configure phpmyadmin
Go to your phpmyadmin directory. Copy config.sample.inc.php to config.inc.php
cp config.sample.inc.php config.inc.php
Step 7 - Edit config.inc.php with your text editor and place your new configuration there.
//Comment out the old configuration that was already here.
$cfg['Servers'][$i]['auth_type'] = 'config';
$cfg['Servers'][$i]['host'] = 'localhost'; // Also works with the IP address.
$cfg['Servers'][$i]['user'] = 'homestead'; // Username of MySQL, Default is homestead.
$cfg['Servers'][$i]['password'] = 'secret'; // Password. Default password is secret
$cfg['Servers'][$i]['extension'] = 'mysqli';
$cfg['Servers'][$i]['compress'] = false;
$cfg['Servers'][$i]['AllowNoPassword'] = false;
$cfg['CheckConfigurationPermissions'] = false; // Since you are on local, Leave this false.
Step 8 - Now browse your fresh PHPMyAdmin on your favorite browser.
http://phpmyadmin.test
For another alternative that I found super simple and that worked right out of the box I set up a new Nginx site from inside the Homestead box using the serve.sh script:
serve adminer.app /home/vagrant/Code/adminer/
And then in there I dropped the one page successor to phpmyadmin, Adminer. I also renamed it to "index.php" to make it just work. Then after adding the adminer.app entry to my hosts file I was good to go.
Had not used a web based MySQL interface in years since I just didn't like maintaining phpMyAdmin but this one is sweet. One file (plus an optional CSS file if you want a nicer theme) and that is all. Easy to maintain and update.
As I couldn't comment on the Jyeon solution because my rep isn't high enough, I'm contributing this answer instead; it worked for me in Linux (openSUSE Leap) with Vagrant 1.8.1 and laravel/homestead (virtualbox, 0.4.0):
Step 1:
Go to the phpMyAdmin website, download the latest version and unzip it into your project directory.
Step 2:
Add to your Homestead.yaml file the following lines:
folders:
    - map: ~/Code/phpMyAdmin
      to: /home/vagrant/Code/phpMyAdmin

sites:
    - map: phpmyadmin.app
      to: /home/vagrant/Code/phpMyAdmin
Step 3:
Add to your hosts file the following line:
192.168.10.10 phpmyadmin.app
Step 4:
After starting your vagrant environment and connecting to the machine via SSH, set up your virtual host for phpMyAdmin with the serve command:
cd ~/Code
serve phpmyadmin.app /home/vagrant/Code/phpMyAdmin/
That's it!
Go to http://phpmyadmin.app and it should work; you can log in with the default homestead user and password. The great thing about this method is that your phpMyAdmin setup is preserved as long as you keep the entries in your Homestead.yaml file and phpMyAdmin in your Code directory.
In my case the accepted solution works OK, except:
$ cd ~/Code && serve phpmyadmin.app /home/vagrant/Code/phpmyadmin
dos2unix: converting file /vagrant/scripts/serve.sh to Unix format ...
* Restarting nginx nginx [fail]
php5-fpm stop/waiting
php5-fpm start/running, process 4112
For an unknown reason the serve command fails at creating the configuration file, as seen in:
$ sudo tail -f /var/log/nginx/error.log
2015/03/18 11:54:16 [emerg] 3671#0: invalid number of arguments in "listen" directive in /etc/nginx/sites-enabled/phpmyadmin.app:2
Edit config:
$ editor /etc/nginx/sites-enabled/phpmyadmin.app
and add 80 to the listen directive on line 2. Apply the changes with:
$ sudo service nginx reload
The adminer index file is located in adminer/adminer, so try:
serve adminer.app /home/vagrant/Code/adminer/adminer
I installed phpMyAdmin from here
then put these settings in config.inc.php:
/* Server parameters */
$cfg['Servers'][$i]['host'] = '127.0.0.1';
$cfg['Servers'][$i]['port'] = '33060';
$cfg['Servers'][$i]['compress'] = false;
$cfg['Servers'][$i]['AllowNoPassword'] = false;
and opened it via Apache (I had XAMPP). In my case I placed phpMyAdmin in D:\xampp\htdocs\pma, which allowed me to open it at the localhost/pma URL.
Everything worked!