I'm testing WordPress for a personal project. I would like to develop my WordPress website locally and install the final website on my personal production server.
To do that, I'm looking for a plugin or program that synchronises my development WordPress (new pages, templates, and configuration) with my production WordPress.
Is there a program or plugin to do that? What is the best way to work with WordPress?
Thanks :)
There are two approaches you can try:
- Copy files to production on a schedule, e.g. with scp from the Linux CLI via crontab (every minute):
* * * * * scp local_file remote_username@remote_ip:remote_file
But I don't recommend this way; I only mention it because it is easy to understand.
- CI/CD. Here is a blog post to introduce the concept, if you don't know it yet:
https://thecodingmachine.io/continuous-delivery-on-a-dedicated-server
Briefly, you push your project to a private repo on GitLab or GitHub,
then create development (= development server) and production (= production server) branches; an automated job deploys to the corresponding server on every git push.
Here's the main part of the example .gitlab-ci.yml from that link:
deploy_staging:
  stage: deploy
  image: kroniak/ssh-client:3.6
  script:
    # add the server as a known host
    - mkdir ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # log into Docker registry
    - ssh deployer@thecodingmachine.io "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.thecodingmachine.com"
    # stop container, remove image.
    - ssh deployer@thecodingmachine.io "docker stop thecodingmachine.io_${CI_COMMIT_REF_SLUG}" || true
    - ssh deployer@thecodingmachine.io "docker rm thecodingmachine.io_${CI_COMMIT_REF_SLUG}" || true
    - ssh deployer@thecodingmachine.io "docker rmi registry.thecodingmachine.com/tcm-projects/thecodingmachine.io:${CI_COMMIT_REF_SLUG}" || true
    # start new container
    - ssh deployer@thecodingmachine.io "docker run --name thecodingmachine.io_${CI_COMMIT_REF_SLUG} --network=web -d registry.thecodingmachine.com/tcm-projects/thecodingmachine.io:${CI_COMMIT_REF_SLUG}"
  only:
    - branches
  except:
    - master
This may be hard to read at first, but it shows there is a way to do what you need; it may take some time to learn this part.
Hope it works for you.
Thanks to David Négrier for sharing.
I can't seem to get Web App for Containers (S1) to deploy a WordPress image from Azure Container Registry with HTTPS working for the admin section. The wp-config.php configuration file is taken from the samples on GitHub provided by Microsoft, and the Dockerfile extends wordpress:4.9.5-php7.2-apache.
# Pull image from official source with version specified
FROM wordpress:4.9.5-php7.2-apache
# Overwrite Wordpress configuration
COPY ./wp-config.php /usr/src/wordpress/
# Add permissions needed for wordpress to run
RUN chown -R www-data:www-data /usr/src/wordpress/
WORKDIR /var/www/html
I can build the image, push it, and deploy it to Web App for Containers, but when I try to log into the admin portal over HTTPS I am redirected to the non-HTTPS login.
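For context, the build and push steps are roughly the following (the registry name and tag match the log below; the exact commands are a sketch):
docker build -t myacrregsitryhere.azurecr.io/wordpressdocker:21483 .
docker push myacrregsitryhere.azurecr.io/wordpressdocker:21483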
The Docker log on the Web App during container invocation looks like this:
2018-04-23 07:57:21.751 INFO - Starting container for site
2018-04-23 07:57:21.751 INFO - docker run -d -p 58688:80 --name my-test-website__c20c_2 -e WEBSITE_SITE_NAME=my-test-website-name -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=...3cfaeb147447885bccba4565fb6192f -e HTTP_LOGGING_ENABLED=1 myacrregsitryhere.azurecr.io/wordpressdocker:21483
Things that I have tried:
Allowing both HTTP and HTTPS in wp-config.php, like so:
define('WP_HOME', '//'. filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
define('WP_SITEURL', '//'. filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
define('WP_CONTENT_URL', '/wp-content');
define('DOMAIN_CURRENT_SITE', filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
which results in a redirect loop that is stopped by the browser.
Enforcing HTTPS on the Azure Web App,
which also results in a redirect loop that is stopped by the browser.
Enforcing SSL for the admin area via wp-config.php:
define('FORCE_SSL_ADMIN', true);
How am I supposed to get https working with slots in Azure Web App for Containers?
You can enforce HTTPS for that web app in the portal, as described at https://learn.microsoft.com/en-gb/azure/app-service/app-service-web-tutorial-custom-ssl#enforce-https
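The same setting can also be applied from the Azure CLI; a minimal sketch (the resource group and app names are placeholders):
az webapp update --resource-group myResourceGroup --name my-test-website --set httpsOnly=true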
I'm trying to run a CakePHP 2 app inside a container. I have everything set up and PHP works properly, but there is one problem: /var/www/app/tmp has incorrect write permissions. This directory is loaded from a volume.
Did you already take a look at the CakePHP 2.0 docs? Maybe this is useful:
One common issue is that the app/tmp directories and subdirectories must be writable both by the web server and the command line user. On a UNIX system, if your web server user is different from your command line user, you can run the following commands just once in your project to ensure that permissions will be setup properly:
HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d' ' -f1`
setfacl -R -m u:${HTTPDUSER}:rwx app/tmp
setfacl -R -d -m u:${HTTPDUSER}:rwx app/tmp
Source: https://book.cakephp.org/2.0/en/installation.html#permissions
This happens a lot if you're running PHP via a container passthrough. In this scenario, you are passing a directory through to the application with pre-defined permissions. What you'll need to do is periodically make sure the permissions inside the container are updated for the web server user. Let's say your container is called web:
docker exec web chown -R www-data /var/www/html
(/var/www/html being replaced with wherever your code resides)
This will make it work perfectly fine in the container, but it may actually cause issues accessing the data from the host OS if you're using Linux. I had this issue several times with Laravel and PHP using a volume passthrough from the host, since the files in the volume end up owned by a user ID the host OS doesn't have.
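One workaround for that user-ID mismatch (a sketch, assuming a Debian-based image where the web server runs as www-data and your host user has UID 1000) is to remap www-data inside the container so files written through the volume stay owned by a user the host knows:
# remap www-data to the host user's UID (1000 is an assumption), then fix ownership again
docker exec web usermod -u 1000 www-data
docker exec web chown -R www-data:www-data /var/www/html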
Normally we point nginx at a directory by using the root directive in conf/nginx.conf.
However, I am wondering if there is something I can put there so that nginx will always serve the directory I am currently working in (that is, the output of pwd) instead of a fixed path. I have tried setting . as the root, but that does not seem to work.
I am running nginx as a non-root user, serving requests at a port greater than 1024.
If you use the directive root .;, the relative path is resolved against the prefix path, so the real root directory is <nginx_prefix_path>/.
You can start nginx with the command sbin/nginx -p $(pwd) -c /path/to/nginx.conf,
in which case <nginx_prefix_path> is changed to your current working directory.
BTW, command sbin/nginx -h shows the default <nginx_prefix_path>:
-p prefix : set prefix path (default: /usr/local/nginx/)
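Putting the two together, a minimal sketch (the binary path comes from the default prefix shown above; the config path and port are assumptions) that serves the current working directory on port 8080 as a non-root user:
mkdir -p logs                       # nginx writes its logs and pid file under <prefix>/logs by default
cat > /tmp/nginx-cwd.conf <<'EOF'
events {}
http {
    server {
        listen 8080;                # unprivileged port, fine for a non-root user
        root .;                     # resolved relative to the prefix path, i.e. the current directory
        autoindex on;
    }
}
EOF
/usr/local/nginx/sbin/nginx -p "$(pwd)" -c /tmp/nginx-cwd.conf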
Is it possible to have an SSH session use all your local configuration files (.bash_profile, .vimrc, etc.) on login? That way you would have the same configuration for, say, editing files in vim in the remote session.
I just came across two alternatives to just doing a git clone of your dotfiles. I take no credit for either of these and can't say I've used either extensively, so I don't know whether there are pitfalls.
sshrc
sshrc is a tool (actually just a big bash function) that copies over local rc files without permanently writing them to the remote user's $HOME - the idea being that it might be a shared admin account that other people use. It appears to be customizable for different remote hosts as well.
.ssh/config and LocalCommand
This blog post suggests a way to automatically run a command when you log in to a remote host. It tars and pipes a set of files to the remote, then un-tars them into the remote's $HOME:
Your local ~/.ssh/config would look like this:
Host *
PermitLocalCommand yes
LocalCommand tar c -C${HOME} .bashrc .bash_profile .exports .aliases .inputrc .vimrc .screenrc \
| ssh -o PermitLocalCommand=no %n "tar mx -C${HOME}"
You could modify the above to only run the command on certain hosts (instead of the * wildcard) or customize it for different hosts as well. There might be a fair amount of duplication per host with this method - although you could package the whole tar c ... | ssh .. "tar mx .." pipeline into a script (a sketch follows after the note below).
Note the above looks like it clobbers the same files on the remote when you connect, so use with caution.
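A rough sketch of such a wrapper script (the host is passed as the first argument; the same clobbering caveat applies):
#!/bin/bash
# copy a fixed set of dotfiles to the given host, then open an interactive session
set -eu
host="$1"
tar c -C "$HOME" .bashrc .bash_profile .exports .aliases .inputrc .vimrc .screenrc \
    | ssh -o PermitLocalCommand=no "$host" 'tar mx -C "$HOME"'
exec ssh "$host"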
Use a dotfiles.git repo
What I do is keep all my config files in a dotfiles.git on a central server.
You can set it up so that when you ssh into a remote machine, you automatically pull the latest version of the dotfiles. I do something like this:
ssh myhost
cd ~/dotfiles
git pull --rebase
cd ~
ln -sf dotfiles/$username/linux/.* .
Note:
To put that in a shell script, you can automate the process of executing commands on a remote machine by piping them to ssh (a sketch follows these notes).
The "$username" is there so that you can share your config files with other people you're working with.
The "ln -sf" creates symbolic links to all your dotfiles, overwriting any local ones, such that ~/.emacs is linked to the version controlled file ~/dotfiles/$username/.emacs.
The use of a "linux" subdirectory is just to allow for configuration changes across platforms. I also have a mac directory under dotfiles/$username/mac. Most of the files in the /mac directory are symlinked from the linux directory as it's very similar, but there are some exceptions.
Finally, note that you can make this even more sophisticated with hostnames and the like rather than just a generic 'linux'. With a dotfiles.git, you can also raid dotfiles from your friends, which is awesome -- everyone has their own set of little tricks and hacks.
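As a sketch of that automation, you can feed the commands to ssh over stdin (the host name is a placeholder, and $username again stands for your own directory name):
ssh myhost 'bash -s' <<'EOF'
# $username is a placeholder for your directory name under ~/dotfiles
cd ~/dotfiles
git pull --rebase
cd ~
ln -sf dotfiles/$username/linux/.* .
EOF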
No, because it's not SSH using your config files, but the remote shell.
I suggest keeping your config files in Subversion or some other VCS. Here's how I do it.
Well, no, because as Andy Lester says, the remote machine is the one doing the work, and it has no access back to your local machine to get .vimrc ...
On the other hand, you could use sshfs to mount the remote file system locally and edit the files locally. This doesn't require you to install anything on the remote machine. Not sure how efficient it is, maybe not great for editing big files over slow links.
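For example, a minimal sketch (user, host and paths are placeholders):
mkdir -p ~/mnt/myhost
sshfs user@myhost:/home/user ~/mnt/myhost   # mount the remote home locally
vim ~/mnt/myhost/path/to/file               # edit with your local vim and .vimrc
fusermount -u ~/mnt/myhost                  # unmount when done (plain umount on macOS)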
Alternatively, Komodo IDE has a neat "Open >> Remote File" option which lets you edit files on remote machines by scp-ing them back and forth automatically.
I do this kind of thing every day. I have about 15 bash rc files, a .vimrc, a few vim plugin scripts, a .screenrc and some other rc files. I have a sync script (written in bash) which uses rsync to sync all these files to remote servers. Every time I update some files on my main server, I call the script to sync them to the remote servers.
Setting up an svn/git/hg repository on the main server also works for me, but my remote servers need to be repeatedly reinstalled for testing, so I find it more convenient to use rsync.
A few years ago I also used the rdist tool, which can also meet the requirement most of the time. But now I prefer rsync as it supports incremental sync, which is very efficient.
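A rough sketch of such an rsync-based sync script (the server list and the file set are placeholders):
#!/bin/bash
# push a fixed set of rc files to each server
for host in server1.example.com server2.example.com; do
    rsync -az --progress ~/.bashrc ~/.vimrc ~/.screenrc "$host":~/
    rsync -az --progress ~/.vim "$host":~/   # no trailing slash: copies the .vim directory itself
done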
ssh can be configured to pass certain environment variables through to the other (remote) side. And since most shells check some environment variables for additional settings to apply, you can hack that into applying some local settings remotely. But it's a bit complicated, and most administrators turn off ssh environment variable pass-through in the sshd config anyway.
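A sketch of that hack (the variable name is made up; the server's sshd_config has to accept it via AcceptEnv, though many distributions already accept LC_* by default):
# client side, ~/.ssh/config:
#   Host myhost
#       SendEnv LC_EXTRA_RC
# server side, /etc/ssh/sshd_config needs something like:
#   AcceptEnv LANG LC_*
LC_EXTRA_RC="alias ll='ls -la'" ssh myhost
# ...then on the remote shell, apply it manually or from ~/.bashrc:
#   eval "$LC_EXTRA_RC"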
You could always just copy the files to the machine before connecting with ssh:
#!/bin/bash
scp ~/.bash_profile ~/.vimrc user@host:
ssh user@host
This works best if you are using keys to login and no one else logs in as that user.
Here's a simple bash script I've used for this purpose. It syncs over some folders I like to have copied using rsync and then adds the ~/bin folder to the remote machine's .bashrc if it's not there already. It works best if you have copied your ssh keys to each server. I use this approach instead of a "dotfiles repo" as lots of the servers I connect to don't have git on them.
So to use it, you'd do something like this:
./bin_sync_to_machine.sh server1
bin_sync_to_machine.sh
#!/bin/bash

function show_help()
{
    echo ""
    echo "usage: SERVER {SERVER2 SERVER3 etc...}"
    echo ""
    exit
}

if [ "$1" == "help" ]
then
    show_help
fi

if [ -z "$1" ]
then
    show_help
fi

# Sync ~/bin and some dot files to each remote server using rsync
for SERVER in "$@"; do
    rsync -avrz --progress -e ssh ~/bin/ $SERVER:~/bin
    rsync -avrz --progress -e ssh ~/.vim/ $SERVER:~/.vim
    rsync -avrz --progress -e ssh ~/.vimrc $SERVER:~/.vimrc
    rsync -avrz --progress ~/.aliases $SERVER:~/.aliases
    rsync -avrz --progress ~/.aliases $SERVER:~/.bash_aliases

    # Ensure remote server has ~/bin in the path
    ssh $SERVER '~/bin/path_add_to_path.sh'
done
path_add_to_path.sh
pathadd() {
    if [ -d "$1" ] && [[ ":$PATH:" != *":$1:"* ]]; then
        PATH="${PATH:+"$PATH:"}$1"
    fi
}
# Add to current path if running in a shell
pathadd ~/bin
# Add to ~/.bashrc
if ! grep -q 'PATH:~/bin' ~/.bashrc; then
    echo "PATH=\$PATH:~/bin" >> ~/.bashrc
fi

if ! grep -q 'source ~/.aliases' ~/.bashrc; then
    echo "source ~/.aliases" >> ~/.bashrc
fi
I wrote an extremely simple tool for this that lets you natively transport your .vimrc file whenever you ssh, by using sshd's built-in config options in a non-standard way.
No additional svn, scp, copy/paste, etc. required.
It is simple, lightweight, and works by default on all server configurations I have tested so far.
https://github.com/gWOLF3/viSSHous
I think that https://github.com/fsquillace/kyrat does what you need.
I wrote it a long time ago, before sshrc was born, and it has more benefits compared to sshrc:
It does not depend on xxd on either host (which can be unavailable on the remote host)
Kyrat uses a more efficient encoding algorithm
It is just ~20 lines of code (really easy to understand!)
No need for root access or any installation on the remote host
For instance:
$> echo "alias q=exit" > ~/.config/kyrat/bashrc
$> kyrat myuser#myserver.com
myserver.com $> q
exit