Pull changes from a Heroku app running WordPress

I'm wondering how to pull (or fetch) all the changes that I made on a WordPress installation running on Heroku.
Example:
I have a plugin that lets me upload an avatar of myself to the server (avoiding Gravatar). Okay, the avatar is ready and uploaded to the server (in this case, the Heroku instance). So, when I do a pull or a fetch to get that change (the image) into my local files, I get nothing but "Already up-to-date".
I do:
git pull heroku master
And nothing is actually added or changed on the local files. Instead, I get:
From heroku.com:app-123-456
* branch master -> FETCH_HEAD
Already up-to-date.
So the image I've just uploaded is not in my local files, though it is in the remote files.
What am I missing here?

Changes made to the dyno filesystem that your app is running on will not be reflected in the git repository associated with your app. Also, the dyno filesystem is ephemeral and changes are lost across deploys and when dynos are recycled (which happens at least daily).
Instead of relying on the dyno filesystem for persistence, you should make sure files are persisted to S3 or similar services.
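For illustration, a rough sketch of the S3 wiring: assuming you pair the buildpack with an S3-backed media plugin that reads its credentials from the environment, you can keep those credentials in Heroku config vars instead of on the dyno filesystem (the bucket name, keys, and app name below are placeholders):
# Store S3 credentials as config vars so every dyno can reach the bucket
heroku config:set AWS_ACCESS_KEY_ID=AKIA... AWS_SECRET_ACCESS_KEY=... S3_BUCKET=my-wordpress-uploads --app app-123-456
# Verify the vars are visible to the app
heroku config --app app-123-456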
This custom WordPress buildpack should be a good start: https://github.com/mchung/heroku-buildpack-wordpress

Related

How to access WordPress (Elementor) code on local machine

I'm developing a website using WordPress (the Elementor plugin) and I want to create a custom widget. I found a tutorial I want to follow (https://www.youtube.com/watch?v=Ko9i153o_iU); the only problem is that I have no idea how to access the code on my local machine to begin. From what I can tell, everything I'm doing is on the WordPress website, and the code isn't on my local machine. How do I go about getting the code onto my local machine so I can begin working with it in VS Code?
You need a way to spin up a local Apache or Nginx server and MySQL, along with WordPress. I suggest the app Local:
https://localwp.com/
Then, if you want to copy your production environment (your website), you need to get the files for the theme and any plugins onto your local environment. Local pairs with some hosting providers to make this easy. Otherwise, you can install them by downloading the files. Some hosting providers give you FTP or SFTP access to your site files. Figure out how you can gain access and download them.
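If your host gives you SSH as well, here is a hedged sketch of pulling the files down with rsync (the host, user, and paths are hypothetical; Local's site root usually lives under app/public):
# Copy the theme and plugin folders from the host into the Local site
rsync -avz user@example-host.com:/var/www/html/wp-content/themes/ ./app/public/wp-content/themes/
rsync -avz user@example-host.com:/var/www/html/wp-content/plugins/ ./app/public/wp-content/plugins/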
Lastly, if you want, you can copy the database over to your local environment. There is a great plugin called WP Sync DB that does this for free. It can also push your local environment to your production environment, but I definitely suggest keeping backups if you are going to do that.
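If you would rather not use a plugin, a hedged alternative is WP-CLI, assuming it is installed both on the server and locally (host and paths are hypothetical):
# Dump the production database straight to a local file over SSH
ssh user@example-host.com "cd /var/www/html && wp db export -" > prod.sql
# Import it into the local site and rewrite the production URL
wp db import prod.sql
wp search-replace 'https://example.com' 'http://mysite.local' --skip-columns=guid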

Wordpress App Services with Path mapping very slow

I'm currently migrating a WordPress installation to Azure App Services with containers. First I did a normal installation with everything inside the container for testing purposes. The performance was good and things worked without problems.
Then I wanted to move the wp-content folder to persistent storage, so I created a file share and added it under Path mappings. This worked without problems, and after the restart WordPress could access the files.
But now every page load takes about 1-2 minutes, and the page as a whole is unusable in this state. I double-checked the file share settings and everything else. The share is optimized for transactions, and as soon as I remove the volume, the container is lightning-fast again.
Does anyone have the same problem? Any ideas how to fix this? This is a deal breaker for me, tbh.
Thanks!
Not answering your question directly, but an alternative is to use App Service Persistent Storage, which stores data in the /home folder of the VM where your app is running. It should be a lot faster than using a file share in a storage account. ${WEBAPP_STORAGE_HOME} maps to the /home folder.
You need to enable it by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the application settings, or by using the CLI:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
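To double-check that the setting took effect, you can read it back (same placeholder names as above):
az webapp config appsettings list --resource-group <group-name> --name <app-name> --query "[?name=='WEBSITES_ENABLE_APP_SERVICE_STORAGE']"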

How to update WordPress + Plugins in Elastic Beanstalk

There are many wonderful tutorials describing in great detail how to set up a horizontally scaled WordPress install in AWS Elastic Beanstalk - that part is no problem. But I haven't found any follow-up advice yet on how to manage plugin updates after the initial setup, let alone updating WordPress core itself. Does anybody know the best way to do this?
This is the methodology I'm using so far, but I'm not sure if it is the best way:
Download the plugin's update file and unzip it. Remove and replace the relevant folder in /wp-content/plugins (local git repo)
Run the update in the live site like normal - to ensure that any database changes get pushed up to the RDS
eb deploy from the local repo to commit the file changes and make the update persistent
Is that a sane method? Could anything get corrupted down the line?
For updating WP core, the tutorials I've read seem overcomplicated - basically rebuild the site from scratch every time an update comes out. Below is what I have been using (I used it successfully for WP 5.0.2). Is there any chance of files and databases getting out of sync using this method?
Download and unzip the new wordpress version locally
Replace wp-admin, wp-includes, and the root files except for wp-config.php (local git repo)
Run the update in the live environment, so that any database changes get pushed up to RDS.
eb deploy
I've been running with the above methods for a while and feel pretty confident that they are sound. I only have a couple of tweaks thus far.
The following assumes an environment where there is one staging server outside of the horizontally scaling live environment. This could be further improved for a multi-developer environment using AWS Code Commit.
For Plugins:
Run the plugin update normally on the staging server (in wp-admin). Test everything to make sure the update is sound.
Remove the plugin's old folder from your local git repo and download the updated folder from the staging server using SFTP (see the sketch after this list).
In the local repo, run git add -A && git commit -m "updated Plugin Name" && eb deploy
Run the same update in Live (in wp-admin). It will only apply to one server, but should guarantee that any database changes get pushed up to the single RDS.
Roll out the change to the live environment using the Software Versions page in the AWS Console (in Elastic Beanstalk)
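A condensed sketch of steps 2-3, assuming SSH/rsync access to the staging server (host, paths, and the plugin name are placeholders):
# Replace the old plugin folder with the updated one from staging
rm -rf wp-content/plugins/plugin-name
rsync -avz deploy@staging.example.com:/var/www/html/wp-content/plugins/plugin-name/ wp-content/plugins/plugin-name/
git add -A && git commit -m "updated Plugin Name" && eb deploy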
Updating WP core is almost identical, except that instead of removing and replacing a single plugin directory, you will need to remove and replace /wp-admin/, /wp-includes/, and all of the files in the root folder except for wp-config.php.
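A rough sketch of that replacement, run from the root of the local repo (assumes a fresh release unzipped to ./wordpress-new; the release zip ships no wp-config.php, so the existing one is left untouched):
curl -LO https://wordpress.org/latest.zip && unzip -q latest.zip -d wordpress-new
rsync -av --delete wordpress-new/wordpress/wp-admin/ wp-admin/
rsync -av --delete wordpress-new/wordpress/wp-includes/ wp-includes/
# Copy only the root-level files, leaving wp-content and wp-config.php alone
find wordpress-new/wordpress -maxdepth 1 -type f -exec cp {} . \;
git add -A && git commit -m "update WordPress core" && eb deploy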

AWS CodeDeploy: How to stop it from deleting files?

We have a CodePipeline set up which uses CodeDeploy to deploy the latest updates from our repository on GitHub to an EC2 instance. This works fine, except for one issue: everything we have in our .gitignore file is deleted from the server whenever a deployment is performed.
For instance, this is a WordPress site, so we have wp-config.php and wp-content/uploads excluded from the repository. When a deployment runs, it deletes these files rendering the site unusable.
Our desired behavior is for CodeDeploy to overwrite existing files, but also ignore any files/directories not included in the repository so they can remain untouched. By default there seems to be a step that "clears out" the deployment destination before adding the new files, but we need to skip that.
Is there any setting, either in the console or appspec.yml, which will allow us to make deployments without having anything deleted? It seems like this would be a very common use case...if we can't make deployments like this then I'll have to just do all our updates via SFTP, which is pretty lame.
We have a WordPress implementation as well, and you should assume CodeDeploy will remove all files and replace them with the deployment package. This is its standard behavior, and I am pretty certain you cannot change it: it will want to sync the local file system with the deployment package you have provided.
For this reason, consider moving the upload directory outside of the document root to account for this. Check out https://premium.wpmudev.org/blog/change-default-wordpress-uploads-folder/
Regarding files, we moved the upload folder to /var/files and mounted that as an EFS volume. This provides better durability and makes the file system independent of any given instance.
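As a hedged sketch, mounting EFS looks roughly like this on the instance (requires the amazon-efs-utils package; the file system ID is a placeholder):
sudo mkdir -p /var/files
sudo mount -t efs fs-12345678:/ /var/files
# Persist the mount across reboots
echo 'fs-12345678:/ /var/files efs _netdev,tls 0 0' | sudo tee -a /etc/fstab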
Also, you should check files like wp-config.php into the repo for the same reason - if you do not include a file, it will not be deployed.
With this approach we can easily replace instances via autoscaling. You may only have one instance at this time, but at some point you will want to scale.
But to answer the question directly:
Yes, CodeDeploy can be configured so that files are retained the way you require.
You would implement lifecycle hook scripts: in the BeforeInstall hook, move the reserved files to /tmp; in the AfterInstall hook, move them back. This adds extra overhead to every deploy, which is why I suggest the approach above.
See https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-example.html
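A minimal sketch of those hooks, assuming the scripts are wired to the BeforeInstall and AfterInstall events in appspec.yml and that the site lives in /var/www/html (the preserved paths are the ones from the question):
#!/bin/bash
# before_install.sh - wired to the BeforeInstall event in appspec.yml
# Stash the files CodeDeploy would otherwise delete
mkdir -p /tmp/deploy-keep
cp -R /var/www/html/wp-config.php /var/www/html/wp-content/uploads /tmp/deploy-keep/ 2>/dev/null || true

#!/bin/bash
# after_install.sh - wired to the AfterInstall event in appspec.yml
# Restore the stashed files over the freshly deployed tree
cp /tmp/deploy-keep/wp-config.php /var/www/html/ 2>/dev/null || true
cp -R /tmp/deploy-keep/uploads /var/www/html/wp-content/ 2>/dev/null || true
rm -rf /tmp/deploy-keep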

How do I download my Appfog app's live file system?

Hello, I am an AppFog beginner, and I want to ask about uploading pictures/plugins/themes via the WordPress admin. Because AppFog does not currently support a persistent file system, all the plugins/pictures/themes not in the source code will be lost. Is there any way to back up the current live system and include these files in the source code that I upload? The "Download Source Code" button and the "af pull" command will only download the last source code I uploaded, not changes that were made, for example, when I install a plugin.
You can add a helper php script to your app like this:
https://gist.github.com/4134750
You can manually download single files using af files <appname> /app/<filename>, but this would be painful for your purposes.
You would be much better served by setting up your WordPress installation to run locally using MAMP or XAMPP. Pull your app as it is from AppFog, host it locally using MAMP, make your file system changes, then push those changes to AppFog.
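The loop looks roughly like this, assuming the af CLI (the app name is a placeholder):
af pull myapp      # download the app's last-pushed code
# ...edit and test locally under MAMP/XAMPP...
af update myapp    # push the changed files back up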
Here are a few reasons why making changes locally then updating AppFog apps is better:
If you're running multiple instances of your WordPress app, only one of them will get the installed plugin. Installing the plugin locally and pushing ensures all instances get the plugin.
It's much faster to develop and test locally, and you can see the results of your changes before impacting your live site.
Your live production site will not go down if your plugin install fails or somehow makes an unintended change. This is also true for WordPress updates: do them locally, then push to production.
If you have the changes on your local box, you can use version control to track and tag releases before updating production.
Blue-green deployments become trivial. Have two production apps, a primary and a slave. Update your code locally, then update the slave, test it, and promote it to primary by mapping the domain to it. Then you demote the previous primary to slave by unmapping the domain. The slave is always one update older, and you can switch back to it if you discover an issue with your primary.
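A hedged sketch of that promote/demote dance, assuming the af CLI's map/unmap commands work as in its vmc ancestor (app and domain names are placeholders):
af update myapp-slave                    # push the new code to the slave
af map myapp-slave www.example.com       # promote: point the domain at it
af unmap myapp-primary www.example.com   # demote the old primary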
Curating your WordPress apps this way will allow you to take advantage of the power the AppFog platform provides.
I found this script, "zipit", even better than the "ls" script Sea Comet provided. It will zip up the entire live app directory, and then you download it. This way, you can make changes via the WordPress admin, get it all working the way you want, then use zipit, unzip the file, and push it to your app on AppFog, and the state is totally saved across restarts.
https://github.com/zeroecco/zipit/blob/master/zipit.php
You can find more info in this blog post over on the old PhpFog blog:
http://blog.phpfog.com/2012/11/16/how-to-download-your-entire-application-not-just-code-from-php-fog/
