My current workflow when building a WordPress site is to create the project in a Dropbox folder (so it gets backed up) and then push to a remote repo.
My problem is that although I can gitignore the Gulp dependencies for the repo, I cannot do that with Dropbox. Each project can generate 500,000 files (albeit tiny ones), and this really breaks Dropbox - for DAYS!
Am I missing something? Possible solutions:
Create the project outside Dropbox and rely on the repo as the backup :(
Create the project outside Dropbox, then clone the repo back into Dropbox without the Gulp dependencies (sketched below)?
Install dependencies globally (npm install -g)? The problem is that locking all projects to the same dependency versions isn't always what I want.
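For the clone-back option, a minimal sketch of what I mean - assuming the Gulp dependencies live in node_modules and all the paths here are hypothetical:

    # Work outside Dropbox; keep node_modules out of git
    cd ~/projects/my-site
    echo "node_modules/" >> .gitignore
    git add -A && git commit -m "Ignore build dependencies" && git push

    # Keep a dependency-free clone inside Dropbox as the backup
    git clone ~/projects/my-site ~/Dropbox/backups/my-site
    # Refresh the backup after each push:
    cd ~/Dropbox/backups/my-site && git pull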
Thanks in advance
For the continuous integration and deployment of websites, I am using a pipeline that builds a Docker image from a GitHub repository and deploys it with Kubernetes.
But for many CMSes like WordPress, PrestaShop, Magento, and others, configuring the website and installing plugins is done in the back office of the deployed website.
For now, I am building the Docker image on top of the CMS base image and replacing the whole /var/www/html directory with the files from GitHub. Kubernetes then deploys the containers and plugs in a database and persistent storage.
This breaks my pipeline: imagine that someone installs and configures a plugin in the back office, and then someone else modifies a file and pushes it to GitHub. The GitHub repo has no record that a plugin was installed, so it will build and deploy a new image without it.
How can I integrate all the modifications made in the back office into my GitHub repository?
The solution we use is an override of the DB class.
We monitor a number of tables (configuration, module, hook, etc.) and store every query against them in a SQL file.
So along with each commit, we also have a .sql file of actions to perform on the database side.
Once deployed, you either execute the SQL manually, or a script detects that new SQL files are present and executes them.
In this way we are always up to date.
We developed this solution as a set of PrestaShop modules that track all the actions.
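For the automated part, a minimal sketch of a detect-and-apply script - assuming the generated files land in a sql/ directory; the ledger file, credentials, and paths are all hypothetical:

    #!/bin/sh
    # Apply any .sql files that have not been applied yet,
    # recording each applied file in a ledger.
    touch .applied_sql
    for f in sql/*.sql; do
      [ -e "$f" ] || continue
      if ! grep -qx "$f" .applied_sql; then
        mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < "$f" \
          && echo "$f" >> .applied_sql
      fi
    done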
Regards
My (by no means ideal) working solution:
Create a plugins folder outside Docker and symlink/mount this folder into the containerized /wp-content/plugins.
Recreate the above in production.
Installing a new plugin then doesn't break the CI flow, but it does require two independent installations and configurations whenever you (or the dev team) need to install something new.
So you basically treat the plugin files the same way you already treat the database.
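A minimal sketch of that setup using a bind mount, assuming the official wordpress image and a made-up host path:

    # Host folder that lives outside the container
    mkdir -p /srv/wp-plugins

    # Mount it over the container's plugins directory
    docker run -d \
      -v /srv/wp-plugins:/var/www/html/wp-content/plugins \
      wordpress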
There are many wonderful tutorials describing in great detail how to set up a horizontally scaled WordPress install on AWS Elastic Beanstalk - that part is no problem. But I haven't found any follow-up advice on how to manage plugin updates after the initial setup, let alone updates to WordPress core itself. Does anybody know the optimal way to do this?
This is the methodology I'm using so far, but I'm not sure if it is the best way:
Download the plugin's update file and unzip it. Remove and replace the relevant folder in /wp-content/plugins (local git repo).
Run the update on the live site as normal, to ensure that any database changes get pushed up to RDS.
Run eb deploy from the local repo to commit the file changes and make the update persistent.
Is that a sane method? Could anything get corrupted down the line?
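Concretely, those steps amount to something like this - the plugin name, version, and repo path are all hypothetical:

    cd ~/sites/my-eb-site                      # local git repo
    curl -LO https://downloads.wordpress.org/plugin/some-plugin.1.2.3.zip
    rm -rf wp-content/plugins/some-plugin
    unzip some-plugin.1.2.3.zip -d wp-content/plugins/
    # ...run the update in wp-admin on the live site (step 2)...
    git add -A && git commit -m "Update some-plugin to 1.2.3"
    eb deploy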
For updating WordPress core, the tutorials I've read seem overcomplicated - basically rebuilding the site from scratch every time an update comes out. Below is what I have been using (it worked for WP 5.0.2). Is there any chance of files and databases getting out of sync using this method?
Download and unzip the new WordPress version locally.
Replace wp-admin, wp-includes, and the root files except for wp-config.php (local git repo).
Run the update in the live environment, so that any database changes get pushed up to RDS.
eb deploy
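A sketch of the local file-replacement half of that, with a made-up repo path (the downloaded zip ships no wp-config.php, so yours is never touched):

    cd ~/sites/my-eb-site
    curl -LO https://wordpress.org/wordpress-5.0.2.zip
    unzip -q wordpress-5.0.2.zip               # extracts to ./wordpress/
    rsync -a --delete wordpress/wp-admin/ wp-admin/
    rsync -a --delete wordpress/wp-includes/ wp-includes/
    rsync -a wordpress/*.php wordpress/*.txt wordpress/*.html ./
    # ...run the update in the live environment (step 3)...
    git add -A && git commit -m "WordPress core 5.0.2"
    eb deploy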
I've been running with the above methods for a while and feel pretty confident that they are sound. I only have a couple of tweaks thus far.
The following assumes an environment where there is one staging server outside of the horizontally scaled live environment. This could be further improved for a multi-developer environment using AWS CodeCommit.
For Plugins:
Run the plugin update normally on the staging server (in wp-admin). Test everything to make sure the update is sound.
Remove the plugin's old folder from your local git repo and download the updated folder from the staging server using SFTP.
In the local repo, run git add -A && git commit -m "updated Plugin Name" && eb deploy
Run the same update in Live (in wp-admin). It will only apply to one server, but should guarantee that any database changes get pushed up to the single RDS.
Roll out the change to the live environment using the Software Versions page in the AWS Console (in Elastic Beanstalk)
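Steps 2-3 in command form - the staging host, paths, and plugin name are hypothetical:

    cd ~/sites/my-eb-site
    rm -rf wp-content/plugins/some-plugin
    scp -r user@staging.example.com:/var/www/html/wp-content/plugins/some-plugin \
        wp-content/plugins/
    git add -A && git commit -m "updated some-plugin" && eb deploy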
Updating WP core is almost identical, except that instead of removing and replacing a single plugin directory, you will need to remove and replace /wp-admin/, /wp-includes/, and all of the files in the root folder except for wp-config.php.
Latest Meteor version, 1.4.0.1. All of a sudden it got stuck on the "Processing files with ecmascript (for..." step. Killing and restarting didn't work, and neither did rebooting.
What I tried: meteor reset, rebooting, deleting the build and cache folders in the project's .meteor folder, deleting and reinstalling npm packages, removing .meteor in my home folder and reinstalling Meteor from scratch, and removing packages, both Meteor and npm, that I no longer use.
It is something in my project, because creating a new Meteor project and running it works fine. The project uses React and has a number of components.
Any ideas on how to fix this?
I solved my issue by process of elimination. I had saved a .js file in my client/js/lib folder while working on a feature. At some point later I accidentally saved my Meteor application's home page into the same folder! Guess I was too tired. I found the folder and saw the HTML plus a subfolder with Meteor-generated scripts. Removed them and everything clicked. My exact steps:
Create a new Meteor project, copy over the package.json and .meteor/packages files, run meteor npm install, and then run the project with meteor.
While the project is running, start copying over the root app folders one by one, waiting for the app to recompile before moving on to the next folder.
My problem surfaced when I copied over the client folder: Meteor said "refreshing client" and got stuck there.
I removed the client folder, killed the process (run ps, then kill -9 [PID], where PID is the process ID with the high CPU time), and restarted Meteor.
I created the client folder manually and then started copying the client/* items over one by one.
That's when I noticed the app_name.html file and an app_name folder with a lot of .js files in it!
I removed them and everything works now. Good luck!
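In case it helps, the kill step as commands - the PID shown is obviously made up:

    ps aux | grep '[m]eteor'   # find the PID with the high CPU time
    kill -9 12345              # hypothetical PID taken from the ps output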
You can try running the meteor reset command and then running the project again.
I ran into a similar (but different) problem. I am trying to use plotly.js with Meteor. I downloaded the plotly.js file and put it into my client/lib directory. Now, when I try to run Meteor, I get the same error described above. I thought plotly.js worked with Meteor? If I remove plotly.js (or rename it to a non-JS file), Meteor works fine... so the issue is clearly with plotly.js.
I usually deploy my vendors with a simple composer install in production.
I would prefer not to use Composer in production, so I'd need to build the vendors on my machine and deploy them to production.
I could copy the vendor directory, but I'd certainly have to install other files like app/bootstrap.php.cache or an autoloader.php as well.
So, two questions:
Which files would I have to install/update?
Are there any known practices for deploying pre-built vendors anyway?
I would say the procedure is pretty straightforward (at least it works for me that way): to deploy your application, you create a new directory and export the code from a tag into it (i.e. you don't export repository-management data like the .git directory). You then run composer install --no-dev, which will do some work and should also run anything mentioned in the scripts section of the composer.json file.
The result in this previously empty directory goes to the production server in whatever way you like, be it SCP, SFTP, rsync... There is no real "magic" going on here, essentially it is copying of files.
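A minimal sketch of that export-build-upload sequence - the tag, paths, and server are all hypothetical:

    mkdir /tmp/release-1.1
    git archive v1.1 | tar -xf - -C /tmp/release-1.1   # export without .git
    cd /tmp/release-1.1
    composer install --no-dev --optimize-autoloader
    rsync -az ./ deploy@www.example.com:/srv/www/htdocs/app-1.1/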
You may want to make sure you can roll back quickly, so I'd recommend deploying every version into a designated directory and then pointing a symlink at the current version. As an example: you deployed your old version in /srv/www/htdocs/app-1.0 and symlinked the directory /srv/www/htdocs/app to point to it. The vhost uses the generic app directory to serve the app.
The deployment then creates a new directory, /srv/www/htdocs/app-1.1, and putting it live simply means deleting the old symlink and creating a new one to the new directory. This puts your new version live instantly. Rolling back means deleting the symlink and recreating the one pointing to the old version.
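The swap itself is a one-liner, and the roll-back is the same command pointing the other way:

    cd /srv/www/htdocs
    ln -sfn app-1.1 app   # repoint "app" at the new release
    ln -sfn app-1.0 app   # roll back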
YMMV, because things like caches will affect the outcome, but this is not in the scope of how and where to use Composer to deploy software.
I've been developing with symfony 1.4 until now and had no problem deploying a project or updating it on a remote host. I just used sfFtpPlugin and everything was perfect: http://www.symfony-project.org/plugins/sfFtpPlugin
But now I'm starting with Symfony2 (2.2.0), and first of all I have this question: how do I update the site when I make changes?
For the first deploy I know there are some options: upload the full project by FTP, or use Maestro (e.g. as offered on ServerGrove.com hosting). With those tools I can upload everything, but when I need to update, say, 50 files, I can't do that manually by FTP, of course.
Thanks everyone for helping!
P.S.: Additional info: I have some SVN knowledge and started learning Git a few days ago.
The documentation on this is fantastic. The Cookbook provides workflows for both Git and SVN.
http://symfony.com/doc/current/cookbook/workflow/index.html
If you have no shell available, you can use composer on your local machine to update your project and then FTP the entire project over.
This covers how to store settings for different environments:
http://symfony.com/doc/current/cookbook/configuration/environments.html
Personally I use a private Satis repo for deploying all my code.
That way I never have to use FTP, just composer create-project/install/update.
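For what it's worth, a sketch of what that looks like - the Satis URL and package name are hypothetical:

    composer create-project my-vendor/my-app /srv/www/htdocs/app-1.1 \
        --repository-url=https://satis.example.com --no-dev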