I'm going to write a WordPress plugin which allows me to automatically migrate between two WordPress sites (copying all files plus a database backup from server A to server B).
Now I'm trying to figure out the easiest and most efficient way to copy the created backup files from one server to the other. The transfer should go directly between the two servers.
Maybe something like a REST call? (All servers with the plugin wait for a request and then start downloading the file.)
Are there better ideas?
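To make the REST idea concrete, here is a rough sketch of what I have in mind. The namespace, function names, the MY_MIGRATION_SECRET constant and the file paths are all placeholders I made up, and the authentication is only a stand-in: site B registers an endpoint, site A POSTs the URL of the backup it produced, and B pulls the file directly over HTTP.

<?php
// Sketch only: expose a REST endpoint on the target site that triggers
// a direct download of the backup archive from the source site.
add_action( 'rest_api_init', function () {
    register_rest_route( 'mig/v1', '/pull-backup', array(
        'methods'             => 'POST',
        'callback'            => 'mig_pull_backup', // placeholder name
        'permission_callback' => function ( WP_REST_Request $req ) {
            // Placeholder auth: compare a shared secret sent as a header.
            return defined( 'MY_MIGRATION_SECRET' )
                && hash_equals( MY_MIGRATION_SECRET, (string) $req->get_header( 'X-Migration-Token' ) );
        },
    ) );
} );

function mig_pull_backup( WP_REST_Request $req ) {
    require_once ABSPATH . 'wp-admin/includes/file.php'; // provides download_url()

    $source_url = esc_url_raw( $req->get_param( 'backup_url' ) );
    $tmp        = download_url( $source_url, 3600 ); // stream the archive to a temp file
    if ( is_wp_error( $tmp ) ) {
        return $tmp;
    }

    $dest = trailingslashit( wp_upload_dir()['basedir'] ) . 'migration-backup.zip';
    if ( ! rename( $tmp, $dest ) ) {
        return new WP_Error( 'move_failed', 'Could not move the downloaded backup.' );
    }
    return rest_ensure_response( array( 'stored_at' => $dest ) );
}

Site A would then call POST https://site-b.example/wp-json/mig/v1/pull-backup with the token header and the backup URL in the request body.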
I would like to have a scalable infrastructure for my WordPress site. We currently have the following:
A CloudFront distribution that serves the website
A load balancer and a target group with only one registered target in it
An RDS instance.
The WP server (which holds the config and wp-content).
We have several thousand pages in the WordPress instance, and sometimes we need to make changes and invalidate caches in CloudFront to serve the new content. Doing this on a lot of pages can create a huge load on the server and make it unreachable or super slow. So we thought about adding an autoscaling group, which would spin up new instances if the load is too high and remove them when no longer needed.
To do so, I believe we need to move the wp-content folder to a directory shared between all the servers. Is that a correct assumption, first of all?
So I naturally created an EFS volume, mounted it on a copy of my WordPress server, then rsynced all the files, with their permissions, onto the EFS.
Then, as suggested all over the net, I added the following to my wp-config.php:
define('WP_CONTENT_DIR', '/mnt/efs/wp-content'); where /mnt/efs/wp-content is a directory on the EFS.
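For reference, the lines those guides suggest usually look something like this (mount point and host taken from my setup; the WP_CONTENT_URL line is the companion define they usually pair with WP_CONTENT_DIR, shown here as a sketch):

<?php
// wp-config.php excerpt, assuming the EFS mount point and site host above;
// adjust both to the real environment.
define( 'WP_CONTENT_DIR', '/mnt/efs/wp-content' );
// WP_CONTENT_URL controls where themes/plugins/uploads are served from,
// so that assets come from the host the visitor is actually on.
define( 'WP_CONTENT_URL', 'https://www.mysite.eu/wp-content' );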
From this point, the website worked as expected, and I could see some traffic on the EFS monitoring page when viewing pages.
To make sure all the files were correctly shared and copied into the new wp-content, I deleted the /data/app/wp-content/ folder (it shouldn't be used, since I pointed wp-content at my EFS).
And my site started acting weirdly. Some formatting disappeared, buttons are native rather than customized, etc. The console also shows a lot of 404s, along with errors like the following:
www.mysite.eu/:1 Access to font at 'https://www.mysite.fr/wp-content/themes/mysite/dist/fonts/icomoon/icomoon.ttf' from origin 'https://www.mysite.eu' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
GET https://www.mysite.fr/wp-content/plugins/js_composer/assets/lib/bower/font-awesome/webfonts/fa-solid-900.woff net::ERR_FAILED 200
Looks like there are no fonts, no plugins, no themes anymore.
So, quite a few questions:
Do I need to keep both the local wp-content and the shared wp-content? If so, when I install a plugin or a theme, would it be available to the other servers as well?
Do I really need an EFS? Or is the data fully stored in the DB, so wp-content can live on its own on each server?
Are there any other steps in moving the wp-content folder? Maybe specific steps for some plugins?
Is my architecture lacking anything for what I would like to achieve (scale up and down based on demand), or does that make sense?
Thank you!
Don't put wp-content on a shared file system (e.g. an S3 bucket). It contains a lot of theme and plugin code, and running code from S3 can cause performance trouble and crank up IOPS costs. Instead, use a plugin to offload your site's uploaded media files (JPGs, etc.) to the S3 bucket, then clone the site.
Use a shared persistent object cache if you can. Redis is a good choice.
AWS has a tutorial about doing this, without the cache, on Lightsail. https://aws.amazon.com/getting-started/hands-on/launch-load-balanced-wordpress-website/
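If you go the Redis route, the object-cache plugin typically reads its connection settings from wp-config.php. A rough sketch only: the exact constant names depend on which Redis object-cache plugin you pick, and the endpoint below is just a placeholder for a shared Redis/ElastiCache host reachable by every instance in the autoscaling group.

<?php
// Sketch: point the persistent object cache at a shared Redis host
// (placeholder endpoint; verify the constant names against your plugin).
define( 'WP_REDIS_HOST', 'my-redis.example.cache.amazonaws.com' );
define( 'WP_REDIS_PORT', 6379 );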
Is it possible in Drupal to make a copy of an existing live website via the CMS only, download it to my machine, and then restore it on another server? I read about the
backup_migrate module
https://www.drupal.org/project/backup_migrate
but will it work for me when I don't have FTP credentials? Does it include all the site's resources in the copy: images, the theme, CSS/script/icon files, etc.?
You can get a database dump for sure, but I'm not 100% sure about the files, so better to try it out.
BTW, how can you install Backup and Migrate if you don't have FTP access?
Also, if you have access to the hoster's control panel, there is probably an option to download the files.
When making a backup of files you can fine-tune exactly which files you want to export. E.g., by default, cache files are excluded from the backup.
I have created my Elastic Beanstalk application with WordPress to test. However, I am struggling to understand the best way of managing dynamic content changes online alongside development changes made locally.
I upload my initial WordPress installation to my AWS bucket
I run the initial WordPress setup
-- Let's presume that I have included a live theme and uploaded some products, and time has passed: changes are made online, new products are added to WooCommerce, etc.
I make a new page template locally and want to upload it to the bucket
I use EB deploy, but when I do this, all the online content in my bucket is overwritten with the local content.
Now I do of course accept this is by design, but how is the problem best addressed?
Does anyone have any advice to offer with managing content of this sort in the AWS EB?
The instances managed by EB have to be considered disposable. This means that they can disappear without notice.
If the changes are dynamic (e.g. files are being uploaded), you cannot store those files in the instances' file system, as the instances are disposable.
In addition, keep in mind that if you scale to several instances, you will have different instances managing different data sets (e.g. you upload a file to only one instance, not to all of them).
There are several approaches you can try, for example:
Use a Network File System (NFS) server: in a separate instance, set up an NFS server and configure the EB instances to mount a remote mountpoint at startup. With this approach you can centralize the storage for all your EB instances.
Check out the EFS service from AWS. It's like an NFS server, but Amazon-flavored. I haven't checked it out yet, but it looks promising.
To resolve these problems I created a couple of extra S3 buckets, the first for images and the second for CSS, JS and the like.
Since this was a WordPress/WooCommerce installation, I commissioned the S3 Offload plugin from Delicious Brains (https://deliciousbrains.com/wp-offload-s3/), which was a little expensive, but it moved and monitored content of this sort, copied it off to the other S3 buckets, and allowed 'EB Deploy' to leave the working content untouched.
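For reference, recent versions of that plugin (now called WP Offload Media) can take their settings from wp-config.php via the AS3CF_SETTINGS constant. This is a sketch from memory, so double-check the constant and key names against the version you install; the bucket name is a placeholder.

<?php
// Sketch: define the offload plugin's settings in wp-config.php so every
// disposable EB instance comes up with the same media configuration.
define( 'AS3CF_SETTINGS', serialize( array(
    'provider'         => 'aws',
    'use-server-roles' => true,               // use the EC2/EB instance role instead of keys in config
    'bucket'           => 'my-media-bucket',  // placeholder
    'copy-to-s3'       => true,               // push new media uploads to the bucket
    'serve-from-s3'    => true,               // rewrite media URLs to the bucket/CDN
) ) );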
I just changed the server one of my sites is hosted on. In doing so, I lost all the images. The CCK file upload fields show "ghost" data but no longer contain the actual image data they did before the site transfer.
All my data is fine, however.
Is there a way to prevent this so all my images are maintained?
Thanks
You can transfer the files with rsync directly, or use drush to rsync. You'll need SSH access to the servers to get this to work.
Here's some info on setting up your drush aliases:
http://www.leveltendesign.com/blog/dustin-currie/synchronize-one-drupal-site-to-another
http://drupal.org/project/drush
If you're performing this task only once, you could also use scp to copy the files over:
http://www.go2linux.org/scp-linux-command-line-copy-files-over-ssh
If you're on a shared hosting platform, just FTP the files over old school style.
I've been messing around a bit with various solutions to what I would see as a fairly common problem, but I've not yet been able to solve it in a satisfactory way.
What I wish to achieve is some kind of functionality where a user can upload new files, or select existing files to reuse them.
What I've been using so far is a combination of the filefield, filefield_sources, imce and ckeditor modules. I guess ckeditor isn't really important for the solution, but I need to be able to embed images from the archive somehow, and this is done with IMCE. Since I do not want everything to be accessible from the file browser, I created a subdirectory and set full access to it in the IMCE settings; let's call it default/files/site.
This worked fine as long as all file handling was done through IMCE, but when I uploaded files directly from the filefield, my files ended up in the default/files root. So I set up folders for my fields, for example default/files/site/movies for a field that allowed the .flv format. This worked fine too, as long as I didn't try to access the files through IMCE. It appears the folders created by filefield are not accessible from IMCE?
I'm also in a position where I need to support large uploads (200 MB+). From my experience in other projects, allowing file uploads through FTP is usually a life-saver, but from what I understand, IMCE won't support files that weren't uploaded through Drupal in some way, since they are not present in the database (giving the message: "The selected file could not be used because the file does not exist in the database.").
I'm aware that I don't really have a clear question to my problem, but somehow I need to figure this out pretty fast. How would I preferably solve this? I'm aware that I'm not the first to have this problem, but I have not yet been able to find a nice and stable solution. What am I missing?
Also check this thread (http://drupal.org/node/438940) and the reference to John Locke's work at: http://www.freelock.com/blog/john-locke/2010-02/using-file-field-imported-files-drupal-drush-rescue
Well, I'm not personally familiar with IMCE off the top of my head, but if you need files that have been uploaded via FTP to be added to the files table, then my impulse would be to write a small module which would allow the user to click a button and start off a batch process. (This is me assuming that you are using Drupal 6, as the batch API doesn't exist in 5.)
Said batch process would then iterate over all of the files in the appropriate directory (which I would assume you had uploaded the files to), use file_copy() (from Drupal's file API) to copy the files to default/files/site, and then add said files to the files table, which is actually quite simple with drupal_write_record().
It might not even need to use the batch API - it somewhat depends on whether you're just uploading 10-30 really big files or a couple of hundred smaller ones.
For using the batch API, I'd look at http://drupal.org/node/180528 - it has a fairly basic example of how the batch API works, which basically consists of telling the API that you want to keep calling function_a, and then inside of function_a setting your progress in the context array until you're done, at which point the batch process finishes. Then you just have whoever uploads the files via FTP hit a button on the website to move and register the files.
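Here's a rough Drupal 6 sketch of what I mean. The module name ftp_import, the incoming directory, and the redirect path are all made up; treat it as a starting point rather than a drop-in module.

<?php
/**
 * Sketch: scan a directory that files were FTPed into, copy each file into
 * default/files/site with file_copy(), and register it in the {files} table
 * with drupal_write_record(). Call ftp_import_start_batch() from a menu
 * callback or form submit handler behind a button.
 */
function ftp_import_start_batch() {
  $batch = array(
    'title' => t('Importing FTPed files'),
    'operations' => array(
      // Placeholder incoming directory.
      array('ftp_import_batch_op', array('sites/default/files/incoming')),
    ),
    'finished' => 'ftp_import_batch_finished',
  );
  batch_set($batch);
  batch_process('admin/content'); // placeholder redirect
}

function ftp_import_batch_op($incoming_dir, &$context) {
  global $user;

  if (!isset($context['sandbox']['files'])) {
    // file_scan_directory() returns one object per file, keyed by path.
    $context['sandbox']['files'] = array_values(file_scan_directory($incoming_dir, '.*'));
    $context['sandbox']['progress'] = 0;
    $context['sandbox']['max'] = count($context['sandbox']['files']);
  }

  if (!$context['sandbox']['max']) {
    $context['finished'] = 1;
    return;
  }

  // Handle one file per pass so the batch API can keep the progress bar moving.
  $item = $context['sandbox']['files'][$context['sandbox']['progress']];
  $source = $item->filename;                 // full path of the FTPed file
  $dest = file_directory_path() . '/site';   // i.e. default/files/site

  if (file_copy($source, $dest, FILE_EXISTS_RENAME)) {
    // file_copy() updated $source to the new location; register it in {files}.
    $file = array(
      'uid' => $user->uid,
      'filename' => basename($source),
      'filepath' => $source,
      'filemime' => file_get_mimetype($source),
      'filesize' => filesize($source),
      'status' => FILE_STATUS_PERMANENT,
      'timestamp' => time(),
    );
    drupal_write_record('files', $file);
  }

  $context['sandbox']['progress']++;
  $context['finished'] = $context['sandbox']['progress'] / $context['sandbox']['max'];
}

function ftp_import_batch_finished($success, $results, $operations) {
  drupal_set_message($success ? t('Import finished.') : t('Import ran into errors.'));
}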