Maintaining images when switching servers - Drupal

I just changed the server one of my sites is hosted on. In doing so, I lost all the images. The CCK file upload fields show "ghost" data but contain no actual image data as they did before the site transfer.
All my data is fine, however.
Is there a way to prevent this so all my images are maintained?
Thanks

You can transfer the files with rsync directly, or use drush rsync. You'll need SSH access to the servers to get this to work.
Here's some info on setting up your drush aliases:
http://www.leveltendesign.com/blog/dustin-currie/synchronize-one-drupal-site-to-another
http://drupal.org/project/drush
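For example, once the aliases are configured, the transfer can be as short as this (a sketch; the alias names @old and @new and the file paths are hypothetical placeholders for your own setup):

    # Plain rsync from the old server to the new one
    rsync -az olduser@oldserver:/var/www/sites/default/files/ /var/www/sites/default/files/

    # Or let drush drive rsync using the %files path alias
    drush rsync @old:%files @new:%files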
If you're performing this task only once, you could also use scp to copy the files across:
http://www.go2linux.org/scp-linux-command-line-copy-files-over-ssh
If you're on a shared hosting platform, just FTP the files over old school style.

Related

Load-balance WordPress site

I would like to have a scalable infrastructure for my WordPress site. We currently have the following:
A CloudFront distribution that serves the website
A load balancer and target group with only one registered target in it
An RDS.
The WP server (on which config, and wp-content is).
We have several thousand pages in the WordPress instance, and sometimes we need to make changes and invalidate caches in CloudFront to serve the new content. Doing this on a lot of pages can create a huge load on the server and make it unreachable or super slow. So we thought about adding an autoscaling group, which would spin up new instances if the load is too high and remove them when necessary.
To do so, I believe we need to move the wp-content folder to a directory shared between all the servers. Is that a correct assumption, first of all?
So I naturally created an EFS, mounted it on a copy of my WordPress server, then rsynced all the files, with their permissions, onto the EFS.
Then, as suggested all over the net, I added the following to my wp-config.php:
define('WP_CONTENT_DIR', '/mnt/efs/wp-content');
where /mnt/efs/wp-content is a directory on the EFS.
From this point the website worked as expected, and I could see some traffic on the EFS monitoring page when viewing pages.
To make sure all the files were correctly shared and copied into wp-content, I deleted the /data/app/wp-content/ folder (it shouldn't be used, since I pointed wp-content at my EFS).
And my site started acting weirdly. Some formatting disappeared, buttons render with native styling instead of the custom theme, etc. The console also shows a lot of 404s, along with errors like the following:
www.mysite.eu/:1 Access to font at 'https://www.mysite.fr/wp-content/themes/mysite/dist/fonts/icomoon/icomoon.ttf' from origin 'https://www.mysite.eu' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
GET https://www.mysite.fr/wp-content/plugins/js_composer/assets/lib/bower/font-awesome/webfonts/fa-solid-900.woff net::ERR_FAILED 200
Looks like there are no fonts, no plugins, no themes anymore.
So, quite a few questions:
Do I need to keep both local wp-content and shared wp-content? If so, if I install a plugin or a theme, would it be available for other servers as well?
Do I really need an EFS? Or is the data fully stored in the DB, so that wp-content can live on its own on each server?
Are there any other steps in moving the wp-content folder? Maybe specific steps for some plugins?
Is my architecture lacking anything for what I would like to achieve (scale up and down based on demand), or does that make sense?
Thank you!
Don't put wp-content on a shared file system (or an S3 bucket). It contains a lot of theme and plugin code, and running code from S3 can cause performance trouble and crank up IOPS costs. Instead, use a plugin to offload your site's uploaded media files (jpg, etc.) to the S3 bucket, then clone the site.
Use a shared persistent object cache if you can. Redis is a good choice.
AWS has a tutorial about doing this, without the cache, on Lightsail. https://aws.amazon.com/getting-started/hands-on/launch-load-balanced-wordpress-website/
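If WP-CLI is available on the instance, the plugin setup might look like this (a sketch; the slugs are the free WP Offload Media Lite and Redis Object Cache plugins from the wordpress.org repository, assumed here rather than taken from the thread):

    # Offload uploaded media files to S3 (WP Offload Media Lite)
    wp plugin install amazon-s3-and-cloudfront --activate
    # Shared persistent object cache backed by Redis
    wp plugin install redis-cache --activate
    wp redis enable   # command provided by the redis-cache plugin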

Drupal website backup without access to FTP server

Is it possible in Drupal to make a copy of an existing live website via the CMS only, download it to my machine, and then restore it on another server? I read about the
backup_migrate module
https://www.drupal.org/project/backup_migrate
but will it work for me when I don't have FTP credentials? Does the copy of the website include all the site's resources/images/theme, the css/scripts/icons files, etc.?
You can get a database dump for sure, but I'm not 100% sure about the files, so better try it out.
BTW, how can you install Backup and Migrate if you don't have FTP access?
Also, if you have access to the host's control panel, there is probably an option to download the files.
When making a backup of files you can fine-tune exactly which files you want to export. E.g. by default, cache files are excluded from the backup.
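If the host allows SSH (even without FTP), Drush can handle both the install and the backup; a sketch, assuming a Drupal 7-era site and Backup and Migrate's Drush integration:

    # Download and enable the module without needing FTP
    drush dl backup_migrate -y
    drush en backup_migrate -y
    # Run a backup to the default destination
    drush bam-backup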

Amazon Elastic Beanstalk Content Management

I have created my Elastic Beanstalk application with WordPress to test. However, I am struggling to understand the best way of managing dynamic content changes made online alongside development changes made locally.
I upload my initial Wordpress Installation to my AWS Bucket
I run the initial Wordpress Setup
-- Let's presume that I have included a live theme and uploaded some products, and that time has progressed: changes are made online, new products added to WooCommerce, etc.
I make a new page template locally and want to upload to the Bucket
I use EB Deploy, but when I do this all content online in my Bucket is overwritten with the local content.
Now I do of course accept this is by design, but how is the problem best addressed?
Does anyone have any advice to offer with managing content of this sort in the AWS EB?
The instances managed by EB have to be considered disposable. This means that they can disappear without notice.
If the content is dynamic (e.g. files being uploaded), you cannot store those files in the file system of the instances, as the instances are disposable.
In addition, keep in mind that if you scale to several instances, different instances will be managing different data sets (e.g. a file gets uploaded to only one instance, not to all of them).
There are several approaches you can try, for example:
Use a Network File System (NFS) server: in a separate instance, set up an NFS server, and configure the EB instances to mount a remote mount point at startup. With this approach you can centralize the storage for all your EB instances (see the mount sketch below this list).
Check out the EFS service from AWS. It's like an NFS server, but Amazon flavored. I haven't checked it out yet, but it looks promising.
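A minimal mount sketch (the filesystem ID fs-12345678 and the region are hypothetical placeholders; on EB this would typically run at startup, for example from an .ebextensions config):

    # Mount an EFS/NFS share at instance startup
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs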
To resolve these problems I created a couple of extra S3 buckets, the first for images and the second for CSS, JS and the like.
Since this was a WordPress/WooCommerce installation, I commissioned the WP Offload S3 plugin from Delicious Brains (https://deliciousbrains.com/wp-offload-s3/), which was a little expensive, but it moved and monitored content of this sort, copied it off to the other S3 buckets, and allowed 'EB Deploy' to leave the working content untouched.

Is doing drush archive-dump enough to transfer a Drupal site to another server?

I am relatively new to Drupal. I have a Drupal site on my staging server and I would like to transfer it to the production server. My question is: is running drush archive-dump enough to do this? I tried it, but it seems like the site is not loading the configuration correctly. I already executed the SQL commands from the file generated by the dump.
There are three components to moving a Drupal site to another server:
Database
Code
Files (e.g. files uploaded by content creators; usually sites/default/files)
drush archive-dump is specifically there to grab all three and tar them. So yes, that is all the data you need. There can be other issues (e.g. server permissions, software versions, DB credentials, etc.).
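For reference, the round trip might look like this (a sketch; @site and the archive path are hypothetical, and archive-restore assumes a Drush version that still ships that command):

    # On the staging server: dump code, database, and files into one tarball
    drush archive-dump @site --destination=/tmp/site.tar.gz
    # Copy the tarball to production, then restore it there
    drush archive-restore /tmp/site.tar.gz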
To go live you need to:
Test your site in an environment identical to the production site's.
Move code to production server.
Move database to production server.
That's all.
Please read:
http://www.slideshare.net/erikwebb/the-basics-of-smart-drupal-deployment
https://www.drupal.org/best-practices
Almost. It seems that archive-dump will not include the private files directory if it resides outside the document root.
Some administrators place the private files in a directory such as DOCROOT/sites/default/files/private/, and although Apache 2.x should deny direct access to that directory via .htaccess rules, placing it outside the document root entirely ensures that protection regardless of the HTTPD service.
So no, archive-dump comes up short if your private files live outside your document root.
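In that case the private directory has to be copied separately; a one-line sketch (the path /var/private-files and the host name are hypothetical):

    # Copy the out-of-docroot private files alongside the archive
    rsync -az /var/private-files/ user@production:/var/private-files/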

Checking WordPress core files

Is there a script or something that can check whether all core files are installed properly? I am installing a WordPress site on a client's hosting, and for some reason around 100 files were not transferred due to connection timeouts. Now I am moving them one by one, but I would still like to check somehow, once I am done, that all the transferred files are there and their size is more than 0 bytes.
Thanks.
Since you are using FileZilla, drag and drop all the files into the folder again.
Then, when the "file exists" dialog shows up, pick "Overwrite if different size" and check "apply to current queue only". Only the files with a different size (or the ones that weren't transferred at all) will be overwritten/updated.
There's an easier way:
If you have access to some kind of control panel like cPanel, you can make a .zip file and upload just that one file via FileZilla.
Then on cPanel, go to the File Manager and unzip it from there. It will be faster, and you only have to upload one file (rather than opening tons of connections and risking timeouts).
Or, if you have shell access, you can log in with your key using Terminal (Mac) or PuTTY (Windows), browse to the folder, and run the unzip command.
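If you do have shell access, WP-CLI can also verify the core files directly against the official wordpress.org checksums (a sketch, assuming WP-CLI is installed on the host):

    # Compare every core file's checksum against the official release
    wp core verify-checksums
    # Flag any zero-byte files left over from failed transfers
    find . -type f -size 0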
