I would like to have a scalable infrastructure for my WordPress site. We currently have the following:
A CloudFront distribution that serves the website.
A load balancer and target group with only one registered target in it.
An RDS instance.
The WP server (which holds the config and wp-content).
We have several thousand pages in the WordPress instance, and sometimes we need to make changes and invalidate caches in CloudFront to serve the new content. Doing this on a lot of pages can create a huge load on the server and make it unreachable or super slow. So we thought about adding an auto-scaling group, which would spin up new instances if the load is too high and remove them when no longer needed.
To do so, I believe we need to move the wp-content folder to a directory shared between all the servers. Is that a correct assumption, first of all?
So I naturally created an EFS, mounted it on a copy of my WordPress server, then rsynced all the files (preserving permissions) onto the EFS.
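Roughly like this (the filesystem endpoint and region here are placeholders rather than my exact values):
# mount the EFS, then copy wp-content across, preserving permissions
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.eu-west-1.amazonaws.com:/ /mnt/efs
sudo rsync -a /data/app/wp-content/ /mnt/efs/wp-content/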
Then, as suggested all over the net, I added the following to my wp-config.php:
define('WP_CONTENT_DIR', '/mnt/efs/wp-content');
where /mnt/efs/wp-content is a directory on the EFS.
From this point the website worked as expected, and I could see some traffic on the EFS monitoring page when viewing pages.
To make sure all the files were correctly shared and served from the EFS copy of wp-content, I deleted the /data/app/wp-content/ folder (it shouldn't be needed, since I pointed wp-content at the EFS).
And my site started acting weirdly: some formatting disappeared, buttons render with native styling instead of our customizations, etc. The console also shows a lot of 404s, along with errors like the following:
www.mysite.eu/:1 Access to font at 'https://www.mysite.fr/wp-content/themes/mysite/dist/fonts/icomoon/icomoon.ttf' from origin 'https://www.mysite.eu' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
GET https://www.mysite.fr/wp-content/plugins/js_composer/assets/lib/bower/font-awesome/webfonts/fa-solid-900.woff net::ERR_FAILED 200
Looks like there are no fonts, no plugins, no themes anymore.
So, quite a few questions:
Do I need to keep both the local wp-content and the shared wp-content? If so, when I install a plugin or a theme, will it be available to the other servers as well?
Do I really need an EFS? Or is the data fully stored in the DB, so that wp-content can live on its own on each server?
Are there any other steps to moving the wp-content folder? Maybe specific steps for some plugins?
Is my architecture lacking anything for what I would like to achieve (scaling up and down based on demand), or does it make sense?
Thank you!
Don't put wp-content on a shared file system (or an S3 bucket). It contains a lot of theme and plugin code, and running code from a network file system can cause performance trouble and crank up IOPS costs. Instead, use a plugin to offload your site's uploaded media files (jpg, etc.) to an S3 bucket, then clone the site.
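As a minimal sketch with WP-CLI, assuming WP Offload Media Lite (one of several offload plugins; the bucket and credentials are then configured in the plugin's settings):
# install and activate a media offload plugin
wp plugin install amazon-s3-and-cloudfront --activate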
Use a shared persistent object cache if you can. Redis is a good choice.
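A minimal sketch with WP-CLI and the Redis Object Cache plugin, assuming a shared Redis endpoint (the host below is a placeholder, e.g. an ElastiCache endpoint):
# point WordPress at the shared Redis endpoint (placeholder host)
wp config set WP_REDIS_HOST my-cache.example.cache.amazonaws.com --type=constant
# install the plugin and enable the object-cache drop-in
wp plugin install redis-cache --activate
wp redis enable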
AWS has a tutorial about doing this (without the cache) on Lightsail: https://aws.amazon.com/getting-started/hands-on/launch-load-balanced-wordpress-website/
Related
I have created my Elastic Beanstalk application with WordPress to test. However, I am struggling to understand the best way of managing dynamic content changes online alongside development changes made locally.
I upload my initial WordPress installation to my AWS bucket
I run the initial WordPress setup
-- Let's presume that I have included a live theme and uploaded some products, and that time has progressed: changes are made online, new products are added to WooCommerce, etc.
I make a new page template locally and want to upload it to the bucket
I use EB Deploy, but when I do this, all the content online in my bucket is overwritten with the local content.
Now, I do of course accept this is by design, but how is the problem best addressed?
Does anyone have any advice to offer on managing content of this sort in AWS EB?
The instances managed by EB have to be considered disposable. This means that they can disappear without notice.
If the content is dynamic (e.g. files are being uploaded), you cannot store those files on the instances' file system, as the instances are disposable.
In addition, keep in mind that if you scale to several instances, you will have different instances managing different data sets (e.g. you upload a file to only one instance, not to all of them).
There are several approaches you can try, for example:
Use a Network File System (NFS) server: on a separate instance, set up an NFS server, and configure the EB instances to mount the remote export at startup. With this approach you can centralize the storage for all your EB instances.
Check out the EFS service from AWS. It's like an NFS server, but Amazon-flavored. I haven't checked it out yet, but it looks promising. (A mount sketch for either option follows below.)
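A minimal sketch, assuming a hypothetical filesystem endpoint and mount point; it could run from an .ebextensions command or instance user data:
# mount a shared NFS/EFS export at instance startup (placeholder endpoint)
mkdir -p /mnt/shared
mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/shared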
To resolve these problems I created a couple of extra S3 buckets, the first for images and the second for CSS, JS, and the like.
Since this was a WordPress/WooCommerce installation, I commissioned the S3 Offload plugin from Delicious Brains (https://deliciousbrains.com/wp-offload-s3/), which was a little expensive, but it moved and monitored content of this sort, copied it off to the other S3 buckets, and allowed 'EB Deploy' to leave the working content untouched.
I just changed the server one of my sites is hosted on. In doing so, I lost all the images. The CCK file upload fields show "ghost" data but contain no actual image data as they did before the site transfer.
All my data is fine, however.
Is there a way to prevent this so all my images are maintained?
Thanks
You can transfer files with rsync directly, or use drush to rsync. You'll need SSH access to the servers for this to work.
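For example (hosts, paths, and alias names here are placeholders):
# plain rsync over SSH
rsync -avz user@old-server:/var/www/drupal/sites/default/files/ /var/www/drupal/sites/default/files/
# or, with drush site aliases configured on both ends
drush rsync @old:%files @new:%files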
Here's some info on setting up your drush aliases:
http://www.leveltendesign.com/blog/dustin-currie/synchronize-one-drupal-site-to-another
http://drupal.org/project/drush
If you're performing this task only once, you could also use scp to copy the files across:
http://www.go2linux.org/scp-linux-command-line-copy-files-over-ssh
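For a one-off copy, something like this sketch (placeholder host and paths):
scp -r user@old-server:/var/www/drupal/sites/default/files sites/default/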
If you're on a shared hosting platform, just FTP the files over old school style.
So, I have a strange issue with Drupal imagecache.
When I upload an image, the image is correctly stored in a folder, but imagecache doesn't generate the other cached images.
It stopped working after I moved the website from the development server to the production environment.
The production server is IIS7.
All the "files" subfolders have both read and write permissions; the imagecache module is supposed to create folders inside the "files" folder.
The GD toolkit is correctly enabled (and in the Drupal report, the imagecache module shows as correctly enabled). It is PHP 5.2.
thanks
Check out Imagecache's troubleshooting guide in the handbooks. In particular, check the logs for errors explaining where the process is breaking down.
Also, check the paths that are included in your pages and make sure they are still correct, now that you've moved the site to a production server. This issue has some more tips, specific to the path problem.
Finally, if none of that helps, check the issue queue. There are several issues related to IIS.
99% of the time, when I have an issue with imagecache not creating images, it has something to do with file system permissions. Verify that your web server is able to write to all of the files/imagecache directories recursively.
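On a LAMP stack that check looks roughly like this sketch (the web server user varies by distro; on IIS you would grant the equivalent rights to the IIS application pool user instead):
# give the web server user recursive ownership and write access (placeholder user)
chown -R www-data:www-data sites/default/files
chmod -R u+rwX sites/default/files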
I have recently moved a Drupal site (both servers run on a Debian-based LAMP stack). Everything works great here, including uploading images via a CCK filefield. Original URL:
dev.example.com/foo
Deploying it to a test folder on the production server for an environmental shakedown cruise led it here:
www.example.com/foo
Everything works here too, including image uploads. After adjusting sites/default/settings.php, then making it read-only again, I renamed the folder to its production name:
www.example.com/bar
Everything works fine here except for image uploading. I've adjusted the webroot variable within settings.php.
Things I have tried so far:
Gave the PHP system user write permissions to sites/default/files (images are set to go in sites/default/files/images, but imagecache just puts them in sites/default/files)
Enabled PHP file uploading for www.example.com/bar/sites/default/files
Are there any other configuration settings I should be looking out for here? I'm running low on relevant solutions.
Edit: I had quite the typo there; I adjusted sites/default/settings.php, not sites/default.settings.php.
Your question is slightly confusingly framed. default.settings.php has no impact on Drupal -- it's merely a template. The file that contains the actual database connection information and other configuration is settings.php.
You may also want to look at the .htaccess file in your root Drupal folder and try changing the RewriteBase directive to the folder where you are accessing your site. Usually you should not have to change the $base_url setting in settings.php, which you may or may not have done; reverse that change for now if you have (you may need to play around with it later, though).
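A sketch of the relevant line in the root .htaccess, assuming the folder name from your URLs:
# set RewriteBase to the subfolder serving the site
RewriteBase /bar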
imagecache will always store the image derivatives under sites/default/files, but imagefield will upload the original image to the folder you specify (within sites/default/files). You will find the setting for the imagefield under Manage Fields -> [Name of Image field] -> Configure, under Path Settings.
Please google to understand the difference between imagecache and imagefield. Make sure your sites/default/files (and its subfolders) are writable by the Apache user (usually www-data).
In such situations, it's usually a good idea to pick up a book on Apache (if you haven't already) and try to understand how it works. It will be time-consuming, but it will help you out in the future when you encounter configuration issues like this.
This worked for me. When having issues uploading images to a CCK field, I gave write permissions to the directory:
sites/default/files/field/image
I'm using multisite to host my client sites.
During the development stage, I use a subdomain to host the staging site, e.g. client1.mydomain.com.
And here's how it looks under the SITES folder:
/sites/client1.mydomain.com
When the site is completed and ready to go live, I create another folder for the actual domain, e.g. client1.com.
Hence:
/sites/client1.com
Next, I create symlinks under client1.com for FILES and SETTINGS.PHP that point to the subdomain,
i.e.
/sites/client1.com/settings.php --> /sites/client1.mydomain.com/settings.php
/sites/client1.com/files --> /sites/client1.mydomain.com/files
Finally, to prevent Google from indexing both the subdomain and the actual domain, I created a rule in .htaccess to rewrite client1.mydomain.com to client1.com, so that anyone who tries to access the subdomain is redirected to the actual domain.
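The rule looks roughly like this (simplified sketch):
# redirect the staging subdomain to the live domain
RewriteCond %{HTTP_HOST} ^client1\.mydomain\.com$ [NC]
RewriteRule ^(.*)$ http://client1.com/$1 [R=301,L]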
The arrangement above works perfectly fine, but I somehow feel there is a better way to achieve it in a much simpler manner. Please feel free to share your views; all advice is much appreciated.
Since it seems you want to reuse the files/ directory and settings.php from your development domain, I'd suggest using the default/ directory + symlinks to achieve your goals.
i.e., during development:
sites/default/settings.php
sites/default/files/
sites/client1.domain.com -> sites/default (symbolic link)
once you're ready to switch over to their domain:
sites/client1.com -> sites/default
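The corresponding commands, as a sketch (run from the Drupal root, names as above):
cd sites
ln -s default client1.domain.com   # during development
ln -s default client1.com          # at go-live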
You can then remove client1.domain.com from your virtual host (or continue with your rewrite, etc...).
It will accomplish the same as your method, but you get the added "protection" of all requests going to default in case you add an additional domain at a later date as an alias (for example).
If you're simply sharing core and module files between the sites, you can use a different symlink layout.
In my setup I have all of the shared files in a common, non-web-accessible directory:
/var/www/drupal
/var/www/drupal/sites/all/modules
then for each deployment, common files and folders are symlinked to those files.
/var/www/client1/public_html/index.php -> /var/www/drupal/index.php
/var/www/client1/public_html/includes -> /var/www/drupal/includes
...
/var/www/client1/public_html/sites/all -> /var/www/drupal/sites/all
Then you can place the site's settings.php and any modules or themes for only that site in the default sites directory
/var/www/client1/public_html/sites/default
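As a sketch, setting up a new client deployment under this layout (paths as above, client name hypothetical):
# create the per-site tree, then link the shared core into place
mkdir -p /var/www/client1/public_html/sites/default
ln -s /var/www/drupal/index.php /var/www/client1/public_html/index.php
ln -s /var/www/drupal/includes /var/www/client1/public_html/includes
ln -s /var/www/drupal/sites/all /var/www/client1/public_html/sites/all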
This layout also offers you the flexibility to override any common files as necessary, such as .htaccess.
To move from staging to production, you will just have to modify your virtual-host configuration from the staging to production domain name.
If you don't like a ton of symlinks, another option is using the Aliased Multi-Site Support patch:
http://drupal.org/node/231298#comment-1420180
This will allow you to specify in configuration that any requests for client1.domain.com should actually use /sites/client1.com/ instead of /sites/client1.domain.com/.
Then when you move to production, you can just remove the configuration setting (though it doesn't hurt anything if you don't).
This feature is part of Drupal 7, but as a new feature it won't be added to Drupal 6. More good news is that you won't even need it in D7 just for file paths: instead of storing the full path to files in the database, D7 uses a scheme such as public:// or private://, which Drupal then maps to the correct file system path, allowing multiple storage types/locations with much better portability.