Scaling WordPress on Windows Azure

I'm running a WordPress multisite which, for short periods every week, sees a large number of users and needs more CPU and RAM.
I would therefore like to use Azure autoscale to add instances when demand requires it. Is it possible, however, to make a setup where the different instances share the same storage and database? And if yes, how could it be done?

It is supported out of the box:
Go to "Web Sites" and add a new website from the "Gallery".
Select "WordPress"
Follow the rest of the wizard.
The wizard allows you to create a MySQL database. The website runs as a cluster and uses a database which also runs on a cluster of servers (hosted by ClearDB).
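Conceptually, every instance reads its database settings from the same wp-config.php, so all of them point at the one ClearDB MySQL database. A minimal sketch of the relevant part (the host name and credentials here are placeholders, not real ClearDB values):

// wp-config.php - identical on every instance
define('DB_NAME', 'wordpress');
define('DB_USER', 'db_user');
define('DB_PASSWORD', 'db_password');
define('DB_HOST', 'your-instance.cleardb.net'); // placeholder ClearDB host

Because the database lives outside the web instances, autoscale can add or remove instances without any per-instance database setup.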

Related

CD-CM setup with merge replication

I am trying to make the publishing process quicker and simpler for one of our customers on their Sitecore-based website. Through research I stumbled upon merge replication, which might solve some of our issues, but it introduces others.
I need your help and guidance to figure out which way is best!
We've got a CD/CM setup: one CM server with its own SQL instance, and two CD servers, each with its own SQL instance.
The current setup:
CM (master, web and core databases). Web is exposed only internally, on a secured admin URL, and works as a preview site.
CD1 & CD2 are the servers for visiting users; each has its own publishing target in Sitecore.
When we deploy a release:
1. Deploy new code to CM. Publish templates and any content changes to Web. Verify that everything is correct.
2. Take CD1 out of the load balancer, deploy new code to CD1, publish templates and any changes to its Sitecore target, verify, then put the server back into the load balancer.
3. Repeat step 2 for CD2.
4. Deployment done.
This process works OK for us; the site stays up the whole time without downtime.
We've got a few issues with the current setup:
Our search index (Elasticsearch) is populated when CM publishes to Web, so Elasticsearch can contain data that has not yet been published to the CD servers.
When publishing, editors can forget to publish to one of the CD servers, causing inconsistencies between the servers, which we would like to avoid.
Everything needs to be published multiple times for the same environment, which takes time.
Editors do not know what a CD server is; they just want a "Preview" and a "Live" publishing target.
I've looked into merge replication for Sitecore and actually have it working in a test environment. The advantage we want from this is that we only have two publishing targets:
Preview (the CM server's preview database)
Live (the CM server's web database, which then gets replicated out to the CD servers' web databases)
The Elasticsearch instance will rely on data from CM's web database, which is live data.
We can also have an Elasticsearch instance running on Preview.
The issue is that I can no longer deploy to only CD1 or CD2 during a deployment. What if I have breaking Sitecore changes? The site will break if new, breaking Sitecore items are published to a server that hasn't been deployed to yet.
How can I get the best of both worlds?
Do you have an Elasticsearch instance for each CD?
If you publish data to a single CD server while sharing one Elasticsearch instance, you will get inconsistencies either way.
Otherwise I would change the publish dialog so that only an admin/developer can see the CD servers individually, as sketched after the examples below.
Example for a normal user:
Preview
Live
Example for an admin user:
Preview
Live
CD1
CD2
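One way to do the filtering is a small helper that the customised publish dialog calls to decide which targets to show. This is a sketch under assumptions, not a drop-in patch: the role name, the target item names and the dialog customisation itself are placeholders for whatever your solution uses.

// Hypothetical helper for a customised publish dialog.
// Assumes the CD-specific targets are named "CD1"/"CD2" as above and
// that developers are in a role such as sitecore\Developer.
using System.Collections.Generic;
using System.Linq;
using Sitecore.Data.Items;

public static class PublishingTargetFilter
{
    private static readonly string[] CdOnlyTargets = { "CD1", "CD2" };

    public static IEnumerable<Item> VisibleTargets(Item targetsRoot)
    {
        // targetsRoot: the /sitecore/system/Publishing targets item
        bool privileged = Sitecore.Context.User.IsAdministrator
            || Sitecore.Context.User.IsInRole(@"sitecore\Developer");

        return targetsRoot.Children.Cast<Item>()
            .Where(t => privileged || !CdOnlyTargets.Contains(t.Name));
    }
}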

OpenShift V3 WordPress on free account - not enough storage

I host a WordPress site on my OpenShift 2 free account and need to migrate to 3 before 30th September, when V2 is switched off. I have tried to create a WordPress site following this blog - https://blog.openshift.com/migrating-wordpress-openshift-3/ - but have hit a roadblock.
I add a SQL database but can't make it smaller than 1 GB. I then can't add persistent storage because it says I am at my storage limit, so I can't keep themes, plugins, images etc. in persistent storage.
Am I missing something, or is it no longer possible to host WordPress on an OpenShift free account?
Thanks!
The blog post uses a procedure which requires two separate persistent volumes, one for the database and one for WordPress. Using a single persistent volume shared between the two is a bit more complicated and involves running both the database and WordPress in the same pod. This can't easily be done through the web console.
In principle, if using the command line, you would start with a command similar to:
oc new-app php~https://github.com/WordPress/WordPress mysql \
    --group=php+mysql \
    -e MYSQL_USER=wordpress \
    -e MYSQL_PASSWORD=wordpress \
    -e MYSQL_DATABASE=wordpress
and then go on to attach a different subdirectory of the one persistent volume to each application in the pod.
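For example (a sketch only: the claim name and deployment config name are assumptions, and --sub-path needs a reasonably recent oc client; on older clients you would add subPath to the volume mounts via oc edit):

# mount one claim into both containers, each under its own subdirectory
oc set volume dc/wordpress --add --name=shared-db \
    --type=persistentVolumeClaim --claim-name=shared-pvc \
    --containers=mysql --mount-path=/var/lib/mysql/data --sub-path=mysql
oc set volume dc/wordpress --add --name=shared-files \
    --type=persistentVolumeClaim --claim-name=shared-pvc \
    --containers=php --mount-path=/opt/app-root/src/wp-content --sub-path=wp-content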
So technically it can probably be done, but it is a bit fiddlier to set up.
Do be aware that the Starter tier is not intended for sites which need to be running all the time. Applications will be subject to resource hibernation as explained in:
https://www.openshift.com/pricing/index.html

Sharing data files between users in a Universal Windows Platform application

I am about to embark on the development of a line-of-business application using the Universal Windows Platform (Windows 10). One of the requirements is synchronisation of data from a server to a local SQLite database; this is required because the application needs to be usable where there is no network connectivity.
It is likely that multiple (Windows domain) users will access the application on the same device, sometimes simply by "swapping users", other times by logging off the first user and logging on as a new user.
I realise that UWP applications are installed at a user level; however, I would like to share the SQLite database between these users instead of forcing each user to download their own copy of the data.
Is this possible? I am struggling to find any reference to this kind of sharing within the Microsoft documentation - but of course that documentation is new and far from complete!
I guess at the end of the day I am looking for access to a folder that is accessible by any user running the application on the same device, such as the "x:\Users\Public" folders available from the desktop, but without having to ask the user to grant access to that folder via a picker control - instead the app should simply be able to "open" it.
Thanks.
In case anyone runs across this, this functionality is now available, as described in this blog post:
We introduced a new storage location in Windows 10, ApplicationData.SharedLocalFolder, that allows multiple users of one app to share local data. Obviously this feature is only interesting on devices that will be used by more than one person. For such scenarios, for example in educational uses, it may make sense to place any large downloads in Shared Local. The benefits are two-fold: any user can access these files without needing to re-download them, and there are storage space savings.
Keep in mind that Shared Local is only available if the machine has the right group policy; otherwise, when you call ApplicationData.Current.SharedLocalFolder you will get back a null result.
In order to enable Shared Local, the machine administrator should enable the corresponding policy.
Alternatively, the administrator can create a REG_DWORD value called AllowSharedLocalAppData with a value of 1 under HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\AppModel\StateManager.
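From an elevated command prompt that is a single command:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\AppModel\StateManager" /v AllowSharedLocalAppData /t REG_DWORD /d 1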
Note that data stored in Shared Local will only be persisted for as long as the app is installed on the device, and it won't be backed up by the system.
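In code, a guarded lookup looks roughly like this (a sketch; the file name is just an example):

// Inside an async method, with: using Windows.Storage;
// Fall back to per-user storage when the group policy is not set
// (SharedLocalFolder is null in that case).
StorageFolder folder = ApplicationData.Current.SharedLocalFolder
    ?? ApplicationData.Current.LocalFolder;
StorageFile db = await folder.CreateFileAsync(
    "sync.sqlite", CreationCollisionOption.OpenIfExists);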
In Solution Explorer, right-click Package.appxmanifest, then click View Code. At the end of this file, in both projects, add the code below:
<Extensions>
  <Extension Category="windows.publisherCacheFolders">
    <PublisherCacheFolders>
      <Folder Name="FolderName" />
    </PublisherCacheFolders>
  </Extension>
</Extensions>
After that, you can access this folder in code with the line below:
StorageFolder sharedDownloadsFolder = ApplicationData.Current.GetPublisherCacheFolder("FolderName");
Importantly, the folder shared between these two apps depends on the publisher info in the certificate file ([ProjectName]_TemporaryKey.pfx). If the certificate file and the app's publisher info are the same in both projects, then both applications can access the same shared folder and use it to create or read a database file (such as a SQLite database file) or any other files that need to be shared between the two applications.
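For example, both apps could then create or open the same SQLite file (a sketch; the file name is just an example):

// "FolderName" must match the manifest entry above
StorageFolder shared = ApplicationData.Current.GetPublisherCacheFolder("FolderName");
StorageFile db = await shared.CreateFileAsync(
    "shared.sqlite", CreationCollisionOption.OpenIfExists);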

The right way to create multiple instances for Load Balancer (EC2)

I installed WordPress on EC2. I created a load balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the load balancer. But I'm still getting database errors and have to restart the instances. If I'd like to run 4 instances behind the load balancer, are the steps the same? I saw a "Number of Instances" option when I launched the AMI, with a default value of 1, and I'm not sure whether I should enter 3 or 4 there to create multiple instances in one click.
Also, if I make an update on the Wordpress1 instance, will the update show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances, a database, etc., you should consider using AWS CloudFormation. A CloudFormation template is just a big JSON document that describes the configuration of your environment: the servers, autoscaling, access, registration with the load balancer, and so on.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress including a database and autoscaling groups (example wordpress template).
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multi-server environment.
The "problem" with multi-server environments is that if you upload a file, or in your case upgrade WordPress, it only happens on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build (in your case the frontend webservers and the database) should be allowed to fail without interrupting your service.
Another point: you should avoid doing things by hand; automation is key.
An environment where you need to attach servers to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted and exchanged.
For your webservers you can use autoscaling groups to get this behaviour.
If you are using autoscaling groups and a server is terminated or considered unhealthy, a new one is started automatically and registered with the load balancer as soon as it is considered healthy.
For your database, Amazon offers RDS Multi-AZ deployments, which provide automatic failover.
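As a sketch with the AWS CLI (every name and size below is a placeholder):

# auto scaling group that keeps 2-4 instances registered with the ELB
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name wp-asg \
    --launch-configuration-name wp-lc \
    --min-size 2 --max-size 4 \
    --load-balancer-names wp-elb \
    --availability-zones us-east-1a us-east-1b

# MySQL database with automatic failover to a standby in another AZ
aws rds create-db-instance \
    --db-instance-identifier wp-db \
    --engine mysql --db-instance-class db.t2.micro \
    --allocated-storage 20 --multi-az \
    --master-username admin --master-user-password <password>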
Applying upgrades in the cloud can be tricky, and there are different ways to do it: for example, using a shared NFS mount with the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options, and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud is maybe not the right choice at the moment.
Hosting a small business in the cloud is normally much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are many more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
To use this effectively, no instance can store data that may be needed by other instances. The most common requirement is the database, which needs to live on its own instance outside the autoscaled group. You could use RDS for this.
WordPress also stores file uploads, plugins and themes in the wp-content folder inside the WordPress install. By default, an uploaded file is stored on one instance but not on any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
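If you go the shared-NFS route, WordPress lets you point wp-content at the mount in wp-config.php (a sketch; the mount path and URL are placeholders, and the web server must also be configured to serve that directory):

// wp-config.php, above the "That's all, stop editing!" line
define('WP_CONTENT_DIR', '/mnt/shared/wp-content'); // the NFS mount
define('WP_CONTENT_URL', 'http://example.com/wp-content');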

Installing third-party Drupal modules on Azure

I've just started playing around with the new "Website" feature in Azure that allows you to create websites with just one step - and also allows you to create websites from a "Gallery", including Drupal. And I can get my Drupal site up and running, no problem. But if I try to add a third-party module (for instance, Mindtree's ODataDrupal), then I get this error message:
Installation failed! See the log below for more information.
odata_support
Error installing / updating
File Transfer failed, reason: Cannot chmod /DWASFiles/Sites/theparentsunion/VirtualDirectory0/site/wwwroot/sites/all/modules/odata_support.
More or less the same thing happens if I try to update some of the existing modules (which Drupal warns, in big red flashing letters, are out of date), except that my Drupal install is then left crippled, with no way to fix it that I've been able to find.
Is this as designed, or a limitation of the beta website integration? (Because a Drupal installation is kinda worthless if you can't add new modules to it or update existing ones.) Or am I doing something wrong?
If you are trying to use plugins and third-party modules with Drupal-based Windows Azure Websites, the results may vary from person to person. This is mainly because the configuration needed by a specific module or plugin may or may not be supported by the Windows Azure Websites model; not every kind of custom configuration works on Windows Azure Websites, and in some cases you need to move to Windows Azure Virtual Machines.
Regarding the application-specific structure: you can open the website's FTP folder, and whatever you see there is user-configurable, so you can configure it the way you want. However, if your application tries to make changes outside that limited scope, you will hit errors like the one above.
Here is a case study where an Azure VM was used for a Drupal-based migration, which shows that for complex applications you may need to use an Azure VM rather than Azure Websites.
