Azure App Service (PaaS) + Storage + WordPress

I would like to run multiple WordPress instances on Azure App Service and have a dedicated VM running their MySQL databases.
Let's say each WordPress site is 1 GB in size (uploaded files, not the database) and my App Service plan comes with 50 GB of storage. This means I could theoretically run a maximum of 50 WordPress sites on that plan.
Is it possible to link a Blob Storage account to my PaaS plan and have all the WordPress sites stored on that additional storage?
I know there is an outdated WordPress plugin that supposedly stores uploaded media in Azure Storage. I'm not interested in that. I want all WordPress files stored and served from a separate storage account.
If I were to spin up a VM running IIS, I imagine I could do this by simply pointing each IIS site at each WordPress install on a data disk backed by the storage account. Just wondering if I can do the same using PaaS?

Technically App Service is already doing this. Your 50GB of space is in Blob Storage. But as you suspect, you have no control over this yourself. If you want/need that level of control, then you need to switch to a VM as you suggested.
This is exactly why the storage plugins were built: to move the uploaded media files into Blob Storage to maximize available space and improve performance.
Another way to save "local" space would be to run as a multisite if possible. Then you would only need one copy of WP and any plugins in common for the various sites.
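For reference, converting a standard install into a multisite network mostly comes down to a handful of constants in wp-config.php. A minimal sketch, assuming a subdirectory-based network on a placeholder domain (WordPress prints the exact values for your site during Tools > Network Setup):

<?php
// wp-config.php excerpt. Enable the network setup screens first:
define( 'WP_ALLOW_MULTISITE', true );

// After running the network setup, WordPress asks you to add constants
// like these. example.com and the subdirectory choice are placeholders.
define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );        // false = sites live in subdirectories
define( 'DOMAIN_CURRENT_SITE', 'example.com' );
define( 'PATH_CURRENT_SITE', '/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );

All sites in the network then share one copy of core and the plugin/theme files, which is where the "local" space saving comes from.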

Related

Multi-region WordPress deployment in the cloud

I have been looking through many blogs and sites on how to deploy a WordPress website across multiple regions on a cloud platform.
I have gone through GCP App Engine and Kubernetes but didn't find much.
How do I create a database connection from another region, and how do I manage WordPress media and keep it in sync across regions? I am also looking for auto-scaling for the website.
For the database we can use a cross-region read replica, but how do we handle the media data and sync it across all the instances in the different regions?
To deploy highly available and scalable WordPress architectures on AWS, I would suggest reading this white paper: https://aws.amazon.com/blogs/architecture/wordpress-best-practices-on-aws/
The key to a multi-region deployment is to have a copy of the data in both regions. This comes with a lot of challenges if you consider having two database masters, i.e. two places where write operations can happen (in WordPress terms, a write happens when you author a post or when visitors leave comments).
Cross-region read replicas have been possible with Amazon RDS since 2013: https://aws.amazon.com/blogs/aws/cross-region-read-replicas-for-amazon-rds-for-mysql/
For a master-master-style setup, have a look at Amazon Aurora Global Database (MySQL compatible): https://aws.amazon.com/rds/aurora/global-database/ But I would seriously question why you want to do that first.
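For context, WordPress core only knows about a single database host, so out of the box each region's wp-config.php points at one writer endpoint; routing reads to a local replica needs a database drop-in such as HyperDB. A minimal sketch, with placeholder endpoint names:

<?php
// wp-config.php excerpt. Hostnames and credentials are placeholders.
define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'wp_user' );
define( 'DB_PASSWORD', 'change-me' );

// All writes have to reach the cluster's writer endpoint, even from the
// "remote" region. Reads could be routed to a nearby replica with a
// drop-in such as HyperDB; core WordPress will not do that by itself.
define( 'DB_HOST', 'my-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com' );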
[UPDATE 17 July 2019]
I just found out that the Bitnami distribution of WordPress has documentation explaining how to use S3 for media files: https://docs.bitnami.com/aws/apps/wordpress-pro/configuration/wordpress-aws-s3/
I will post this answer even though there are better answers. This answer provides additional information on one particular design.
I have deployed multi-region WordPress on both AWS and Google Cloud. WordPress is simply not designed for this. Unless you have money to spend and IT talent, choose a company that offers managed, distributed WordPress. You will save money and headaches.
For the last project, the company did not require instant updates/synchronization globally but required high traffic loads that could be predicted. We decided upon separate WordPress systems in each region. We wrote software that ran once per hour to synchronize the WordPress content between systems. This involved syncing the file system wp-uploads directory, moving static assets to cloud storage behind a CDN and copying content changes to each MySQL database. If there was a conflict, an email was sent to an admin to manually review. Once per day software ran that compared the newest posts to verify content and synchronization between servers.
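The hourly sync was custom code. Roughly, it read recently modified posts from one region's database and pushed the newer revision to the other, flagging anything ambiguous for a human. A heavily simplified, one-directional sketch of that idea (hostnames, credentials, and table prefix are placeholders; the real tool also synced wp-uploads and moved static assets to cloud storage):

<?php
// One direction of the hourly sync: region A -> region B.
$src = new mysqli('db-region-a.example.com', 'wp', 'secret', 'wordpress');
$dst = new mysqli('db-region-b.example.com', 'wp', 'secret', 'wordpress');

// Posts touched in region A during the last hour.
$since = gmdate('Y-m-d H:i:s', time() - 3600);
$posts = $src->query(
    "SELECT ID, post_title, post_content, post_modified_gmt
       FROM wp_posts
      WHERE post_modified_gmt >= '$since' AND post_status = 'publish'"
);

while ($row = $posts->fetch_assoc()) {
    $check = $dst->prepare('SELECT post_modified_gmt FROM wp_posts WHERE ID = ?');
    $check->bind_param('i', $row['ID']);
    $check->execute();
    $remote = $check->get_result()->fetch_assoc();

    if ($remote && $remote['post_modified_gmt'] < $row['post_modified_gmt']) {
        // Region A is newer: copy the change over (new posts would need the full row).
        $update = $dst->prepare(
            'UPDATE wp_posts SET post_title = ?, post_content = ?, post_modified_gmt = ? WHERE ID = ?'
        );
        $update->bind_param('sssi', $row['post_title'], $row['post_content'],
                            $row['post_modified_gmt'], $row['ID']);
        $update->execute();
    } elseif ($remote && $remote['post_modified_gmt'] > $row['post_modified_gmt']) {
        // Both regions changed the post since the last run: have a human review it.
        mail('admin@example.com', 'WP sync conflict', 'Post ID ' . $row['ID']);
    }
}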
The systems in each region were load-balanced and auto-scaled. The database was hosted separately on managed MySQL servers. The WordPress directory was hosted on an NFS share. We used cloud storage + CDN for static assets (CSS, JS, images, downloads). Except for cloud storage, we did not share assets between regions; each region was independent. Each region had at least two servers running at all times. During forecasted peak loads (marketing releases, events, etc.) we would scale each group up or down based on timezone via a GUI click to prewarm the systems.

Firebase site - Hosting or Storage for site assets?

I have a small Firebase project site that I've been working on; it is now public and gaining more traction than I expected. I doubt I'll hit Firebase's 10 GB hosting transfer cap, but this got me thinking about whether I'd be better served storing my site assets in Firebase Storage, and whether that would help at all. I'm a bit new to these cloud service pricing models, so any help would be greatly appreciated. Boiling it down, here is my question:
I have 20 MB of assets currently stored in a /rsc/ directory on my hosted site. Would it lessen my Hosting "Data Transferred" usage to move these assets to Firebase Storage (i.e. would the transfer be logged under my Storage quota instead)?
Yes, if you put the files in Cloud Storage instead, they will not be counted against the Firebase Hosting bandwidth limits. However, you will lose the global CDN edge-caching and atomic rollout/rollback provided by Firebase Hosting.

Google Cloud and WordPress

I have just started playing with Google Cloud. I used to work on normal servers, so I need advice.
I created my first instance and deployed WordPress. I installed the WooCommerce plugin. The shop is quite fast and I am happy (with the lowest settings), but now:
I wanted to edit functions.php but I can't. The file attributes are read-only, so how can I change it?
How do I get access to all my files? I can't see them in Cloud Storage. How do I set up FTP?
What about the database for my shop? I understand I can create a new database, but where do I access the current database of my WordPress install?
What else should I deploy to work comfortably with my WordPress site?
About SSL:
SNI SSL certificate slots are offered for no additional charge for accounts that have billing activated. Free accounts are limited to 5 certificates.
I have no experience with SSL, but I plan to run a shop, so what does this mean? Free certificates for 5 instances or 5 deployments? How many certificates do I need to run one shop?
I know there are many questions, but I wanted to go further, and all the advice on the internet is outdated because it is written for older versions of Google Cloud. Please help me understand all this.
I assume you're attempting to use WordPress on Google App Engine.
GAE has no real writable filesystem, so you cannot write to it (unless you juggle with the APIs GAE offers). Editing happens locally using the GAE SDK development server, and you deploy your changes to the App Engine ecosystem using the SDK interface (GUI or CLI). All application writes should go to Google Cloud Storage (which is similar to Amazon S3 and the like).
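As a small illustration, the App Engine PHP runtime exposes Cloud Storage through a gs:// stream wrapper, so ordinary filesystem calls can target a bucket instead of the read-only local disk (the bucket name below is a placeholder):

<?php
// On the App Engine PHP runtime, gs:// paths map to Cloud Storage buckets,
// so normal file functions work against a bucket rather than local disk.
$bucket = 'gs://my-wordpress-bucket';   // placeholder bucket name

file_put_contents($bucket . '/uploads/hello.txt', 'written from GAE');
echo file_get_contents($bucket . '/uploads/hello.txt');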
I'm not certain whether Google Cloud Storage can be accessed via traditional FTP; there might be some middleware required. You can see and browse the contents of your buckets in the developer project console (https://console.developers.google.com/).
The databases are on a separate "server" when using GAE. MySQL instances are spawned in the Google Cloud SQL ecosystem, and they are available to App Engine and Compute Engine instances (and other places too). You can define the Cloud SQL address and port in wp-config.php as you normally would. You also need to create a local MySQL database for your local installation. More: https://cloud.google.com/appengine/docs/php/cloud-sql/
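To make that concrete, the usual pattern is to switch DB_HOST depending on where the code is running: the Cloud SQL unix socket on App Engine, and a local MySQL server during development. A minimal sketch (project and instance names are placeholders, and the exact connection-name format depends on your Cloud SQL generation):

<?php
// wp-config.php excerpt. Project and instance names are placeholders.
if (isset($_SERVER['SERVER_SOFTWARE']) &&
    strpos($_SERVER['SERVER_SOFTWARE'], 'Google App Engine') !== false) {
    // Running on App Engine: connect through the Cloud SQL socket.
    define('DB_HOST', ':/cloudsql/my-project:my-cloudsql-instance');
} else {
    // Local development server: use the local MySQL instance.
    define('DB_HOST', '127.0.0.1');
}
define('DB_NAME', 'wordpress');
define('DB_USER', 'wp_user');
define('DB_PASSWORD', 'change-me');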
When working with Google App Engine you should deploy the whole WordPress installation (wp-config.php, wp-includes/, wp-admin/, wp-content/, etc.) in order for it to work in the GAE system. For a "better" deployment system you should do some searching or ask a new question dedicated to that issue.
The certificates themselves on GAE are not free, but the "slots" you put the certificates into are. Free projects (no billing enabled) offer 5 free slots where you can put your purchased certificates. SSL SNI means that you can use multiple different domain/host certificates under a single listening IP address (which some years back was not that simple to do). What this all means is that GCP offers a way to use certificates with their services, but you still need to get the certificates themselves elsewhere.
Have you seen the GAE starter project offered by Google: https://googlecloudplatform.github.io/appengine-php-wordpress-starter-project/ ? It makes your life a bit easier when developing WP sites for Google App Engine.
If you're working with Google Compute Engine instances, then they should operate just like regular VPS machines, with some Google restrictions applied. I have not used them so I do not know the specifics.

Migrate static content from ASP.NET project to Windows Azure platform

I've got an ASP.NET project that I want to publish on the Azure platform. The project contains various static content: images, JavaScript, CSS, HTML pages, and so on. I want to store this content in Azure Blob Storage. So, my questions are:
1) Is there any way to automate migrating this content from my application to Blob Storage?
2) How can I use the data retrieved from Blob Storage? Any examples would be great!
Best regards,
Alexander
First off, what you're trying to do could create cross-site scripting issues (the files will be on different domain names) or security issues (if you're using SSL). So make sure you really want to separate the static files from the rest of your web site.
That said, the simplest approach would be to use any one of a number of Windows Azure Storage management utilities (Storage Explorer or Cerebrata's Storage Studio would both work) to upload the static content to a Windows Azure Storage blob container. Then set the permissions on that container to public read so that anyone with a web browser can access its contents.
Finally, change all references to the content to point to the new URIs in blob storage and deploy your ASP.NET web role.
Again though, if I were you, I'd really look at what you're trying to accomplish with this approach. By putting the content in blob storage, you gain access to a few things (like CDN enablement), but as a trade-off you lose control over many others (like simplified access control via IIS, or request logs to tell when someone is downloading your image files a trillion times to try to run up your bill). So unless there's a solid need for this, I'd generally recommend against it.
Adding a bit to Brent's answer: you'll get a few more benefits when offloading static content to blob storage, such as reduced load on your Web Role instances.
I wrote up a more detailed answer on this similar StackOverflow question.
In light of your comment to Brent, you may want to consider uploading the content into Blob storage and then proxying it through a WebRole. You can use something like an HttpModule to accomplish that fairly seamlessly.
This has 2 main advantages:
You can add/modify files without reloading your web roles or losing them on role refresh.
If you're migrating a site, the files can stay at the same URLs they were pre-migration.
The disadvantages:
You're paying the monetary cost for Blob accesses and the performance cost to your web roles.
You can't use the Azure CDN.
Blob storage is generally slower (higher latency) than disk access.
I've got a fairly simple module I wrote to do exactly this. I haven't gotten around to posting it anywhere public, but if you're going to do this I can send you the code or something.

Large Video Uploads via a website

Some of the problems that can happen with large video uploads through a website are timeouts, disconnections, and not being able to resume a file, so the upload has to start again from the beginning. Assuming these files are up to around 5 GB in size, what is the best solution for dealing with this problem?
I'm using a Drupal 6 install for the website.
Some of my constraints due to the server setup I have to deal with:
Shared hosting with max 200 connections at a time (unlimited disk space)
Unable to create users through an API (so I can't automatically generate FTP accounts)
I do have the ability to run cron-type scripts via a Drupal module.
My initial thought was to create FTP users based on Drupal accounts and require uploaders to download an FTP client for their OS of choice. But the lack of an API to auto-create FTP accounts and the inability to do it from the command line hinder that solution. If there's a workaround someone can think of, let me know!
Thanks
Usually, shared hosting does not support large file uploads through the browser. A solution may be to use separate file hosting for your large uploads. A nice and easy option to integrate is Amazon S3 and its browser-based upload with a form POST.
It can be integrated in a custom module that provides an upload form protected by Drupal access control. If you require the files to be hosted on the Drupal server, you can use cron (either Drupal's or an external one) to move the files from S3 to your own hosting.
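To make the form POST idea concrete, here is a minimal sketch of signing an S3 browser-upload policy from such a module (signature version 2 style, which matches the Drupal 6 era; bucket name and keys are placeholders, and a real module would read them from configuration and gate the form with Drupal access checks):

<?php
// Build a signed policy document for S3's browser-based (form POST) upload.
// Bucket name and credentials below are placeholders.
$bucket     = 'my-video-uploads';
$secret_key = 'PLACEHOLDER_SECRET_KEY';

$policy = base64_encode(json_encode(array(
    'expiration' => gmdate('Y-m-d\TH:i:s\Z', time() + 3600),
    'conditions' => array(
        array('bucket' => $bucket),
        array('starts-with', '$key', 'uploads/'),
        array('content-length-range', 0, 5368709120),   // allow up to 5 GB
    ),
)));
$signature = base64_encode(hash_hmac('sha1', $policy, $secret_key, true));

// $policy and $signature are then embedded as hidden fields in the <form>
// that posts the file directly to https://<bucket>.s3.amazonaws.com/,
// so the large upload never touches the shared host.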
You're kind of limited in what you can do on a shared host. Your best option is probably to install SWFUpload and hope there aren't a lot of mid-upload errors.
Better options that you probably can't use on a shared host include the upload progress PHP extension (which Drupal automatically uses when it's installed) and, as you said, associating FTP accounts with Drupal accounts.
