This is just an idea in the making right now, so no specifics, but I am wondering: what is the best way to route traffic to a particular server for URLs matching the path /post*?
I am running a WordPress/WooCommerce install on Elastic Beanstalk, which is all set up already, but I am thinking of adding a blog to the site. This would all be under the same application and deployment (so a normal WP site really: just write a post and publish on the same site), but I want to ensure users viewing the blog area of the site don't consume resources on the eCommerce side.
If the blog runs slow at a given time, that is not a huge issue, but the eCommerce side shouldn't be hindered by a spike in CPU, for example.
My initial thought is to have a separate EC2 instance that accepts traffic to all blog-related paths, but how could this tie into git/application deployments via AWS? Maybe there is an easier approach I'm missing?
The current setup is basically the below:
CloudFlare for DNS
WP on ElasticBeanstalk with autoscaling/loadbalancer
AL2 with NGINX
Worker application for background tasks (no web server), also on Elastic Beanstalk
Shared RDS instance for DB
EFS for ephemeral storage
S3 for storage
You can't do this as a single Elastic Beanstalk deployment; Elastic Beanstalk runs a single application per environment. You would need a second Elastic Beanstalk deployment for your blog application, and then configure path-based routing rules in Cloudflare to forward requests for /blog/* to the new server.
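If path-routing rules aren't available on your Cloudflare plan, the same split can be done at an edge you control with an nginx reverse proxy. A minimal sketch, assuming hypothetical hostnames blog-env.example.com and store-env.example.com for the two Elastic Beanstalk environments:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # Blog traffic goes to the separate blog environment
    location /blog/ {
        proxy_pass http://blog-env.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else hits the eCommerce environment
    location / {
        proxy_pass http://store-env.example.com;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

This keeps each environment's autoscaling independent: a CPU spike on the blog environment never touches the store's instances.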
Related
I am considering deploying multiple instances of WordPress sites to Heroku using this buildpack:
https://elements.heroku.com/buildpacks/mchung/heroku-buildpack-wordpress
I have never worked with Heroku before, so I am confused about the pricing.
My question:
Is it possible to use a single dyno and deploy multiple low-traffic wordpress sites there, or is it going to be one dyno per one site?
One dyno per site.
When you use the WordPress buildpack, Heroku provides a single dyno per app: a web dyno can only listen on one port.
At deployment time your dyno gets a URL based on your project name (project.herokuapp.com).
Heroku has a free tier (see its limitations), so you can run all the sites you need for free.
Alternatively, I would suggest DigitalOcean: for $5 per month you get a droplet where you can configure multiple sites/applications behind a single HTTP server (i.e. nginx).
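On a single droplet, the multiple-sites setup mentioned above is done with one nginx server block per site. A sketch, using example.com and example.org as stand-in domains:

```nginx
# /etc/nginx/conf.d/sites.conf -- one server block per site
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com/public_html;
    index index.php index.html;
}

server {
    listen 80;
    server_name example.org www.example.org;
    root /var/www/example.org/public_html;
    index index.php index.html;
}
```

nginx picks whichever block's server_name matches the incoming Host header, so any number of low-traffic sites can share the one droplet.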
I am migrating a project to the cloud with the aim to load-balance the website.
It is currently a .NET Core app hosted on IIS, but there is a virtual application attached to this site in IIS (for a very old MVC3 application):
www.mysite.com hits the .NET Core app
But
www.mysite.com/blog hits the completely separate MVC app.
I want to dockerise my application and run it behind Elastic Beanstalk, or even just some EC2 instances behind a load balancer, but how can I take care of the "blog" app in this scenario? It 100% doesn't need to be load balanced and I don't want to make it part of the deployment strategy, since it is a simple CMS and the code hasn't been re-deployed for years!
EDIT: I'm thinking the Load Balancer provided by AWS must be what I'm looking for, since it is linked to the DNS entry and is effectively a reverse proxy. Should I be looking at whether I can configure the Load Balancer with a rule that proxies /blog requests to one of my EC2 boxes?
how can I take care of the "blog" app in this scenario?
You can use an Application Load Balancer. A single listener can hold multiple rules handling different URLs (e.g. the default rule handles www.mysite.com and another rule handles /blog).
The /blog rule forwards matching requests to a different target group.
Target groups can contain ECS services, EC2 instances, IP addresses, or Lambda functions, whatever you need.
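As a sketch of the listener-rule approach in Terraform, assuming hypothetical resource names (aws_lb_listener.web for the existing listener, aws_lb_target_group.blog for the group holding the blog's EC2 instance):

```hcl
# Forward /blog/* to the legacy blog's target group;
# all other paths fall through to the listener's default action.
resource "aws_lb_listener_rule" "blog" {
  listener_arn = aws_lb_listener.web.arn
  priority     = 10

  condition {
    path_pattern {
      values = ["/blog/*"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.blog.arn
  }
}
```

The blog's target group can contain a single un-autoscaled EC2 instance, so the CMS stays entirely outside your deployment pipeline.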
Good day!
Due to an intense desire to learn new things, I have tried setting up my very own server at home, running Linux with CentOS Web Panel (CWP). After the installation I proceeded with the usual configuration, including the web server. I chose NGINX because I've read that it is lighter and more scalable than Apache, and can be used as a web server or as a reverse proxy. I then created my new website with the domain I have. Only after creating the website did I read about server blocks in NGINX (the site I read).
My question is: how can I implement the server block method on my existing website? Or should I simply remove my site and create a new one using server blocks?
Thanks in advance
UPDATE
I created my website by creating an account in the User Accounts category in CWP, where I declared my domain name and IP address. I was then taken to the User Account dashboard (IP:2082).
Uploading files is done through FTP using my IP/username/password/port, with files landing in /home/user_account/public_html. But in the tutorials I've seen, everything is set to /var/www/domain/public_html.
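From what I understand of the tutorials, the server block method boils down to adding a per-site server { } section to the nginx config, something like the sketch below (the domain and paths are placeholders; the root could stay at CWP's /home/user_account/public_html instead of the tutorials' /var/www layout):

```nginx
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;  # replace with the actual domain
    root /var/www/yourdomain.com/public_html;       # or /home/user_account/public_html
    index index.html index.php;
}
```

After adding the block, test and reload nginx (nginx -t && systemctl reload nginx), so presumably there is no need to delete and recreate the site.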
I created a WordPress site on AWS EC2. It works fine and I can log in to my dashboard. I then created a load balancer and changed the siteurl in wp_options to the load balancer's DNS name.
Next I created an image of that instance and used it in an auto-scaling group. I can visit my site through the load balancer DNS name, but I can't log in to my dashboard through it; when I open dns/site/wp-admin it says:
wp-login.php was not Found on this server.
I don't know what the problem is. Kindly help me.
Edit: The reason this is not working is that, because you made an image of the instance, you now have two databases (assuming you have not used RDS) and two servers, each with its own set of files and its own database. That should not be the case, and it is most likely why it is not working.
You are taking the wrong approach; you can only take advantage of Auto Scaling and Load Balancing if you have designed your site for them.
This might be a long answer, but I hope it clears up your understanding of how it works, or how it ideally should work, on AWS.
Stateless server
A stateless server is the prerequisite for building a highly available and scalable infrastructure on AWS. A stateless server does not store any data except temporary data like caches.
By default WordPress is storing data in two different ways:
MySQL database: articles, comments, users, and parts of the configuration are stored in a MySQL database.
File system: media files uploaded by the authors are stored on the file system.
If the MySQL database is running on the same EC2 instance as the WordPress application itself, the server is not stateless. The same is true for the media files stored on the file system.
Why is this a problem? Because if the virtual machine becomes unavailable, the data becomes unavailable too. And if you need to add another EC2 instance to handle more traffic, all of that data will be missing on the additional server.
Components that you need to use are :
RDS: managed MySQL database
S3: media file storage
ELB: synchronous decoupling
Auto Scaling based on usage
For a reference architecture, you can refer to this blog post or use this CloudFormation template.
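Concretely, making the web tier stateless means pointing the database credentials in wp-config.php at the RDS endpoint instead of localhost, and offloading uploads to S3. A sketch with placeholder values (the hostname and credentials below are made up):

```php
<?php
// wp-config.php (excerpt) -- point WordPress at the shared RDS instance.
// All values are placeholders; use your own endpoint and credentials.
define( 'DB_NAME',     'wordpress' );
define( 'DB_USER',     'wp_user' );
define( 'DB_PASSWORD', 'change-me' );
define( 'DB_HOST',     'mydb.example.us-east-1.rds.amazonaws.com' );
```

Media uploads then go to S3 via a plugin (e.g. WP Offload Media), so nothing user-generated lives on an instance's local disk and any instance the Auto Scaling group launches from the image serves identical content.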
I am new to Windows Azure and have a requirement to meet. I have searched on Google, but found nothing useful.
I have to deploy multiple websites to one cloud service. Is that possible? I will make them SSL-enabled, with multiple certificates, in one cloud service.
So url's will be like:
https://mysite/Home/
https://mysite2/Home/
https://mysite3/Home/
Actually, my requirement is that I should be able to deploy multiple websites and change each web.config after deployment. I think this can be done by enabling Remote Desktop on the cloud service: after that, we can log in to the remote machine and change the web.config file through IIS Manager. Am I correct?
Is there a better way to achieve this requirement? I have to keep costs to a minimum.
Thanks
This is an old post, but it shows how to run multiple websites in the same web role: http://www.wadewegner.com/2011/02/running-multiple-websites-in-a-windows-azure-web-role/
Regarding changes to web.config: you should not do that, because your instances may be replaced, and the new instances will not have the modified file. Any configuration you want to change after deployment should be stored in the ServiceConfiguration.cscfg file. That way you can modify the configuration without redeploying, and the configuration is shared among all instances of the service.
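A sketch of what that looks like, with a hypothetical setting name (MySetting) and role name (WebRole1):

```xml
<!-- ServiceConfiguration.cscfg (excerpt): editable in the portal without redeploying -->
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="MySetting" value="production-value" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

In role code the value is read with RoleEnvironment.GetConfigurationSettingValue("MySetting") from the Microsoft.WindowsAzure.ServiceRuntime assembly; note the setting must also be declared in the ServiceDefinition.csdef for the package to build.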