I am trying to set up WordPress as part of my learning process.
I have installed WordPress and configured it on two AWS EC2 instances with a single RDS database. It seems to work fine.
Now I am adding a load balancer. How do I make the WordPress installations on the two instances work together under a load balancer? I am not looking for a production-level example and would prefer to do it in a simple way, without Auto Scaling or Elastic Beanstalk. TIA
Elastic Load Balancing supports three types of load balancers: Application Load Balancer, Network Load Balancer, and Classic Load Balancer. In this case, you should use an Application Load Balancer.
Step 1: Select a Load Balancer Type
To create an Application Load Balancer
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation bar, choose a region for your load balancer. Be
sure to select the same region that you used for your EC2 instances.
On the navigation pane, under LOAD BALANCING, choose Load Balancers.
Choose Create Load Balancer.
For Application Load Balancer, choose Create.
Step 2: Configure Your Load Balancer and Listener
On the Configure Load Balancer page, complete the following procedure.
To configure your load balancer and listener
For Name, type a name for your load balancer.
The name of your Application Load Balancer must be unique within your
set of Application Load Balancers and Network Load Balancers for the
region, can have a maximum of 32 characters, can contain only
alphanumeric characters and hyphens, must not begin or end with a
hyphen, and must not begin with "internal-".
For Scheme and IP address type, keep the default values.
For Listeners, keep the default, which is a listener that accepts
HTTP traffic on port 80.
For Availability Zones, select the VPC that you used for your EC2
instances. For each Availability Zone that you used to launch your
EC2 instances, select the Availability Zone and then select the
public subnet for that Availability Zone.
Choose Next: Configure Security Settings.
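If you would rather script this than click through the console, here is a minimal sketch of the equivalent step using the AWS SDK for Python (boto3). The name, subnet IDs, and security group ID are placeholders you would replace with your own (the security group itself is covered in Step 3); note that the API creates the load balancer immediately, whereas the console wizard creates it at the end of the review step.

import boto3

# Use the same region as your EC2 instances.
elbv2 = boto3.client('elbv2', region_name='us-east-1')

# Create an internet-facing Application Load Balancer spanning the
# public subnets of the Availability Zones your instances run in.
lb = elbv2.create_load_balancer(
    Name='wordpress-alb',                            # placeholder name
    Subnets=['subnet-aaaa1111', 'subnet-bbbb2222'],  # your public subnets
    SecurityGroups=['sg-0123456789abcdef0'],         # created in Step 3
    Scheme='internet-facing',
    IpAddressType='ipv4',
    Type='application',
)
lb_arn = lb['LoadBalancers'][0]['LoadBalancerArn']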
Step 3: Configure a Security Group for Your Load Balancer
The security group for your load balancer must allow it to communicate with registered targets on both the listener port and the health check port. The console can create security groups for your load balancer on your behalf, with rules that specify the correct protocols and ports.
Note
If you prefer, you can create and select your own security group instead. For more information, see Recommended Rules.
On the Configure Security Groups page, complete the following procedure to have Elastic Load Balancing create a security group for your load balancer on your behalf.
To configure a security group for your load balancer
Choose Create a new security group.
Type a name and description for the security group, or keep the
default name and description. This new security group contains a
rule that allows traffic to the load balancer listener port that you
selected on the Configure Load Balancer page.
Choose Next: Configure Routing.
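The console can create this security group for you as described above; if you are scripting it, a rough boto3 equivalent (the group name and VPC ID are placeholders) looks like this:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Security group for the load balancer, in the same VPC as the instances.
sg = ec2.create_security_group(
    GroupName='wordpress-alb-sg',          # placeholder name
    Description='Allow HTTP to the ALB',
    VpcId='vpc-0123456789abcdef0',         # your VPC
)

# Allow inbound HTTP on the listener port from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg['GroupId'],
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 80,
        'ToPort': 80,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)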
Step 4: Configure Your Target Group
Create a target group, which is used in request routing. The default rule for your listener routes requests to the registered targets in this target group. The load balancer checks the health of targets in this target group using the health check settings defined for the target group. On the Configure Routing page, complete the following procedure.
To configure your target group
For Target group, keep the default, New target group.
For Name, type a name for the new target group.
Keep Protocol as HTTP, Port as 80, and Target type as
instance.
For Health checks, keep the default protocol and ping path.
Choose Next: Register Targets.
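Scripted with boto3, the target group step looks roughly like this (the name and VPC ID are placeholders):

import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')

# Target group that the listener's default rule will forward to.
tg = elbv2.create_target_group(
    Name='wordpress-targets',              # placeholder name
    Protocol='HTTP',
    Port=80,
    VpcId='vpc-0123456789abcdef0',         # same VPC as the instances
    TargetType='instance',
    HealthCheckProtocol='HTTP',
    HealthCheckPath='/',                   # default ping path
)
tg_arn = tg['TargetGroups'][0]['TargetGroupArn']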
Step 5: Register Targets with Your Target Group
On the Register Targets page, complete the following procedure.
To register targets with the target group
For Instances, select one or more instances; in your case, select your two WordPress instances.
Keep the default port, 80, and choose Add to registered.
When you have finished selecting instances, choose Next: Review.
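The boto3 equivalent, with your two instance IDs as placeholders and the target group ARN coming from the previous step:

import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')

# ARN returned by create_target_group in Step 4 (placeholder here).
tg_arn = 'arn:aws:elasticloadbalancing:...:targetgroup/wordpress-targets/...'

# Register both WordPress instances with the target group on port 80.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[
        {'Id': 'i-0aaaaaaaaaaaaaaaa', 'Port': 80},  # instance 1
        {'Id': 'i-0bbbbbbbbbbbbbbbb', 'Port': 80},  # instance 2
    ],
)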
Step 6: Create and Test Your Load Balancer
Before creating the load balancer, review the settings that you selected. After creating the load balancer, verify that it's sending traffic to your EC2 instances.
To create and test your load balancer
On the Review page, choose Create.
After you are notified that your load balancer was created
successfully, choose Close.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the newly created target group.
On the Targets tab, verify that your instances are ready. If the
status of an instance is initial, it's probably because the
instance is still in the process of being registered, or it has not
passed the minimum number of health checks to be considered
healthy. After the status of at least one instance is healthy, you
can test your load balancer.
On the navigation pane, under LOAD BALANCING, choose Load
Balancers.
Select the newly created load balancer.
On the Description tab, copy the DNS name of the load balancer (for
example, my-load-balancer-1234567890.us-west-2.elb.amazonaws.com).
Paste the DNS name into the address field of an Internet-connected
web browser. If everything is working, the browser displays the
default page of your server.
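When scripting, you also create the listener explicitly (the console wizard bundled this into Step 2) and can poll target health instead of checking the console. A minimal sketch, with the ARNs from the earlier steps as placeholders:

import boto3

elbv2 = boto3.client('elbv2', region_name='us-east-1')
lb_arn = 'arn:aws:elasticloadbalancing:...'  # from create_load_balancer
tg_arn = 'arn:aws:elasticloadbalancing:...'  # from create_target_group

# Listener that accepts HTTP on port 80 and forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol='HTTP',
    Port=80,
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': tg_arn}],
)

# Check target health; test the DNS name once at least one is 'healthy'.
health = elbv2.describe_target_health(TargetGroupArn=tg_arn)
for target in health['TargetHealthDescriptions']:
    print(target['Target']['Id'], target['TargetHealth']['State'])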
I have already set up my domain using the Google Cloud Platform, including a load balancer with SSL protection. Everything works on this end.
How do I connect a Marketplace WordPress click-to-deploy creation to this existing load balancer?
If the Marketplace solution is a single VM, go to the instance groups menu in GCE, select unmanaged instance groups, create a group, and add the VM to the group.
Then go back to the load balancer and add a backend. It will ask you what to use as a backend: endpoint (no), bucket (no), or instance group.
Go for the instance group.
Mind that the load balancer will work only if an attached health check detects the VM as active (usually you want to check for HTTP on the listening port).
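If you prefer to script these steps, here is a minimal sketch using the Google API Python client; the project, zone, group, and instance names are all placeholder assumptions:

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

project = 'my-project'        # placeholder project ID
zone = 'us-central1-a'        # zone of the Marketplace VM

# Create an unmanaged instance group to hold the single VM.
compute.instanceGroups().insert(
    project=project, zone=zone,
    body={'name': 'wp-unmanaged-group'},
).execute()

# Add the click-to-deploy VM to the group.
compute.instanceGroups().addInstances(
    project=project, zone=zone, instanceGroup='wp-unmanaged-group',
    body={'instances': [{
        'instance': f'https://www.googleapis.com/compute/v1/projects/'
                    f'{project}/zones/{zone}/instances/wordpress-1-vm',
    }]},
).execute()

You would then add the new group as a backend of the existing backend service, with a health check on the HTTP listening port.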
If I'm running multiple web server instances, can a client application (like a user using a web browser) end up using different instances, or would it be routed to the same instance every time? Let's say the user duplicates a tab or opens a new tab: are those tabs still using the same instance?
This would be in Azure with IIS/ASP.NET.
When you are using load balancing in any environment, you almost always have the option to set session affinity. It basically means that a client who is directed to server 1 on their first request will always be routed to the same server. Azure provides this flexibility too. Here is the documentation with some details on how to do that configuration.
There are a couple of ways you could configure session affinity. One prominent way is by source IP. So using a different tab or a different browser instance will not make any difference: requests from a client machine will always carry the same IP address and hence will go to the same server.
Here is the PowerShell sample to set source IP based affinity:
Set-AzureLoadBalancedEndpoint -ServiceName MyService -LBSetName LBSet1 -Protocol TCP -LocalPort 80 -ProbeProtocolTCP -ProbePort 8080 -LoadBalancerDistribution sourceIP
Here is some detail on a more specific scenario that happens when users access a load-balanced site from behind a company's firewall.
While exploring Google Cloud Platform's load balancer options, the Advanced menu shows multiple options which are a bit confusing.
There are multiple backends:
backend service -> HTTP(S) LB
backend bucket -> HTTP(S) LB
regional backend service -> internal LB
target pools -> TCP LB
Just going through the documentation for target pools and backend services, it looks to me like they have similar parameters to configure, and in the basic menu both are listed as backends.
I understand that target pools are used by TCP forwarding rules, whereas backend services are used by a URL map (HTTP(S) load balancer).
But are there any other differences between these, or is it just the names?
A backend bucket allows you to use a Google Cloud Storage bucket with HTTP(S) load balancing. It can handle requests for static content. This option is useful for a webpage with static content, and it avoids the cost of the resources an instance would need.
The Backend Service is a centralized service that manages backends, which in turn manage an indeterminate number of instances that handle user requests.
The Target Pools resource defines a group of instances that should receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, Google Compute Engine picks an instance from these target pools based on a hash of the source IP and port and the destination IP and port.
This is why they are both listed as backends: in the end they do the same job, but each is specific to a different kind of load balancer. The backend service works with the HTTP(S) load balancer, and target pools are used by forwarding rules.
"A Network load balancer (unlike HTTP(s) load balancer) is a pass-through load balancer. It does not proxy connections from clients." On same note, TargetPools use forwarding rules, backend services use target proxies. Request is sent to instance in target pool "based on a hash of the source IP and port, destination IP and port, and protocol". Backend service has different mechanism to choose an instance group for e.g URL maps.
I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I have the application set up correctly, and I know this because I can visit the IP given by the network load balancer.
However, when I try and set up a HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200, and pointing the health check towards that route.
Here is exactly what I did, step by step:
Create a new instance template/group for the app.
Upload the image to Google Cloud.
Create a replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try to create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the Meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
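As a minimal sketch, a NodePort service for the app could be created like this with the Kubernetes Python client; the service name, app label, and Meteor container port are placeholder assumptions:

from kubernetes import client, config

config.load_kube_config()  # use your local kubeconfig
api = client.CoreV1Api()

# Expose the Meteor pods on an automatically chosen port on every node,
# so the HTTP load balancer can reach them on the nodes' external IPs.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name='meteor-app'),   # placeholder name
    spec=client.V1ServiceSpec(
        type='NodePort',
        selector={'app': 'meteor'},                    # placeholder label
        ports=[client.V1ServicePort(port=80, target_port=3000)],
    ),
)
created = api.create_namespaced_service(namespace='default', body=service)

# This is the node port to point the HTTP load balancer (and firewall) at.
print(created.spec.ports[0].node_port)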
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
When firing up multiple new EC2 instances, how do I make these new machines automatically accessible publicly on my domain ****.example.com?
So if I fire up two instances that would normally have a public DNS of
ec2-12-34-56.compute-1.amazonaws.com and ec2-12-34-57.compute-1.amazonaws.com
instead be ec2-12-34-56.example.com and ec2-12-34-57.example.com
Is there a way to use a VPC and Route53 or do I need to run my own DNS server?
Let's say you want to do this in the easiest way. You don't need a VPC.
First we need to set up an Elastic IP address. This is going to be the connection point between the Route53 DNS service (which you should absolutely use) and the instance. Go into the EC2 menu of the management console, click Elastic IPs, and click Create. Create it in EC2-Classic (the option will pop up). Remember this IP.
Now go into Route53. Create a hosted zone for your domain. Go into this zone and create a record set for staging.example.com (or whatever your prefix is). Leave it as an A record (default) and put the elastic IP in the textbox.
Note that you now need to go into your registrar login (e.g. GoDaddy) and replace the nameservers with the ones shown on the NS record. They will look like:
ns-1776.awsdns-30.co.uk.
ns-123.awsdns-15.com.
ns-814.awsdns-37.net.
ns-1500.awsdns-59.org.
and you will be able to see them once you create a hosted zone.
Once you've done this, it will direct all requests to that IP address. But the IP isn't associated with anything yet. Once you have created an instance, go back into the Elastic IP menu and associate the IP with the instance. Now all requests to that domain will go to that instance. To change instances, just re-associate the IP. Make sure your security groups allow all traffic (or at least HTTP) or it will seem like it doesn't work.
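Scripted with boto3, the whole flow is roughly this (the hosted zone ID, record name, and instance ID are placeholders; note that newer accounts allocate VPC addresses rather than EC2-Classic ones):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
r53 = boto3.client('route53')

# Allocate an Elastic IP ('vpc' on accounts without EC2-Classic).
addr = ec2.allocate_address(Domain='vpc')

# Point staging.example.com at the Elastic IP with an A record.
r53.change_resource_record_sets(
    HostedZoneId='Z0000000000000',  # placeholder hosted zone ID
    ChangeBatch={'Changes': [{
        'Action': 'UPSERT',
        'ResourceRecordSet': {
            'Name': 'staging.example.com',
            'Type': 'A',
            'TTL': 300,
            'ResourceRecords': [{'Value': addr['PublicIp']}],
        },
    }]},
)

# Associate the Elastic IP with the instance (re-run to switch instances).
ec2.associate_address(
    InstanceId='i-0123456789abcdef0',   # placeholder instance ID
    AllocationId=addr['AllocationId'],
)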
This is not good cloud architecture, but it will get the job done. Better cloud architecture would be making the route point to a load balancer, and attaching the instance to the load balancer. This should all be done in a VPC. It may not be worth your time if you are doing development.