I have already set up my domain on Google Cloud Platform, including a load balancer with SSL. Everything works on this end.
How do I connect a Marketplace WordPress click-to-deploy deployment to this existing load balancer?
If the Marketplace solution is a single VM, go to the Instance groups menu in Compute Engine, select unmanaged instance groups, create a group, and add the VM to it.
Then go back to the load balancer and add a backend. It will ask you what to use as a backend: a network endpoint group (no), a bucket (no), or an instance group.
Go for the instance group.
Mind that the LB will only route traffic to the VM if an attached health check reports it as healthy (usually you want an HTTP check on the listening port).
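The same steps can be done with the gcloud CLI. This is a minimal sketch; the group, instance, and backend names and the zone are assumptions to substitute with your own:

```shell
# Create an unmanaged instance group and add the WordPress VM to it.
gcloud compute instance-groups unmanaged create wp-group --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances wp-group \
    --zone=us-central1-a --instances=wordpress-1-vm

# The load balancer needs a named port on the group.
gcloud compute instance-groups unmanaged set-named-ports wp-group \
    --zone=us-central1-a --named-ports=http:80

# HTTP health check on the listening port, then a backend service that uses it.
gcloud compute health-checks create http wp-health-check --port=80
gcloud compute backend-services create wp-backend \
    --protocol=HTTP --health-checks=wp-health-check --global
gcloud compute backend-services add-backend wp-backend \
    --instance-group=wp-group --instance-group-zone=us-central1-a --global
```

After this, attach `wp-backend` to your existing load balancer's URL map (via the console or `gcloud compute url-maps`).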
I am planning to host 2 webapps using Firebase Hosting: example.com and dev.example.com. For corresponding APIs, I have 2 projects on GCP (using managed instance groups and a load balancer) with custom domains: api.example.com and dev-api.example.com.
Is it possible to have a setup where subdomains of the custom domain example.com are split across Firebase and the GCP load balancer? I thought this was a popular setup but can't find any documentation or how-to for it. I am using Google Domains as the registrar for example.com and Google-managed SSL certificates as well. All the projects belong to one account.
Assuming that you are using a Classic HTTPS Load Balancer with your GCP project, you can link your Firebase Hosting sites to your LB as an additional backend through an internet network endpoint group (NEG), so all of them can be reached through the same load balancer IP.
To do this:
Edit the current load balancer and go to Backend configuration.
Create a backend service and, under Backend type, select Internet network endpoint group.
Under Backends > New Backend, choose Create Internet network endpoint group. This will take you to Network endpoint groups under Compute Engine.
Under New network endpoint > Add through, you may select IP and port or Fully qualified domain name and port. Supply the FQDN or IP of your Firebase Hosting site and the port it listens on (usually 443), then click Create.
Finish creating the backend service using the internet network endpoint group that you created as the backend.
Under Host and path rules, click +Add Host and Path Rule and fill in the Host field with the domain of your Firebase Hosting site. For Path, just put /*. Then select the backend service with the internet network endpoint group as the Backend.
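The NEG and backend service steps above can be sketched with gcloud. The NEG, backend, and `example-app.web.app` names here are placeholders for your own:

```shell
# Internet NEG whose endpoint is the Firebase Hosting site's FQDN on 443.
gcloud compute network-endpoint-groups create firebase-neg \
    --network-endpoint-type=internet-fqdn-port --global
gcloud compute network-endpoint-groups update firebase-neg --global \
    --add-endpoint="fqdn=example-app.web.app,port=443"

# Backend service that fronts the internet NEG.
gcloud compute backend-services create firebase-backend \
    --load-balancing-scheme=EXTERNAL --protocol=HTTPS --global
gcloud compute backend-services add-backend firebase-backend --global \
    --network-endpoint-group=firebase-neg --global-network-endpoint-group

# Firebase Hosting routes by Host header, so you may also need to set one.
gcloud compute backend-services update firebase-backend --global \
    --custom-request-header='Host: example-app.web.app'
```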
I am also assuming that your Google-managed certificate is deployed on the load balancer. If that is the case, you can provision another Google-managed SSL certificate that includes all four domains:
example.com
dev.example.com
api.example.com
dev-api.example.com
Once done, you may create A records with the load balancer's IP address for each domain. This ensures that requests are forwarded to the correct backend, as opposed to just creating CNAMEs, which would always forward requests to the root domain (example.com) rather than to their intended backends. The LB will then forward requests based on the host being accessed.
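Provisioning the multi-domain managed certificate and attaching it can be done in two gcloud commands. The certificate and target proxy names are assumptions; substitute your load balancer's actual proxy name:

```shell
# One Google-managed certificate covering all four hostnames.
gcloud compute ssl-certificates create lb-cert --global \
    --domains=example.com,dev.example.com,api.example.com,dev-api.example.com

# Attach it to the load balancer's HTTPS target proxy.
gcloud compute target-https-proxies update my-lb-target-proxy \
    --ssl-certificates=lb-cert
```

The certificate only becomes ACTIVE after the A records for all four domains point at the load balancer's IP.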
I have a bucket in Google Cloud where I have uploaded an Angular template, i.e. http://digitaldevnet.appspot.com.
I also have a VM instance where I host a WordPress website, i.e.
http://35.200.194.201
I have found different tutorials on connecting a domain to Google Cloud hosting,
but I want to connect the appspot link, i.e. http://digitaldevnet.appspot.com, to the WordPress site,
so that once connected it keeps working even when the VM is sometimes offline.
Any recommendations or tutorials, please let me know.
You would need to set up a load balancer in order to direct your traffic between the GCE instance and the bucket; you can find the instructions in the GCP load balancing documentation.
Nevertheless, it is a bit involved, as you would need to set up the configuration and some health checks so that traffic is not routed to the GCE instance when it's down.
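As a rough sketch, a GCP HTTP load balancer can mix a backend bucket (for the static Angular files) with a backend service (for the WordPress VM) in one URL map. All names below are assumptions, and `wordpress-backend` is a backend service you would have created beforehand from the VM's instance group:

```shell
# Backend bucket serving the static Angular content.
gcloud compute backend-buckets create angular-bucket-backend \
    --gcs-bucket-name=my-angular-bucket

# URL map: static content by default, WordPress for its own hostname.
gcloud compute url-maps create web-map \
    --default-backend-bucket=angular-bucket-backend
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=wp-paths \
    --default-service=wordpress-backend \
    --new-hosts=blog.example.com
```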
Hope you find this useful.
I am trying to set up WordPress as part of my learning process.
I have installed and configured WordPress on two AWS instances with a single RDS database. It seems to work fine.
Now I am adding a load balancer. How do I make the WordPress that I have installed on two different instances work together under a load balancer? I am not looking for a production-level example and would prefer to do it in a simple way without autoscaling or Elastic Beanstalk. TIA
Elastic Load Balancing supports three types of load balancers: Application Load Balancer, Network Load Balancer, and Classic Load Balancer. In this case, you should use an Application Load Balancer.
Step 1: Select a Load Balancer Type
To create an Application Load Balancer
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation bar, choose a region for your load balancer. Be
sure to select the same region that you used for your EC2 instances.
On the navigation pane, under LOAD BALANCING, choose Load Balancers.
Choose Create Load Balancer.
For Application Load Balancer, choose Create.
Step 2: Configure Your Load Balancer and Listener
On the Configure Load Balancer page, complete the following procedure.
To configure your load balancer and listener
For Name, type a name for your load balancer.
The name of your Application Load Balancer must be unique within your
set of Application Load Balancers and Network Load Balancers for the
region, can have a maximum of 32 characters, can contain only
alphanumeric characters and hyphens, must not begin or end with a
hyphen, and must not begin with "internal-".
For Scheme and IP address type, keep the default values.
For Listeners, keep the default, which is a listener that accepts
HTTP traffic on port 80.
For Availability Zones, select the VPC that you used for your EC2
instances. For each Availability Zone that you used to launch your
EC2 instances, select the Availability Zone and then select the
public subnet for that Availability Zone.
Choose Next: Configure Security Settings.
Step 3: Configure a Security Group for Your Load Balancer
The security group for your load balancer must allow it to communicate with registered targets on both the listener port and the health check port. The console can create security groups for your load balancer on your behalf, with rules that specify the correct protocols and ports.
Note
If you prefer, you can create and select your own security group instead. For more information, see Recommended Rules.
On the Configure Security Groups page, complete the following procedure to have Elastic Load Balancing create a security group for your load balancer on your behalf.
To configure a security group for your load balancer
Choose Create a new security group.
Type a name and description for the security group, or keep the
default name and description. This new security group contains a
rule that allows traffic to the load balancer listener port that you
selected on the Configure Load Balancer page.
Choose Next: Configure Routing.
Step 4: Configure Your Target Group
Create a target group, which is used in request routing. The default rule for your listener routes requests to the registered targets in this target group. The load balancer checks the health of targets in this target group using the health check settings defined for the target group. On the Configure Routing page, complete the following procedure.
To configure your target group
For Target group, keep the default, New target group.
For Name, type a name for the new target group.
Keep Protocol as HTTP, Port as 80, and Target type as
instance.
For Health checks, keep the default protocol and ping path.
Choose Next: Register Targets.
Step 5: Register Targets with Your Target Group
On the Register Targets page, complete the following procedure.
To register targets with the target group
For Instances, select the instances to register; in your case, select your two instances.
Keep the default port, 80, and choose Add to registered.
When you have finished selecting instances, choose Next: Review.
Step 6: Create and Test Your Load Balancer
Before creating the load balancer, review the settings that you selected. After creating the load balancer, verify that it's sending traffic to your EC2 instances.
To create and test your load balancer
On the Review page, choose Create.
After you are notified that your load balancer was created
successfully, choose Close.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the newly created target group.
On the Targets tab, verify that your instances are ready. If the
status of an instance is initial, it's probably because the
instance is still in the process of being registered, or it has not
passed the minimum number of health checks to be considered
healthy. After the status of at least one instance is healthy, you
can test your load balancer.
On the navigation pane, under LOAD BALANCING, choose Load
Balancers.
Select the newly created load balancer.
On the Description tab, copy the DNS name of the load balancer (for
example, my-load-balancer-1234567890.us-west-2.elb.amazonaws.com).
Paste the DNS name into the address field of an Internet-connected
web browser. If everything is working, the browser displays the
default page of your server.
I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I have the application set up correctly; I know this because I can visit the IP given by the network load balancer.
However, when I try to set up an HTTP load balancer, the health checks always report the instances as unhealthy (even though I know they are not). I tried adding a route to my application that returns a status 200 and pointing the health check at that route.
Here is exactly what I did, step by step:
Create new instance template/group for the app.
Upload image to google cloud.
Create replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
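A minimal sketch of that approach, assuming the app's pods are labeled `app: meteor-app` and listen on Meteor's default port 3000:

```shell
# NodePort Service so the app is reachable on every node's external IP.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: meteor-app
spec:
  type: NodePort
  selector:
    app: meteor-app
  ports:
  - port: 80
    targetPort: 3000
EOF

# Look up the node port Kubernetes assigned.
kubectl get service meteor-app -o jsonpath='{.spec.ports[0].nodePort}'

# Open the node-port range to the HTTP load balancer / health checkers.
gcloud compute firewall-rules create allow-meteor-nodeport \
    --allow=tcp:30000-32767 --source-ranges=130.211.0.0/22,35.191.0.0/16
```

You would then use the reported node port as the port of the load balancer's backend and health check.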
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
When firing up multiple new EC2 instances, how do I make these new machines automatically accessible publicly on my domain ****.example.com?
So if I fire up two instances that would normally have a public DNS of
ec2-12-34-56.compute-1.amazonaws.com and ec2-12-34-57.compute-1.amazonaws.com
instead be ec2-12-34-56.example.com and ec2-12-34-57.example.com
Is there a way to use a VPC and Route53 or do I need to run my own DNS server?
Let's say you want to do this in the easiest way. You don't need a VPC.
First we need to set up an Elastic IP address. This is going to be the connection point between the Route53 DNS service (which you should absolutely use) and the instance. Go into the EC2 menu of the management console, click Elastic IPs, and allocate a new address. Create it in EC2-Classic (the option will pop up). Remember this IP.
Now go into Route53. Create a hosted zone for your domain. Go into this zone and create a record set for staging.example.com (or whatever your prefix is). Leave it as an A record (default) and put the elastic IP in the textbox.
Note you now need to go into your registrar login (e.g. goDaddy) and replace the nameservers with the ones shown on the NS record. They will look like:
ns-1776.awsdns-30.co.uk.
ns-123.awsdns-15.com.
ns-814.awsdns-37.net.
ns-1500.awsdns-59.org.
and you will be able to see them once you create a hosted zone.
Once you've done this, all requests for that name will resolve to that IP address. But the address isn't associated with anything yet. Once you have created an instance, go back into the Elastic IP menu and associate the address with the instance. Now all requests to that domain will go to that instance; to change the target, just re-associate the address. Make sure your security groups allow the traffic (or at least HTTP) or it will seem like it doesn't work.
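The steps above can be sketched with the AWS CLI. The instance ID, hosted zone ID, and IP below are placeholders:

```shell
# Allocate an Elastic IP, then attach it to the running instance.
aws ec2 allocate-address
aws ec2 associate-address --instance-id i-1234abcd --public-ip 203.0.113.10

# Point staging.example.com at the Elastic IP with an A record.
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "staging.example.com",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{"Value": "203.0.113.10"}]
        }
      }]
    }'
```

Re-running `associate-address` with a different instance ID is the CLI equivalent of the re-associate step.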
This is not good cloud architecture, but it will get the job done. Better cloud architecture would be making the route point to a load balancer, and attaching the instance to the load balancer. This should all be done in a VPC. It may not be worth your time if you are doing development.