google compute engine load balancer - google-cloud-endpoints

I created an HTTP Load Balancer in Google Compute Engine and gave it a name, but I am not able to access the application by that name. It works fine with the corresponding mapped IP. So accessing the application via IP is working, but I am not sure what the right load balancer name would be (for example, in Azure you get .trafficmanager.net, and similarly you get a load balancer name in AWS).
Does Google Cloud support IPv6? What would be the IP or DNS name to access it, because I just see an IPv4 address.
Thanks,
TNB

I don't know about internal DNS for load balancers, but I do know that Google compute engine does not support IPv6.

Related

Reusing custom domain between GCP and Firebase

I am planning to host 2 webapps using Firebase Hosting: example.com and dev.example.com. For corresponding APIs, I have 2 projects on GCP (using managed instance groups and a load balancer) with custom domains: api.example.com and dev-api.example.com.
Is it possible to have a setup where subdomains of the custom domain example.com can be split/used across Firebase and GCP load balancer? I thought this is a popular setup but can't find any documentation/howto around this. I am using Google Domains as the domain provider for example.com and using Google Managed SSL certificates as well. All the projects belong to one account.
Assuming that you are using a Classic HTTPS Load Balancer with your GCP project, you may get your Firebase Hosting linked to your LB as an additional backend through an Internet Network Endpoint Group so all of them can be reached through the same Load Balancer IP.
To do this,
Edit the current Load Balancer and go to Backend configuration
Create a Backend Service, under Backend type, select Internet Network Endpoint Group
Under Backends > New Backend, Create Internet Network Endpoint Group. This will take you to Network endpoint groups under Compute Engine
Under New network endpoint > Add through, you may select IP and port or Fully qualified domain name and port. Just supply the correct FQDN or IP of your Firebase hosting and the port the Firebase hosting is listening on, then Create.
Finish creating the backend service using the Internet network endpoint group that you created as Backend Type
Under Host and Path rules, click +Add Host and Path Rule, and fill out the Host field with the domain of your Firebase hosting. For Path, just put /*. Then select the Internet network endpoint group that you created as Backend.
I am also under the assumption that your Google Managed Certificate is also deployed within the Load Balancer. If this is the case, then you may provision another Google Managed SSL certificate and include all 4 domains:
example.com
dev.example.com
api.example.com
dev-api.example.com
Once done, you may create A records with the Load Balancer's IP address for each domain. This is to ensure that the requests will be forwarded to the correct backend, as opposed to just creating CNAMEs, which will always forward the request to the root domain (example.com) and not to their intended backends. The LB should be able to forward requests based on the domain being accessed.

Pass mixed content with reverse proxy

I have a website and users create their own apps. But I can't embed these apps on my website via iframe, because my website has an SSL certificate and I get this error:
Mixed Content: The page at 'https://domain' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR'. This request has been blocked; the content must be served over HTTPS.
My workflow is like that:
Click create button
Deploy EC2 instance from AWS
Get IP EC2 address from AWS
Embed this app via iframe
I want to embed these IPs in my website; the IP addresses are dynamic. Anyone can create a machine at any time.
What is the best practice solution for this issue?
The best practice (and also the only one I can think of) solution IMHO would be to use proper HTTPS for the iframe content as well. You'd need a way to automatically create DNS records though (you can do so with AWS Route 53). Regarding SSL, you could use a wildcard certificate (e.g. Let's Encrypt). Nginx could be configured to proxy_pass by DNS name as opposed to IP. Then your workflow would become this:
Click create button
Deploy EC2 instance from AWS
Get IP EC2 address from AWS
Create DNS record
Embed this app via iframe
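One way to carry out the "Create DNS record" step and the nginx side of the answer above. This is an added example, not code from the original answer: it registers each new instance as `<label>.backend.example.com` in Route 53 and fronts every instance with one nginx server block for `*.apps.example.com` that terminates TLS with the wildcard certificate and proxies to the instance by name. The domains apps.example.com and backend.example.com, the hosted zone ID Z0000000000000000000, the resolver address, the Let's Encrypt paths and the label app-7f3a with IP 10.0.12.34 are all assumptions or constructed values.

```shell
#!/bin/sh
# Added example (not from the original answer): register a freshly created EC2 instance
# under its own hostname and emit the nginx vhost that fronts every such instance over HTTPS.
set -eu

APP_DOMAIN="apps.example.com"          # assumed: *.apps.example.com resolves to this nginx proxy
BACKEND_DOMAIN="backend.example.com"   # assumed: one A record per instance, e.g. app-7f3a.backend.example.com
HOSTED_ZONE_ID="Z0000000000000000000"  # placeholder Route 53 hosted zone ID for backend.example.com
RESOLVER="169.254.169.253"             # assumed: the VPC's Route 53 resolver address

# Both values come from an automated workflow and end up in API calls and config, so accept
# only a plain lowercase DNS label and a dotted-quad IPv4 address.
valid_label() { printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'; }
valid_ipv4()  { printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'; }

# Route 53 change batch: <label>.backend.example.com -> instance IP (UPSERT so re-runs are safe).
route53_change_batch() {
  label=$1; ip=$2
  valid_label "$label" || { echo "rejected label: $label" >&2; return 1; }
  valid_ipv4 "$ip"     || { echo "rejected ip: $ip" >&2; return 1; }
  cat <<EOF
{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
  "Name":"${label}.${BACKEND_DOMAIN}.","Type":"A","TTL":60,
  "ResourceRecords":[{"Value":"${ip}"}]}}]}
EOF
}

# One nginx server block serves every <label>.apps.example.com: TLS is terminated with the
# wildcard certificate and the request is proxied to <label>.backend.example.com by name.
# proxy_pass with a variable needs a resolver, and the label is captured by the same strict
# regex, so the proxy cannot be pointed at arbitrary hosts through the Host header.
nginx_server_block() {
  domain_re=$(printf '%s' "$APP_DOMAIN" | sed 's/\./\\./g')
  cat <<EOF
server {
    listen 443 ssl;
    server_name "~^(?<app>[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?)\\.${domain_re}\$";
    ssl_certificate     /etc/letsencrypt/live/${APP_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${APP_DOMAIN}/privkey.pem;
    resolver ${RESOLVER} valid=30s;
    location / {
        proxy_pass http://\$app.${BACKEND_DOMAIN};
        proxy_set_header Host \$host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
EOF
}

# The workflow's "Create DNS record" step; the iframe then loads https://<label>.apps.example.com/.
# Submitting to Route 53 needs AWS CLI credentials, so it only happens when APPLY=1 is set.
register_instance() {
  label=$1; ip=$2
  batch=$(route53_change_batch "$label" "$ip") || return 1
  if [ "${APPLY:-0}" = "1" ]; then
    aws route53 change-resource-record-sets --hosted-zone-id "$HOSTED_ZONE_ID" --change-batch "$batch"
  else
    printf '%s\n' "$batch"
  fi
}

register_instance app-7f3a 10.0.12.34   # constructed label and private IP
```

The iframe src becomes `https://app-7f3a.apps.example.com/`, so the page never references an `http://` address and the browser's mixed-content block does not trigger. The label check matters because anything that fails it would otherwise be written verbatim into a DNS API call and matched by the nginx regex.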

Why to use internal load balancer if we already have an external load balancer?

In my project, we already have an external load balancer. However, there are several teams within the organisation which use our internal load balancer. I want to know why we need an internal load balancer if we already have a public-facing external load balancer. Please elaborate.
I answer your question from the comment here because it's too long for a comment.
Some things are internal, others are external. For example:
You have an external TCP/UDP load balancer
Your external load balancer accepts connections on port 443 and redirects them to your backend with NGINX installed on it
Your backend needs a MongoDB database. You install your database on a Compute Engine VM and you choose to abstract the VM IP and to use your load balancer
You define a new backend on your external load balancer on the port 27017
RESULT: Because the load balancer is external, your MongoDB is publicly exposed on the port 27017.
If you use an internal load balancer, that's not the case, and you increase security. Only the web-facing port is open (443); the rest is not accessible from the internet, only from within your project.
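The exposure described in the answer above (a database port published through an external load balancer) is easy to audit from the forwarding-rule list. The script below is an added example, not part of the original answer: it runs over a constructed sample of `gcloud compute forwarding-rules list` CSV output (the rule names and addresses are invented) and flags every EXTERNAL or EXTERNAL_MANAGED rule whose port is not one of the expected web ports. In real use, pipe the live gcloud output into `flag_exposed_ports` instead.

```shell
#!/bin/sh
# Added example: find forwarding rules that publish non-web ports to the internet.
set -eu

# Constructed sample of:
#   gcloud compute forwarding-rules list --format='csv[no-heading](name,loadBalancingScheme,portRange,IPAddress)'
# Names, ports and addresses are invented for this example.
SAMPLE_RULES='web-https,EXTERNAL,443-443,203.0.113.10
web-http,EXTERNAL,80-80,203.0.113.10
mongo-public,EXTERNAL,27017-27017,203.0.113.11
mongo-internal,INTERNAL,27017-27017,10.128.0.5
mongo-internal-managed,INTERNAL_MANAGED,27017-27017,10.128.0.6'

# Ports that are expected on an internet-facing rule; everything else is reported.
ALLOWED_PORTS="80 443"

# Reads CSV rows on stdin and prints "name portRange ip" for every rule whose scheme
# starts with EXTERNAL and whose port range is not exactly one allowed port.
# A multi-port range such as 80-443 is always reported, since it covers more than a web port.
flag_exposed_ports() {
  awk -F, -v allowed="$ALLOWED_PORTS" '
    BEGIN { n = split(allowed, a, " "); for (i = 1; i <= n; i++) ok[a[i]] = 1 }
    $2 ~ /^EXTERNAL/ {
      k = split($3, p, "-"); lo = p[1]; hi = (k > 1) ? p[2] : p[1]
      if (lo == hi && (lo in ok)) next
      print $1, $3, $4
    }
  '
}

# Convenience wrapper over the constructed sample.
audit_sample() { printf '%s\n' "$SAMPLE_RULES" | flag_exposed_ports; }

audit_sample
```

On the sample, only `mongo-public 27017-27017 203.0.113.11` is reported: the two INTERNAL rules serve the same port but are reachable only from inside the VPC, which is the distinction the answer draws.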
You should check the documentation and then decide whether your use case requires an internal load balancer or not. Below you can find links to the Google Cloud documentation and an example.
At first, have a look at the documentation Choosing a load balancer:
To decide which load balancer best suits your implementation of Google Cloud, consider the following aspects of Cloud Load Balancing:
Global versus regional load balancing
External versus internal load balancing
Traffic type
After that, have a look at the documentation Cloud Load Balancing overview section Types of Cloud Load Balancing:
External load balancers distribute traffic coming from the Internet to your Google Cloud Virtual Private Cloud (VPC) network.
Global load balancing requires that you use the Premium Tier of Network Service Tiers. For regional load balancing, you can use Standard Tier.
Internal load balancers distribute traffic to instances inside of Google Cloud.
and
The following diagram illustrates a common use case: how to use external and internal load balancing together. In the illustration, traffic from users in San Francisco, Iowa, and Singapore is directed to an external load balancer, which distributes that traffic to different regions in a Google Cloud network. An internal load balancer then distributes traffic between the us-central-1a and us-central-1b zones.
You can find more information in the documentation.
UPDATE: Have a look at the possible use cases for the internal HTTP(S) load balancer and for the internal TCP/UDP load balancer and check whether they're suitable for you and whether using them could improve your service.
It's not required to use an internal load balancer if you don't need it.

Access Multiple Web Sites Hosted on single server on local network from workstations

I am trying to set up a secondary web site hosted on our local domain controller running IIS 8.
I already have one site working successfully throughout our network, the default site.
I have successfully got the second one to work on the localhost (the domain controller Server 2012-R2), but I can't seem to access it from any of the other workstations on our network.
I added the new site.
Set the binding to IP address: 192.168.1.1, Port: 80, Host Name: dyo.mysite.com
I have modified C:\Windows\system32\drivers\etc\hosts to show 192.168.1.1 dyo.mysite.com, and I have added an alias to the forward lookup zone in the DNS Manager. (Name: byo.mysite.com, FQDN: byo.mysite.com.mydc.com, Target Host: 192.168.1.1)
I can't seem to access the site from any of the network workstations. I have tried many combinations of addresses: http://byo.mysite.com, 192.168.1.1/byo.mysite.com, \mydc\byo.mysite.com, etc.
I would imagine that I am probably missing something simple. I just don't know what it is.
Any insight would be greatly appreciated.
To get your server accessed from other workstations, you have to ensure:
Your IIS site can be accessed via IP address directly.
The client workstation is using your DNS.
Your client workstation is not bypassing your DNS server via a .pac proxy.
So could you access the website via IP address by disabling the default website and setting the site to an unassigned IP or 192.168.1.1 with a null domain name?
If you want to access the website via byo.mysite.com, then you shouldn't set the FQDN to byo.mysite.com.mydc.com, because the web browser will never consider byo.mysite.com an alias but a different server. That's why when you set the FQDN to byo.myDC.com, it could work when accessing http://dyo, and you could also access the website via byo.mysite.com.mydc.com, but it fails with byo.mysite.com.
How to set DNS correctly
To get it to work, please create a new primary Forward Lookup Zone named mysite.com. Then create a new Host (A) record that maps your machine name, like dc.mysite.com, to 192.168.1.1. Then create an Alias (CNAME) called www that maps to this A record. Then the FQDN will be www.mysite.com.
Finally, bind your IIS site, and accessing the website should work.
PS: Please make sure your other workstation is not using a proxy.

Google cloud HTTP load balancer always returns unhealthy instance for meteor app

I am trying to set up an HTTP load balancer for my Meteor app on Google Cloud. I have the application set up correctly, and I know this because I can visit the IP given in the Network Load Balancer.
However, when I try to set up an HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200, and pointing the health check towards that route.
Here is exactly what I did, step by step:
Create new instance template/group for the app.
Upload image to google cloud.
Create replication controller and service for the app.
The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs.
Then I try to create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the Meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing.
I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address.
What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go!
If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes.
As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
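The NodePort approach from the answer above, as an added example that is not part of the original answer or of the GKE documentation it refers to. It renders the Service manifest with a fixed node port and creates the firewall rule for that port. Rather than opening the port to everyone, the rule admits only Google's load-balancer and health-check source ranges (130.211.0.0/22 and 35.191.0.0/16), which is what the HTTP load balancer's health checks need and keeps the node port from becoming a second public entry point. The service name meteor-app, node port 30080, container port 3000, network default and node tag gke-meteor-node are assumptions. Without APPLY=1 the script only prints what it would do.

```shell
#!/bin/sh
# Added example: expose a Service as NodePort for the HTTP load balancer and open the node
# port only to Google's proxy and health-check ranges. Names and ports are assumptions.
set -eu

SERVICE="meteor-app"
NODE_PORT=30080                 # assumed fixed node port (30000-32767) the LB backend will reference
CONTAINER_PORT=3000             # assumed: Meteor listens on 3000 inside the pod
NETWORK="default"
NODE_TAG="gke-meteor-node"      # assumed network tag carried by the cluster's nodes
LB_SOURCE_RANGES="130.211.0.0/22,35.191.0.0/16"   # Google load balancer / health checker ranges

# Print each command; execute it only when APPLY=1 is set.
run() { printf '+ %s\n' "$*"; if [ "${APPLY:-0}" = "1" ]; then "$@"; fi; }

# type: NodePort makes the Service reachable on every node's IP at NODE_PORT,
# which is the port the HTTP load balancer's backend service is pointed at.
service_manifest() {
  cat <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${SERVICE}
spec:
  type: NodePort
  selector:
    app: ${SERVICE}
  ports:
  - port: 80
    targetPort: ${CONTAINER_PORT}
    nodePort: ${NODE_PORT}
EOF
}

# Firewall rule for the node port, scoped to the node tag and to Google's source ranges.
# Health checks come from these ranges, so this is also what turns the backends healthy.
open_node_port() {
  run gcloud compute firewall-rules create "allow-lb-to-${SERVICE}" \
      --network="$NETWORK" --allow="tcp:${NODE_PORT}" \
      --source-ranges="$LB_SOURCE_RANGES" --target-tags="$NODE_TAG"
}

deploy() {
  if [ "${APPLY:-0}" = "1" ]; then
    service_manifest | kubectl apply -f -
  else
    service_manifest
  fi
  open_node_port
}

deploy
```

The health check attached to the backend service should target the same node port and the 200-returning route mentioned in the question; with the firewall scoped as above, the checkers can reach it while the node port stays closed to the rest of the internet.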
