Use NEGs from another project as a Google Cloud Load Balancer backend - networking

How can I use Network Endpoint Groups (NEGs) from another Google Cloud project as a Load Balancer backend? Is there any solution?

You have two solutions to achieve this:
Use an Internet Network Endpoint Group to reach the load balancer of the second project.
Create VPC peering between the two projects and create zonal Network Endpoint Groups to reach the VMs in the second project.
It all depends on the level of separation between your projects, the wish to manage them independently, and so on. A sketch of both options follows.
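A minimal gcloud sketch of both options, assuming illustrative names (cross-project-neg, my-backend-service, net-a/net-b, project-b) and a load balancer in the second project reachable at lb.example.com:

    # Option 1: a global internet NEG pointing at the second project's LB
    gcloud compute network-endpoint-groups create cross-project-neg \
        --global --network-endpoint-type=internet-fqdn-port
    gcloud compute network-endpoint-groups update cross-project-neg \
        --global --add-endpoint="fqdn=lb.example.com,port=443"
    gcloud compute backend-services add-backend my-backend-service \
        --global --network-endpoint-group=cross-project-neg \
        --global-network-endpoint-group

    # Option 2: peer the two VPC networks so the remote VMs become
    # reachable by internal IP, then create a zonal NEG in your network
    gcloud compute networks peerings create peer-a-to-b \
        --network=net-a --peer-project=project-b --peer-network=net-b
    gcloud compute network-endpoint-groups create remote-vms-neg \
        --zone=us-central1-a --network=net-a --subnet=subnet-a \
        --default-port=80
    # Add endpoints to the zonal NEG per the zonal NEG documentation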

Related

Connect GCP Wordpress CTD to existing load balancer?

I have already setup my domain using the Google Cloud Platform, including a Load Balancer with SSL protection. Everything works on this end.
How do I connect a Marketplace WordPress click-to-deploy instance to this existing load balancer?
If the marketplace solution is a single VM, go to the instance groups menu in GCE, select unmanaged groups, create a group, and add the VM to the group.
Then go back to the load balancer and add a backend. It will ask you what to use as a backend: endpoint (no), bucket (no), or instance group.
Go for the instance group.
Mind that the LB will only work if an attached health check detects the VM as healthy (usually you want to check for HTTP on the listening port). The gcloud equivalent is sketched below.
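Roughly the same flow with gcloud, assuming hypothetical names (wp-vm for the Marketplace VM, wp-group, wp-hc, and an existing backend service my-backend-service):

    # Create an unmanaged instance group and put the WordPress VM in it
    gcloud compute instance-groups unmanaged create wp-group \
        --zone=us-central1-a
    gcloud compute instance-groups unmanaged add-instances wp-group \
        --zone=us-central1-a --instances=wp-vm
    gcloud compute instance-groups unmanaged set-named-ports wp-group \
        --zone=us-central1-a --named-ports=http:80

    # Health check on the listening port, then attach the group as a backend
    gcloud compute health-checks create http wp-hc --port=80
    gcloud compute backend-services add-backend my-backend-service \
        --global --instance-group=wp-group \
        --instance-group-zone=us-central1-a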

Why use an internal load balancer if we already have an external load balancer?

In my project, we already have an external load balancer. However, there are several teams within the organisation which use our internal load balancer. I want to know why we need an internal load balancer if we already have a public-facing external load balancer. Please elaborate.
I'm answering your comment question here because it's too long for a comment.
Some things are internal, others are external. For example:
You have an external TCP/UDP load balancer.
Your external load balancer accepts connections on port 443 and redirects them to your backend with NGINX installed on it.
Your backend needs a MongoDB database. You install your database on a Compute Engine VM and you choose to abstract the VM IP and to use your load balancer.
You define a new backend on your external load balancer on port 27017.
RESULT: Because the load balancer is external, your MongoDB is publicly exposed on port 27017.
If you use an internal load balancer instead, that's not the case, and you increase security. Only the web-facing port (443) is open; the rest is not accessible from the internet, only from within your project. A sketch of an internal TCP load balancer for this MongoDB case follows.
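As a minimal sketch (names are illustrative, and this assumes the MongoDB VMs already sit in an unmanaged instance group mongo-group), an internal TCP load balancer for this case could look like:

    # Health check plus an internal regional backend service
    gcloud compute health-checks create tcp mongo-hc --port=27017
    gcloud compute backend-services create mongo-ilb \
        --load-balancing-scheme=internal --protocol=TCP \
        --region=us-central1 --health-checks=mongo-hc
    gcloud compute backend-services add-backend mongo-ilb \
        --region=us-central1 \
        --instance-group=mongo-group --instance-group-zone=us-central1-a

    # The forwarding rule gets an internal IP in your VPC:
    # port 27017 is never exposed to the internet
    gcloud compute forwarding-rules create mongo-ilb-fr \
        --load-balancing-scheme=internal --region=us-central1 \
        --network=default --subnet=default \
        --ports=27017 --backend-service=mongo-ilb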
You should check the documentation and then decide if your use case requires using an internal load balancer or not. Below you can find links to the Google Cloud documentation and an example.
At first, have a look at the documentation Choosing a load balancer:
To decide which load balancer best suits your implementation of Google Cloud, consider the following aspects of Cloud Load Balancing:
Global versus regional load balancing
External versus internal load balancing
Traffic type
After that, have a look at the documentation Cloud Load Balancing overview section Types of Cloud Load Balancing:
External load balancers distribute traffic coming from the Internet to your Google Cloud Virtual Private Cloud (VPC) network.
Global load balancing requires that you use the Premium Tier of Network Service Tiers. For regional load balancing, you can use Standard Tier.
Internal load balancers distribute traffic to instances inside of Google Cloud.
and
The following diagram illustrates a common use case: how to use external and internal load balancing together. In the illustration, traffic from users in San Francisco, Iowa, and Singapore is directed to an external load balancer, which distributes that traffic to different regions in a Google Cloud network. An internal load balancer then distributes traffic between the us-central1-a and us-central1-b zones.
More information can be found in the documentation.
UPDATE: Have a look at the possible use cases for the internal HTTP(S) load balancer and for the internal TCP/UDP load balancer, and check whether they're suitable for you and whether using them could improve your service.
It's not required to use an internal load balancer if you don't need it.

Set up a Kubernetes cluster with bare-metal servers from different subnets

What I am doing right now:
I own many VPSes which I use to deploy applications with Docker Compose; most of the machines come from different subnets and have a public static IP address.
For each new application I would pick a random VPS, point the new application's subdomain's DNS at the VPS's IP address, and deploy the application on this VPS behind an Nginx proxy (jwilder Nginx).
This approach is in my opinion very comfortable, since jwilder's Nginx does almost all the work for me and I only have to assign the correct DNS.
What I want to achieve:
For the purpose of learning, I would like to take these machines and make a Kubernetes cluster out of them, so I could learn more about this technology. My idea is that I only have to point each new subdomain's DNS to one single entry point, which also plays the role of a load balancer and passes the traffic to the corresponding pods.
To redirect traffic to a new application, I only have to configure the load balancer.
My problem:
I know this question is not very precise, since I don't know a lot about Kubernetes. Moreover, my servers are not from a cloud provider like Google or AWS, so I cannot use their solutions. They are not even from a single provider; most of them belong to my university and some are from a private cloud provider.
Could anybody tell me how can I achieve this?
I think the answer is kubeadm; you can install it on your own PC or VM.
It creates a single control-plane cluster which your other VMs can then join to form a Kubernetes cluster.
kubeadm helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices.
kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their application on and stitch together a cluster easily, and also to be a building block in other ecosystem and/or installer tools with a larger scope.
Your cluster pods will communicate via CNI.
CNI was created as a minimal specification, built alongside a number of network vendor engineers to be a simple contract between the container runtime and network plugins
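A minimal bootstrap sketch, assuming Flannel as the CNI plugin (any CNI works) and that the machines can reach each other on their public IPs:

    # On the control-plane machine (10.244.0.0/16 is Flannel's default pod CIDR)
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Make kubectl work for your user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install a CNI plugin so pods on different nodes can talk to each other
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    # On every other VM, run the join command that kubeadm init printed
    sudo kubeadm join <control-plane-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>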

Publishing Hystrix metrics to an API

I have a web service running as multiple Docker containers. The Docker hosts are not under my control. I use Hystrix to record metrics. I thought of using Turbine to monitor the metrics, but I do not have access to the real hostnames and ports of my web app instances to give to Turbine. So I am thinking of a push model, where the individual instances of my web app publish the metrics to another API, on which I can run dashboard tools. I looked at Servo, but it also does not suit my needs, as it publishes to JMX. Can I use a custom publisher for this? Are there examples for this use case?
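For illustration, a push model along these lines can be as simple as periodically polling Hystrix's in-memory metrics registry and POSTing the health counts to a collector. This is a minimal sketch; the collector URL and JSON shape are hypothetical, not part of any Hystrix API:

    import com.netflix.hystrix.HystrixCommandMetrics;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MetricsPusher {
        // Hypothetical collector endpoint; replace with your metrics API.
        private static final String COLLECTOR_URL =
            "https://metrics.example.com/api/hystrix";

        public static void start() {
            ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(MetricsPusher::pushOnce,
                10, 10, TimeUnit.SECONDS);
        }

        private static void pushOnce() {
            // Hystrix keeps a static registry of all command metrics.
            for (HystrixCommandMetrics m : HystrixCommandMetrics.getInstances()) {
                HystrixCommandMetrics.HealthCounts h = m.getHealthCounts();
                String json = String.format(
                    "{\"command\":\"%s\",\"totalRequests\":%d,\"errorPct\":%d}",
                    m.getCommandKey().name(),
                    h.getTotalRequests(), h.getErrorPercentage());
                try {
                    HttpURLConnection conn = (HttpURLConnection)
                        new URL(COLLECTOR_URL).openConnection();
                    conn.setRequestMethod("POST");
                    conn.setRequestProperty("Content-Type", "application/json");
                    conn.setDoOutput(true);
                    conn.getOutputStream()
                        .write(json.getBytes(StandardCharsets.UTF_8));
                    conn.getResponseCode(); // complete the request
                    conn.disconnect();
                } catch (Exception e) {
                    // Pushing metrics must never break the app; log and move on.
                    e.printStackTrace();
                }
            }
        }
    }

Because the containers push outbound, this works even when the hosts' real hostnames and ports are unknown, which is exactly the constraint that rules out Turbine's pull model here.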

Allow requests to SF endpoints only from several EC2 instances

I have a public API running on an EC2 instance (behind an AWS ELB), built with Symfony3. However, I have several background tasks which have to consume this API, but only on dedicated endpoints. I have to ensure that only the workers consume these endpoints.
I was wondering how I can implement such a structure with AWS. I'm looking at API Gateway and VPCs, but I'm kind of lost.
Do you have an idea?
If both the API server and the API consumers are running on EC2 instances, then you can easily configure the security group assigned to your API server to restrict access to only those API consumer instances. Just create a rule in the security group that opens the inbound port for your API, and use the security group(s) assigned to your API consumer instances as the source.
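In AWS CLI terms, a sketch (the group IDs are placeholders, and 443 is assumed as the API port):

    # Allow inbound 443 on the API server's group only from the workers' group
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0aaa1111 \
        --protocol tcp \
        --port 443 \
        --source-group sg-0bbb2222

If the workers reach the API through the ELB, apply this rule to the ELB's security group instead, and let the API server's group accept traffic only from the ELB.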
